record_id,title,abstract,year,label_included,duplicate_record_id 1,Classification-tree models of software-quality over multiple releases,"Software quality models are tools for focusing software enhancement efforts. Such efforts are essential for mission-critical embedded software, such as telecommunications systems, because customer-discovered faults have very serious consequences and are very expensive to repair. We present an empirical study that evaluated software quality models over several releases to address the question, How long will a model yield useful predictions? We also introduce the Classification And Regression Trees (CART) algorithm to software reliability engineering practitioners. We present our method for exploiting CART features to achieve a preferred balance between the two types of misclassification rates. This is desirable because misclassifications of fault-prone modules often have much more severe consequences than misclassifications of those that are not fault-prone. We developed two classification-tree models based on four consecutive releases of a very large legacy telecommunications system. Forty-two software product, process, and execution metrics were candidate predictors. The first software quality model used measurements of the first release as the training data set and measurements of the subsequent three releases as evaluation data sets. The second model used measurements of the second release as the training data set and measurements of the subsequent two releases as evaluation data sets. Both models had accuracy that would be useful to developers.",2000,1, 2,A Software Development Process for Small Projects,"The authors' development process integrates portions of an iterative, incremental process model with a quality assurance process that assesses each process phase's quality and a measurement process that collects data to guide process improvement. The process's goal is to produce the high quality and timely results required for today's market without imposing a large overhead on a small project.",2000,0, 3,A model for the sizing of software updates,"The quality and reliability of software updates (SUs) are critical to a system vendor and its customers. As a result, it is important that SUs shipped to customers be successfully integrated into the field generic. A large amount of code must be shipped in an SU because customers want as many fixes and features as possible without compromising the reliability of their systems. However, as the size of an SU increases, so does its probability of field failure, thus making larger SUs riskier. The fundamental question is: How large should an SU be to keep the risk under control? This paper studies the tradeoff between the desire to ship large SUs and the failure risk carried with them. We formulate the problem as a nonlinear programming (NLP) problem, investigate it under various conditions, and derive sizing strategies for the SU. In particular, we derive a formula for the maximal SU size. We make a connection between software reliability and linear programming which, to the best of our knowledge, appears here for the first time. We also introduce some basic ideas related to the customer operational environment and explain the importance of the environment to software performance using an interesting analogy.",2000,0, 4,Predicting risk of software changes,"Reducing the number of software failures is one of the most challenging problems of software production. 
We assume that software development proceeds as a series of changes and model the probability that a change to software will cause a failure. We use predictors based on the properties of a change itself. Such predictors include size in lines of code added, deleted, and unmodified; diffusion of the change and its component subchanges, as reflected in the number of files, modules, and subsystems touched, or changed; several measures of developer experience; and the type of change and its subchanges (fault fixes or new code). The model is built on historic information and is used to predict the risk of new changes. In this paper we apply the model to 5ESS software updates and find that change diffusion and developer experience are essential to predicting failures. The predictive model is implemented as a Web-based tool to allow timely prediction of change quality. The ability to predict the quality of change enables us to make appropriate decisions regarding inspection, testing, and delivery. Historic information on software changes is recorded in many commercial software projects, suggesting that our results can be easily and widely applied in practice.",2000,0, 5,Resource-constrained compaction of sequential circuit test sets,"We investigate a new, resource-constrained method for static compaction of large, sequential circuit test sets. Our approach is based on two key observations: (1) since all physical defects cannot be covered using a single defect model, test sets include tests generated using multiple defect models like stuck-at, delay, or bridging fault models. Therefore, it is unlikely that a marginal drop (0.5% or less) in fault coverage during compaction of tests generated for a single defect model will adversely affect the test quality of the overall test set. (2) Fault coverage is an aggregate measure that can be preserved as long as the original and compacted test sets detect the same number of faults. The specific faults detected by the two test sets can be significantly different. In particular, the compacted vector set may detect new faults that are not detected by the original vector set. The new compaction technique was implemented as part of the recently proposed two-phase static compaction technique. Experimental results on ISCAS benchmarks and several production circuits show that: (1) the actual loss in fault coverage, if any, was significantly less than the pre-specified tolerance limit of 1%; (2) fault coverage of the compacted test set can be higher than the original test set; and (3) significantly higher compaction is achieved using fewer CPU seconds, as compared to the baseline system that compacts test sets to preserve fault coverage",2000,0, 6,Use of fault tree analysis for evaluation of system-reliability improvements in design phase,"Traditional failure mode and effects analysis is applied as a bottom-up analytical technique to identify component failure modes and their causes and effects on the system performance, estimate their likelihood, severity and criticality or priority for mitigation. Failure modes and their causes, other than those associated with hardware, primarily electronic, remained poorly addressed or not addressed at all. Likelihood of occurrence was determined on the basis of component failure rates or by applying engineering judgement in their estimation. Resultant prioritization is consequently difficult so that only the apparent safety-related or highly critical issues were addressed. 
When thoroughly done, traditional FMEA or FMECA were too involved to be used as an effective tool for reliability improvement of the product design. Fault tree analysis applied to the product in a top-down manner in view of its functionality, failure definition, architecture and stress and operational profiles provides a methodical way of following the product's functional flow down to the low level assemblies, components, failure modes and respective causes and their combination. Flexibility of modeling of various functional conditions and interaction such as enabling events, events with specific priority of occurrence, etc., using FTA, provides for accurate representation of their functionality interdependence. In addition to being capable of accounting for mixed reliability attributes (failure rates mixed with failure probabilities), fault trees are easy to construct and change for quick tradeoffs as roll up of unreliability values is automatic for instant evaluation of the final quantitative reliability results. Failure mode analysis using the fault tree technique that is described in this paper allows for real, in-depth engineering evaluation of each individual cause of a failure mode regarding software and hardware components, their functions, stresses, operability and interactions",2000,0, 7,Analysis of safety systems with on-demand and dynamic failure modes,"An approach for the reliability analysis of systems with on demand and dynamic failure modes is presented. Safety systems such as sprinkler systems or other protection systems are characterized by such failure behavior. They have support subsystems to start up the system on demand, and once they start running, they are prone to dynamic failure. Failure on demand requires an availability analysis of components (typically electromechanical components) which are required to start or support the safety system. Once the safety system is started, it is often reasonable to assume that these support components do not fail while running. Further, these support components may be tested and maintained periodically while not in active use. Dynamic failure refers to the failure while running (once started) of the active components of the safety system. These active components may be fault tolerant and utilize spares or other forms of redundancy, but are not maintainable while in use. In this paper, the authors describe a simple yet powerful approach to combining the availability analysis of the static components with a reliability analysis of the dynamic components. This approach is explained using a hypothetical example sprinkler system, and applied to a water deluge system taken from the offshore industry. The approach is implemented in the fault tree analysis software package, Galileo",2000,0, 8,Enhancing the predictive performance of the Goel-Okumoto software reliability growth model,"In this paper, enhancement of the performance of the Goel-Okumoto Reliability Growth model is investigated using various smoothing techniques. The method of parameter estimation for the model is the maximum likelihood method. The evaluation of the performance of the model is judged by the relative error of the predicted number of failures over future time intervals relative to the number of failures eventually observed during the interval. The use of data analysis procedures utilizing the Laplace trend test is investigated. 
These methods test for reliability growth throughout the data and establish ""windows"" that censor early failure data and provide better model fits. The research showed conclusively that the data analysis procedures resulted in improvement in the models' predictive performance for 41 different sets of software failure data collected from software development labs in the United States and Europe",2000,0, 9,Software FMEA techniques,"Assessing the safety characteristics of software driven safety critical systems is problematic. The author has performed software FMEA on embedded automotive platforms for brakes, throttle, and steering with promising results. Use of software FMEA at a system and a detailed level has allowed visibility of software and hardware architectural approaches which assure safety of operation while minimizing the cost of safety critical embedded processor designs. Software FMEA has been referred to in the technical literature for more than fifteen years. Additionally, software FMEA has been recommended for evaluating critical systems in some standards, notably draft IEC 61508. Software FMEA is also provided for in the current drafts of SAE ARP 5580. However, techniques for applying software FMEA to systems during their design have been largely missing from the literature. Software FMEA has been applied to the assessment of safety critical real-time control systems embedded in military and automotive products. The paper is a follow on to and provides significant expansion to the software FMEA techniques originally described by the author in the 1993 RAMS paper Validating The Safety Of Real-Time Control Systems Using FMEA",2000,0, 10,A physics/engineering of failure based analysis and tool for quantifying residual risks in hardware,"NASA Code Q is supporting efforts to improve the verification and validation and the risk management processes for spaceflight projects. A physics-of-failure based Defect Detection and Prevention (DDP) methodology previously developed has been integrated into a software tool and is currently being implemented on various NASA projects and as part of NASA's new model-based spacecraft development environment. The DDP methodology begins with prioritizing the risks (or failure modes, FMs) relevant to a mission which need to be addressed. These risks can be reduced through the implementation of a set of detection and prevention activities referred to herein as PACTs (preventative measures, analyses, process controls and tests). Each of these PACTs has some effectiveness against one or more FMs but also has an associated resource cost. The FMs can be weighted according to their likelihood of occurrence and their mission impact should they occur. The net effectiveness of various combinations of PACTs can then be evaluated against these weighted FMs to obtain the residual risk for each of these FMs and the associated resource costs to achieve these risk levels. The process thus identifies the project-relevant tall pole FMs and design drivers and allows real time tailoring with the evolution of the design and technology content. 
The DDP methodology allows risk management in its truest sense: it identifies and assesses risk, provides options and tools for risk decision making and mitigation and allows for real-time tracking of current risk status",2000,0, 11,The SecureGroup group communication system,"The SecureGroup group communication system multicasts messages to a group of processors over a local-area network, and delivers messages reliably and in total order. It also maintains the membership of the group, detecting and removing faulty processors and admitting new and repaired processors. The SecureGroup system provides resistance against Byzantine faults such as might be caused by a captured or subverted processor or by a Trojan horse. The reliable message delivery protocol employs hardware broadcasts and novel acknowledgment mechanisms that reduce the number of acknowledgments and messages required to ensure reliable delivery. The total ordering protocol continues to order messages despite the presence of Byzantine and crash faults, provided that a resilience requirement is satisfied. The group membership protocol operates above the total ordering protocol and, thus, simplifies its design and protects it against malicious attacks",2000,0, 12,Software fault injection for survivability,"In this paper, we present an approach and experimental results from using software fault injection to assess information survivability. We define information survivability to mean the ability of an information system to continue to operate in the presence of faults, anomalous system behavior, or malicious attack. In the past, finding and removing software flaws has traditionally been the realm of software testing. Software testing has largely concerned itself with ensuring that software behaves correctly-an intractable problem for any non-trivial piece of software. In this paper, we present off-nominal testing techniques, which are not concerned with the correctness of the software, but with the survivability of the software in the face of anomalous events and malicious attack. Where software testing is focused on ensuring that the software computes the specified function correctly, we are concerned that the software continues to operate in the presence of faults, unusual system events or malicious attacks",2000,0, 13,Software reliability models with time-dependent hazard function based on Bayesian approach,"In this paper, two models predicting mean time until next failure based on Bayesian approach are presented. Times between failures follow Weibull distributions with stochastically decreasing ordering on the hazard functions of successive failure time intervals, reflecting the tester's intent to improve the software quality with each corrective action. We apply the proposed models to actual software failure data and show they give better results under sum of square errors criteria as compared to previous Bayesian models and other existing times between failures models. Finally, we utilize likelihood ratios criterion to compare new model's predictive performance",2000,0, 14,Survivability through customization and adaptability: the Cactus approach,"Survivability, the ability of a system to tolerate intentional attacks or accidental failures or errors, is becoming increasingly important with the extended use of computer systems in society. 
While techniques such as cryptographic methods, intrusion detection, and traditional fault tolerance are currently being used to improve the survivability of such systems, new approaches are needed to help reach the levels that will be required in the near future. This paper proposes the use of fine-grain customization and dynamic adaptation as key enabling technologies in a new approach designed to achieve this goal. Customization not only supports software diversity, but also allows customized tradeoffs to be made between different QoS attributes including performance, security, reliability and survivability. Dynamic adaptation allows survivable services to change their behavior at runtime as a reaction to anticipated or detected intrusions or failures. The Cactus system provides support for both fine-grain customization and dynamic adaptation, thereby offering a potential solution for building survivable software in networked systems",2000,0, 15,The effectiveness of software development technical reviews: a behaviorally motivated program of research,"Software engineers use a number of different types of software development technical review (SDTR) for the purpose of detecting defects in software products. This paper applies the behavioral theory of group performance to explain the outcomes of software reviews. A program of empirical research is developed, including propositions to both explain review performance and identify ways of improving review performance based on the specific strengths of individuals and groups. Its contributions are to clarify our understanding of what drives defect detection performance in SDTRs and to set an agenda for future research. In identifying individuals' task expertise as the primary driver of review performance, the research program suggests specific points of leverage for substantially improving review performance. It points to the importance of understanding software reading expertise and implies the need for a reconsideration of existing approaches to managing reviews",2000,0, 16,Technology transfer issues for formal methods of software specification,"Accurate and complete requirements specifications are crucial for the design and implementation of high-quality software. Unfortunately, the articulation and verification of software system requirements remains one of the most difficult and error-prone tasks in the software development lifecycle. The use of formal methods, based on mathematical logic and discrete mathematics, holds promise for improving the reliability of requirements articulation and modeling. However, formal modeling and reasoning about requirements has not typically been a part of the software analyst's education and training, and because the learning curve for the use of these methods is nontrivial, adoption of formal methods has proceeded slowly. As a consequence, technology transfer is a significant issue in the use of formal methods. In this paper, several efforts undertaken at NASA aimed at increasing the accessibility of formal methods are described. These include the production of the following: two NASA guidebooks on the concepts and applications of formal methods, a body of case studies in the application of formal methods to the specification of requirements for actual NASA projects, and course materials for a professional development course introducing formal methods and their application to the analysis and design of software-intensive systems. 
In addition, efforts undertaken at two universities to integrate instruction on formal methods based on these NASA materials into the computer science and software engineering curricula are described.",2000,0, 17,Analyzing Java software by combining metrics and program visualization,"Shimba, a prototype reverse engineering environment, has been built to support the understanding of Java software. Shimba uses Rigi and SCED to analyze, visualize, and explore the static and dynamic aspects, respectively, of the subject system. The static software artifacts and their dependencies are extracted from Java byte code and viewed as directed graphs using the Rigi reverse engineering environment. The static dependency graphs of a subject system can be annotated with attributes, such as software quality measures, and then be analyzed and visualised using scripts through the end user programmable interface. Shimba has recently been extended with the Chidamber and Kemerer suite of object oriented metrics. The metrics measure properties of the classes, the inheritance hierarchy, and the interaction among classes of a subject system. Since Shimba is primarily intended for the analysis and exploration of Java software, the metrics have been tailored to measure properties of software systems using a reverse engineering environment. The static dependency graphs of the system under investigation are decorated with measures obtained by applying the object oriented metrics to selected software components. Shimba provides tools to examine these measures, to find software artifacts that have values that are in a given range, and to detect correlations among different measures. The object oriented analysis of the subject Java system can be investigated further by exporting the measures to a spreadsheet",2000,0, 18,Proactive maintenance tools for transaction oriented wide area networks,"The motivation of the work presented in this paper comes from a real network management center in charge of supervising a very large hybrid telecommunications/data transaction-oriented network. We present a set of tools that we have developed and implemented in the AT&T Transaction Access Services (TAS) network, in order to automate and facilitate the process of diagnosing network faults and identifying the potentially affected elements, resources and customers. Specifically in this paper we describe the development implementation and use of the following systems: (a) the TAS Information and Tracking System (TIMATS) that provides a common framework for the storage and retrieval of provisioning, capacity management and maintenance data; (b) the Transactions Event Viewer (TEVIEW) system that generates, filters, and presents diagnostic events that indicate system occurrences or conditions that may cause a degradation of the service; and (c) the Transaction Instantaneous Anomaly Notification (TRISTAN) system which implements an adaptive network anomaly detection software that detects network and service anomalies of TAS as dynamically defined violations of the base-lined performance characteristics and profiles",2000,0, 19,"When management agents become autonomous, how to ensure their reliability?","Increasingly nowadays, networks are managed in a hierarchical, yet evolving to a distributed manner. The managed network is divided into sub-networks or domains that are managed more or less independently by autonomous agents. 
Once the failure of an agent is detected, it becomes even possible to have a further improvement by reassigning the management tasks of the unreliable agent among the other agents in a way to ensure that the whole network continues to be reliably managed. This provides the property of graceful degradation to the distributed management system. The work presented in this paper provides a first step towards this interesting improvement. To ensure that the whole network is still managed even if a number of agents become unreliable, it is necessary to install a mechanism that continuously checks the reliability of the agents. When unreliable agents are detected, the management tasks that they have been performing are re-distributed amongst the other still-reliable agents. At some time in the future, the agent with the abnormal behavior might recover, for example following a human intervention, and the tasks that have been re-distributed on the other agents should be assigned back to the recovered agents",2000,0, 20,A network measurement architecture for adaptive applications,"The quality of network connectivity between a pair of Internet hosts can vary greatly. Adaptive applications can cope with these differences in connectivity by choosing alternate representations of objects or streams or by downloading the objects from alternate locations. In order to effectively adapt, applications must discover the condition of the network before communicating with distant hosts. Unfortunately, the ability to predict or report the quality of connectivity is missing in today's suite of Internet services. To address this limitation, we have developed SPAND (shared passive network performance discovery), a system that facilitates the development of adaptive network applications. In each domain, applications make passive application specific measurements of the network and store them in a local centralized repository of network performance information. Other applications may retrieve this information from the repository and use the shared experiences of all hosts in a domain to predict future performance. In this way, applications can make informed decisions about adaptation choices as they communicate with distant hosts. In this paper, we describe and evaluate the SPAND architecture and implementation. We show how the architecture makes it easy to integrate new applications into our system and how the architecture has been used with specific types of data transport. Finally, we describe LookingGlass, a WWW mirror site selection tool that uses SPAND. LookingGlass meets the conflicting goals of collecting passive network performance measurements and maintaining good client response times. In addition, LookingGlass's server selection algorithms based on application level measurements perform much better than techniques that rely on geographic location or route metrics",2000,0, 21,Sensitivity analysis of modular dynamic fault trees,"Dynamic fault tree analysis, as currently supported by the Galileo software package, provides an effective means for assessing the reliability of embedded computer-based systems. Dynamic fault trees extend traditional fault trees by defining special gates to capture sequential and functional dependency characteristics. A modular approach to the solution of dynamic fault trees effectively applies Binary Decision Diagram (BDD) and Markov model solution techniques to different parts of the dynamic fault tree model. 
Reliability analysis of a computer-based system tells only part of the story, however. Follow-up questions such as Where are the weak links in the system?, How do the results change if my input parameters change? and What is the most cost effective way to improve reliability? require a sensitivity analysis of the reliability analysis. Sensitivity analysis (often called Importance Analysis) is not a new concept, but the calculation of sensitivity measures within the modular solution methodology for dynamic and static fault trees raises some interesting issues. In this paper we address several of these issues, and present a modular technique for evaluating sensitivity, a single traversal solution to sensitivity analysis for BDDs, a simplified methodology for estimating sensitivity for Markov models, and a discussion of the use of sensitivity measures in system design. The sensitivity measures for both the Binary Decision Diagram and Markov approach presented in this paper are implemented in Galileo, a software package for reliability analysis of complex computer-based systems",2000,0, 22,Reaching efficient fault-tolerance for cooperative applications,"Cooperative applications are widely used, e.g. as parallel calculations or distributed information processing systems. While such applications meet the users' demands and offer a performance improvement, the susceptibility to faults of any used computer node is raised. Often a single fault may cause a complete application failure. On the other hand, the redundancy in distributed systems can be utilized for fast fault detection and recovery. So, we followed an approach that is based on duplication of each application process to detect crashes and faulty functions of single computer nodes. We concentrate on two aspects of efficient fault-tolerance: fast fault detection and recovery without delaying the application progress significantly. The contribution of this work is first a new fault detecting protocol for duplicated processes. Secondly, we enhance a roll forward recovery scheme so that it is applicable to a set of cooperative processes in conformity to the protocol",2000,0, 23,NFTAPE: a framework for assessing dependability in distributed systems with lightweight fault injectors,"Many fault injection tools are available for dependability assessment. Although these tools are good at injecting a single fault model into a single system, they suffer from two main limitations for use in distributed systems: (1) no single tool is sufficient for injecting all necessary fault models; (2) it is difficult to port these tools to new systems. NFTAPE, a tool for composing automated fault injection experiments from available lightweight fault injectors, triggers, monitors, and other components, helps to solve these problems. We have conducted experiments using NFTAPE with several types of lightweight fault injectors, including driver-based, debugger-based, target-specific, simulation-based, hardware-based, and performance-fault injections. Two example experiments are described in this paper. The first uses a hardware fault injector with a Myrinet LAN; the other uses a Software Implemented Fault Injection (SWIFI) fault injector to target a space-imaging application",2000,0, 24,SAABNet: Managing qualitative knowledge in software architecture assessment,"Quantitative techniques have traditionally been used to assess software architectures. 
We have found that early in the development process there is often insufficient quantitative information to perform such assessments. So far the only way to make qualitative assessments about an architecture is to use qualitative assessment techniques such as peer reviews. The problem with this type of assessment is that it depends on the knowledge of the expert designers who use the techniques. In this paper we introduce a technique, SAABNet (Software Architecture Assessment Belief Network), that provides support to make qualitative assessments of software architectures",2000,0, 25,Evaluating system dependability in a co-design framework,"The widespread adoption of embedded microprocessor-based systems for safety critical applications mandates the use of co-design tools able to evaluate system dependability at every step of the design cycle. In this paper, we describe how fault injection techniques have been integrated in an existing co-design tool and which advantages come from the availability of such an enhanced tool. The effectiveness of the proposed tool is assessed on a simple case study",2000,0, 26,Scalable QoS guaranteed communication services for real-time applications,"We propose an approach to flow-unaware admission control which is used in combination with an aggregate packet forwarding scheme, improving scalability of networks while guaranteeing end-to-end deadlines for real-time applications. We achieve this by using an off-line delay computation and verification step, which allows us to reduce the overhead at admission control while keeping admission probability and resource utilization high. Our evaluation data show that our system's admission probabilities are very close to those of significantly more expensive flow-aware approaches. At the same time, the admission control overhead during flow establishment is very low. Our results therefore support the claim from the DS architecture literature that scalability can be achieved through flow aggregation without sacrificing resource utilization and with significant reduction in run time overhead",2000,0, 27,Computing global functions in asynchronous distributed systems prone to process crashes,"Global data is a vector with one entry per process. Each entry must be filled with an appropriate value provided by the corresponding process. Several distributed computing problems amount to compute a function on global data. This paper proposes a protocol to solve such problems in the context of asynchronous distributed systems where processes may fail by crashing. The main problem that has to be solved lies in computing the global data and in providing each non-crashed process with a copy of it, despite the possible crash of some processes. To be consistent, the global data must contain (at least) all the values provided by the processes that do not crash. This defines the global data computation (GDC) problem. To solve this problem, processes execute a sequence of asynchronous rounds during which they construct (in a decentralized way) the value of the global data, and eventually each process gets a copy of it. To cope with process crashes, the protocol uses a perfect failure detector. The proposed protocol has been designed to be time-efficient. It allows early decisions. Let t be the maximum number of processes that may crash (tm and hill-climbing algorithms for finding suboptimal task assignments are presented. 
Simulation results are provided to confirm the performance of the proposed algorithms",2000,0, 75,Teraflops supercomputer: architecture and validation of the fault tolerance mechanisms,"Intel Corporation developed the Teraflops supercomputer for the US Department of Energy (DOE) as part of the Accelerated Strategic Computing Initiative (ASCI). This is the most powerful computing machine available today, performing over two trillion floating point operations per second with the aid of more than 9,000 Intel processors. The Teraflops machine employs complex hardware and software fault/error handling mechanisms for complying with DOE's reliability requirements. This paper gives a brief description of the system architecture and presents the validation of the fault tolerance mechanisms. Physical fault injection at the IC pin level was used for validation purposes. An original approach was developed for assessing signal sensitivity to transient faults and the effectiveness of the fault/error handling mechanisms. Dependency between fault/error detection coverage and fault duration was also determined. Fault injection experiments unveiled several malfunctions at the hardware, firmware, and software levels. The supercomputer performed according to the DOE requirements after corrective actions were implemented. The fault injection approach presented in this paper can be used for validation of any fault-tolerant or highly available computing system",2000,0, 76,How perspective-based reading can improve requirements inspections,"Because defects constitute an unavoidable aspect of software development, discovering and removing them early is crucial. Overlooked defects (like faults in the software system requirements, design, or code) propagate to subsequent development phases where detecting and correcting them becomes more difficult. At best, developers will eventually catch the defects, but at the expense of schedule delays and additional product-development costs. At worst, the defects will remain, and customers will receive a faulty product. The authors explain their perspective based reading (PBR) technique that provides a set of procedures to help developers solve software requirements inspection problems. PBR reviewers stand in for specific stakeholders in the document to verify the quality of requirements specifications. The authors show how PBR leads to improved defect detection rates for both individual reviewers and review teams working with unfamiliar application domains.",2000,0, 77,Wireless communications based system to monitor performance of rail vehicles,"This paper describes a recently developed remote monitoring system, based on a combination of embedded computing, digital signal processing, wireless communications, GPS, and GIS technologies. The system includes onboard platforms installed on each monitored vehicle and a central station located in an office. Each onboard platform detects various events onboard a moving vehicle, tags them with time and location information, and delivers the data to an office through wireless communications channels. The central station logs the data into a database and displays the location and status of each vehicle, as well as detected events, on a map. Waveform traces from all sensor channels can be sent with each event and can be viewed by the central station operator. The system provides two-way wireless communication between the central station and mobile onboard platforms. 
Depending on coverage requirements and customer preferences, communication can be provided through satellite, circuit-switched cellular, digital wireless communication links or a combination of these methods. Settings and software changes may be made remotely from the central station, eliminating the need to capture the monitored vehicle. The onboard platform can be configured for installation on any rail vehicle, including locomotives, passenger cars and freight cars. Depending on the application, the onboard platform can monitor either its own sensors or existing onboard sensors. The system has been used for several railroad applications including ride quality measurement, high cant deficiency monitoring, truck hunting detection, and locomotive health monitoring. The paper describes the system, these applications, and discusses some of the results",2000,0, 78,An integrated cost model for software reuse,"Several cost models have been proposed in the past for estimating, predicting, and analyzing the costs of software reuse. The authors analyze existing models, explain their variance, and propose a tool-supported comprehensive model that encompasses most of the existing models",2000,0, 79,Object model resurrection-an object oriented maintenance activity,"This paper addresses the problem of reengineering object-oriented systems that have incurred increased maintenance cost due to long development time-span and project lifecycle. When an Incremental Approach is used to develop an object-oriented system, there is a risk that the class design and the overall object model will deteriorate in quality with each increment. A recent research work suggested a process activity (Class Deterioration Detection and Resurrection-CDDR process activity) and a technique for the detection and resurrection of deteriorated classes. That work focussed on one particular aspect of object-oriented software maintenance-Class Quality Deterioration due to lack of cohesion induced by high coupling. This paper addresses the problem of deteriorating object-oriented design due to code and class growth (increase in the number of classes) within a system. A Code/Class Growth Control process activity (CGC) is suggested to avoid and eliminate Repetitious Code and Classes within the evolving system. The CDDR and CGC process activities are used to build an evolving Maintenance process model for object-oriented systems. The presented maintenance process model is an effective way to periodically assess and resurrect the quality of an object-oriented design during incremental development",2000,0, 80,A replicated assessment and comparison of common software cost modeling techniques,"Delivering a software product on time, within budget, and to an agreed level of quality is a critical concern for many software organizations. Underestimating software costs can have detrimental effects on the quality of the delivered software and thus on a company's business reputation and competitiveness. On the other hand, overestimation of software cost can result in missed opportunities to use funds in other projects. In response to industry demand, a myriad of estimation techniques has been proposed during the last three decades. In order to assess the suitability of a technique from a diverse selection, its performance and relative merits must be compared. The current study replicates a comprehensive comparison of common estimation techniques within different organizational contexts, using data from the European Space Agency. 
Our study is motivated by the challenge to assess the feasibility of using multi-organization data to build cost models and the benefits gained from company-specific data collection. Using the European Space Agency data set, we investigated a yet unexplored application domain, including military and space projects. The results showed that traditional techniques, namely, ordinary least-squares regression and analysis of variance outperformed analogy-based estimation and regression trees. Consistent with the results of the replicated study no significant difference was found in accuracy between estimates derived from company-specific data and estimates derived from multi-organizational data",2000,0, 81,A case study in root cause defect analysis,"There are three interdependent factors that drive our software development processes: interval, quality and cost. As market pressures continue to demand new features ever more rapidly, the challenge is to meet those demands while increasing, or at least not sacrificing, quality. One advantage of defect prevention as an upstream quality improvement practice is the beneficial effect it can have on interval: higher quality early in the process results in fewer defects to be found and repaired in the later parts of the process, thus causing an indirect interval reduction. We report a retrospective root cause defect analysis study of the defect Modification Requests (MRs) discovered while building, testing, and deploying a release of a transmission network element product. We subsequently introduced this analysis methodology into new development projects as an in-process measurement collection requirement for each major defect MR. We present the experimental design of our case study discussing the novel approach we have taken to defect and root cause classification and the mechanisms we have used for randomly selecting the MRs to analyze and collecting the analyses via a Web interface. We then present the results of our analyses of the MRs and describe the defects and root causes that we found, and delineate the countermeasures created to either prevent those defects and their root causes or detect them at the earliest possible point in the development process. We conclude with lessons learned from the case study and resulting ongoing improvement activities",2000,0, 82,Bandera: extracting finite-state models from Java source code,"Finite-state verification techniques, such as model checking, have shown promise as a cost-effective means for finding defects in hardware designs. To date, the application of these techniques to software has been hindered by several obstacles. Chief among these is the problem of constructing a finite-state model that approximates the executable behavior of the software system of interest. Current best-practice involves hand construction of models which is expensive (prohibitive for all but the smallest systems), prone to errors (which can result in misleading verification results), and difficult to optimize (which is necessary to combat the exponential complexity of verification algorithms). The authors describe an integrated collection of program analysis and transformation components, called Bandera, that enables the automatic extraction of safe, compact finite-state models from program source code. Bandera takes as input Java source code and generates a program model in the input language of one of several existing verification tools; Bandera also maps verifier outputs back to the original source code. 
We discuss the major components of Bandera and give an overview of how it can be used to model check correctness properties of Java programs",2000,0, 83,Automated refactoring to introduce design patterns,"Software systems have to be flexible in order to cope with evolving requirements. However, since it is impossible to predict with certainty what future requirements will emerge, it is also impossible to know exactly what flexibility to build into a system. Design patterns are often used to provide this flexibility, so this question frequently reduces to whether or not to apply a given design pattern. We address this problem by developing a methodology for the construction of automated transformations that introduce design patterns. This enables a programmer to safely postpone the application of a design pattern until the flexibility it provides becomes necessary. Our approach deals with the issues of reuse of existing transformations, preservation of program behaviour and the application of the transformations to existing program code",2000,0, 84,Analyzing software architectures with Argus-I,"This formal research demonstration presents an approach to develop and assess architecture and component-based systems based on specifying software architecture augmented by statecharts representing component behavioral specifications (Dias et al., 2000). The approach is applied for the C2 style (Medvidovic et al., 1999) and associated ADL and is supported within a quality-focused environment, called Argus-I, which assists specification-based analysis and testing at both the component and architecture levels",2000,0, 85,Agent based customer modelling: individuals who learn from their environment,"Understanding the rate of adoption of a telecommunications service in a population of customers is of prime importance to ensure that appropriate network capacity is provided to maintain quality of service. This problem goes beyond assessing the demand for a product based on usage and requires an understanding of how consumers learn about a service and evaluate its worth. Field studies have shown that word of mouth recommendations and knowledge of a service have a significant impact on adoption rates. Adopters of the Internet can be influenced through communications at work or children learning at school. The authors present an agent based model of a population of customers, with rules based on field data, which is being used to understand how services are adopted. Of particular interest is how customers interact to learn about the service through their communications with other customers. We show how the different structure, dynamics and distribution of the social networks affect the diffusion of a service through a customer population. Our model shows that real world adoption rates are a combination of these mechanisms which interact in a non-linear and complex manner. This complex systems approach provides a useful way to decompose these interactions",2000,0, 86,Mining user behavior for resource prediction in interactive electronic malls,"Applications in virtual multimedia catalogs are highly interactive. Thus, it is difficult to estimate resource demands required for presentation of catalog contents. We propose a method to predict presentation resource demands in interactive multimedia catalogs. The prediction is based on the results of mining the virtual mall action log file. The log file typically contains information about previous user interests and browsing behavior. 
These data are used for modeling users' future behavior within a session. We define heuristics to generate a start-up user behavior model as a continuous time Markov chain and adapt this model during a running session to the current user",2000,0, 87,Automatic image event segmentation and quality screening for albuming applications,"In this paper, a system for automatic albuming of consumer photographs is described and its specific core components of event segmentation and screening of low quality images are discussed. A novel event segmentation algorithm was created to automatically cluster pictures into events and sub-events for albuming, based on date/time meta data information as well as color content of the pictures. A new quality-screening is developed based on object quality to detect problematic images due to underexposure, low contrast, and camera defocus or movement. Performance testing of these algorithms was conducted using a database of real consumer photos and showed that these functions provide a useful first-cut album layout for typical rolls of consumer pictures. A first version of the automatic albuming application software was tested through a consumer trial in the United States from August to December 1999",2000,0, 88,Induction machine condition monitoring with higher order spectra,"This paper describes a novel method of detecting and unambiguously diagnosing the type and magnitude of three induction machine fault conditions from the single sensor measurement of the radial electromagnetic machine vibration. The detection mechanism is based on the hypothesis that the induction machine can be considered as a simple system, and that the action of the fault conditions are to alter the output of the system in a characteristic and predictable fashion. Further, the change in output and fault condition can be correlated allowing explicit fault identification. Using this technique, there is no requirement for a priori data describing machine fault conditions, the method is equally applicable to both sinusoidally and inverter-fed induction machines and is generally invariant of both the induction machine load and speed. The detection mechanisms are rigorously examined theoretically and experimentally, and it is shown that a robust and reliable induction machine condition-monitoring system has been produced. Further, this technique is developed into a software-based automated commercially applicable system",2000,0, 89,Renaming detection,"Finding changed identifiers in programs is important for program comparison and merging. Comparing two versions of a program is complicated if renaming has occurred. Textual merging is highly unreliable if, in one version, identifiers were renamed, while in the other version, code using the old identifiers was added or modified. A tool that automatically detects renamed identifiers between pairs of program modules is presented. The detector is part of a suite of intelligent differencing and merging programs that exploit the static semantics of programming languages. No special editor is needed for tracking changes. The core of the renaming detector is language independent. The detector works with multiple file pairs, taking into account renamings that affect multiple files. Renaming detectors for Java and Scheme have been implemented. A case study is presented that demonstrates proof of concept. 
With renaming detection, a higher quality of program comparison and merging is achievable",2000,0, 90,Mutation operators for specifications,"Testing has a vital support role in the software engineering process, but developing tests often takes significant resources. A formal specification is a repository of knowledge about a system, and a recent method uses such specifications to automatically generate complete test suites via mutation analysis. We define an extensive set of mutation operators for use with this method. We report the results of our theoretical and experimental investigation of the relationships between the classes of faults detected by the various operators. Finally, we recommend sets of mutation operators which yield good test coverage at a reduced cost compared to using all proposed operators",2000,0, 91,The use of abduction and recursion-editor techniques for the correction of faulty conjectures,"The synthesis of programs, as well as other synthetic tasks, often ends up with an unprovable, partially false conjecture. A successful subsequent synthesis attempt depends on determining why the conjecture is faulty and how it can be corrected. Hence, it is highly desirable to have an automated means for detecting and correcting faulty conjectures. We introduce a method for patching faulty conjectures. The method is based on abduction and performs its task during an attempt to prove a given conjecture. On input ∀X. G(X), the method builds a definition for a corrective predicate, P(X), such that ∀X. P(X) → G(X) is a theorem. The synthesis of a corrective predicate is guided by the constructive principle of formulae as types, relating inference to computation. We take the construction of a corrective predicate as a program transformation task. The method consists of a collection of construction commands. A construction command is a small program that makes use of one or more program editing commands, geared towards building recursive, equational procedures. A synthesised corrective predicate is guaranteed to be correct, turning a faulty conjecture into a theorem. If conditional, it will be well-defined. If recursive, it will also be terminating",2000,0, 92,Automated security checking and patching using TestTalk,"In many computer system security incidents, attackers successfully intruded computer systems by exploiting known weaknesses. Those computer systems remained vulnerable even after the vulnerabilities were known because it requires constant attention to stay on top of security updates. It is often both time-consuming and error-prone to manually apply security patches to deployed systems. To solve this problem, we propose to develop a framework for automated security checking and patching. The framework, named Securibot, provides a self-operating mechanism for security checking and patching. Securibot performs security testing using security profiles and security updates. It can also detect compromised systems using attack signatures. Most important, the Securibot framework allows system vendors to publish recently discovered security weaknesses and new patches in a machine-readable form so that the Securibot system running on deployed systems can automatically check out security updates and apply the patches",2000,0, 93,System-level test bench generation in a co-design framework,"Co-design tools represent an effective solution for reducing costs and shortening time-to-market, when System-on-Chip design is considered. 
In a top-down design flow, designers would greatly benefit from the availability of tools able to automatically generate test benches, which can be used during every design step, from the system-level specification to the gate-level description. This would significantly increase the chance of identifying design bugs early in the design flow, thus reducing the costs and increasing the final product quality. The paper proposes an approach for integrating the ability to generate test benches into an existing co-design tool. Suitable metrics are proposed to guide the generation, and preliminary experimental results are reported, assessing the effectiveness of the proposed technique",2000,0, 94,Fault detection method for the subcircuits of a cascade linear circuit,"The fault detection method for the subcircuits of a cascade linear circuit is discussed. While there is any fault (either hard or soft and either single or multiple) at one subcircuit of a cascade linear circuit, it can be quickly detected by using the method proposed. While there are faults simultaneously existing at multiple subcircuits, they can generally be detected by the searching approach proposed here. The aforementioned method is the continuation and development of the unified fault detection dictionary method for linear circuits proposed previously by the authors (see ibid., vol. 46, Oct. 1999)",2000,0, 95,Reduction of the number of terminal assignments for detecting feature interactions in telecommunication services,"Telecommunication systems are typical complex systems. Services that independently operate normally will behave differently when simultaneously initiated with another service. This behavior is called feature interaction and is recognized to affect the dependability. This article proposes a method of dramatically reducing the computation time required for detecting feature interactions in telecommunication services. One of the knotty problems in detecting feature interactions at the specification design stage is terminal assignment. For the same service specifications, occurrence of feature interactions depends on how to assign real terminals to terminal variables in the specifications. Consequently, all terminals connected to the network have to be considered in order to detect all interactions. As a result, the number of combinations of terminal assignments is enormous. This causes huge expansion of computation time needed for detection of feature interactions. By considering equivalent states, the proposed method can reduce the number of terminal assignments to one 400th compared with that of the conventional method",2000,0, 96,Logic representation of programs to detect arithmetic anomalies,"Much interest, especially by banks and insurance companies, is paid to detect arithmetic anomalies and inexactness of arithmetic expressions. Numerous examples in the past show that although mathematical methods for correct implementation of arithmetic expressions exist and are well understood, many programs contain arithmetic anomalies, impreciseness or faults. Software tests based on conventional coverage criteria (F. Belli, 1998) and functional tests are not well suited for detection of these faults. The detection of arithmetic anomalies by these methods strongly depends on the adequateness of test cases. The selection of effective test cases needs a lot of effort to detect context-sensitive arithmetic inexactness. The authors introduce a novel approach for detecting arithmetic anomalies. 
The method is based on the specification of fault classes combined with the transformation of the program under test into a predicate logic model. The number of potential context-sensitive faults is deployed as a criterion to precisely select modules in large software systems to increase the test effectiveness",2000,0, 97,Software product improvement with inspection. A large-scale experiment on the influence of inspection processes on defect detection in software requirements documents,"In the early stages of software development, inspection of software documents is the most effective quality assurance measure to detect defects and provides timely feedback on quality to developers and managers. The paper reports on a controlled experiment that investigates the effect of defect detection techniques on software product and inspection process quality. The experiment compares defect detection effectiveness and efficiency of a general reading technique that uses checklist based reading, and a systematic reading technique, scenario based reading, for requirements documents. On the individual level, effectiveness was found to be higher for the general reading technique, while the focus of the systematic reading technique led to a higher yield of severe defects compared to the general reading technique. On a group level, which combined inspectors' contributions, the advantage of a reading technique regarding defect detection effectiveness depended on the size of the group, while the systematic reading technique generally exhibited better defect detection efficiency",2000,0, 98,Scalable hardware-algorithm for mark-sweep garbage collection,"The memory-intensive nature of object-oriented languages such as C++ and Java has created the need for high-performance dynamic memory management. Object-oriented applications often generate higher memory intensity in the heap region. Thus, a high-performance memory manager is needed to cope with such applications. As today's VLSI technology advances, it becomes increasingly attractive to map software algorithms such as malloc(), free() and garbage collection into hardware. This paper presents a hardware design of a sweeping function (for mark-and-sweep garbage collection) that fully utilizes the advantages of combinational logic. In our scheme, the bit sweep can detect and sweep the garbage in a constant time. Bit-map marking in software can improve the cache performance and reduce number of page faults; however, it often requires several instructions to perform a single mark. In our scheme, only one hardware instruction is required per mark. Moreover, since the complexity of the sweeping phase is often higher than the marking phase, the garbage collection time may be substantially improved. The hardware complexity of the proposed scheme (bit-sweeper) is O(n), where n represents the size of the bit map",2000,0, 99,Building Bayesian network-based information retrieval systems,Bayesian networks are suitable models to deal with the information retrieval problem because they are appropriate tools to manage the intrinsic uncertainty with which this area is pervaded. 
In this paper we introduce several modifications to previous work in this field adding new features and showing how good retrieval effectiveness can be achieved by improving the quality of the Bayesian networks used in the model and tuning some of their parameters,2000,0, 100,A case study in on-line intelligent sensing,"A new method is described for online detection of parameter changes in a sensor. This is based on work by Yung and Clarke (1989) which employs a local ARIMA model of the sensor output to generate an innovation sequence. A statistical test, which quantifies the change to the variance of an innovation sequence, is developed and used to provide a decision process based on a likelihood ratio of probabilities. Real-time experimental results for detecting a change in a thermocouple time-constant are presented",2000,0, 101,A practical classification-rule for software-quality models,"A practical classification rule for a SQ (software quality) model considers the needs of the project to use a model to guide targeting software RE (reliability enhancement) efforts, such as extra reviews early in development. Such a rule is often more useful than alternative rules. This paper discusses several classification rules for SQ models, and recommends a generalized classification rule, where the effectiveness and efficiency of the model for guiding software RE efforts can be explicitly considered. This is the first application of this rule to SQ modeling that we know of. Two case studies illustrate application of the generalized classification rule. A telecommunication-system case-study models membership in the class of fault-prone modules as a function of the number of interfaces to other modules. A military-system case-study models membership in the class of fault-prone modules as a function of a set of process metrics that depict the development history of a module. These case studies are examples where balanced misclassification rates resulted in more useful and practical SQ models than other classification rules",2000,0, 102,Analysis and simulation of smart antennas for GSM and DECT in indoor environments based on ray launching modeling techniques,"A software simulation of smart antennas mounted on base stations in a wireless communication system is described. We begin with the introduction of the smart antenna considered and its basic operation: the adaptive arrays based on a temporal reference. Next, the scheme of the simulated system is analysed, and in particular the characterization of the indoor mobile radio channel with ray launching techniques. Finally, we show some simulation results, making reference to the reduction of the uncorrelated multipath contributions, the quality improvement and the rejection of co-channel interference",2000,0, 103,On test application time and defect detection capabilities of test sets for scan designs,"The test application time of test sets for scan designs can be reduced (without reducing the fault coverage) by removing some scan operations, and increasing the lengths of the primary input sequences applied between scan operations. In this paper, we study the effects of such a compaction procedure on the ability of a test set to detect defects. Defect detection is measured by the number of times the test set detects each stuck-at fault, which was shown to be related to the defect coverage of the test set.
We also propose a compaction procedure that affects the numbers of detections of stuck-at faults in a controlled way",2000,0, 104,Algorithm-based fault tolerance for spaceborne computing: basis and implementations,"We describe and test the mathematical background for using checksum methods to validate results returned by a numerical subroutine operating in a fault-prone environment that causes unpredictable errors in data. We can treat subroutines whose results satisfy a necessary condition of a linear form; the checksum tests compliance with this necessary condition. These checksum schemes are called algorithm-based fault tolerance (ABFT). We discuss the theory and practice of setting numerical tolerances to separate errors caused by a fault from those inherent in finite-precision numerical calculations. Two series of tests are described. The first tests the general effectiveness of the linear ABFT schemes we propose, and the second verifies the correct behavior of our parallel implementation of them. We find that under simulated fault conditions, it is possible to choose a fault detection scheme that for average case matrices can detect 99% of faults with no false alarms, and that for a worst-case matrix population can detect 80% of faults with no false alarms",2000,0, 105,Detailed radiation fault modeling of the Remote Exploration and Experimentation (REE) first generation testbed architecture,"The goal of the NASA HPCC Remote Exploration and Experimentation (REE) Project is to transfer commercial supercomputing technology into space. The project will use state of the art, low-power, non-radiation-hardened, COTS hardware chips and COTS software to the maximum extent possible, and will rely on software-implemented fault tolerance to provide the required levels of availability and reliability. We outline the methodology used to develop a detailed radiation fault model for the REE Testbed architecture. The model addresses the effects of energetic protons and heavy ions which cause single event upset and single event multiple upset events in digital logic devices and which are expected to be the primary fault generation mechanism. Unlike previous modeling efforts, this model will address fault rates and types in computer subsystems at a sufficiently fine level of granularity (i.e., the register level) that specific software and operational errors can be derived. We present the current state of the model, model verification activities and results to date, and plans for the future. Finally, we explain the methodology by which this model will be used to derive application-level error effects sets. These error effects sets will be used in conjunction with our Testbed fault injection capabilities and our applications' mission scenarios to replicate the predicted fault environment on our suite of onboard applications",2000,0, 106,A method for intellectualized detection and fault diagnosis of vacuum circuit breakers,"In this paper a method for intellectualized detection and fault diagnosis of vacuum circuit breakers is introduced. The system consists of sensors, single-chips, measuring circuits, processing circuits, controlling circuits, extended ports, communication interface, etc. It can monitor on-line the condition of a vacuum circuit breaker, analyze its change tendency, identify and locate and display the detectable faults. This paper describes the main detecting principles and diagnostic foundations. 
The hardware structure and software design are also given",2000,0, 107,"A reusable state-based guidance, navigation and control architecture for planetary missions","JPL has embarked on the Mission Data System (MDS) project to produce a reusable, integrated flight and ground software architecture. This architecture will then be adapted by future JPL planetary projects to form the basis of their flight and ground software. The architecture is based on identifying the states of the system under consideration. States include aspects of the system that must be controlled to accomplish mission objectives, as well as aspects that are uncontrollable but must be known. The architecture identifies methods to measure, estimate, model, and control some of these states. Some states are controlled by goals, and the natural hierarchy of the system is employed by recursively elaborating goals until primitive control actions are reached. Fault tolerance emerges naturally from this architecture. Failures are detected as discrepancies between state and model-based predictions of state. Fault responses are handled either by re-elaboration of goals, or by failures of goals invoking re-elaboration at higher levels. Failure modes are modelled as possible behaviors of the system, with corresponding state estimation processes. Architectural patterns are defined for concepts such as states, goals, and measurements. Aspects of state are captured in a state-analysis database. UML is used to capture mission requirements as Use Cases and Scenarios. Application of the state-based concepts to specific states is also captured in UML, achieving architectural consistency by adapting base classes for all architectural patterns",2000,0, 108,Protection of gate movement in hydroelectric power stations,"Movement of the gates is an everyday task performed on the dam of a hydroelectric power station. This operation is often controlled remotely by measuring the positioning of the gates and the level of the current in the driving motors. This method of control cannot, however, detect anomalies, such as asymmetric movement of gates, faults in drive gearwheel etc. We therefore decided to devise a new improved system for the protection of gate movement which is described in our paper. It is based on measuring the forces applied to the transmission construction. The transducers with resistive strain gauges are mounted on the frame bearings and the strains are subsequently measured. The output signal from the transducer is proportional to a force applied to the frame. The transducers are installed at the points of the largest strain. For the protection against uneven movement of the left and right chains, the strain transmitters are inserted in the bearings of the main gearwheel, to measure the compression. The whole system is controlled by the microprocessor. The details on sensors, the electronic instrumentation needed, and the software of the controlling computer, are also described in the paper. This system has been tested, and regularly used, on the power stations of Drava river in Slovenia.",2000,0, 109,An empirical investigation of an object-oriented software system,"The paper describes an empirical investigation into an industrial object oriented (OO) system comprised of 133000 lines of C++. The system was a subsystem of a telecommunications product and was developed using the Shlaer-Mellor method (S. Shlaer and S.J. Mellor, 1988; 1992).
From this study, we found that there was little use of OO constructs such as inheritance, and therefore polymorphism. It was also found that there was a significant difference in the defect densities between those classes that participated in inheritance structures and those that did not, with the former being approximately three times more defect-prone. We were able to construct useful prediction systems for size and number of defects based upon simple counts such as the number of states and events per class. Although these prediction systems are only likely to have local significance, there is a more general principle that software developers can consider building their own local prediction systems. Moreover, we believe this is possible, even in the absence of the suites of metrics that have been advocated by researchers into OO technology. As a consequence, measurement technology may be accessible to a wider group of potential users",2000,0, 110,Quantitative analysis of faults and failures in a complex software system,"The authors describe a number of results from a quantitative study of faults and failures in two releases of a major commercial software system. They tested a range of basic software engineering hypotheses relating to: the Pareto principle of distribution of faults and failures; the use of early fault data to predict later fault and failure data; metrics for fault prediction; and benchmarking fault data. For example, we found strong evidence that a small number of modules contain most of the faults discovered in prerelease testing and that a very small number of modules contain most of the faults discovered in operation. We found no evidence to support previous claims relating module size to fault density nor did we find evidence that popular complexity metrics are good predictors of either fault-prone or failure-prone modules. We confirmed that the number of faults discovered in prerelease testing is an order of magnitude greater than the number discovered in 12 months of operational use. The most important result was strong evidence of a counter-intuitive relationship between pre- and postrelease faults; those modules which are the most fault-prone prerelease are among the least fault-prone postrelease, while conversely, the modules which are most fault-prone postrelease are among the least fault-prone prerelease. This observation has serious ramifications for the commonly used fault density measure. Our results provide data-points in building up an empirical picture of the software development process",2000,0, 111,Exception handling in workflow management systems,"Fault tolerance is a key requirement in process support systems (PSS), a class of distributed computing middleware encompassing applications such as workflow management systems and process centered software engineering environments. A PSS controls the flow of work between programs and users in networked environments based on a metaprogram (the process). The resulting applications are characterized by a high degree of distribution and a high degree of heterogeneity (properties that make fault tolerance both highly desirable and difficult to achieve). We present a solution for implementing more reliable processes by using exception handling, as it is used in programming languages, and atomicity, as it is known from the transaction concept in database management systems. 
We describe the mechanism incorporating both transactions and exceptions and present a validation technique that allows assessing the correctness of process specifications",2000,0, 112,Computer analysis of LOS microwaves links clear air performance,"Microwave links have to be designed such that propagation effects do not reduce the quality of the transmitted signals. Measurements and the derived propagation parameters are analysed and discussed, for Cluj-Napoca county, in order to improve future planning of the radio links",2000,0, 113,Task-oriented modelling of autonomous decentralised systems,"An ongoing project of the Programme for Highly Dependable Systems (PHDS) at the University of the Witwatersrand is the development of a dependable decentralised system using readily available hardware and software components. An experimental system has been developed to support multiple-task applications with different levels of criticality. Fault-tolerant protocols are used to detect faults, to mask incorrect results from faulty nodes. A task in a faulty node can be recovered through a system reconfiguration or task reallocation. A faulty node can be repaired and reintegrated into the system. This paper focuses on modelling the system under the occurrence of faults, reconfiguration and repair. The method developed can be used to evaluate an individual task's reliability, risk and availability",2000,0, 114,Predicting and measuring quality of service for mobile multimedia,We show how an understanding of human perception and the simulation of a mobile IP network may be used to tackle the relevant issues of providing acceptable quality of service for mobile multimedia,2000,0, 115,Model checking of workflow schemas,"Practical experience indicates that the definition of real-world workflow applications is a complex and error-prone process. Existing workflow management systems provide the means, in the best case, for very primitive syntactic verification, which is not enough to guarantee the overall correctness and robustness of workflow applications. The paper presents an approach for formal verification of workflow schemas (definitions). Workflow behaviour is modelled by means of an automata-based method, which facilitates exhaustive compositional reachability analysis. The workflow behaviour can then be analysed and checked for safety and liveness properties. The model generation and the analysis procedure are governed by well-defined rules that can be fully automated. Therefore, the approach is accessible by designers who are not experts in formal methods",2000,0, 116,Integrating reliability and timing analysis of CAN-based systems,"The paper outlines and illustrates a reliability analysis method developed with a focus on CAN based automotive systems. The method considers the effect of faults on schedulability analysis and its impact on the reliability estimation of the system, and attempts to integrate both to aid system developers. We illustrate the method by modeling a simple distributed antilock braking system, showing that even in cases where the worst-case analysis deems the system unschedulable, it may be proven to satisfy its timing requirements with a sufficiently high probability. From a reliability and cost perspective, the paper underlines the tradeoffs between timing guarantees, the level of hardware and software faults, and per-unit cost",2000,0, 117,Data mining algorithms for web pre-fetching,"To speed up fetching web pages, this paper gives an intelligent technology of web pre-fetching.
We use a simplified WWW data model to represent the data in the cache of web browser to mine the association rules. We store these rules in a knowledge base so as to predict the user's actions. Intelligent agents are responsible for mining the users' interest and pre-fetching web pages, based on the interest association repository. In this way user browsing time has been reduced transparently",2000,0, 118,Can metrics help to bridge the gap between the improvement of OO design quality and its automation?,"During the evolution of object-oriented (OO) systems, the preservation of a correct design should be a permanent quest. However, for systems involving a large number of classes and that are subject to frequent modifications, the detection and correction of design flaws may be a complex and resource-consuming task. The use of automatic detection and correction tools can be helpful for this task. Various works have proposed transformations that improve the quality of an OO system while preserving its behavior. In this paper, we investigate whether some OO metrics can be used as indicators for automatically detecting situations where a particular transformation can be applied to improve the quality of a system. The detection process is based on analyzing the impact of various transformations on these OO metrics using quality estimation models",2000,0, 119,C/C++ conditional compilation analysis using symbolic execution,"Conditional compilation is one of the most powerful parts of a C/C++ environment available for building software for different platforms with different feature sets. Although conditional compilation is powerful, it can be difficult to understand and is error-prone. In large software systems, file inclusion, conditional compilation and macro substitution are closely related and are often largely interleaved. Without adequate tools, understanding complex header files is a tedious task. This practice may even be complicated as the hierarchies of header files grow with projects. This paper presents our experiences of studying conditional compilation based on the symbolic execution of preprocessing directives. Our two concrete goals are: for any given preprocessor directive or C/C++ source code line, finding the simplest sufficient condition to reach/compile it, and finding the full condition to reach/compile that code line. Two different strategies were used to achieve these two goals. A series of experiments conducted on the Linux kernel are presented",2000,0, 120,Techniques of maintaining evolving component-based software,"Component based software engineering has been increasingly adopted for software development. Such an approach using reusable components as the building blocks for constructing software, on one hand, embellishes the likelihood of improving software quality and productivity; on the other hand, it consequently involves frequent maintenance activities, such as upgrading third party components or adding new features. The cost of maintenance for conventional software can account for as much as two-thirds of the total cost, and it can likely be even more for maintaining component based software. Thus, an effective maintenance technique for component based software is strongly desired. The authors present a technique that can be applied on various maintenance activities over component based software systems. 
The technique proposed utilizes a static analysis to identify the interfaces, events and dependence relationships that would be affected by the modification in the maintenance activity. The results obtained from the static analysis along with the information of component interactions recorded during the execution of each test case are used to guide test selection in the maintenance phase. The empirical results show that with 19% effort our technique detected 71% of the faults in an industrial component based system, which demonstrates the great potential effectiveness of the technique",2000,0, 121,A formal mechanism for assessing polymorphism in object-oriented systems,"Although quality is not easy to evaluate since it is a complex concept composed of different aspects, several properties that make a good object-oriented design have been recognized and widely accepted by the software engineering community. We agree that both the traditional and the new object-oriented properties should be analyzed in assessing the quality of object-oriented design. However, we believe that it is necessary to pay special attention to the polymorphism concept and metric, since they should be considered one of the key concerns in determining the quality of an object-oriented system. In this paper, we have given a rigorous definition of polymorphism. On top of this formalization we propose a metric that provides an objective and precise mechanism to detect and quantify dynamic polymorphism. The metric takes information coming from the first stages of the development process, giving developers the opportunity to evaluate and improve the quality of the software product early. Finally, a first approach to the theoretical validation of the metric is presented",2000,0, 122,A framework for quantifying error proneness in software,"This paper proposes a framework for assessing quantitatively the error-proneness of computer program modules. The model uses an information theory approach to derive an error proneness index that can be used in a practical way. Debugging and testing take at least 40% of a software project's effort, but do not uncover all defects. While current research looks at identifying problem-modules in a program, no attempt is made for a quantitative error-proneness evaluation. By quantitatively assessing a module's susceptibility to error, we are able to identify error prone paths in a program and enhance testing efficiency. The goal is to identify error prone paths in a program using genetic algorithms. This increases software reliability, aids in testing design, and reduces software development cost",2000,0, 123,Software quality prediction using mixture models with EM algorithm,"The use of the statistical technique of mixture model analysis as a tool for early prediction of fault-prone program modules is investigated. The expectation-maximum likelihood (EM) algorithm is engaged to build the model. By only employing software size and complexity metrics, this technique can be used to develop a model for predicting software quality even without the prior knowledge of the number of faults in the modules. In addition, Akaike Information Criterion (AIC) is used to select the model number which is assumed to be the class number the program modules should be classified.
The technique is successful in classifying software into fault-prone and non fault-prone modules with a relatively low error rate, providing a reliable indicator for software quality prediction",2000,1, 124,Quality metrics of object oriented design for software development and re-development,"The quality of software has an important bearing on the financial and safety aspects in our daily life. Assessing quality of software at the design level will provide ease and higher accuracy for users. However, there is a great gap between the rapid adoption of Object Oriented (OO) techniques and the slow speed of developing corresponding object oriented metric measures, especially object oriented design measures. To tackle this issue, we look into measuring the quality of Object Oriented designs during both software development and re-development processes. A set of OO design metrics has been derived from the existing work found in literature and been further extended. The paper also presents software tools for assisting software re-engineering using the metric measures developed; this has been illustrated with an example of the experiments conducted during our research, finally concluded with the lessons learned and intended further work",2000,0, 125,Testing for imperfect integration of legacy software components,"In the manufacturing domain, few new distributed systems are built ground-up; most contain wrapped legacy components. While the legacy components themselves are already well-tested, imperfect integration can introduce subtle faults that are outside the prime target area of generic integration and system tests. One might postulate that focused testing for integration faults could improve the yield of detected faults when used as part of a balanced integration and system test effort. We define such a testing strategy and describe a trial application to a prototype control system. The results suggest that focused testing does not add significant value over traditional black-box testing",2000,0, 126,The 9 quadrant model for code reviews,"Discusses a decision-making model which can be used to determine the efficiency of a code review process. This model is based on statistical techniques such as control charts. The model has nine quadrants, each of which depicts a range of values of the cost and yield of a code review. The efficiency of the code review in detecting defects is determined by taking inputs from past data, in terms of the costs and yields of those code reviews. This estimate also provides an in-process decision-making tool. Other tools can be used effectively, in conjunction with this model, to plan for code reviews and to forecast the number of defects that could be expected in the reviews. This model can be successfully used to decide what the next step of the operational process should be. The decisions taken using this model help to reduce the number of defects present in the software delivered to the customer",2000,0, 127,Flexible Network Layer in dynamic networking architecture,"The authors propose an architecture of global communication networks with dynamic functions based on the concept of Flexible Network. The dynamic function enhances the capability of communication networks to deal with various changes detected in both human users and networked environment. In our architecture, a new Flexible Network Layer is introduced between the application layer and the IP network layer of the global communication networks. 
To elaborate the functions of the Flexible Network Layer, we demonstrate an agent based model of the Flexible Network Layer for a multimedia communication application and discuss the properties of the proposed architecture",2000,0, 128,Fault tolerant shared-object management system with dynamic replication control strategy,"This paper is based on a dynamic replication control strategy for minimizing communications costs. In dynamic environments where the access pattern to shared resources cannot be predicted statically, it is required to monitor such parameters during the whole lifetime of the system so as to adapt it to new requirements. The shared-object management system is implemented in a centralized manner in which a master processor deals with the serialization of invocations. On one hand, we attempt to provide fault tolerance as a way to adjust the system parameters to work only with a set of correct processors so as to enhance system functionality. On the other hand, we attempt to furnish availability by masking the failure of the master processor. A new master processor is elected that resumes the master processor's processing. Our shared-object management system's modularity is realized through a meta-level implementation",2000,0, 129,Formalizing UML class diagrams-a hierarchical predicate transition net approach,"Unified Modeling Language (UML) has been widely accepted as the standard object-oriented development methodology in the software industry. However, many graphical notations in UML only have informal English definitions and thus are error-prone and cannot be formally analyzed. We present our preliminary results on an approach to formally define UML class diagrams using hierarchical predicate transition nets (HPrTNs). We show how to define the main concepts related to class diagrams using HPrTN elements",2000,0, 130,Dependability of complex software systems with component upgrading,"Some very large and complex systems, such as telecommunication systems, must (and typically do) exhibit exceptional dependability. These systems are seldom totally replaced with a new system because of the increased likelihood of a lapse in service. Rather, systems are upgraded incrementally while operational, albeit that this often involves large-scale software changes. It is especially important then to ensure that new or replacement components are ready for online installation before they are incorporated into an operational system. It is often costly and time-consuming to determine the readiness of new components for installation. Even then, the result may be unpredictable. Hence, we have developed effective and economical methods for software component verification that ensure and increase the overall system dependability. We tested our technologies using a telecommunication application, an Internet call-agent. Our experimental results show that our dynamic design analysis approach reduces computational costs and detects more errors than conventional approaches. The more frequently that changes are made, the greater the savings in the time required for model analysis and property prediction",2000,0, 131,Defect-based reliability analysis for mission-critical software,"Most software reliability methods have been developed to predict the reliability of a program using only data gathered during the testing and validation of a specific program.
Hence, the confidence that can be attained in the reliability estimate is limited since practical resource constraints can result in a statistically small sample set. One exception is the Orthogonal Defect Classification (ODC) method, which uses data gathered from several projects to track the reliability of a new program. Combining ODC with root-cause analysis can be useful in many applications where it is important to know the reliability of a program for a specific type of fault. By focusing on specific classes of defects, it becomes possible to (a) construct a detailed model of the defect and (b) use data from a large number of programs. In this paper, we develop one such approach and demonstrate its application to modeling Y2K defects",2000,0, 132,Applying reflective middleware techniques to optimize a QoS-enabled CORBA component model implementation,"Although existing CORBA specifications, such as Real-time CORBA and CORBA Messaging, address many end-to-end quality of service (QoS) properties, they do not define strategies for configuring these properties into applications flexibly, transparently, and adaptively. Therefore, application developers must make these configuration decisions manually and explicitly, which is tedious, error-prone, and often suboptimal. Although the recently adopted CORBA Component Model (CCM) does define a standard configuration framework for packaging and deploying software components, conventional CCM implementations focus on functionality rather than adaptive quality of service, which makes them unsuitable for next generation applications with demanding QoS requirements. The paper presents three contributions to the study of middleware for QoS-enabled component based applications. It outlines reflective middleware techniques designed to adaptively: (1) select optimal communication mechanisms; (2) manage QoS properties of CORBA components in their containers; and (3) (re)configure selected component executors dynamically. Based on our ongoing research on CORBA and the CCM, we believe the application of reflective techniques to component middleware will provide a dynamically adaptive and (re)configurable framework for COTS software that is well-suited for the QoS demands of next generation applications",2000,0, 133,Design of an improved watchdog circuit for microcontroller-based systems,"This paper presents an improved design for a watchdog circuit. Previously, watchdog timers detected refresh inputs that were slower than usual. If a failure causes the microcontroller to produce faster than usual refresh inputs, the watchdog will not detect it. This new design detects failures that produce faster than usual as well as slower than usual refresh inputs. This will greatly improve the reliability of the system protected by this new design.",2000,0, 134,IEEE 1232 and P1522 standards,"The 1232 family of standards were developed to provide standard exchange formats and software services for reasoning systems used in system test and diagnosis. The exchange formats and services are based on a model of information required to support test and diagnosis. The standards were developed by the Diagnostic and Maintenance Control (D&MC) subcommittee of IEEE SCC20. The current efforts by the D&MC are a combined standard made up of the 1232 family, and a standard on Testability and Diagnosability Metrics, P1522. The 1232 standards describe a neutral exchange format so one diagnostic reasoner can exchange model information with another diagnostic reasoner.
In addition, software interfaces are defined whereby diagnostic tools can be developed to process the diagnostic information in a consistent and reliable way. The objective of the Testability and Diagnosability Metrics standard is to provide notionally correct and mathematically precise definitions of testability measures that may be used to either measure the testability characteristics of a system, or predict the testability of a system. The end purpose is to provide an unambiguous source for definitions of common and uncommon testability and diagnosability terms such that each individual encountering it can know precisely what that term means. This paper describes the 1232 and P1522 standards and details the recent changes in the Information models, restructured higher order services and simplified conformance requirements",2000,0, 135,Building trust into OO components using a genetic analogy,"Despite the growing interest for component based systems, few works tackle the question of the trust we can bring into a component. The paper presents a method and a tool for building trustable OO components. It is particularly adapted to a design-by-contract approach, where the specification is systematically derived into executable assertions (invariant properties, pre/postconditions of methods). A component is seen as an organic set composed of a specification, a given implementation and its embedded test cases. We propose an adaptation of mutation analysis to the OO paradigm that checks the consistency between specification/implementation and tests. Faulty programs, called mutants, are generated by systematic fault injection in the implementation. The quality of tests is related to the mutation score, i.e. the proportion of faulty programs it detects. The main contribution is to show how a similar idea can be used in the same context to address the problem of effective test optimization. To map the genetic analogy to the test optimization problem, we consider mutant programs to be detected as the initial preys population and test cases as the predators population. The test selection consists of mutating the predator test cases and crossing them over in order to improve their ability to kill the prey population. The feasibility of component validation using such a Darwinian model and its usefulness for test optimization are studied",2000,0, 136,Thresholds for object-oriented measures,"A practical application of object oriented measures is to predict which classes are likely to contain a fault. This is contended to be meaningful because object oriented measures are believed to be indicators of psychological complexity, and classes that are more complex are likely to be faulty. Recently, a cognitive theory was proposed suggesting that there are threshold effects for many object oriented measures. This means that object oriented classes are easy to understand as long as their complexity is below a threshold. Above that threshold their understandability decreases rapidly, leading to an increased probability of a fault. This occurs, according to the theory, due to an overflow of short-term human memory. If this theory is confirmed, then it would provide a mechanism that would explain the introduction of faults into object oriented systems, and would also provide some practical guidance on how to design object oriented programs. The authors empirically test this theory on two C++ telecommunications systems. 
They test for threshold effects in a subset of the Chidamber and Kemerer (CK) suite of measures (S. Chidamber and C. Kemerer, 1994). The dependent variable was the incidence of faults that lead to field failures. The results indicate that there are no threshold effects for any of the measures studied. This means that there is no value for the studied CK measures where the fault-proneness changes from being steady to rapidly increasing. The results are consistent across the two systems. Therefore, we can provide no support for the posited cognitive theory",2000,0, 137,Quantitative software reliability modeling from testing to operation,"We first describe how several existing software reliability growth models based on nonhomogeneous Poisson processes (NHPPs) can be derived based on a unified theory for NHPP models. Under this general framework, we can verify existing NHPP models and derive new NHPP models. The approach covers a number of known models under different conditions. Based on these approaches, we show a method of estimating and computing software reliability growth during the operational phase. We can use this method to describe the transitions from the testing phase to the operational phase. That is, we propose a method of predicting the fault detection rate to reflect changes in the user's operational environments. The proposed method offers a quantitative analysis on software failure behavior in field operation and provides useful feedback information to the development process",2000,0, 138,Generating test cases for GUI responsibilities using complete interaction sequences,"Testing graphical user interfaces (GUI) is a difficult problem due to the fact that the GUI possesses a large number of states to be tested, the input space is extremely large due to different permutations of inputs and events which affect the GUI, and complex GUI dependencies may exist. There has been little systematic study of this problem yielding a resulting strategy which is effective and scalable. The proposed method concentrates upon user sequences of GUI objects and selections which collaborate, called complete interaction sequences (CIS), that produce a desired response for the user. A systematic method to test these CIS utilizes a finite-state model to generate tests. The required tests can be substantially reduced by identifying components of the CIS that can be tested separately. Since consideration is given to defects totally within each CIS, and the components reduce required testing further, this approach is scalable. An empirical investigation of this method shows that substantial reduction in tests can still detect the defects in the GUI. Future research will prioritize testing related to the CIS testing for maximum benefit if testing time is limited",2000,0, 139,Assessing the cost-effectiveness of inspections by combining project data and expert opinion,"There is a general agreement among software engineering practitioners that software inspections are an important technique to achieve high software quality at a reasonable cost. However, there are many ways to perform such inspections and many factors that affect their cost-effectiveness. It is therefore important to be able to estimate this cost-effectiveness in order to monitor it, improve it, and convince developers and management that the technology and related investments are worthwhile. This work proposes a rigorous but practical way to do so.
In particular, a meaningful model to measure cost-effectiveness is selected and a method to determine the cost-effectiveness by combining project data and expert opinion is proposed. To demonstrate the feasibility of the proposed approach, the results of a large-scale industrial case study are presented",2000,0, 140,Contributing to the bottom line: optimizing reliability cost schedule tradeoff and architecture scalability through test technology,"A challenging problem in software testing is finding the optimal point at which costs justify the stop-test decision. We first present an economic model that can be used to evaluate the consequences of various stop-test decisions. We then discuss two approaches for assessing performance, automated load test generation in the context of empirical testing and performance modeling, and illustrate how these techniques can affect the stop-test decision. We then illustrate the application of these two techniques to evaluating the performance of Web servers that perform significant server-side processing through object-oriented (OO) computing. Implications of our work for Web server performance evaluation in general are discussed",2000,0, 141,How to measure the impact of specific development practices on fielded defect density,"This author has mathematically correlated specific development practices to defect density and probability of on time delivery. She summarizes the results of this ongoing study that has evolved into a software prediction modeling and management technique. She has collected data from 45 organizations developing software primarily for equipment or electronic systems. Of these 45 organizations, complete and unbiased delivered defect data and actual schedule delivery data was available for 17 organizations. She presents the mathematical correlation between the practices employed by these organizations and defect density. This correlation can be and is used to: predict defect density; and improve software development practices for the best return on investment",2000,0, 142,A software falsifier,"A falsifier is a tool for discovering errors by static source-code analysis. Its goal is to discover them while requiring minimal programmer effort. In contrast to lint-like tools or verifiers, which try to maximize the number of errors reported at the expense of allowing false errors, a falsifier's goal is to guarantee no false errors. To further minimize programmer effort, no specification or extra information about the program is required. That, however, does not preclude project-specific information from being built in. The class of errors that are detectable without any specification is important not only because of the low cost of detection, but also because it includes errors of portability, irreproducible behavior, etc., which are very expensive to detect by testing. This paper describes the design and implementation of such a falsifier, and reports on experience with its use for design automation software. The main contribution of this work lies in combining data-flow analysis with symbolic execution to take advantage of their relative advantages",2000,0, 143,Improving tree-based models of software quality with principal components analysis,"Software quality classification models can predict which modules are to be considered fault-prone, and which are not, based on software product metrics, process metrics and execution metrics. Such predictions can be used to target improvement efforts to those modules that need them the most.
Classification-tree modeling is a robust technique for building such software quality models. However, the model structure may be unstable, and accuracy may suffer when the predictors are highly correlated. This paper presents an empirical case study of four releases of a very large telecommunications system, which shows that the tree-based models can be improved by transforming the predictors with principal components analysis, so that the transformed predictors are not correlated. The case study used the regression-tree algorithm in the S-Plus package and then applied a general decision rule to classify the modules",2000,0, 144,A methodology for architectural-level risk assessment using dynamic metrics,"Risk assessment is an essential process of every software risk management plan. Several risk assessment techniques are based on the subjective judgement of domain experts. Subjective risk assessment techniques are human-intensive and error-prone. Risk assessment should be based on product attributes that we can quantitatively measure using product metrics. This paper presents a methodology for risk assessment at the early stages of the development lifecycle, namely the architecture level. We describe a heuristic risk assessment methodology that is based on dynamic metrics obtained from UML specifications. The methodology uses dynamic complexity and dynamic coupling metrics to define complexity factors for the architecture elements (components and connectors). Severity analysis is performed using FMEA (failure mode and effect analysis), as applied to architecture simulation models. We combine severity and complexity factors to develop heuristic risk factors for the architecture components and connectors. Based on component dependency graphs that were developed earlier for reliability analysis, and using analysis scenarios, we develop a risk assessment model and a risk analysis algorithm that aggregates the risk factors of components and connectors to the architectural level. We show how to analyze the overall risk factor of the architecture as the function of the risk factors of its constituting components and connectors. A case study of a pacemaker is used to illustrate the application of the methodology",2000,0, 145,On the repeatability of metric models and metrics across software builds,"We have developed various software metrics models over the years: Boolean discriminant functions (BDFs); the Kolmogorov-Smirnov distance; derivative calculations for assessing achievable quality; a stopping rule; point and confidence interval estimates of quality; relative critical value deviation metrics; and nonlinear regression functions. We would like these models and metrics to be repeatable across the n builds of a software system. The advantage of repeatability is that models and metrics only need to be developed and validated once on build 1, and then applied n-1 times without modification to subsequent builds, with considerable savings in analysis and computational effort. In practical terms, this approach involves using the same model parameters that were validated and applying them unchanged on subsequent builds. The disadvantage is that the quality and metrics data of builds 2, ..., n, which varies across builds, is not utilized. We make a comparison of this approach with one that involves validating models and metrics on each build i and applying them only on build i+1, and then repeating the process.
The advantage of this approach is that all available data are used in the models and analysis but at considerable cost in effort. We report on experiments involving large sets of discrepancy reports and metrics data on the Space Shuttle flight software, where we compare the predictive accuracy and effort of the two approaches for BDFs, critical values, derivative quality and inspection calculations, and the stopping rule",2000,0, 146,Modeling fault-prone modules of subsystems,"Software developers are very interested in targeting software enhancement activities prior to release, so that reworking of faulty modules can be avoided. Credible predictions of which modules are likely to have faults discovered by customers can be the basis for selecting modules for enhancement. Many case studies in the literature build models to predict which modules will be fault-prone without regard to the subsystems defined by the system's functional architecture. Our hypothesis is this: models that are specially built for subsystems will be more accurate than a system-wide model applied to each subsystem's modules. In other words, the subsystem that a module belongs to can be valuable information in software quality modeling. This paper presents an empirical case study which compared software quality models of an entire system to models of a major functional subsystem. The study modeled a very large telecommunications system with classification trees built by the CART (classification and regression trees) algorithm. For predicting subsystem quality, we found that a model built with training data on the subsystem alone was more accurate than a similar model built with training data on the entire system. We concluded that the characteristics of the subsystem's modules were not similar to those of the system as a whole, and thus, information on subsystems can be valuable",2000,0, 147,Software reliability and maintenance concept used for automatic call distributor MEDIO ACD,"The authors present the software reliability and maintenance concept, which is used in the software development, testing, and maintenance process, for automatic call distributor MEDIO ACD. The concept has been successfully applied to systems, which are installed and fully operational in Moscow and Saint Petersburg, Russia. The authors concentrate on two main issues: (i) the set of fault-tolerant mechanisms needed for system exploitation (error logging, checkpoint-restart, overload protection and tandem configuration support); (ii) the MEDIO ACD software maintenance concept, in which the quality of the new software update is predicted on the basis of the current update's metrics and quality, and the new update's metrics. This forecast aids software maintenance efficiency and cost reduction",2000,0, 148,Expanding design pattern to support parallel programming,"The design pattern concept is widely used in large object-oriented software development, but this should not be limited to the object-oriented field: it can be used in many other areas. Explicit parallel programming is well-known to be complex and error-prone, and design patterns can ease this work. This paper introduces a pattern-based approach for parallel programming, in which we classify design patterns into two levels to support (a) the parallel algorithm design phase and (b) the parallel coding phase, respectively.
Through this approach, a programmer doesn't need much additional knowledge about parallel computing; what he needs to do is to describe the problem he wants to solve and offer some parameters, sequential code or components. We demonstrate this approach with a case study in this paper",2000,0, 149,WinSURE: a new Windows interface to the SURE program,"WinSURE is a new interface to the Semi-Markov Range Evaluator (SURE) program, a reliability analysis program used for calculating upper and lower bounds on the operational and death state probabilities for a large class of semi-Markov models. The SURE program was developed in the late 1980s for the Unix environment and has been distributed free-of-charge for over a decade. The WinSURE program is a port of the SURE program to the Windows 98 operating system, providing the same functionality as the original program, but with a Windows-based graphical user interface. The program provides a rapid computational capability for semi-Markov models useful in describing the fault-handling behavior of fault-tolerant computer systems. The only modeling restriction imposed by the program is that the nonexponential recovery transitions must be fast in comparison to the mission time, a desirable attribute of all fault-tolerant reconfigurable systems. The WinSURE reliability analysis method utilizes a fast bounding theorem based on means and variances that enables the calculation of upper and lower bounds on system reliability. This paper presents an overview of the functionality of the WinSURE program, describes the graphical user interface, and illustrates the use of the program on some simple example problems",2000,0, 150,Software reliability prediction of digital fly control system,"With the rapid development of computer technology, software plays an important and decision-making role in the computer control system, especially in digital flight control systems. How to determine and improve the reliability of software is a critical problem demanding an urgent solution. Traditionally, software reliability is determined according to a reliability model based on software testing data after the software has been developed, which is not suitable for improving its reliability. This paper studies a method to predict the software reliability in the early development period so as to provide the basis for improving the reliability of software. Application to a digital control software system indicates that this method can effectively predict the reliability of software in the early development period, especially in the requirement and outline design periods",2000,0, 151,Redundancy management system for the X-33 vehicle and mission computer,"The X-33 is an unmanned advanced technology demonstrator with a mission to validate new technologies for the next generation of reusable launch vehicles. Various system redundancies are designed in the X-33 to enhance the probability of successfully completing its mission in the event of faults and failures during flight. One such redundant system is the vehicle and mission computer that controls the X-33 and manages the avionics subsystems. Historically, redundancy management and applications such as flight control and vehicle management tended to be highly coupled.
One of the technologies that the X-33 will demonstrate is the redundancy management system (RMS) that uncouples the applications from the redundancy management details, much in the same way that real-time operating systems have uncoupled applications from task scheduling, communication and synchronization details. This paper describes Honeywell's RMS, its role and implementation in the X-33, some of the tradeoffs that were chosen, the fault tolerance concepts it embodies and its suitability as an off-the-shelf solution for a range of high reliability and high availability applications. This paper concludes with insights on current and future RMS developments",2000,0, 152,The feasibility of applying object-oriented technologies to operational flight software,"As object-oriented technologies move from the laboratory to the mainstream, companies are beginning to realize cost benefits in terms of software reuse and reduced development time. These benefits have been elusive to developers of real-time flight software. Issues such as processing latencies, validated performance of mission critical functions, and integration of legacy code have inhibited the effective use of object-oriented technologies in this domain. Emerging design languages, development tools, and processes offer the potential to address the application of object technologies to real-time operational flight software development. This paper examines emerging object-based technologies and assess their applicability to operational flight software. It includes an analysis that compares and contrasts the current Comanche software development process with one based on object-oriented concepts",2000,0, 153,Quality of service management for real-time embedded information systems,"Explores further how dynamic and distributed quality-of-service (QoS) management functions can be added to avionics applications built using commercial-off-the-shelf (COTS) standards and components, to provide more powerful adaptive software capabilities. This paper describes contributions in two principal areas. First, it outlines a QoS management architecture that meets the distributed resource management needs of real-time information systems, and describes our recent extensions to that architecture. Second, it presents empirical evidence of the utility and feasibility of dynamic and adaptive system behavior in realistic real-time embedded information systems. The discussion centers on the identification of key architectural features, and describes initial qualitative and quantitative results that are used to assess the benefits and costs of these segments of the overall architecture",2000,0, 154,Single byte error control codes with double bit within a block error correcting capability for semiconductor memory systems,"Computer memory systems when exposed to strong electromagnetic waves or radiation are highly vulnerable to multiple random bit errors. Under this situation, we cannot apply existing SEC-DED or SbEC capable codes because they provide insufficient error control performance. This correspondence considers the situation where two random bits in a memory chip are corrupted by strong electromagnetic waves or radioactive particles and proposes two classes of codes that are capable of correcting random double bit errors occurring within a chip. 
The proposed codes, called Double bit within a block Error Correcting-Single byte Error Detecting ((DEC)B-SbED) code and Double bit within a block Error Correcting-Single byte Error Correcting ((DEC)B-SbEC) code, are suitable for recent computer memory systems",2000,0, 155,How does resource utilization affect fault tolerance?,"Many fault-tolerant architectures are based on the single-fault assumption, hence accumulation of dormant faults represents a potential reliability hazard. Based on the example of the fail-silent Time-Triggered Architecture, we study sources and effects of dormant faults. We identify software as being more prone to dormant faults than hardware. By means of modeling we reveal a high sensitivity of the MTTF to the existence of even a small amount of irregularly used resources. We propose on-line testing as a means of coping with dormant faults and sketch an appropriate test strategy",2000,0, 156,"Correctly assessing the ""-ilities"" requires more than marketing hype","Understanding key system qualities can better equip you to correctly assess the technologies you manage. Dot-coms and enterprise systems often use terms like scalability, reliability and availability to describe how well they meet current and future service-level expectations. These ilities characterize an IT solution's architectural and engineering qualities. They collectively provide a vocabulary for discussing an IT solution's performance potential amid ever-changing IT requirements. The paper considers the role that ilities play in the solution architecture. They generally fall into four categories: strategic, systemic, service and user.",2000,0, 157,Augmenting sequence constraints in Z and its application to testing,"The paper introduces sequence constraints into a formal specification language Z. Formal specification languages have been used to specify safety-critical applications, and many static and dynamic aspects of the system can be specified. However, the method calling constraints, a runtime behavior, are often missed. The paper introduces two kinds of sequence constraints: those constraints with respect to a schema and those with respect to multiple schemas. Once sequence constraints are specified, together with parameter specifications already in Z, one can generate test cases including test inputs and their expected outputs using various testing strategies such as partition testing, boundary testing, random testing, stress testing, and negative testing. An application has been specified in Z using sequence constraints, and test cases generated have been used to test the software. The results show that the test cases generated successfully detected all the faults seeded",2000,0, 158,Predicting testability of program modules using a neural network,"J.M. Voas (1992) defines testability as the probability that a test case will fail if the program has a fault. It is defined in the context of an oracle for the test, and a distribution of test cases, usually emulating operations. Because testability is a dynamic attribute of software, it is very computation-intensive to measure directly. The paper presents a case study of real time avionics software to predict the testability of each module from static measurements of source code. The static software metrics take much less computation than direct measurement of testability. Thus, a model based on inexpensive measurements could be an economical way to take advantage of testability attributes during software development. 
We found that neural networks are a promising technique for building such predictive models, because they are able to model nonlinearities in relationships. Our goal is to predict a quantity between zero and one whose distribution is highly skewed toward zero. This is very difficult for standard statistical techniques. In other words, high testability modules present a challenging prediction problem that is appropriate for neural networks",2000,0, 159,An application of fuzzy clustering to software quality prediction,"The ever increasing demand for high software reliability requires more robust modeling techniques for software quality prediction. The paper presents a modeling technique that integrates fuzzy subtractive clustering with module-order modeling for software quality prediction. First fuzzy subtractive clustering is used to predict the number of faults, then module-order modeling is used to predict whether modules are fault-prone or not. Note that multiple linear regression is a special case of fuzzy subtractive clustering. We conducted a case study of a large legacy telecommunication system to predict whether each module will be considered fault-prone. The case study found that using fuzzy subtractive clustering and module-order modeling, one can classify modules which will likely have faults discovered by customers with useful accuracy prior to release",2000,0, 160,Faulty version recovery in object-oriented N-version programming,"Many long-running applications would greatly benefit from being able to recover faulty versions in N-version programs since their exclusion from further use undermines the availability of the system. Developing a recovery feature, however, is a very complex and error-prone task, which the author believes has not received adequate attention. Although many researchers are aware of the importance of version recovery, there are very few schemes which include these features. Even when they do, they rely on ad hoc programming and are not suitable for object-oriented systems. The author believes that developing systematic approaches here is crucial, and formulates a general approach to version recovery in class diversity schemes, which is based on the concept of the abstract version state. The approach extends the recently-developed class diversity scheme and relies on important ideas motivated by community error recovery. The diversity scheme includes two-level error detection which allows error latency to be controlled. To use it, special application-specific methods for each version object have to be designed, which would map the internal state into the abstract state and at the same time, form a basis for one-level version recovery. The approach is discussed in detail, compared with the existing solutions, and additional benefits of using the abstract version state are shown. The intention is to outline a disciplined way for providing version recovery and thus make it more practical. Two promising approaches which can be used for developing new structuring techniques incorporating the abstract version state concept are discussed",2000,0, 161,Virtual instrumentation and its application in diagnosis of faults in power transformers,"This paper presents a new approach to detect, localize and investigate the feasibility of identifying winding insulation failures. The diagnosis is based on the time-frequency analysis of signals recorded during lightning impulse tests. 
The virtual instrument is implemented with an acquisition board inserted into a PC and with software developed with LabVIEW tools, which sample the voltage and current signal and furnish the extent of insulation failure. The acquired signal is decomposed using multiresolution signal decomposition techniques to detect and localize the time instant of occurrence of the fault",2000,0, 162,JavaSymphony: a system for development of locality-oriented distributed and parallel Java applications,"Most Java-based systems that support portable parallel and distributed computing either require the programmer to deal with intricate low-level details of Java, which can be a tedious, time-consuming and error-prone task, or prevent the programmer from controlling locality of data. In this paper we describe JavaSymphony, a programming paradigm for distributed and parallel computing that provides a software infrastructure for wide classes of heterogeneous systems ranging from small-scale cluster computing to large scale wide-area meta-computing. The software infrastructure is written entirely in Java and runs on any standard compliant Java virtual machine. In contrast to most existing systems, JavaSymphony provides the programmer with the flexibility to control data locality and load balancing by explicit mapping of objects to computing nodes. Virtual architectures are specified to impose a virtual hierarchy on a distributed system of physical computing nodes. Objects can be mapped and dynamically migrated to arbitrary components of virtual architectures. A high-level API to hardware/software system parameters is provided to control mapping, migration, and load balancing of objects. Objects can interact through synchronous, asynchronous, and one-sided method invocation. Selective remote class loading may reduce the overall memory requirement of an application. Moreover, objects can be made persistent by explicitly storing and loading objects to/from external storage. A prototype of the JavaSymphony software infrastructure has been implemented. Preliminary experiments on a heterogeneous cluster of workstations are described that demonstrate reasonable performance values",2000,0, 163,Behavioral-level test vector generation for system-on-chip designs,"Co-design tools represent an effective solution for reducing costs and shortening time-to-market, when system-on-chip design is considered. In a top-down design flow, designers would greatly benefit from the availability of tools able to automatically generate test sequences, which can be reused during the following design steps, from the system-level specification to the gate-level description. This would significantly increase the chance of identifying testability problems early in the design flow, thus reducing the costs and increasing the final product quality. The paper proposes an approach for integrating the ability to generate test sequences into an existing co-design tool. Preliminary experimental results are reported, assessing the feasibility of the proposed approach",2000,0, 164,Code simulation concept for S/390 processors using an emulation system,"An innovative simulation concept has been developed for the IBM S/390 system of the year 2000 in the area of microcode verification. The goal is to achieve a long-term improvement in the quality of the delivered microcode, detecting and solving the vast majority of code problems in simulation before the system is first powered on. 
The number of such problems has a major impact on the time needed during system integration to bring the system up from power on to general availability. Within IBM, this is the first time that such a code simulation concept has been developed and implemented. One element of that concept is the usage of a large emulation system for hardware/software co-verification",2000,0, 165,Modeling software quality: the Software Measurement Analysis and Reliability Toolkit,"The paper presents the Software Measurement Analysis and Reliability Toolkit (SMART), which is a research tool for software quality modeling using case based reasoning (CBR) and other modeling techniques. Modern software systems must have high reliability. Software quality models are tools for guiding reliability enhancement activities to high risk modules for maximum effectiveness and efficiency. A software quality model predicts a quality factor, such as the number of faults in a module, early in the life cycle in time for effective action. Software product and process metrics can be the basis for such fault predictions. Moreover, classification models can identify fault-prone modules. CBR is an attractive modeling method based on automated reasoning processes. However, to our knowledge, few CBR systems for software quality modeling have been developed. SMART addresses this area. There are currently three types of models supported by SMART: classification based on CBR, CBR classification extended with cluster analysis, and module-order models, which predict the rank-order of modules according to a quality factor. An empirical case study of a military command, control, and communications system applied SMART at the end of coding. The models built by SMART had a level of accuracy that could be very useful to software developers",2000,0, 166,A genetic algorithm-based system for generating test programs for microprocessor IP cores,"The current digital systems design trend is quickly moving toward a design-and-reuse paradigm. In particular, intellectual property cores are becoming widely used. Since the cores are usually provided as encrypted gate-level netlists, they raise several testability problems. The authors propose an automatic approach targeting processor cores that, by resorting to genetic algorithms, computes a test program able to attain high fault coverage figures. Preliminary results are reported to assess the effectiveness of our approach with respect to a random approach",2000,0, 167,VerifyESD: a tool for efficient circuit level ESD simulations of mixed-signal ICs,"For many classes of technologies and circuits, it is beneficial to perform circuit simulations for ESD design, verification, and performance prediction. This is particularly true for mixed-signal ICs, where complex interaction between I/Os and multiple power supplies makes manual analysis difficult and error-prone. Unfortunately, high node and component counts typically prohibit simulations of an entire circuit. Thus, a manual intervention by the designer is usually required to minimize the circuit size. This paper introduces a new tool which automatically reduces the number of voltage nodes per ESD simulation by including only those devices that are necessary. 
In addition, a simple method for modeling ESD device failure while maintaining compatibility with existing CAD tools and libraries is discussed.",2000,0, 168,Experience with designing a requirements and architecture management tool,"Effective tool support is much needed for the tedious and error prone task of managing system requirements and system architectures. With the primary objective of providing practical support for software engineers, we have developed a tool for managing system requirements, system architectures and their traceability which is being used in real-world industrial projects. The tool is based on a well considered information model of system requirements and architecture, and embodies a set of document templates providing guidance for software engineers. The author reports on experience in designing and improving the tool. In particular, we highlight a number of case studies that played a significant role formulating the information model and document templates, and provide an assessment of the tool relative to existing practice",2000,0, 169,Dynamic distributed software architecture design with PARSE-DAT,"The paper presents a novel software architecture design and verification methodology. Architects employ a pragmatic graphical design method called Dynamic PARSE to design the software architecture. At the same time, they capture the concurrent and dynamic features of the system. Such dynamic features include the creation and deletion of processes and re-configurable communication links. Lastly, the correctness of the design can be verified, and possible design faults may be detected by using an automatic design analysis and verification tool called PARSE-DAT",2000,0, 170,Software architecture analysis based on statechart semantics,"High assurance architecture-based and component-based software development relies fundamentally on the quality of the components of which a system is composed and their configuration. Analysis over those components and their integration as a system plays a key role in the software development process. This paper describes an approach to develop and assess architecture and component-based systems based on specifying software architecture augmented by statecharts representing component behavioral specifications. The approach is applied for the C2 style and associated ADL and is supported within a quality-focussed environment, called Argus-I, which assists specification-based analysis and testing at both the component and architecture levels",2000,0, 171,An approach to preserving sufficient correctness in open resource coalitions,"Most software that most people use most of the time needs only moderate assurance of fitness for its intended purpose. Unlike high-assurance software, where the severe consequences of failure justify substantial investment in validation, everyday software is used in settings in which occasional degraded service or even failure is tolerable. Unlike high-assurance software, which has been the subject of extensive scrutiny, everyday software has received only meager attention concerning how good it must be, how to decide whether a system is sufficiently correct, or how to detect and remedy abnormalities. The need for such techniques is particularly strong for software that takes the form of open resource coalitions - loosely-coupled aggregations of independent distributed resources. 
We discuss the problem of determining fitness for purpose, introduce a model for detecting abnormal behavior, and describe some of the ways to deal with abnormalities when they are detected",2000,0, 172,Self-calibration of metrics of Java methods,"Self-calibration is a new technique for the study of internal product metrics, sometimes called observations, and calibrating these against their frequency, or probability of occurring in common programming practice (CPP). Data gathering and analysis of the distribution of observations is an important prerequisite for predicting external qualities, and in particular software complexity. The main virtue of our technique is that it eliminates the use of absolute values in decision-making, and allows gauging local values in comparison with a scale computed from a standard and global database. Method profiles are introduced as a visual means to compare individual projects or categories of methods against the CPP. Although the techniques are general and could in principle be applied to traditional programming languages, the focus of the paper is on object-oriented languages using Java. The techniques are employed in a suite of 17 metrics in a body of circa thirty thousand Java methods",2000,0, 173,A maintainability model for industrial software systems using design level metrics,"Software maintenance is a time-consuming and expensive phase of a software product's life-cycle. The paper investigates the use of software design metrics to statistically estimate the maintainability of large software systems, and to identify error-prone modules. A methodology for assessing, evaluating, and selecting software metrics for predicting software maintainability is presented. In addition, a linear prediction model based on a minimal set of design level software metrics is proposed. The model is evaluated by applying it to industrial software systems",2000,0, 174,SOCRATES on IP router fault detection,"SOCRATES is a software system for testing correctness of implementations of IP routing protocols such as RIP, OSPF and BGP. It uses a probabilistic algorithm to automatically construct random network topologies. For each generated network topology, it checks the correctness of routing table calculation and the IP packet forwarding behavior. For OSPF, it also checks the consistency between network topologies and the link-state databases of the router under test. For BGP, it further checks the BGP update redistribution. Unlike commercial testing tools, which select their test cases in an ad-hoc manner, SOCRATES chooses test cases with guaranteed fault coverage",2000,0, 175,Detecting a network failure,"Measuring the properties of a large, unstructured network can be difficult: one may not have full knowledge of the network topology, and detailed global measurements may be infeasible. A valuable approach to such problems is to take measurements from selected locations within the network and then aggregate them to infer large-scale properties. One sees this notion applied in settings that range from Internet topology discovery tools to remote software agents that estimate the download times of popular Web pages. Some of the most basic questions about this type of approach, however, are largely unresolved at an analytical level. How reliable are the results? How much does the choice of measurement locations affect the aggregate information one infers about the network? 
We describe algorithms that yield provable guarantees for a particular problem of this type: detecting a network failure. Suppose we want to detect events of the following form: an adversary destroys up to k nodes or edges, after which two subsets of the nodes, each at least an ε fraction of the network, are disconnected from one another. We call such an event an (ε,k)-partition. One method for detecting such events would be to place agents at a set D of nodes, and record a fault whenever two of them become separated from each other. To be a good detection set, D should become disconnected whenever there is an (ε,k)-partition; in this way, it witnesses all such events. We show that every graph has a detection set of size polynomial in k and 1/ε, and independent of the size of the graph itself. Moreover, random sampling provides an effective way to construct such a set. Our analysis establishes a connection between graph separators and the notion of VC-dimension, using techniques based on matchings and disjoint paths",2000,0, 176,A model-based fault-tolerant CSCW architecture. Application to biomedical signals visualization and processing,"The paper describes a methodological approach that uses Petri nets (PNs) and Time Petri nets (TPNs) for modeling, analysis and behavior control of fault-tolerant computer supported synchronous cooperative work (CSSCW) architectures inside which a high level of interactivity between users is required. Modeling allows architectures to be formally studied under different functioning conditions (normal communications and deficient communications). Results show that the model is able to predict interlocking and state inconsistencies in the presence of errors. TPNs are used to extend PN models in order to detect communication errors and avoid subsequent dysfunctions. The approach is illustrated through the improvement of a recently presented collaborative application dedicated to biomedical signal visualization and analysis",2000,0, 177,Towards automatic verification of autonomous systems,"While autonomous systems offer great promise in terms of capability and flexibility, their reliability is particularly hard to assess. This paper describes research to apply formal verification methods to languages used to develop autonomy software. In particular, we describe tools that automatically convert autonomy software into formal models that are then verified using model checking. This approach has been applied to MPL code for the Livingstone fault diagnosis system and to TDL task descriptions for mobile robot systems. Our long-term objective is to create tools that enable engineers and roboticists to use formal verification as part of the normal software development cycle",2000,0, 178,Single-control testability of RTL data paths for BIST,"This paper presents a new BIST method for RTL data paths based on single-control testability, a new concept of testability. The BIST method adopts hierarchical testing. Test pattern generators are placed only on primary inputs and test patterns are propagated to and fed into each module. Test responses are similarly propagated to response analyzers placed only on primary outputs. For the propagation of test patterns and test responses, paths existing in the data path are utilized. The DFT method for the single-control testability is also proposed. The advantages of the proposed method are high fault coverage (for single stuck-at faults), low hardware overhead and capability of at-speed testing. 
Moreover, test patterns generated by test pattern generators can be fed into each module at consecutive system clocks, and thus, the BIST can also detect some faults of other fault models (e.g., transition faults and delay faults) that require consecutive application of test patterns at the speed of the system clock",2000,0, 179,Reducing test application time for full scan circuits by the addition of transfer sequences,"A test set for scan designs may consist of tests where primary input vectors are embedded between a scan-in and a scan-out operation. A static compaction procedure proposed earlier reduces the test application time of such a test set by removing the scan operations at the end of one test and at the beginning of another test, and concatenating the primary input vectors of the two tests. In this work, we investigate a method to increase the number of tests that can be combined in this way, thus further reducing the number of scan operations and the test application time. This is done by inserting one or more primary input vectors between the two tests being combined. The inserted vectors help detect faults that were originally detected due to the scan operations, allowing us to combine tests that cannot be combined otherwise. We present experimental results to demonstrate that improved levels of compaction can be achieved by this method",2000,0, 180,Computer-aided fault to defect mapping (CAFDM) for defect diagnosis,"Defect diagnosis in random logic is currently done using the stuck-at fault model, while most defects seen in manufacturing result in bridging faults. In this work we use physical design and test failure information combined with bridging and stuck-at fault models to localize defects in random logic. We term this approach computer-aided fault to defect mapping (CAFDM). We build on top of the existing mature stuck-at diagnosis infrastructure. The performance of the CAFDM software was tested by injecting bridging faults into samples of a Streaming audio controller chip and comparing the predicted defect locations and layers with the actual values. The correct defect location and layer were predicted in all 9 samples for which scan-based diagnosis could be performed. The experiment was repeated on production samples that failed scan test, with promising results",2000,0, 181,Enhanced DO-RE-ME based defect level prediction using defect site aggregation-MPG-D,"Predicting the final value of the defective part level after the application of a set of test vectors is not a simple problem. In order for the defective part level to decrease, both the excitation and observation of defects must occur. This research shows that the probability of exciting an as yet undetected defect does indeed decrease exponentially as the number of observations increases. In addition, a new defective part level model is proposed which accurately predicts the final defective part level (even at high fault coverages) for several benchmark circuits and which continues to provide good predictions even as changes are made in the set of test patterns applied",2000,0, 182,How to predict software defect density during proposal phase,"The author has developed a method to predict defect density based on empirical data. The author has evaluated the software development practices of 45 software organizations. Of those, 17 had complete actual observed defect density to correspond to the observed development practices. 
The author presents the correlation between these practices and defect density in this paper. This correlation can be and is used to: (a) predict defect density as early as the proposal phase, (b) evaluate proposals from subcontractors, (c) perform tradeoffs so as to minimize software defect density. It is found that as practices improve, defect density decreases. Contrary to what many software engineers claim, the average probability of a late delivery is less for organizations with better practices. Furthermore, the margin of error in the event that a schedule is missed was smaller on average for organizations with better practices. It is also interesting that the average number of corrective action releases required is also smaller for the organizations with the best practices. This means less downtime for customers. It is not surprising that the average SEI CMM level is higher for the organizations with the better practices",2000,0, 183,Process certification: a double-edged sword,"On 27 June 2000, health authorities in Osaka city received a call from the hospital. They learned that people were suffering from diarrhoea, stomach pains, and vomiting after drinking low-fat milk products produced by Snow Brand Milk Product, one of Japan's largest dairy companies. On 1 July, officials at a medical laboratory in Wakayama prefecture announced that, when they tested the milk the victims had drunk, they detected a gene linked to the toxin present in yellow staphylococcus, an exit toxin found in leftover milk. Milk is far easier to test than software, because we can physically and chemically measure it. However, processing milk is similar to the software development process in terms of tangibility. Consequently, process is an essential part of both food production and software development, which is why both industries require process standards. Thus, the software community can learn from the fiasco at Snow Brand. In particular, there are four areas on which we should focus: process logs are easily faked; process is not a final objective; safety is the highest priority; and formal certification and authorization are a double-edged sword",2000,0, 184,Formal specification techniques as a catalyst in validation,"The American Heritage Dictionary defines a catalyst as a substance, usually present in small amounts relative to the reactants, that modifies and especially increases the rate of a chemical reaction without being consumed in the process. This article reports, based on the experience gained in an industrial project, that formal specification techniques form such a catalyst in the validation of complex systems. These formal development methods improve the validation process significantly by generating precise questions about the system's intended functionality very early and by uncovering ambiguities and faults in textual requirement documents. This project has been a cooperation between the IST and the company Frequentis. The Vienna Development Method (VDM) has been used for validating the functional requirements and the existing acceptance tests of a network node for voice communication in air traffic control. In addition to several detected requirement faults, the formal specification highlighted how additional test-cases could be derived systematically",2000,0, 185,A high-assurance measurement repository system,"High-quality measurement data are very useful for assessing the efficacy of high-assurance system engineering techniques and tools. 
Given the rapidly evolving suite of modern tools and techniques, it is helpful to have a large repository of up-to-date measurement data that can be used to quantitatively assess the impact of state-of-the-art techniques on the quality of the resulting systems. For many types of defects, including Y2K failures, infinite loops, memory overflow, access violations, arithmetic overflow, divide-by-zero, off-by-one errors, timing errors, deadlocks, etc., it may be possible to combine data from a large number of projects and use these to make statistical inferences. This paper presents a highly secure and reliable measurement repository system for measurement data acquisition, storage and analysis. The system is being used by the QuEST Forum, which is a new industry forum consisting of over 100 leading telecommunications companies. The paper describes the decisions that were made in the design of the measurement repository system, as well as implementation strategies that were used in achieving a high level of confidence in the security and reliability of the system",2000,0, 186,Prediction of software faults using fuzzy nonlinear regression modeling,"Software quality models can predict the risk of faults in modules early enough for cost-effective prevention of problems. This paper introduces the fuzzy nonlinear regression (FNR) modeling technique as a method for predicting fault ranges in software modules. FNR modeling differs from classical linear regression in that the output of an FNR model is a fuzzy number. Predicting the exact number of faults in each program module is often not necessary. The FNR model can predict the interval that the number of faults of each module falls into with a certain probability. A case study of a full-scale industrial software system was used to illustrate the usefulness of FNR modeling. This case study included four historical software releases. The first release's data were used to build the FNR model, while the remaining three releases' data were used to evaluate the model. We found that FNR modeling gives useful results",2000,0, 187,"Using product, process, and execution metrics to predict fault-prone software modules with classification trees","Software-quality classification models can make predictions to guide improvement efforts to those modules that need it the most. Based on software metrics, a model can predict which modules will be considered fault-prone, or not. We consider a module fault-prone if any faults were discovered by customers. Useful predictions are contingent on the availability of candidate predictors that are actually related to faults discovered by customers. With a diverse set of candidate predictors in hand, classification-tree modeling is a robust technique for building such software quality models. This paper presents an empirical case study of four releases of a very large telecommunications system. The case study used the regression-tree algorithm in the S-Plus package and then applied our general decision rule to classify modules. Results showed that in addition to product metrics, process metrics and execution metrics were significant predictors of faults discovered by customers",2000,0, 188,Bayesian framework for reliability assurance of a deployed safety critical system,"The existence of software faults in safety-critical systems is not tolerable. Goals of software reliability assessment are estimating the failure probability of the program and gaining statistical confidence that this estimate is realistic. 
While in most cases reliability assessment is performed prior to the deployment of the system, there are circumstances when reliability assessment is needed in the process of (re)evaluation of the fielded (deployed) system. Post-deployment reliability assessment provides reassurance that the expected dependability characteristics of the system have been achieved. It may be used as a basis of the recommendation for maintenance and further improvement, or the recommendation to discontinue the use of the system. The paper presents practical problems and challenges encountered in an effort to assess and quantify software reliability of NASA's Day-of-Launch I-Load Update (DOLILU II) system. The DOLILU II system has been in operational use for several years. A Bayesian framework is chosen for reliability assessment, because it allows incorporation of (in this specific case failure-free) program executions observed in the operational environment. Furthermore, we outline the development of a probabilistic framework that allows rigorous verification and validation activities performed prior to a system's deployment to be accounted for in the reliability assessment",2000,0, 189,Practical applications of statistical process control [in software development projects],Applying quantitative methods such as statistical process control (SPC) to software development projects can provide a positive cost-benefit return. The authors used SPC on inspection and test data to assess product quality during testing and to predict post-shipment product quality for a major software release,2000,0, 190,A framework to model dependable real-time systems based on real-time object model,"Proposes a framework to model fault-tolerant real-time systems consisting of RobustRTOs (Robust Real-Time Objects) and RMOs (Region Monitor real-time Objects). A RobustRTO is an object which is capable of tolerating faults in itself. Many existing fault-tolerant mechanisms, such as RB (recovery blocks) and NVP (N-version programming), are modeled as RobustRTOs. An RMO is an object which is capable of monitoring a set of objects, named regions. The RMO detects any abnormal behavior of the objects within a region, diagnoses the symptoms and performs appropriate recovery and/or reconfiguration. Although the concepts of RobustRTOs and RMOs are introduced based on a real-time object model, we believe they are applicable to the modeling and design of any dependable embedded real-time system",2000,0, 191,Eliminating annotations by automatic flow analysis of real-time programs,"There is an increasing demand for methods that calculate the worst case execution time (WCET) of real time programs. The calculations are typically based on path information for the program, such as the maximum number of iterations in loops and identification of infeasible paths. Most often, this information is given as manual annotations by the programmer. Our method calculates path information automatically for real time programs, thereby relieving the programmer from tedious and error-prone work. The method, based on abstract interpretation, generates a safe approximation of the path information. A trade-off between quality and calculation cost is made, since finding the exact information is a complex, often intractable problem for nontrivial programs. We describe the method by a simple, worked example. 
We show that our prototype tool is capable of analyzing a number of program examples from the WCET literature, without using any extra information or consideration of special cases needed in other approaches",2000,0, 192,Bloodshot eyes: workload issues in computer science project courses,"Workload issues in computer science project courses are addressed. We briefly discuss why high workloads occur in project courses and the reasons they are a problem. We then describe some course changes we made to reduce the workload in a software engineering project course, without compromising course quality. The techniques include: adopting an iterative and incremental process, reducing the requirements for writing documents, and gathering accurate data on time spent on various activities. We conclude by assessing the techniques, providing good evidence for a dramatic change in the workload, and an increase in student satisfaction levels. We provide some evidence, and an argument, that learning has not been affected by the changes",2000,0, 193,Analysis of the impact of reading technique and inspector capability on individual inspection performance,"Inspection of software documents is an effective quality assurance measure to detect defects in the early stages of software development. It can provide timely feedback on product quality to both developers and managers. This paper reports on a controlled experiment that investigated the influence of reading techniques and inspector capability on individual effectiveness to find given sets of defects in a requirements specification document. Experimental results support the hypothesis that reading techniques can direct inspectors' attention towards inspection targets, i.e. on specific document parts or severity levels, which enables inspection planners to divide the inspection work among several inspectors. Further, they suggest a tradeoff between specific and general detection effectiveness regarding document coverage and inspection effort. Inspector capability plays a significant role in inspection performance, while the size of the effect varies with the reading technique employed and the inspected document part",2000,0, 194,Predicting class libraries interface evolution: an investigation into machine learning approaches,"Managing the evolution of an OO system constitutes a complex and resource-consuming task. This is particularly true for reusable class libraries since the user interface must be preserved for version compatibility. Thus, the symptomatic detection of potential instabilities during the design phase of such libraries may help avoid later problems. This paper introduces a fuzzy logic-based approach for evaluating the stability of a reusable class library interface, using structural metrics as stability indicators. To evaluate this new approach, we conducted a preliminary study on a set of commercial C++ class libraries. The obtained results are very promising when compared to those of two classical machine learning approaches, top down induction of decision trees and Bayesian classifiers",2000,0, 195,Global random early estimation for nipping cells in ATM networks,"Asynchronous transfer mode was designed for multimedia communication networks. In multimedia networks, there are various kinds of data with different quality of service (QoS) requiring transmission. Consequently, the control of QoS in ATM becomes very important. 
Therefore, this paper proposes a novel buffer management algorithm named GREEN (Global Random Early Estimation for Nipping cells). In buffer management, dropping cells randomly and early is the mainstream approach. Specifically, the designed random probability function considers not only network statuses but also QoS requirements. Also, GREEN speeds up the decision making by early estimation. The properties of global random and early estimation of the GREEN algorithm are obtained by globally considering the delay requirement and estimating the queue length early for decision making",2000,0, 196,Application of data mining in Web pre-fetching,"To speed up the fetching of Web pages, we present an intelligent Web pre-fetching technique. We use a simplified WWW data model to represent the data in the cache of a Web browser to mine the association rules. We store these rules in a knowledge base so as to predict the user's actions. Intelligent agents are responsible for mining the users' interest and pre-fetching Web pages, based on the interest association repository. In this way, user browsing time is reduced transparently",2000,0, 197,Implementation and evaluation for dependable bus control using CPLD,"Bus systems are used in computers as essential architecture, and the dependability of bus systems should be achieved reasonably for various applications. In this paper, we will present dependable bus operations with an actual implementation and evaluation using a CPLD. Most bus systems control the transition of classified phases with a synchronous clock or guard time to avoid incorrect phase transition. However, these phase control methods may degrade system performance or cause incorrect operations. We design an asynchronous sequential circuit for bus phase control without a clock or guard time. This circuit prevents incorrect phase transition when a large input delay or an erroneous input occurs. We estimate the probability of incorrect phase transition with a single stuck-at fault on input signals. From the result of this estimation, we also design a checking system verifying the outputs of initiator and target devices. Incorrect phase transition due to a single stuck-at fault occurring between both sequential circuits is completely inhibited by the implementation of the system",2000,0, 198,Managing an employee ownership model in an IPO world,"Athens Group Inc. (AG), founded in June 1998, is an Austin-based, 100% employee-owned consulting firm specializing in technology strategy and custom software development. At a time when start-ups are plentiful, but rarely profitable, and talented people are scarce, AG has almost 50 outstanding software professionals, each with an average of 15 years of experience; has completed projects of global impact for Fortune 100 clients; and expects to earn $6 million in revenue in 2000. Athens Group builds software processes into the infrastructure of each client project. The backbone of each project is a repeatable software development process that is consistent with the Software Engineering Institute's guidelines and appropriate to the needs of the client. These same principles have also been used on internal projects since the company's first few months, when most start-ups are prone to using a 'get it done and we'll figure out what we did later' approach. 
Employee ownership as a founding principle has connected profit to process",2000,0, 199,Winner take all experts network for sensor validation,"The validation of sensor measurements has become an integral part of the operation and control of modern industrial equipment. The sensor under a harsh environment must be shown to consistently provide the correct measurements. Analysis of the validation hardware or software should trigger an alarm when the sensor signals deviate appreciably from the correct values. Neural network based models can be used to estimate critical sensor values when neighboring sensor measurements are used as inputs. The discrepancy between the measured and predicted sensor values may then be used as an indicator for sensor health. The proposed winner take all experts (WTAE) network is based on a `divide and conquer' strategy. It employs a growing fuzzy clustering algorithm to divide a complicated problem into a series of simpler sub-problems and assigns an expert to each of them locally. After the sensor approximation, the outputs from the estimator and the real sensor value are compared both in the time domain and the frequency domain. Three fault indicators are used to provide analytical redundancy to detect the sensor failure. In the decision stage, the intersection of three fuzzy sets accomplishes a decision level fusion, which indicates the confidence level of the sensor health. Two data sets, the Spectra Quest Machinery Fault Simulator data set and the Westland vibration data set, were used in simulations to demonstrate the performance of the proposed WTAE network. The simulation results show the proposed WTAE is competitive with or even superior to the existing approaches",2000,0, 200,Effort measurement in student software engineering projects,"Teaching software engineering by means of student involvement in the team development of a product is the most effective way to teach the main issues of software engineering. Some of its difficulties are those of coordinating their work, measuring the time spent by the students (both in individual work and in meetings) and making sure that meeting time will not be excessive. Starting in the academic year 1998/1999, we assessed, improved and documented the development process for the student projects and found that measurement is one of the outstanding issues to be considered. Each week, the students report the time spent on the different project activities. We present and analyze the measurement results for our 16 student teams (each one with around 6 students). It is interesting to note that the time spent in meetings is usually too long, ranging from 46% in the requirements analysis phase to 21% in coding, mainly due to problems of coordination. Results from previous years are analyzed and presented to the following year's students for feedback. In the present year (2000), we have decreased the amount of time spent by the student doing group work, and improved the effectiveness and coordination of the teams",2000,0, 201,A simulation method for estimating supply voltage dips in electrical power networks,"This paper describes the probabilistic approach for estimating voltage dips in electrical power networks. A method has been worked out which can estimate the typical magnitude and frequency of voltage dips, as well as momentary interruptions that may be expected at a given site. Evaluating such characteristics of voltage dips allows assessing compatibility between loads and the supply network. 
This methodology has been used to draw up a simulation tool by means of the LABVIEW program. The paper presents the results of simulation performed for a given electrical transmission and distribution network and discusses them",2000,0, 202,Australian Snowy Mountains Hydro Scheme earthing system safety assessment,"The task of determining the condition of the earthing in the Upper Tumut generation system, was undertaken as part of the Snowy Mountains Hydro Electric Authority's (SMHEA) safety risk assessment and asset condition monitoring programme. The testing programme to ascertain performance under earth fault and lightning conditions had to overcome considerable physical difficulties as well as the restrictions of 'close' proximity injection loops. The application of software, test instrumentation and testing procedures developed within Australia in collaboration between Energy Australia, Newcastle University, and SMHEA, to obtain real solutions are described in this paper. Also discussed are condition assessment processes that complement the current injection testing programme. This paper also provides a summary of the minimum requirements of an earthing system injection test to satisfactorily assess the condition of complex electrical power system installations",2000,0, 203,"An integrated resource negotiation, pricing, and QoS adaptation framework for multimedia applications","We study a dynamic, usage- and congestion-dependent pricing system in conjunction with price-sensitive user adaptation of network usage. We first present a resource negotiation and pricing (RNAP) protocol and architecture to enable users to select and dynamically renegotiate network services. We develop mechanisms within the RNAP architecture for the network to dynamically formulate prices and communicate pricing and charging information to the users. We then outline a general pricing strategy in this context. We discuss candidate algorithms by which applications (singly, or as part of a multi-application system) can adapt their rate and QoS requests, based on the user-perceived value of a given combination of transmission parameters. Finally, we present experimental results to show that usage- and congestion-dependent pricing can effectively reduce the blocking probability, and allow bandwidth to be shared fairly among applications, depending on the elasticity of their respective bandwidth requirements.",2000,0, 204,Verification and validation of object-oriented artifacts throughout the simulation model development life cycle,"The purpose of this paper is to present a series of questions (or indicators) for assessing the verity and validity of the artifacts produced during the entire object-oriented simulation model development life cycle. Using modern object-oriented development processes, artifacts developed in one phase flow seamlessly from those of the previous phase. This provides forward and backward traceability between artifacts. This inherent backward traceability has been exploited by tracing defects in artifacts back to their defective ancestral artifacts. Questions are then phrased such that when answered in the negative indicate the presence of defects. Use of the Evaluation Environment software tool facilitates the integration of the answers to the assessment questions and enables an overall evaluation. 
The collection of questions can be useful for the verification and validation of artifacts in any object-oriented simulation model development",2000,0, 205,Wafer probe process verification tools,"We present some tools we have developed that ensure the good quality of the wafer probing, or wafer test, process. Most of the problems at Wafer Probe appear in the same way and by detecting their pattern, even without knowing the exact source of the problem, we can prevent the product and its yield from being affected. The most common patterns of failures are: a certain category failing consecutively, a certain test failing above the statistical limits expected based on the historical results of that product, the same wafers yielding differently in two different testers, and results in a lot worsening wafer by wafer. To address these issues, we have generated a set of programs that are run at the end of every wafer tested, in real time, and that generate alarms and indicate actions to the operator when the above problems are detected",2000,0, 206,The application of mold flow simulation in electronic package,"The application of CAE in mold flow of IC packaging has been developed for years. However, predicting EMC flow behavior accurately in IC packages during transfer molding is still a huge challenge due to intrinsic limitations. In this paper, modeling technologies to analyze mold flow during semiconductor encapsulation have been developed. The leadframe separates the whole molding cavity into top and bottom cavities. Cavity thickness is the most important factor in the mold flow behavior. Unbalanced flow, due to a large thickness difference between top and bottom cavities, causes air trapping and die pad tilt. Some packages which have a larger thickness difference, such as 1 to 3 thickness-ratio TSOP, LOC-TSOP, DHS, EDHS and DPH Q-series packages, have a seriously unbalanced melt-front during molding. By observing the flow phenomenon from short-shot samples, it is found that the cavity thickness, bonding wire density, the size of leadframe openings, and surface roughness all affect EMC flow behavior. By incorporating these factors into the construction of a simulation model, numerical results show excellent agreement with actual experimental results for a DPH-LQFP package. The melt-fronts of numerical and experimental results are compared and shown. Further investigation to improve the package moldability was also carried out. By using CAE software, molding defects can be easily detected and moldability problems can be improved efficiently to reduce manufacturing cost and design cycle time",2000,0, 207,Mobile agents for personalized information retrieval: when are they a good idea?,"Mobile agent technology has been proposed as an alternative to traditional client-server computing for personalized information retrieval by mobile and wireless users from fixed wired servers. We develop a very simplified analytical model that examines the claimed performance benefits of mobile agents over client-server computing for a mobile information retrieval scenario. Our evaluation of this simple model shows that mobile agents are not necessarily better than client-server calls in terms of average response times; they are only beneficial if the space overhead of the mobile agent code is not too large or if the wireless link connecting the mobile user to the fixed servers of the virtual enterprise is error-prone. 
We quantify the tradeoffs involved for a variety of scenarios and point out issues for further research",2000,0, 208,Achievable QoS for multiple delay classes in cellular TDMA environments,"In a real-time wireless TDMA environment, every packet generated by applications has a deadline associated with it. If the system cannot allocate enough resources to serve the packet before the deadline, the packet would be dropped. Different applications have different delay requirements that should be guaranteed by the system so as to maintain some given packet dropping probabilities. In this paper, a single-cell system traffic of multiple delay classes is mathematically analyzed, and it is proved to be independent of the scheduling algorithm used, for all work-conserving earliest-due-date (WC-EDD) scheduling algorithms. The dropping requirements of all individual applications are guaranteed using deadline-sensitive ordered-head-of-line (DSO-HoL) priority schemes. Verification of the model is shown through extensive simulations",2000,0, 209,Structural defects: general approach and application to textile inspection,"This paper addresses detection of imperfections in repetitive regular structures (textures). Humans can easily find such defects without prior knowledge of the `good' pattern. In this study, it is assumed that structural defects are detected as irregularities, that is, locations of lower regularity. We define pattern regularity features and find defects by robust detection of outliers in the feature space. Two tests are presented to assess the approach. In the first test, diverse texture patterns are processed individually and outliers are searched in each pattern. In the second test, classified defects in a group of textiles are considered. Defect-free patterns are used to learn distance thresholds that separate defects",2000,0, 210,LACE frameworks and technique-identifying the legacy status of a business information system from the perspectives of its causes and effects,"This paper first presents a definition of the concept `legacy status' with a three-dimensional model. It then discusses LACE frameworks and techniques, which can be used to assess legacy status from the cause and effects perspectives. A method of applying the LACE frameworks is shown and a technique with a mathematical model and metric so that the legacy status of a system can be calculated. This paper describes a novel and practical way to identify legacy status of a system, and has pointed out a new direction for research in this area",2000,0, 211,Virtual sensor for fault detection and isolation in flight control systems - fuzzy modeling approach,"A virtual sensor for normal acceleration has been developed and implemented in the flight control system of a small commercial aircraft. The inputs of the virtual sensor are the consolidated outputs of dissimilar sensor signals. The virtual sensor is a fuzzy model of the Takagi-Sugeno type and it has been identified from simulated data, using a detailed, realistic Matlab/SimulinkTM model used by the aircraft manufacturer. This virtual sensor can be applied to identify a failed sensor in the case that only two real sensors are available and even to detect a failure of the last available sensor",2000,0, 212,Optimal decomposition for wavelet image compression,"The paper discusses important features of wavelet transform in compression of still images including the extent to which the quality of image is degraded by process of wavelet compression and decompression.
A set of wavelet functions (wavelets) for implementation in a still image compression system is examined. The effects of different wavelet functions, image contents and compression ratios are assessed. The benefit of this transform relating to today's methods is stressed. Our results provide a good reference for application developers to choose a good wavelet compression system for their application",2000,0, 213,Efficient data broadcast scheme on wireless link errors,"As portable wireless computers become popular, mechanisms to transmit data to such users are of significant interest. Data broadcast is effective in dissemination-based applications to transfer the data to a large number of users in the asymmetric environment where the downstream communication capacity is relatively much greater than the upstream communication capacity. Index based organization of data transmitted over wireless channels is very important to reduce power consumption. We consider an efficient (1:m) indexing scheme for data broadcast on unreliable wireless networks. We model the data broadcast mechanism on the error prone wireless networks, using the Markov model. We analyze the average access time to obtain the desired data item and find that the optimal index redundancy (m) is SQRT[Data/{Index*(1-p)K}], where p is the failure rate of the wireless link, Data is the size of the data in a broadcast cycle, Index is the size of index, and K is the index level. We also measure the performance of data broadcast schemes by parametric analysis",2000,0, 214,Modeling SPECT acquisition and processing of changing radiopharmaceutical distributions,"The accuracy of SPECT images is compromised and artifacts may be produced when the radiopharmaceutical distribution changes during image acquisition. Optimization of SPECT acquisition protocols for changing tracer distributions can be difficult not only in patient studies (undesirability of performing repeat studies on the same patient) but also in phantom studies (difficulty of emulating the changing distributions). This study proposes a simulation that allows computer modeling of both tracer kinetics and different acquisition schemes. 99mTc Teboroxime (Bracco Diagnostics) is used as a model. SPECT acquisition of a software phantom (NCAT, UNC Chapel Hill) is simulated with photon attenuation, collimator resolution, Compton scatter, Poisson noise, and changing tracer distribution. Short-axis uniformity is used to assess the severity of artifacts in the myocardium. The simulation produces similar artifacts to those found in patient studies with 99mTc Teboroxime. This simulation methodology can provide a valuable tool for testing novel acquisition and processing techniques and to facilitate the optimization of SPECT images of changing tracer distributions. Summed fanning (back and forth) acquisitions have been tested and artifact reduced short-axis images obtained. Image restoration techniques are proposed to further improve the image quality. Furthermore, the simulated studies can be compared to the simulations with assigned low liver uptake and no tracer clearance from the myocardium to detect and resolve artifacts through variations in the acquisition and processing schemes.",2001,0, 215,Neural network detection and identification of actuator faults in a pneumatic process control valve,"This paper establishes a scheme for detection and identification of actuator faults in a pneumatic process control valve using neural networks. 
First, experimental performance parameters related to the valve step responses, including dead time, rise time, overshoot, and the steady state error are obtained directly from a commercially available software package for a variety of faulty operating conditions. Acquiring training data in this way has eliminated the need for additional instrumentation of the valve. Next, the experimentally determined performance parameters are used to train a multilayer perceptron network to detect and identify incorrect supply pressure, actuator vent blockage and diaphragm leakage faults. The scheme presented here is novel in that it demonstrates that a pattern recognition approach to fault detection and identification, for pneumatic process control valves, using features of the valve step response alone, is possible.",2001,0, 216,Finite Element Analysis of Internal Winding Faults in Distribution Transformers,"With the appearance of deregulation, distribution transformer predictive maintenance is becoming more important for utilities to prevent forced outages with the consequential costs. To detect and diagnose a transformer internal fault requires a transformer model to simulate these faults. This paper presents finite element analysis of internal winding faults in a distribution transformer. The transformer with a turn-to-earth fault or a turn-to-turn fault is modeled using coupled electromagnetic and structural finite elements. The terminal behaviors of the transformer are studied by an indirect coupling of the finite element method and circuit simulation. The procedure was realized using a commercially available software. The normal case and various faulty cases were simulated and the terminal behaviors of the transformer were studied and compared with field experimental results. The comparison results validate the finite element model to simulate internal faults in a distribution transformer.",2001,0,264 217,A Short-Circuit Current Study for the Power Supply System of Taiwan Railway,"The western Taiwan railway transportation system consists mainly on a mountain route and ocean route. Taiwan Railway Administration (TRA) has conducted a series of experiments on the ocean route in recent years to identify the possible causes of unknown events that cause the trolley contact wires to melt down frequently. The conducted tests include the short-circuit fault test within the power supply zone of the Ho Long Substation (Zhu Nan to Tong Xiao) that had the highest probability for the melt down events. Those test results, based on the actual measured maximum short-circuit current, provide a valuable reference for TRA when comparing against the said events. The Le Blanc transformer is the main transformer of the Taiwan railway electrification system. The Le Blanc transformer mainly transforms the Taiwan Power Company (TPC) generated three-phase alternating power supply system (69kV, 60Hz) into two single-phase alternating power distribution systems (M phase and T phase) (26kV, 60Hz) needed for the trolley traction. As a unique winding connection transformer, the conventional software for fault analysis will not be able to simulate its internal current and phase difference between each phase current. Therefore, besides extracts of the short-circuit test results, this work presents an EMTP model based on the Taiwan Railway Substation equivalent circuit model with a Le Blanc transformer. 
The proposed circuit model can simulate the same short-circuit test to verify the actual fault current and accuracy of the equivalent circuit model.",2001,0,339 218,Does code decay? Assessing the evidence from change management data,"A central feature of the evolution of large software systems is that change-which is necessary to add new functionality, accommodate new hardware, and repair faults-becomes increasingly difficult over time. We approach this phenomenon, which we term code decay, scientifically and statistically. We define code decay and propose a number of measurements (code decay indices) on software and on the organizations that produce it, that serve as symptoms, risk factors, and predictors of decay. Using an unusually rich data set (the fifteen-plus year change history of the millions of lines of software for a telephone switching system), we find mixed, but on the whole persuasive, statistical evidence of code decay, which is corroborated by developers of the code. Suggestive indications that perfective maintenance can retard code decay are also discussed",2001,0, 219,System reliability analysis: the advantages of using analytical methods to analyze non-repairable systems,"Most of the system analysis software available on the market today employs the use of simulation methods for estimating the reliability of nonrepairable systems. Even though simulation methods are easy to apply and offer great versatility in modeling and analyzing complex systems, there are some limitations to their effectiveness. For example, if the number of simulations performed is not large enough, these methods can be error prone. In addition, performing a large number of simulations can be extremely time-consuming and simulation offers a small range of calculation results when compared to analytical methods. Analytical methods have been avoided due to their complexity in favor of the simplicity of using simulation. A software tool has been developed that calculates the exact analytical solution for the reliability of a system. Given the reliability equation for the system, further analyses on the system can be performed, such as computing exact values of the reliability, failure rate, at specific points in time, as well as computing the system MTTF (mean time to failure), and reliability importance measures for the components of the system. In addition, optimization and reliability allocation techniques can be utilized to aid engineers in their design improvement efforts. Finally, the time-consuming calculations and the non-repeatability issue of the simulation methodology are eliminated",2001,0, 220,Separating recovery strategies from application functionality: experiences with a framework approach,"Industry-oriented fault tolerance solutions for embedded distributed systems should be based on adaptable, reusable elements. Software-implemented fault tolerance can provide such flexibility via the presented framework approach. It consists of (1) a library of fault tolerance functions, (2) a backbone coordinating these functions, and (3) a language expressing configuration and recovery. This language is a sort of ancillary application layer, separating recovery aspects from functional ones. Such a framework approach allows for a flexible combination of the available hardware redundancy with software-implemented fault tolerance. This increases the availability and reliability of the application at a justifiable cost thanks to the re-usability of the library elements in different targets systems. 
It also increases the maintainability due to the separation of the functional behavior from the recovery strategies that are executed when an error is detected as the modifications to functional and nonfunctional behavior are, to some extent, independent and hence less complex. Practical experience is reported from the integration of this framework approach in an automation system for electricity distribution. This case study illustrates the power of software-based fault tolerance solutions and of the configuration-and-recovery language ARIEL to allow flexibility and adaptability to changes in the environment",2001,0, 221,"Code coverage, what does it mean in terms of quality?","Unit code test coverage has long been known to be an important metric for testing software, and many development groups require 85% coverage to achieve quality targets. Assume we have a test, T1 which has 100% code coverage and it detects a set of defects, D1. The question, which is answered here, is ""What percentage of the defects in D1 will be detected if a random subset of the tests in T1 are applied to the code, which has code coverage of X% of the code?"" The purpose of this paper is to show the relation between code quality and code coverage. The relationship is derived via a model of code defect levels. A sampling technique is employed and modeled with the hypergeometric distribution while assuming uniform probability and a random distribution of defects in the code, which invokes the binomial distribution. The result of this analysis is a simple relation between defect level and quality of the code delivered after the unit code is tested. This model results in the rethinking of the use of unit code test metrics and the use of support tools",2001,0, 222,Probabilistic communication optimizations and parallelization for distributed-memory systems,"In high-performance systems execution time is of crucial importance justifying advanced optimization techniques. Traditionally, optimization is based on static program analysis. The quality of program optimizations, however, can be substantially improved by utilizing runtime information. Probabilistic data-flow frameworks compute the probability with what data-flow facts may hold at some program point based on representative profile runs. Advanced optimizations can use this information in order to produce highly efficient code. In this paper we introduce a novel optimization technique in the context of High Performance Fortran (HPF) that is based on probabilistic data-flow information. We consider statically undefined attributes which play an important role for parallelization and compute for those attributes the probabilities to hold some specific value during runtime. For the most probable attribute values highly-optimized, specialized code is generated. In this way significantly better performance results can be achieved. The implementation of our optimization is done in the context of VFC, a source-to-source parallelizing compiler for HPF/F90",2001,0, 223,Design and implementation of secure Web-based LDAP management system,"As the Internet grows quickly, more and more services are available. How to provide high quality, convenient, and personalized services to users are the important issues for Internet service providers to keep customers connected to their Web sites. The directory is an important part of Internet technology used to support such needs.
It exists in a multitude of applications ranging from operating systems, asset management systems, security systems, etc. Furthermore, The Gartner Group, a market research firm, predicts that 40% to 90% of new software and hardware will be directory related products, at end of the period 2001 to 2003. In the directory industry, we can divide products into 3 fields: directory server, management system, and directory application. The management system is one of the important parts of directory services. The directory management system is focused on non-Web-based systems. While directory services are applied on Internet services, it is necessary to provide a Web-based management interface. This interface will provide the advantages of ubiquity, cross platform, thin client, and reduced TCO (total cost of ownership). We proposed and implemented a Web-based lightweight directory access protocol (LDAP) management architecture to provide such benefits and to manage multiple LDAP servers. We used the standard protocol and popular software of Internet technology usually used to build the system, so the system is easy to be ported and minimized the changes of the original system. In addition, we also considered the security factors while designing and constructing the system",2001,0, 224,TRAM: a tool for requirements and architecture management,"Management of system requirements and system architectures is part of any software engineering project. But it is usually very tedious and error prone. In particular, managing the traceability between system requirements and system architectures is critical but difficult. The article introduces a tool, TRAM, for managing system requirements, system architectures and more importantly the traceability between them. Its primary design objective is being practical and ready for practitioners to use without much overhead. The issues discussed in the paper include an information model that underlies the capture of requirements, architectures and their traceability, a set of document templates implementing the information model, and the support tool",2001,0, 225,Systematically deriving partial oracles for testing concurrent programs,"The problem of verifying the correctness of test executions is well-known: while manual verification is time-consuming and error-prone, developing an oracle to automatically verify test executions can be as costly as implementing the original program. This is especially true for concurrent programs, due to their non-determinism and complexity. In this paper, we present a method that uses partial specifications to systematically derive oracles for concurrent programs. We illustrate the method by deriving an Ada task that monitors the execution of a concurrent Ada program and describe a prototype tool that partially automates the derivation process. We present the results of a study that shows the derived oracles are surprisingly effective at error detection. The study also shows that manual verification is an inaccurate means of failure detection, that large test case sets must be used to ensure adequate testing coverage, and that test cases must be run many times to cover for variations in run-time behaviour",2001,0, 226,Empirical studies of a prediction model for regression test selection,"Regression testing is an important activity that can account for a large proportion of the cost of software maintenance. 
One approach to reducing the cost of regression testing is to employ a selective regression testing technique that: chooses a subset of a test suite that was used to test the software before the modifications; then uses this subset to test the modified software. Selective regression testing techniques reduce the cost of regression testing if the cost of selecting the subset from the test suite together with the cost of running the selected subset of test cases is less than the cost of rerunning the entire test suite. Rosenblum and Weyuker (1997) proposed coverage-based predictors for use in predicting the effectiveness of regression test selection strategies. Using the regression testing cost model of Leung and White (1989; 1990), Rosenblum and Weyuker demonstrated the applicability of these predictors by performing a case study involving 31 versions of the KornShell. To further investigate the applicability of the Rosenblum-Weyuker (RW) predictor, additional empirical studies have been performed. The RW predictor was applied to a number of subjects, using two different selective regression testing tools, Deja vu and TestTube. These studies support two conclusions. First, they show that there is some variability in the success with which the predictors work and second, they suggest that these results can be improved by incorporating information about the distribution of modifications. It is shown how the RW prediction model can be improved to provide such an accounting",2001,0, 227,Experimental application of extended Kalman filtering for sensor validation,"A sensor failure detection and identification scheme for a closed loop nonlinear system is described. Detection and identification tasks are performed by estimating parameters directly related to potential failures. An extended Kalman filter is used to estimate the fault-related parameters, while a decision algorithm based on threshold logic processes the parameter estimates to detect possible failures. For a realistic evaluation of its performance, the detection scheme has been implemented on an inverted pendulum controlled by real-time control software. The failure detection and identification scheme is tested by applying different types of failures on the sensors of the inverted pendulum. Experimental results are presented to validate the effectiveness of the approach",2001,0, 228,Professional Engineers Ontario's approach to licensing software engineering practitioners,"Professional Engineers Ontario (PEO) has developed a methodology to assess software practitioners' qualifications for licensing purposes. It entails a comprehensive assessment of the applicants' academic preparation and work experience vis-a`-vis PEO's software engineering body of knowledge and criteria for acceptable experience. Using this approach, PEO has licensed close to 200 software engineering practitioners to date",2001,0, 229,Systems failures: an approach to building a coping strategy,"When systems fail, they can cause havoc everywhere. They affect the organisations involved in creating, maintaining and using them and they can have a profound effect on the people involved, directly or indirectly. The causes of systems and project failures, vary considerably. Each case has to be taken in isolation and examined, to see where it has gone wrong in the past, or is starting to go wrong at present. In a true-life scenario, it is essential to be able to predict likely problems that may arise or accurately recognise failure symptoms when they occur. 
To achieve this it is important to be able to identify what is really going on and when these facts have been established, to be able to select a suitable means of handling the situation. Two European Esprit research projects examined software and multimedia quality practices and provided a framework for addressing these issues. These frameworks did not just promote best practices within software engineering, but sought to address some of the wider issues of systems and their role within the business. Middlesex University has taken this theme forward with a specific remit to address the subject of systems failures",2001,0, 230,Evaluating the effect of inheritance on the modifiability of object-oriented business domain models,"The paper describes an experiment to assess the impact of inheritance on the modifiability of object oriented business domain models. This experiment is part of a research project on the quality determinants of early systems development artefacts, with a special focus on the maintainability of business domain models. Currently there is little empirical information about the relationship between the size, structural and behavioural properties of business domain models and their maintainability. The situation is different in object oriented software engineering where a number of experimental investigations into the maintainability of object oriented software have been conducted. The results of our experiment indicate that extensive use of inheritance leads to models that are more difficult to modify. These findings are in line with the conclusions drawn from three similar controlled experiments on inheritance and modifiability of object oriented software",2001,0, 231,Coupling and cohesion as modularization drivers: are we being over-persuaded?,"For around three decades software engineering gurus have sold us the ideal of minimal coupling and maximal cohesion at all levels of abstraction as a way to reduce the effort to understand and maintain software systems. The object oriented paradigm brought a new design philosophy and encapsulation mechanisms that apparently would help us to achieve that desideratum. However, after a decade where this paradigm has emerged as the dominant one, we are faced with practitioners' reality: coupling and cohesion do not seem to be the dominant driving forces when it comes to modularization. This conclusion was based on a relatively large sample of heterogeneous systems. We describe an environment that allows us not only to assess this reality but also to derive better modularization solutions in what concerns coupling and cohesion. These solutions are generated by means of cluster analysis techniques and partially preserve the original modularization criteria. We believe this approach can be of great help in reengineering actions of object oriented legacy systems",2001,0, 232,A support tool for annotated program manipulation,"The paper describes the AFORT system intended to be an integrated environment for support of analysis, transformation and instrumentation of FORTRAN 77 programs. It takes into account information that is known about the program being processed and conveyed in formalized comments (annotations). The AFORT system is based upon two approaches suggested by the author (V.N. 
Kasyanov, 1991; 1997): so-called annotated program concretization whereby a given general-purpose program can be correctly transformed into a number of special-purpose programs of higher quality, and so-called implausibility properties (anomalies) which permit us to detect dynamic errors statically and informal errors formally",2001,0, 233,Prediction models for software fault correction effort,"We have developed a model to explain and predict the effort associated with changes made to software to correct faults while it is undergoing development. Since the effort data available for this study is ordinal in nature, ordinal response models are used to explain the effort in terms of measures of fault locality and the characteristics of the software components being changed. The calibrated ordinal response model is then applied to two projects not used in the calibration to examine predictive validity",2001,0, 234,A study on fault-proneness detection of object-oriented systems,"Fault-proneness detection in object-oriented systems is an interesting area for software companies and researchers. Several hundred metrics have been defined with the aim of measuring the different aspects of object-oriented systems. Only a few of them have been validated for fault detection, and several interesting works with this view have been considered. This paper reports a research study starting from the analysis of more than 200 different object-oriented metrics extracted from the literature with the aim of identifying suitable models for the detection of the fault-proneness of classes. Such a large number of metrics allows the extraction of a subset of them in order to obtain models that can be adopted for fault-proneness detection. To this end, the whole set of metrics has been classified on the basis of the measured aspect in order to reduce them to a manageable number; then, statistical techniques were employed to produce a hybrid model comprised of 12 metrics. The work has focused on identifying models that can detect as many faulty classes as possible and, at the same time, that are based on a manageably small set of metrics. A compromise between these aspects and the classification correctness of faulty and non-faulty classes was the main challenge of the research. As a result, two models for fault-proneness class detection have been obtained and validated",2001,0, 235,Assessing optimal software architecture maintainability,"Over the last decade, several authors have studied the maintainability of software architectures. In particular, the assessment of maintainability has received attention. However, even when one has a quantitative assessment of the maintainability of a software architecture, one still does not have any indication of the optimality of the software architecture with respect to this quality attribute. Typically, the software architect is supposed to judge the assessment result based on his or her personal experience. In this paper, we propose a technique for analysing the optimal maintainability of a software architecture based on a specified scenario profile. This technique allows software architects to analyse the maintainability of their software architecture with respect to the optimal maintainability. The technique is illustrated and evaluated using industrial cases",2001,0, 236,Current trends in the design of automotive electronic systems,"Today's situation in this field is characterized by three distinct development phases: First, the analysis and design of functionality. 
This type of work is typically performed in the laboratory, i.e. on the desk. Second, the implementation of a prototype system, realized by (semi)automatic code generation and followed by a test with a Lab-car or in a real vehicle. The third and final step comprises the calibration and fine-tuning of algorithms and their parameters, commonly done in a real car. However, there are some flaws associated with this approach. There is no support for multiple interconnected electronic control units. Automatic generation of code of production quality is still a challenging task. And there is a large gap between the properties of a virtual car and the behavior of the real vehicle. The latter is one reason why nowadays the adjustment of calibration parameters still needs to be done manually. In the future, the picture outlined above will change remarkably. Function development tools will be able to generate efficient and reliable software code automatically. Vehicle models will mimic the characteristics of the real object to an extent we cannot imagine today. And automated test without manual interference will enable an unprecedented degree of optimization and quality throughout a complex network of electronic control units. Almost the entire development process will be shifted to the desk with no need for costly, risky, and error-prone experiments with prototype engines or vehicles",2001,0, 237,A controlled experiment to assess the effectiveness of inspection meetings,"Software inspection is one of the best practices for detecting and removing defects early in the software development process. In a software inspection, review is first performed individually and then by meeting as a team. In the last years, some empirical studies have shown that inspection meetings do not improve the effectiveness of the inspection process with respect to the number of true discovered defects. While group synergy allows inspectors to find some new defects, these meeting gains are offset by meeting losses, that is defects found by individuals but not reported as a team. We present a controlled experiment with more than one hundred undergraduate students who inspected software requirements documents as part of a university course. We compare the performance of nominal and real teams, and also investigate the reasons for meeting losses. Results show that nominal teams outperformed real teams, there were more meeting losses than meeting gains, and that most of the losses were defects found by only one individual in the inspection team",2001,0, 238,Controlling overfitting in software quality models: experiments with regression trees and classification,"In these days of faster, cheaper, better release cycles, software developers must focus enhancement efforts on those modules that need improvement the most. Predictions of which modules are likely to have faults during operations is an important tool to guide such improvement efforts during maintenance. Tree-based models are attractive because they readily model nonmonotonic relationships between a response variable and its predictors. However, tree-based models are vulnerable to overfitting, where the model reflects the structure of the training data set too closely. Even though a model appears to be accurate on training data, if overfitted it may be much less accurate when applied to a current data set.
To account for the severe consequences of misclassifying fault-prone modules, our measure of overfitting is based on the expected costs of misclassification, rather than the total number of misclassifications. In this paper, we apply a regression-tree algorithm in the S-Plus system to the classification of software modules by the application of our classification rule that accounts for the preferred balance between misclassification rates. We conducted a case study of a very large legacy telecommunications system, and investigated two parameters of the regression-tree algorithm. We found that minimum deviance was strongly related to overfitting and can be used to control it, but the effect of minimum node size on overfitting is ambiguous",2001,0, 239,Evaluating software degradation through entropy,"Software systems are affected by degradation as an effect of continuous change. Since late interventions are too much onerous, software degradation should be detected early in the software lifetime. Software degradation is currently detected by using many different complexity metrics, but their use to monitor maintenance activities is costly. These metrics are difficult to interpret, because each emphasizes a particular aspect of degradation and the aspects shown by different metrics are not orthogonal. The purpose of our research is to measure the entropy of a software system to assess its degradation. In this paper, we partially validate the entropy class of metrics by a case study, replicated on successive releases of a set of software systems. The validity is shown through direct measures of software quality, such as the number of detected defects, the maintenance effort and the number of slipped defects",2001,0, 240,"Measurement, prediction and risk analysis for Web applications","Accurate estimates of development effort play an important role in the successful management of larger Web development projects. By applying measurement principles to measure qualities of the applications and their development processes, feedback can be obtained to help understand, control and improve products and processes. The objective of this paper is to present a Web design and authoring prediction model based on a set of metrics which were collected using a case study evaluation (CSE). The paper is organised into three parts. Part I describes the CSE in which the metrics used in the prediction model were collected. These metrics were organised into five categories: effort metrics, structure metrics, complexity metrics, reuse metrics and size metrics. Part II presents the prediction model proposed, which was generated using a generalised linear model (GLM), and assesses its predictive power. Finally, part III investigates the use of the GLM as a framework for risk management",2001,0, 241,Dependability modelling of homogeneous and heterogeneous distributed systems,"In the past few years we have developed an experimental distributed system that supports multi-task applications with different levels of criticality. Software implemented fault-tolerant protocols are used to support dependable computing. This paper first presents Markov models of a distributed system under the occurrence of faults, reconfiguration and repair. As a part of our overall project, these models are intended for solving our particular problems, like assessing the merits of redundant schemes, task allocation and reallocation policies, and fault handling used in our experimental system. However, these models are developed in a generic way. 
They can also be used in evaluating an individual task's reliability, risk and availability under various redundant schemes in any homogeneous distributed system. Then, we extend our study in analysing the dependability of the heterogeneous system consisting of a number of homogeneous distributed systems connected through gateways",2001,0, 242,Concept analysis for module restructuring,"Low coupling between modules and high cohesion inside each module are the key features of good software design. This paper proposes a new approach to using concept analysis for module restructuring, based on the computation of extended concept subpartitions. Alternative modularizations, characterized by high cohesion around the internal structures that are being manipulated, can be determined by such a method. To assess the quality of the restructured modules, the trade-off between encapsulation violations and decomposition is considered, and proper measures for both factors are defined. Furthermore, the cost of restructuring is evaluated through a measure of distance between the original and the new modularizations. Concept subpartitions were determined for a test suite of 20 programs of variable size: 10 public-domain and 10 industrial applications. The trade-off between encapsulation and decomposition was measured on the resulting module candidates, together with an estimate of the cost of restructuring. Moreover, the ability of concept analysis to determine meaningful modularizations was assessed in two ways. First, programs without encapsulation violations were used as oracles, assuming the absence of violations as an indicator of careful decomposition. Second, the suggested restructuring interventions were actually implemented in some case studies to evaluate the feasibility of restructuring and to deeply investigate the code organization before and after the intervention. Concept analysis was experienced to be a powerful tool supporting module restructuring",2001,0, 243,Aspect-oriented programming takes aim at software complexity,"As global digitalization and the size of applications expand at an exponential rate, software engineering's complexities are also growing. One feature of this complexity is the repetition of functionality throughout an application. An example of the problems this complexity causes occurs when programmers must change an oft-repeated feature for an updated or new version of an application. It is often difficult for programmers to find every instance of such a feature in millions of lines of code. Failing to do so, however, can introduce bugs. To address this issue, software researchers are developing methodologies based on a new programming element: the aspect. An aspect is a piece of code that describes a recurring property of a program. Applications can, of course, have multiple aspects. Aspects provide cross-cutting modularity. In other words, programmers can use aspects to create software modules for issues that cut across various parts of an application. Aspects have the potential to make programmers' work easier, less time-consuming and less error-prone. Proponents say aspects could also lead to less expensive applications, shorter upgrade cycles and software that is flexible and more customizable.
A number of companies and universities are working on aspects or aspect-like concepts",2001,0, 244,The application of neural networks to fuel processors for fuel-cell vehicles,"Passenger vehicles fueled by hydrocarbons or alcohols and powered by proton exchange membrane (PEM) fuel cells address world air quality and fuel supply concerns while avoiding hydrogen infrastructure and on-board storage problems. Reduction of the carbon monoxide concentration in the on-board fuel processor's hydrogen-rich gas by the preferential oxidizer (PrOx) under dynamic conditions is crucial to avoid poisoning of the PEM fuel cell's anode catalyst and thus malfunction of the fuel-cell vehicle. A dynamic control scheme is proposed for a single-stage tubular cooled PrOx that performs better than, but retains the reliability and ease of use of, conventional industrial controllers. The proposed hybrid control system contains a cerebellar model articulation controller artificial neural network in parallel with a conventional proportional-integral-derivative (PID) controller. A computer simulation of the preferential oxidation reactor was used to assess the abilities of the proposed controller and compare its performance to the performance of conventional controllers. Realistic input patterns were generated for the PrOx by using models of vehicle power demand and upstream fuel-processor components to convert the speed sequences in the Federal Urban Driving Schedule to PrOx inlet temperatures, concentrations, and flow rates. The proposed hybrid controller generalizes well to novel driving sequences after being trained on other driving sequences with similar or slower transients. Although it is similar to the PID in terms of software requirements and design effort, the hybrid controller performs significantly better than the PID in terms of hydrogen conversion setpoint regulation and PrOx outlet carbon monoxide reduction",2001,0, 245,Self-aware services: using Bayesian networks for detecting anomalies in Internet-based services,"We propose a general architecture and implementation for the autonomous assessment of the health of arbitrary service elements, as a necessary prerequisite to self-control. We describe a health engine, the central component of our proposed `self-awareness and control' architecture. The health engine combines domain independent statistical analysis and probabilistic reasoning technology (Bayesian networks) with domain dependent measurement collection and evaluation methods. The resultant probabilistic assessment enables open, non-hierarchical communications about service element health. We demonstrate the validity of our approach using HP's corporate email service and detecting email anomalies: mail loops and a virus attack",2001,0, 246,Development of a virtual environment for fault diagnosis in rotary machinery,"Component fault analysis is a very widely researched area and requires a great deal of knowledge and expertise to establish a consistent and accurate tool for analysis. This paper will discuss a virtual diagnostic tool for fault detection of rotary machinery. The diagnostic tool has been developed using FMCELL software, which provides a 3D graphical visualization environment to modeling rotary machinery with virtual data acquisition capabilities. The developed diagnostic tool provides a virtual testbed with suitable graphical user interfaces for rapid diagnostic fault analysis of machinery. 
In this paper, we will discuss details of this newly developed virtual diagnostic model using FMCELL software and present our approach for diagnostics of a mechanical bearing test bed (TSU-BTB). Furthermore, we will provide some examples of how the virtual diagnostic environment can be used for performing machinery fault diagnostics. Using a frequency pattern matching superimposing technique, the model is proven to be able to detect primary faults in machines with fair accuracy and reliability",2001,0, 247,Diagnosis and prognosis of bearings using data mining and numerical visualization techniques,"Traditionally, condition-based monitoring techniques have been used to diagnose failure in rotary machinery by application of low-level signal processing and trend analysis techniques. Such techniques consider small windows of data from large data sets to give preliminary information of developing fault(s) or failure precursor(s). However, these techniques only provide information of a minute portion of a large data set, which limits the accuracy of predicting the remaining useful life of the system. Diagnosis and prognosis (DAP) techniques should be able to identify the origin of the fault(s), estimate the rate of its progression and determine the remaining useful life of the system. This research demonstrates the use of data mining and numerical visualization techniques for diagnosis and prognosis of bearing vibration data. By using these techniques a comprehensive understanding of large vibration data sets can be attained. This approach uses intelligent agents to isolate particular bearing vibration characteristics using statistical analysis and signal processing for data compression. The results of the compressed data can be visualized in 3-D plots and used to track the origination and evolution of failure in the bearing vibration data. The Bearing Test Bed is used for applying measurable static and dynamic stresses on the bearing and collecting vibration signatures from the stressed bearings",2001,0, 248,Client-transparent fault-tolerant Web service,"Most of the existing fault tolerance schemes for Web servers detect server failure and route future client requests to backup servers. These techniques typically do not provide transparent handling of requests whose processing was in progress when the failure occurred. Thus, the system may fail to provide the user with confirmation for a requested transaction or clear indication that the transaction was not performed. We describe a client-transparent fault tolerance scheme for Web servers that ensures correct handling of requests in progress at the time of server failure. The scheme is based on a standby backup server and simple proxies. The error handling mechanisms of TCP are used to multicast requests to the primary and backup as well as to reliably deliver replies from a server that may fail while sending the reply. Our scheme does not involve OS kernel changes or use of user-level TCP implementations and requires minimal changes to the Web server software",2001,0, 249,On detecting global predicates in distributed computations,"Monitoring of global predicates is a fundamental problem in asynchronous distributed systems. This problem arises in various contexts, such as design, testing and debugging, and fault tolerance of distributed programs. 
In this paper, we establish that the problem of determining whether there exists a consistent cut of a computation that satisfies a predicate in k-CNF (k⩾2), in which no two clauses contain variables from the same process, is NP-complete in general. A polynomial-time algorithm to find the consistent cut, if it exists, that satisfies the predicate for special cases is provided. We also give algorithms (albeit exponential) that can be used to achieve an exponential reduction in time over existing techniques for solving the general version. Furthermore, we present an algorithm to determine whether there exists a consistent cut of a computation for which the sum x1+x2+...+xn exactly equals some constant k, where each xi is an integer variable on a process pi such that it is incremented or decremented by at most one at each step. As a corollary, any symmetric global predicate on Boolean variables, such as absence of simple majority and exclusive-OR of local predicates, can now be detected. Additionally, the problem is proved to be NP-complete if each xi can be changed by an arbitrary amount at each step. Our results solve the previously open problems in predicate detection proposed by V.K. Garg (1997) and bridge the wide gap between the known tractability and intractability results that have existed until now",2001,0, 250,Optimistic active replication,"Replication is a powerful technique for increasing availability of a distributed service. Algorithms for replicating distributed services do however face a dilemma: they should be: efficient (low latency); while ensuring consistency of the replicas, which are two contradictory goals. The paper concentrates on active replication, where all the replicas handle the clients' requests. Active replication is usually implemented using the atomic broadcast primitive. To be efficient, some atomic broadcast algorithms deliberately sacrifice consistency, if inconsistency is likely to occur with a low probability. We present an algorithm that handles replication efficiently in most scenarios, while preventing inconsistencies. The originality of the algorithm is to take the client-server interaction into account, while traditional solutions consider atomic broadcast as a black box",2001,0, 251,Design and implementation of a composable reflective middleware framework,"With the evolution of the global information infrastructure, service providers will need to provide effective and adaptive resource management mechanisms that can serve more concurrent clients and deal with applications that exhibit quality-of-service (QoS) requirements. Flexible, scalable and customizable middleware can be used as an enabling technology for next-generation systems that adhere to the QoS requirements of applications that execute in highly dynamic distributed environments. To enable application-aware resource management, we are developing a customizable and composable middleware framework called CompOSE|Q (Composable Open Software Environment with QoS), based on a reflective meta-model. In this paper, we describe the architecture and runtime environment for CompOSE|Q and briefly assess the performance overhead of the additional flexibility. We also illustrate how flexible communication mechanisms can be supported efficiently in the CompOSE|Q framework
Reinspection repeats the inspection process for software products that are suspected to contain a significant number of undetected defects after an initial inspection. As a reinspection is often believed to be less efficient than an inspection an important question is whether a reinspection justifies its cost. In this paper we propose a cost-benefit model for inspection and reinspection. We discuss the impact of cost and benefit parameters on the net gain of a reinspection with empirical data from an experiment in which 31 student teams inspected and reinspected a requirements document. Main findings of the experiment are: a) For reinspection benefits and net gain were significantly lower than for the initial inspection. Yet, the reinspection yielded a positive net gain for most teams with conservative cost-benefit assumptions. B) Both the estimated benefits and number of major defects are key factors for reinspection net gain, which emphasizes the need for appropriate estimation techniques.",2001,0, 253,Generating wrappers for command line programs: the Cal-Aggie Wrap-O-Matic project,"Software developers writing new software have strong incentives to make their products compliant to standards such as CORBA, COM, and Java Beans. Standards compliance facilitates interoperability, component based software assembly, and software reuse, thus leading to improved quality and productivity. Legacy software, on the other hand, is usually monolithic and hard to maintain and adapt. Many organizations, saddled with entrenched legacy software, are confronted with the need to integrate legacy assets into more modern, distributed, componentized systems that provide critical business services. Thus, wrapping legacy systems for interoperability has been an area of considerable interest. Wrappers are usually constructed by hand which can be costly and error-prone. We specifically target command-line oriented legacy systems and describe a tool framework that automates away some of the drudgery of constructing wrappers for these systems. We describe the Cal-Aggie Wrap-O-Matic system (CAWOM), and illustrate its use to create CORBA wrappers for: a) the JDB debugger, thus supporting distributed debugging using other CORBA components; and b) the Apache Web server, thus allowing remote Web server administration, potentially mediated by CORBA-compliant security services. While CORBA has some limitations, in several relatively common settings it can produce better wrappers at lower cost.",2001,0, 254,Incorporating varying test costs and fault severities into test case prioritization,"Test case prioritization techniques schedule test cases for regression testing in an order that increases their ability to meet some performance goal. One performance goal, rate of fault detection, measures how quickly faults are detected within the testing process. In previous work (S. Elbaum et al., 2000; G. Rothermel et al., 1999), we provided a metric, APFD, for measuring rate of fault detection, and techniques for prioritizing test cases to improve APFD, and reported the results of experiments using those techniques. This metric and these techniques, however, applied only in cases in which test costs and fault severity are uniform. We present a new metric for assessing the rate of fault detection of prioritized test cases that incorporates varying test case and fault costs. We present the results of a case study illustrating the application of the metric. 
This study raises several practical questions that might arise in applying test case prioritization; we discuss how practitioners could go about answering these questions.",2001,0, 255,Theory of software reliability based on components,"We present a foundational theory of software system reliability based on components. The theory describes how component developers can design and test their components to produce measurements that are later used by system designers to calculate composite system reliability, without implementation and test of the system being designed. The theory describes how to make component measurements that are independent of operational profiles, and how to incorporate the overall system-level operational profile into the system reliability calculations. In principle, the theory resolves the central problem of assessing a component, which is: a component developer cannot know how the component will be used and so cannot certify it for an arbitrary use; but if the component buyer must certify each component before using it, component based development loses much of its appeal. This dilemma is resolved if the component developer does the certification and provides the results in such a way that the component buyer can factor in the usage information later without repeating the certification. Our theory addresses the basic technical problems inherent in certifying components to be released for later use in an arbitrary system. Most component research has been directed at functional specification of software components; our theory addresses the other equally important side of the coin: component quality.",2001,0, 256,ATPG for combinational circuits on configurable hardware,"In this paper, a new approach for generating test vectors that detects faults in combinational circuits is introduced. The approach is based on automatically designing a circuit which implements the D-algorithm, an automatic test pattern generation (ATPG) algorithm, specialized for the combinational circuit. Our approach exploits fine-grain parallelism by performing the following in three clock cycles: direct backward/forward implications, conflict checking, selecting next gate to propagate fault or to justify a line, decisions on gate inputs, and loading the state of the circuit after backup. In this paper, we show the feasibility of this approach in terms of hardware cost and speed and how it compares with software-based techniques.",2001,0, 257,An internally replicated quasi-experimental comparison of checklist and perspective based reading of code documents,"The basic premise of software inspections is that they detect and remove defects before they propagate to subsequent development phases where their detection and correction cost escalates. To exploit their full potential, software inspections must call for a close and strict examination of the inspected artifact. For this, reading techniques for defect detection may be helpful since these techniques tell inspection participants what to look for and, more importantly, how to scrutinize a software artifact in a systematic manner. Recent research efforts investigated the benefits of scenario-based reading techniques. A major finding has been that these techniques help inspection teams find more defects than existing state-of-the-practice approaches, such as, ad-hoc or checklist-based reading (CBR). 
We experimentally compare one scenario-based reading technique, namely, perspective-based reading (PBR), for defect detection in code documents with the more traditional CBR approach. The comparison was performed in a series of three studies, as a quasi experiment and two internal replications, with a total of 60 professional software developers at Bosch Telecom GmbH. Meta-analytic techniques were applied to analyze the data",2001,0, 258,Protected variation: the importance of being closed,"The Pattern Almanac 2000 (Addison Wesley, 2000) lists around 500 software-related patterns, and given this reading list, the curious developer has no time to program! Of course, there are underlying, simplifying themes and principles to this pattern plethora that developers have long considered and discussed. One example is L. Constantine's (1974) coupling and cohesion guidelines. Yet, these principles must continually resurface to help each new generation of developers and architects cut through the apparent disparity in myriad design ideas and help them see the underlying and unifying forces. One such principle, which B. Meyer (1988) describes is the Open-Closed Principle (OCP): modules should be both open (for extension and adaptation) and closed (to avoid modification that affect clients). OCP is essentially equivalent to the Protected Variation (PV) pattern: identify points of predicted variation and create a stable interface around them. OCP and PV formalize and generalize a common and fundamental design principle described in many guises. OCP and PV are two expressions of the same principle: protection against change to the existing code and design at variation and evolution points, with minor differences in emphasis",2001,0, 259,Designing a service of failure detection in asynchronous distributed systems,"Even though introduced for solving the consensus problem in asynchronous distributed systems, the notion of unreliable failure detector can be used as a powerful tool for any distributed protocol in order to get better performance by allowing the usage of aggressive time-outs to detect failures of entities executing the protocol. We present the design of a Failure Detection Service (FDS) based on the notion of unreliable failure detectors introduced by T. Chandra and S. Toueg (1996). FDS is able to detect crashed objects and entities that permanently omit to send messages without imposing changes to the source code of the underlying protocols that use this service. Also, FDS provides an object oriented interface to its subscribers and, more important, it does not add network overhead if no entity subscribes to the service. The paper can be also seen as a first step towards a distributed implementation of a heartbeat-based failure management system as defined in fault-tolerant CORBA specification",2001,0, 260,Recognizing geometric patterns for beautification of reconstructed solid models,"Boundary representation models reconstructed from 3D range data suffer from various inaccuracies caused by noise in the data and the model building software. The quality of such models can be improved in a beautification step, which finds regular geometric patterns approximately present in the model and imposes a maximal consistent subset of constraints deduced from these patterns on the model. This paper presents analysis methods seeking geometric patterns defined by similarities. Their specific types are derived from a part survey estimating the frequencies of the patterns in simple mechanical components. 
The methods seek clusters of similar objects which describe properties of faces, loops, edges and vertices, try to find special values representing the clusters, and seek approximate symmetries of the model. Experiments show that the patterns detected appear to be suitable for the subsequent beautification steps",2001,0, 261,A probabilistic priority scheduling discipline for high speed networks,"In high speed networks, the strict priority (SP) scheduling discipline is perhaps the most common and simplest method to schedule packets from different classes of applications, each with diverse performance requirements. With this discipline, however, packets at higher priority levels can starve packets at lower priority levels. To resolve this starvation problem, we propose to assign a parameter to each priority queue in the SP discipline. The assigned parameter determines the probability with which its corresponding queue is served when the queue is polled by the server. We thus form a new packet scheduling discipline, referred to as the probabilistic priority (PP) discipline. By properly setting the assigned parameters, service differentiation as well as fairness among traffic classes can be achieved in PP. In addition, the PP discipline can be easily reduced to the ordinary SP discipline or to the reverse SP discipline",2001,0, 262,Logic circuit diagnosis by using neural networks,"This paper presents a new method of logic diagnosis for combinatorial logic circuits. First, for each type of circuit gates, an equivalent neural network gate is constructed. Then, by replacing circuit gate elements with corresponding neural network gates, an equivalent neural network circuit is constructed to the fault-free sample circuit. The testing procedure is to feed random patterns to both the neural network circuit and the fault-prone test circuit at the same time, and comparing, analyzing both outputs, the former circuit generates diagnostic data for the test circuit. Thus, the neural network circuit behaves like a diagnostic engine, and needs basically no preparation of special test patterns nor fault dictionary before diagnosing",2001,0, 263,Empirical comparison of software-based error detection and correction techniques for embedded systems,"Function Tokens and NOP Fills are two methods proposed by various authors to deal with instruction pointer corruption in microcontrollers, especially in the presence of high electromagnetic interference levels. An empirical analysis to assess and compare these two techniques is presented in this paper. Two main conclusions are drawn: [1] NOP Fills are a powerful technique for improving the reliability of embedded applications in the presence of EMI, and [2] the use of function tokens can lead to a reduction in overall system reliability",2001,0, 264,Finite element analysis of internal winding faults in distribution transformers,"With the appearance of deregulation, distribution transformer predictive maintenance is becoming more important for utilities to prevent forced outages with the consequential costs. To detect and diagnose a transformer internal fault requires a transformer model to simulate these faults. This paper presents finite element analysis of internal winding faults in a distribution transformer. The transformer with a turn-to-earth fault or a turn-to-turn fault is modeled using coupled electromagnetic and structural finite elements. 
The terminal behaviors of the transformer are studied by an indirect coupling of the finite element method and circuit simulation. The procedure was realized using commercially available software. The normal case and various faulty cases were simulated and the terminal behaviors of the transformer were studied and compared with field experimental results. The comparison results validate the finite element model to simulate internal faults in a distribution transformer.",2001,0, 265,Trading off execution time for reliability in scheduling precedence-constrained tasks in heterogeneous computing,"This paper investigates the problem of matching and scheduling of an application, which is composed of tasks with precedence constraints, to minimize both execution time and probability of failure of the application in a heterogeneous computing system. In general, however, it is impossible to satisfy both objectives at the same time because of conflicting requirements. The best one can do is to trade off execution time for reliability or vice versa, according to users' needs. Furthermore, there is a need for an algorithm which can assign tasks of an application to satisfy both of the objectives to some degree. Motivated by these facts, two different algorithms, which are capable of trading off execution time for reliability, are developed. To enable the proposed algorithms to account for the reliability of resources in the system, an expression which gives the reliability of the application under a given task assignment is derived. The simulation results are provided to validate the performance of the proposed algorithms",2001,0, 266,Performance analysis of image compression using wavelets,"The aim of this paper is to examine a set of wavelet functions (wavelets) for implementation in a still image compression system and to highlight the benefit of this transform relating to today's methods. The paper discusses important features of wavelet transform in compression of still images, including the extent to which the quality of the image is degraded by the process of wavelet compression and decompression. Image quality is measured objectively, using peak signal-to-noise ratio or picture quality scale, and subjectively, using perceived image quality. The effects of different wavelet functions, image contents and compression ratios are assessed. A comparison with a discrete-cosine-transform-based compression system is given. Our results provide a good reference for application developers to choose a good wavelet compression system for their application",2001,0, 267,X-33 redundancy management system,"The X-33 is an unmanned advanced technology demonstrator with a mission to validate new technologies for the next generation of Reusable Launch Vehicles. Various system redundancies are designed in the X-33 to enhance the probability of successfully completing its mission in the event of faults and failures during flight. One such redundant system is the Vehicle and Mission Computer that controls the X-33 and manages the avionics subsystems. Historically, redundancy management and applications such as flight control and vehicle management tended to be highly coupled. 
One of the technologies that the X-33 will demonstrate is the Redundancy Management System (RMS) that uncouples the applications from the redundancy management details, in the same way that real-time operating systems have uncoupled applications from task scheduling, communication and synchronization details",2001,0, 268,Assessing the quality of auction Web sites,"WebQual is an instrument for assessing the quality of Internet sites from the perspective of the customer. Earlier versions of WebQual focused on information and interaction quality. This paper reports on a new version of WebQual that incorporates three quality dimensions: information quality, interaction quality and Web site design quality. WebQual is applied in the domain of Internet auctions and the results are used to assess the reliability of the instrument for assessing the quality of Web sites. Three auction sites (Amazon, eBay and QXL) are evaluated through an intervention that involves buying and selling at auction. The results of the intervention are analyzed quantitatively to assess the validity of the WebQual instrument and supplemented by qualitative data that is used to consider the relative merits of the three sites evaluated.",2001,0, 269,A DSP-based FFT-analyzer for the fault diagnosis of rotating machine based on vibration analysis,"A DSP-based measurement system dedicated to the vibration analysis on rotating machines was designed and realized. Vibration signals are on-line acquired and processed to obtain a continuous monitoring of the machine status. In case of fault, the system is capable of isolating the fault with a high reliability. The paper describes in detail the approach followed to built up fault and unfault models together with the chosen hardware and software solutions. A number of tests carried out on small-size three-phase asynchronous motors highlights high promptness in detecting faults, low false alarm rate, and very good diagnostic performance",2001,0, 270,Design and development of a digital multifunction relay for generator protection,This paper presents the design and development of a rotor earth fault protection function as part of a multifunction generator protection relay. The relay design is based on a low frequency square wave injection method in detecting rotor earth faults. The accuracy of rotor earth fault resistance measurement is improved by applying piecewise quadratic approximation to the nonlinear gain characteristic of the measurement circuit. The paper also presents the hardware and software architecture of the relay,2001,0, 271,A frame-level measurement apparatus for performance testing of ATM equipment,"Performance testing of ATM equipment is here dealt with. In particular, the attention is paid to frame-level metrics, recently proposed by the ATM forum because of their suitability to reflect user-perceived performance better than traditional cell-level metrics. Following the suggestions of the ATM forum, more and more network engineers and production managers are nowadays interested in these metrics, thus increasing the need of instruments and measurement solutions appropriate to their estimation. Trying to satisfy this exigency, a new VXI-based measurement apparatus is proposed in the paper. The apparatus features a suitable software, developed by the authors, which allows the evaluation of the aforementioned metrics by making simply use of common ATM analyzers; only two VXI line interfaces, capable of managing both the physical and ATM layer, are, in fact, adopted. 
At first, some details about the hierarchical structure of the ATM technology as well as the main differences between frames, peculiar to the ATM adaptation layer, and cells characterizing the lower ATM layer are given. Then, both the hardware and software solutions of the measurement apparatus are described in detail with particular attention to the measurement procedures implemented. At the end, the performance of a new ATM device, developed by Ericsson, is assessed in terms of frame-level metrics by means of the proposed apparatus",2001,0, 272,"Low-cost, software-based self-test methodologies for performance faults in processor control subsystems","A software-based testing methodology for processor control subsystems, targeting hard-to-test performance faults in high-end embedded and general-purpose processors, is presented. An algorithm for directly controlling, using the instruction-set architecture only, the branch-prediction logic, a representative example of the class of processor control subsystems particularly prone to such performance faults, is outlined. Experimental results confirm the viability of the proposed methodology as a low-cost and effective answer to the problem of hard-to-test performance faults in processor architectures",2001,0, 273,Global scheduling for flexible transactions in heterogeneous distributed database systems,"A heterogeneous distributed database environment integrates a set of autonomous database systems to provide global database functions. A flexible transaction approach has been proposed for the heterogeneous distributed database environments. In such an environment, flexible transactions can increase the failure resilience of global transactions by allowing alternate (but in some sense equivalent) executions to be attempted when a local database system fails or some subtransactions of the global transaction abort. We study the impact of compensation, retry, and switching to alternative executions on global concurrency control for the execution of flexible transactions. We propose a new concurrency control criterion for the execution of flexible and local transactions, termed F-serializability, in the error-prone heterogeneous distributed database environments. We then present a scheduling protocol that ensures F-serializability on global schedules. We also demonstrate that this scheduler avoids unnecessary aborts and compensation",2001,0, 274,Reconfigurable semi-virtual computer architecture for long available small space vehicles,"This paper presents a new hardware architecture for a hybrid space computer composed of both physical and virtual processors. The architecture emulates a multiple modular computer, including both physical and virtual spares, with a small number of physical processors (flight computer) and virtual redundancies (payload processors). The flight computer contains a main processor, as well as a backup and a redundant processor. However, the instrumentation for the Satex mission also includes a redundant LAN with autonomous capabilities to detect its failures, to reconfigure by itself and to provide on-line maintenance by automated means. Communications between flight computer and payload microcomputers are accomplished over this LAN, allowing a versatile operating behavior in terms of data communication as well as in terms of distributed fault tolerance. 
Under this scenario, a semi-virtual expanded flight architecture is periodically implemented in the microsatellite in order to emulate a bigger and safer computer with increased fault-tolerant features. The previous topology is formed periodically, aiming at failure detection, fault isolation, and hardware reconfiguration of processors to obtain high availability; moreover, the architecture can be applied in any small space vehicle. The paper also deals with fault containment regions, Byzantine majority voting mechanisms, reconfiguration procedures, hardware protections, hardware and software diversity and flight computer interface with satellite instrumentation",2001,0, 275,Data embedding in audio signals,"This paper presents results of two methods of embedding digital audio data into another audio signal for secure communication. The data-embedded, or stego, signal is created for transmission by modifying the power spectral density or the phase spectrum of the cover audio at the perceptually masked frequencies in each frame in accordance with the covert audio data. Embedded data in each frame is recovered from the quantized frames of the received stego signal without synchronization or reference to the original cover signal. Using utterances from Texas Instruments Massachusetts Institute of Technology (TIMIT) databases, it was found that error-free data recovery resulted in voiced and unvoiced frames, while high bit-error rates occurred in frames containing voiced/unvoiced boundaries. Modifying the phase, in accordance with data, led to higher successful retrieval than modifying the spectral density of the cover audio. In both cases, no difference was detected in perceived speech quality between the cover signal and the received stego signal",2001,0, 276,Advances in computational resiliency,"The notion of computational resiliency refers to the ability of a distributed application to tolerate intrusion when under information warfare (IW) attack. It is one of several new technologies under development by the U.S. Air Force that aim to harden the battlefield information structure from an IW perspective. These technologies seek to strengthen a military mission, rather than protect its network infrastructure using static defensive measures such as network security, intrusion sensors, and firewalls. Even if an IW attack is never detected, it should be possible to continue information operations and achieve mission objectives. Computational resiliency involves the dynamic use of replication, guided by mission policy, to achieve intrusion tolerance. However, it goes further to dynamically regenerate replication in response to an IW attack, allowing the level of system assurance to be maintained. Replicated structures are protected through several techniques such as camouflage, dispersion, and layered security policy. This paper describes a prototype concurrent programming technology that we have developed to support computational resiliency. Brief outlines describe how the library has been applied to prototypical applications",2001,0, 277,Advanced test cell diagnostics for gas turbine engines,"Improved test cell diagnostics capable of detecting and classifying engine mechanical and performance faults as well as instrumentation problems is critical to reducing engine operating and maintenance costs while optimizing test cell effectiveness. 
Proven anomaly detection and fault classification techniques utilizing engine Gas Path Analysis (GPA) and statistical/empirical models of structural and performance-related engine areas can now be implemented for real-time and post-test diagnostic assessments. Integration and implementation of these proven technologies into existing USAF engine test cells presents a great opportunity to significantly improve existing engine test cell capabilities to better meet today's challenges. A suite of advanced diagnostic and troubleshooting tools has been developed and implemented for gas turbine engine test cells as part of the Automated Jet Engine Test Strategy (AJETS) program. AJETS is an innovative USAF program for improving existing engine test cells by providing more efficient and advanced monitoring, diagnostic and troubleshooting capabilities. This paper describes the basic design features of the AJETS system, including the associated data network, sensor validation and anomaly detection/diagnostic software that was implemented in both a real-time and post-test analysis mode. These advanced design features of AJETS are currently being evaluated and advanced utilizing data from TF39 test cell installations at Travis AFB and Dover AFB",2001,0, 278,A general prognostic tracking algorithm for predictive maintenance,"Prognostic health management (PHM) is a technology that uses objective measurements of condition and failure hazard to adaptively optimize a combination of availability, reliability, and total cost of ownership of a particular asset. Prognostic utility for the signature features is determined by transitional failure experiments. Such experiments provide evidence for the failure alert threshold and of the likely advance warning one can expect by tracking the feature(s) continuously. Kalman filters are used to track changes in features like vibration levels, mode frequencies, or other waveform signature features. This information is then functionally associated with load conditions using fuzzy logic and expert human knowledge of the physics and the underlying mechanical systems. Herein lies the greatest challenge to engineering. However, it is straightforward to track the progress of relevant features over time using techniques such as Kalman filtering. Using the predicted states, one can then estimate the future failure hazard, probability of survival, and remaining useful life in an automated and objective methodology",2001,0, 279,An integrated diagnostics virtual test bench for life cycle support,"Qualtech Systems, Inc. (QSI) has developed an architecture that utilizes the existing TEAMS (Testability Engineering and Maintenance Systems) integrated tool set as the foundation of a computing environment for modeling and rigorous design analysis. This architecture is called a Virtual Test Bench (VTB) for Integrated Diagnostics. The VTB approach addresses design for testability, safety, and risk reduction because it provides an engineering environment to develop/provide: 1. Accurate, comprehensive, and graphical model based failure mode, effects and diagnostic analysis to understand failure modes, their propagation, effects, and ability of diagnostics to address these failure modes. 2. Optimization of diagnostic methods and test sequencing supporting the development of an effective mix of diagnostic methods. 3. Seamless integration from analysis, to run-time implementation, to maintenance process and life cycle support. 
4. A collaborative, widely distributed engineering environment to ""ring-out"" the design before it is built and flown. The VTB architecture offers an innovative solution in a COTS package for system/component modeling, design for safety, failure mode/effect analysis, testability engineering, and rigorous integration/testing of the IVHM (Integrated Vehicle Health Management) function with the rest of the vehicle. The VTB approach described in this paper will use the TEAMS software tool to generate detailed, accurate ""failure"" models of the design, assess the propagation of the failure mode effects, and determine the impact on safety, mission and support costs. It will generate FMECA, mission reliability assessments, incorporate the diagnostic and prognostic test designs, and perform testability analysis. Diagnostic functions of the VTB include fault detection and isolation metrics, undetected fault lists, ambiguity group lists, and optimized diagnostic trees",2001,0, 280,A systematic risk management approach employed on the CloudSat project,"The CloudSat Project has developed a simplified approach for fault tree analysis and probabilistic risk assessment. A system-level fault tree has been constructed to identify credible fault scenarios and failure modes leading up to a potential failure to meet the nominal mission success criteria. Risk ratings and fault categories have been defined for each low-level event (failure mode) and a streamlined probabilistic risk assessment has been completed. Although this technique or process will mature and evolve on a schedule that emphasizes added value throughout the development life cycle, it has already served to confirm that project personnel are concentrating risk reduction or elimination/retirement measures in the appropriate areas. A cursory evaluation with an existing fault tree analysis and probabilistic risk assessment software application has helped to validate this simplified approach. It is hoped that this will serve as a model for other NASA flight projects",2001,0, 281,Two-dimensional TMR with partial majority selection and forwarding,"TMR (triple modular redundancy) is one of the most common forms of fault tolerance approaches, which is based on majority voting. Here, after each checkpoint interval, three modules are compared. If the results of at least two modules are the same, the system is considered to be fault-free. In this paper, the authors propose a 2-dimensional TMR scheme (TMR-2D) and a partial majority selection and forwarding (PMSF) scheme which use several small sub-checkpoints instead of one large checkpoint at the voting time. With a very small amount of overhead, the proposed scheme avoids many rollbacks even though the results of all three modules are different. As a result, the rollback probability and average task execution time are significantly reduced compared to the existing schemes. The availability is also greatly improved. The proposed scheme will be effective for general fault-tolerant systems, especially for time critical systems",2001,0, 282,Intrusion tolerant software architectures,"The complexity of the software systems built today virtually guarantees the existence of security vulnerabilities. 
When the existence of specific vulnerabilities becomes known - typically as a result of detecting a successful attack - intrusion prevention techniques such as firewalls and anti-virus software seek to prevent future attackers from exploiting these vulnerabilities. However, vulnerabilities cannot be totally eliminated, their existence is not always known and preventing mechanisms cannot always be built. Intrusion tolerance is a new concept, a new design paradigm, and potentially a new capability for dealing with residual security vulnerabilities. In this article, we describe our initial exploration of the hypothesis that intrusion tolerance is best designed and enforced at the software architecture level",2001,0, 283,Optimal distributed generation allocation in MV distribution networks,"The necessity for flexible electric systems, changing regulatory and economic scenarios, energy savings and environmental impact are providing impetus to the development of distributed generation (DG), which is predicted to play an increasing role in the electric power system of the future. With so much new distributed generation being installed, it is critical that the power system impacts be assessed accurately so that DG can be applied in a manner that avoids causing degradation of power quality, reliability and control of the utility system. For these reasons, the paper proposes a new software procedure, based on a genetic algorithm, capable of establishing the optimal distributed generation allocation on an existing MV distribution network, considering all the technical constraints, like feeder capacity limits, feeder voltage profile and three-phase short circuit current in the network nodes",2001,0, 284,Neural network modeling of distribution transformers with internal short circuit winding faults,"To detect and diagnose a transformer internal fault an efficient transformer model is required to characterize the faults for further research. This paper discusses the application of neural network (NN) techniques in the modeling of a distribution transformer with internal short-circuit winding faults. A transformer model can be viewed as a functional approximator constructing an input-output mapping between some specific variables and the terminal behaviors of the transformer. The complex approximating task was implemented using six small simple neural networks. Each small neural network model takes fault specification and energized voltage as the inputs and the output voltage or terminal currents as the outputs. Two kinds of neural networks, back-propagation feedforward network (BPFN) and radial basis function network (RBFN), were investigated to model the faults in distribution transformers. The NN models were trained offline using training sets generated by finite element analysis (FEA) models and field experiments. The FEA models were implemented using a commercial finite element analysis software package. The comparison between some simulation cases and corresponding experimental results shows that the well-trained, neural networks can accurately simulate the terminal behaviors of distribution transformers with internal short circuit faults",2001,0, 285,A knowledge base for program debugging,"We present a Conceptual Model for Software Fault Localization (CMSFL), and an Automated Assistant (AASFL) called BUG-DOCTOR to aid programmers with the problem of software fault localization. 
A multi-dimensional approach is suggested with both shallow and deep reasoning phases to enhance the probability of localizing many types of faults. BUG-DOCTOR uses these two approaches and switches between them to localize the faults. The AASFL is being developed based on this theoretical model. It is programming language independent, capable of handling different programming styles and implementations",2001,0, 286,Test generation for time critical systems: Tool and case study,"Generating timed test sequences by hand is error-prone and time consuming, and it is easy to overlook important scenarios. The paper presents a tool based on formal methods that automatically computes a test suite for conformance testing of time critical systems. The generated tests are selected on the basis of a coverage criterion of the specification. The tool guarantees production of sound test cases only, and is able to produce a complete covering test suite. We demonstrate the tool by generating test cases for the Philips Audio Protocol",2001,0, 287,Prediction of software reliability: a comparison between regression and neural network non-parametric models,"In this paper, neural networks have been proposed as an alternative technique to build software reliability growth models. A feedforward neural network was used to predict the number of faults initially resident in a program at the beginning of a test/debug process. To evaluate the predictive capability of the developed model, data sets from various projects were used. A comparison between regression parametric models and neural network models is provided",2001,0, 288,Multiple fault diagnostics for communicating nondeterministic finite state machines,"During the last decade, different methods were developed to produce optimized test sequences for detecting faults in communication protocol implementations. However, the application of these methods gives only limited information about the location of detected faults. We propose a complementary step, which localizes the faults, once detected. It consists of a generalized diagnostic algorithm for the case where more than one fault may be present in the transitions of a system represented by communicating nondeterministic finite state machines. If existing faults are detected, this algorithm permits the generation of a minimal set of diagnoses, each of which is formed by a set of transitions suspected of being faulty. A simple example is used to demonstrate the functioning of the proposed diagnostic algorithm. The complexity of each step in the algorithm is calculated",2001,0, 289,Application of vibration sensing in monitoring and control of machine health,"In this paper, an application for monitoring and control of machine health using vibration sensing is developed. This vibration analyzer is able to continuously monitor and compare the actual vibration pattern against a vibration signature, based on a fuzzy fusion technique. More importantly, this intelligent knowledge-based real-time analyzer is able to detect excessive vibration conditions much sooner than a resulting fault could be detected by an operator. Subsequently, appropriate actions can be taken, say to provide a warning or automatic corrective action. This approach may be implemented independently of the control system and as such can be applied to existing equipment without modification of the normal mode of operation. 
Simulation and experimental results are provided to illustrate the advantages of the approach taken in this application",2001,0, 290,Application QoS management for distributed computing systems,"As a large number of distributed multimedia systems are deployed on computer networks, quality of service (QoS) for users becomes more important. This paper defines it as application QoS, and proposes the application QoS management system (QMS). It controls the application QoS according to the system environment by using simple measurement-based control methods. QMS consists of three types of modules. These are a notificator module for detecting QoS deterioration, a manager module for deciding the control method according to the application management policies, and a controller module for executing the control. The QMS manages the application QoS by communicating between these modules distributed on the network. Moreover, this paper especially focuses on the function of setting QoS management policies in the QMS and proposes the setting method. By a simulation experiment, we confirmed that the system made it possible to negotiate the QoS among many applications and it was able to manage all the applications according to the policies",2001,0, 291,A predictive measurement-based fuzzy logic connection admission control,"This paper presents a novel measurement-based connection admission control (CAC) which uses fuzzy set and fuzzy logic theory. Unlike conventional CAC, the proposed CAC does not use complicated analytical models or a priori traffic descriptors. Instead, traffic parameters are predicted by an on-line fuzzy logic predictor (Qiu et al. 1999). QoS requirements are targeted indirectly by an adaptive weight factor. This weight factor is generated by a fuzzy logic inference system which is based on arrival traffic, queue occupancy and link load. Admission decisions are then based on real-time measurement of aggregate traffic statistics with the fuzzy logic adaptive weight factor as well as the predicted traffic parameters. Both homogeneous and heterogeneous traffic were used in the simulation. Fuzzy logic prediction improves the efficiency of both conventional and measurement-based CAC. In addition, the measurement-based approach incorporating fuzzy logic inference and using fuzzy logic prediction is shown to achieve higher network utilization while maintaining QoS",2001,0, 292,A simple method for extracting models from protocol code,"The use of model checking for validation requires that models of the underlying system be created. Creating such models is both difficult and error prone and as a result, verification is rarely used despite its advantages. In this paper we present a method for automatically extracting models from low level software implementations. Our method is based on the use of an extensible compiler system, xg++, to perform the extraction. The extracted model is combined with a model of the hardware, a description of correctness, and an initial state. The whole model is then checked with the Murφ model checker. As a case study, we apply our method to the cache coherence protocols of the Stanford FLASH multiprocessor. Our system has a number of advantages. First, it reduces the cost of creating models, which allows model checking to be used more frequently. Second, it increases the effectiveness of model checking since the automatically extracted models are more accurate and faithful to the underlying implementation. We found a total of 8 errors using our system. 
Two errors were global resource errors, which would be difficult to find through any other means. We feel the approach is applicable to other low level systems",2001,0, 293,A framework for segmentation of talk and game shows,"In this paper, we present a method to remove commercials from talk and game show videos and to segment these videos into host and guest shots. In our approach, we mainly rely on information contained in shot transitions, rather than analyzing the scene content of individual frames. We utilize the inherent differences in scene structure of commercials and talk shows to differentiate between them. Similarly, we make use of the well-defined structure of talk shows, which can be exploited to classify shots as host or guest shots. The entire show is first segmented into camera shots based on color histogram. Then, we construct a data-structure (shot connectivity graph) which links similar shots over time. Analysis of the shot connectivity graph helps us to automatically separate commercials from program segments. This is done by first detecting stories, and then assigning a weight to each story based on its likelihood of being a commercial. Further analysis on stories is done to distinguish shots of the hosts from shots of the guests. We have tested our approach on several full-length shows (including commercials) and have achieved video segmentation with high accuracy. The whole scheme is fast and works even on low quality video (160×120 pixel images at 5 Hz)",2001,0, 294,Path-based error coverage prediction,"Previous studies have shown that error detection coverage and other dependability measures estimated by fault injection experiments are affected by the workload. The workload is determined by the program executed during the experiments, and the input sequence to the program. In this paper, we present a promising analytical post-injection prediction technique, called path-based error coverage prediction, which reduces the effort of estimating error coverage for different input sequences. It predicts the error coverage for one input sequence based on fault injection results obtained for another input sequence. Although the accuracy of the prediction is low, path based error coverage prediction manages to correctly rank the input sequences with respect to error detection coverage, provided that the difference in the actual coverage is significant. This technique may drastically decrease the number of fault injection experiments, and thereby the time needed to find the input sequence with the worst-case error coverage among a set of input sequences",2001,0, 295,Reliability properties assessment at system level: a co-design framework,"The reliability co-design project aims at integrating in a standard hw/sw co-design flow the elements for achieving a final system able to detect the occurrence of a fault during its operational life. The paper presents the focus of the project, the definition and identification of design methodologies for implementing the nominal, checking and checker functionalities either in hardware or in software. An outline of the system specification and system partitioning aspects is also provided",2001,0, 296,The concept of quality information system (QIS),"The product quality characteristics should be the prime drivers when assessing and improving the quality of the software development process as we are concerned with the product quality. The quality of the software product is determined by the quality of the software process. 
This seems intuitive but there is no empirical evidence to prove its validity yet. QIS establishes a system that enables analysis of the relation between base practices and processes of the SPICE model and the eleven product quality factors and criteria of McCall's (1977) model for software product evaluation. The main goal of QIS is to evaluate and verify benefits gained by improving the process maturity level. At the forefront of both the process model and the product quality model is software product improvement, resulting in a high-quality software product delivered on time and at lower cost.",2001,0, 297,Automatic support for verification of secure transactions in distributed environment using symbolic model checking,"Electronic commerce needs the aid of software tools to check the validity of business processes in order to fully automate the exchange of information through the network. Symbolic model checking has been used to formally verify specifications of secure transactions in a business-to-business system. The fundamental principles behind symbolic model checking are presented, along with techniques used to model mutual exclusion of processes and atomic transactions. The computational resources required to check the example process are presented, and faults detected in this process through symbolic verification are documented.",2001,0, 298,Automated processing of raw DNA sequence data,"Present-day DNA sequencing techniques have evolved considerably from their early beginnings. A modern sequencing project is essentially an assembly-line environment and is therefore improved and accelerated by the degree to which slow and error-prone manual steps can be replaced by reliable and accurate automatic ones. For hardware, this typically means expanding the use of robotics, for example, to execute the multitude of micro-volume fluid transfers that occur for each of the samples processed in a project. Likewise, automated software replaces manual processing and analysis steps for samples wherever possible. In this article, we focus on one particular aspect of software: the automated handling of raw DNA data. Specifically, we discuss a number of critical software algorithms and components and how they have been woven into a framework for largely hands-off processing of Human Genome Project data at the Genome Sequencing Center. These data represent about 25% of the total public human sequencing project.",2001,0, 299,Detection and identification of odorants using an electronic nose,"Gas sensing systems for detection and identification of odorant molecules are of crucial importance in an increasing number of applications. Such applications include environmental monitoring, food quality assessment, airport security, and detection of hazardous gases. We describe a gas sensing system for detecting and identifying volatile organic compounds (VOC), and discuss the unique problems associated with the separability of signal patterns obtained by using such a system. We then present solutions for enhancing the separability of VOC patterns to enable classification. A new incremental learning algorithm that allows new odorants to be learned is also introduced",2001,0, 300,Run-time fault detection in monitor based concurrent programming,"The monitor concept provides a structured and flexible high-level programming construct to control concurrent accesses to shared resources. 
It has been widely used in concurrent programming environments for implicitly ensuring mutual exclusion and explicitly achieving process synchronization. This paper proposes an extension to the monitor construct for detecting run-time errors in monitor operations. Monitors are studied and classified according to their functional characteristics. A taxonomy of concurrency control faults over a monitor is then defined. The concepts of a monitor event sequence and a monitor state sequence provide a uniform approach to history information recording and fault detection. Rules for detecting various types of faults are defined. Based on these rules, fault detection algorithms are developed. A prototypical implementation of the proposed monitor construct with run-time fault detection mechanisms has been developed in Java. We briefly report our experience with and evaluation of our robust monitor prototype.",2001,0, 301,Performance validation of fault-tolerance software: a compositional approach,"Discusses the lessons learned in the modeling of a software fault tolerance solution built by a consortium of universities and industrial companies for an Esprit project called TIRAN (TaIlorable fault-toleRANce framework for embedded applications). The requirements of high flexibility and modularity for the software have lead to a modeling approach that is strongly based on compositionality. Since the interest was in assessing both the correctness and the performance of the proposed solution, we have cared for these two aspects at the same time, and, by means of an example, we show how this was a central aspect of our analysis.",2001,0, 302,Fault injection testing for distributed object systems,Interface based fault injection testing (IFIT) is proposed as a technique to assess the fault tolerance of distributed object systems. IFIT uses the description of an object's interface to generate application dependent faults. A set of application independent faults is also proposed. IFIT reveals inadequacies of the fault recovery mechanisms present in the application. The application of IFIT to different distributed object systems is described,2001,0, 303,Criteria for developing clinical decision support systems,"The use of archived information and knowledge derived from data-driven system, both at the point of care and retrospectively, is critical to improving the balance between healthcare expenditure and healthcare quality. Data-driven clinical decision support, augmented by performance feedback and education, is a logical addition to consensus- and evidence-based approaches on the path to widespread use of intelligent search agents, expert recognition and warning systems. We believe that these initial applications should (a) capture and archive, with identifiable end-points, complete episode-of-care information for high-complexity, high-cost illnesses, and (b) utilize large numbers of these cases to drive risk-adjusted individualized probabilities for patients requiring care at the time of intervention",2001,0, 304,Model-aided diagnosis: an inexpensive combination of model-based and case-based condition assessment,"Online condition monitoring and diagnosis are being utilized more and more for increasing the reliability and availability of technical systems and to reduce their maintenance costs. Today's model-based diagnosis (MBD) tools are able to detect and identify incipient and sudden faults very reliably. 
For application to cost-sensitive equipment, such as high-voltage circuit breakers (HVCBs), however, the presently available MBD systems are not feasible for economic reasons. In this paper, a novel combination of the model-based with the case-based approach to condition diagnosis is presented, which can be implemented on a low-cost computer and which offers satisfactory performance. The technique is divided into two parts: (1) preparation and (2) diagnosis. The diagnosis part can be executed on an inexpensive low-performance computer. Successful tests on real HVCBs confirm the usefulness of this new approach to condition diagnosis",2001,0, 305,Power quality assessment from a wave-power station,This paper describes the development and testing of a software based flickermeter used in order to assess the supply quality from the LIMPET wave-power station on Islay. It describes the phenomenon of voltage flicker and the effect that a wave-power station has on this quantity. The paper also explains techniques developed in order to improve flickermeter performance when used with pre-recorded data. It also shows that the standard flickermeter sample frequency may be reduced for wave-station applications. Finally the paper presents flicker results from preliminary data collected from the LIMPET station and shows that the device is operating well within acceptable limits,2001,0, 306,Discrimination of software quality in a biomedical data analysis system,"Object-oriented visualization-based software systems for biomedical data analysis must deal with complex and voluminous datasets within a flexible yet intuitive graphical user interface. In a research environment, the development of such systems are difficult to manage due to rapidly changing requirements, incorporation of newly developed algorithms, and the needs imposed by a diverse user base. One issue that research supervisors must contend with is an assessment of the quality of the system's software objects with respect to their extensibility, reusability, clarity, and efficiency. Objects from a biomedical data analysis system were independently analyzed by two software architects and ranked according to their quality. Quantitative software features were also compiled at varying levels of granularity. The discriminatory power of these software metrics is discussed and their effectiveness in assessing and predicting software object quality is described",2001,0, 307,An agent-based framework with QoS-aware content negotiation for gateway-based nomadic applications,"The advances in both wireless data communications and portable computing technologies have recently generated a new trend: nomadic computing. The concept of nomadic computing has emerged as the expectation of nomadic end-users to retain one's personal computing environments and access capabilities wherever one happens to be. The changing environment in wireless due to mobility and interference gives rise to varying bandwidth, latency, error rate, loss probability, interoperability, and quality of display. Confronted with these circumstances, the ability to support adaptability and QoS-awareness in a transparent and integrated fashion is essential. 
This paper describes an agent-based framework with QoS-aware content negotiation for gateway-based nomadic applications",2001,0, 308,Time-to-failure estimation for batteries in portable electronic systems,"Nonlinearity of the energy source behavior in portable systems needs to be modeled in order for the system software to make energy-conscious decisions. We describe an analytical battery model for predicting the battery time-to-failure under variable discharge conditions. Our model can be used to estimate the impact of various system load profiles on the energy source lifetime. The quality of our model is evaluated based on the simulation of a lithium-ion battery",2001,0, 309,Benchmarking of advanced technologies for process control: an industrial perspective,"Global competition is forcing industrial plants to continuously improve product quality and reduce costs in order to be more profitable. This scenario doesn't allow producing in less than excellent performance. Combining higher, more consistent product quality with a larger production volume and an increased flexibility, however, places special stresses on plant assets and equipment, jeopardizing safety and environmental compliance. Consequently, process industries are called to operate on a very narrow, constrained path that needs to be continuously monitored, assessed and adjusted. The talk aims at reviewing the main points of the problem and at discussing how benchmarking practices should include and take advantage of advanced automation technologies. It will briefly consider basics for project justification and what an automation vendor may (really should) do in order to help process industry customers select the best solutions for their plants. A particular emphasis will be placed on sometimes neglected aspects, such as hardware-software integration issues; life-cycle cost-benefit analysis; and operator acceptance and living with the APC tools and strategies. Finally, some basic suggestions taken from field experience will be given",2001,0, 310,The Quadrics network (QsNet): high-performance clustering technology,"The Quadrics interconnection network (QsNet) contributes two novel innovations to the field of high-performance interconnects: (1) integration of the virtual-address spaces of individual nodes into a single, global, virtual-address space and (2) network fault tolerance via link-level and end-to-end protocols that can detect faults and automatically re-transmit packets. QsNet achieves these feats by extending the native operating system in the nodes with a network operating system and specialized hardware support in the network interface. As these and other important features of QsNet can be found in the InfiniBand specification, QsNet can be viewed as a precursor to InfiniBand. In this paper, we present an initial performance evaluation of QsNet. We first describe the main hardware and software features of QsNet, followed by the results of benchmarks that we ran on our experimental, Intel-based, Linux cluster built around QsNet. Our initial analysis indicates that QsNet performs remarkably well, e.g., user-level latency under 2 μs and bandwidth over 300 MB/s",2001,0, 311,Fault detection and accommodation in dynamic systems using adaptive neuro-fuzzy systems,"Fault detection and accommodation plays a very important role in critical applications. A new software redundancy approach based on an adaptive neuro-fuzzy inference system (ANFIS) is introduced. 
An ANFIS model is used to detect the fault while another model is used to accommodate it. An accurate plant model is assumed with arbitrary additive faults. The two models are trained online using a gradient-based approach. The accommodation mechanism is based on matching the output of the plant with the output of a reference model. Furthermore, the accommodation mechanism does not assume a special type of system or nonlinearity. Simulation studies prove the effectiveness of the new system even when a severe failure occurs. Robustness to noise and inaccuracies in the plant model is also demonstrated",2001,0, 312,On the effectiveness of mutation analysis as a black box testing technique,"The technique of mutation testing, in which the effectiveness of tests is determined by creating variants of a program in which statements are mutated, is well known. Whilst of considerable theoretical interest, the technique requires costly tools and is computationally expensive. Very large numbers of 'mutants' can be generated for even simple programs. More recently, it has been proposed that the concept be applied to specification based (black box) testing. The proposal is to generate test cases by systematically replacing data items relevant to a particular part of a specification with a data item relevant to another. If the specification is considered as generating a language that describes the set of valid inputs, then the mutation process is intended to generate syntactically valid and invalid statements. Irrespective of their 'correctness' in terms of the specification, these can then be used to test a program in the usual (black box) manner. For this approach to have practical value it must produce test cases that would not be generated by other popular black box test generation approaches. The paper reports a case study involving the application of mutation based black box testing to two programs of different types. Test cases were also generated using equivalence class testing and boundary value testing approaches. The test cases from each method were examined to judge the overlap and to assess the value of the additional cases generated. It was found that less than 20% of the mutation test cases for a data-vetting program were generated by the other two methods, as against 75% for a statistical analysis program. The paper analyses these results and suggests classes of specifications for which mutation based test-case generation may be effective",2001,0, 313,On the relationships of faults for Boolean specification based testing,"Various methods of generating test cases based on Boolean specifications have previously been proposed. These methods are fault-based in the sense that test cases are aimed at detecting particular types of faults. Empirical results suggest that these methods are good at detecting particular types of faults. However, there is no information on the ability of these test cases in detecting other types of faults. The paper summarizes the relationships of faults in a Boolean expression in the form of a hierarchy. A test case that detects the faults at the lower level of the hierarchy will always detect the faults at the upper level of the hierarchy. 
The hierarchy helps us to better understand the relationships of faults in a Boolean expression, and hence to select fault-detecting test cases in a more systematic and efficient manner",2001,0, 314,An agent-based approach to computer assisted code inspections,"Formal code inspections have been established as an effective way to decrease the cost of software development. However, implementing formal code inspections successfully is a challenging endeavour. We propose that the use of software tools to support the inspection process can help reduce the cost of code inspections, while increasing the number of defects detected. Intelligent agents provide a suitable model for designing a set of intelligent and flexible tools",2001,0, 315,Design of dual-duplex system and evaluation of RAM,"We develop the dual-duplex system that detects a fault using a hardware comparator which switches to a hot standby redundancy. This system is designed on the basis of MC68000 and can be used in VMEbus. To improve the reliability and safety, the dual-duplex system is designed in double modular redundancy. The failure rate of the electrical element is calculated in MILSPEC-217F by RELEX6.0 tool, and the system RAMS (reliability, availability, maintainability, safety) and MTTF (mean time to failure) are designed by Markov modeling and evaluated by Matlab. Since the dual-duplex system has high reliability, availability, and safety, it can be applied to embedded control systems like airplanes and high-speed railway systems",2001,0, 316,Plain end-to-end measurement for local area network voice transmission feasibility,"It is well known that company Intranets are growing into ubiquitous communications media for everything. As a consequence, network traffic is notoriously dynamic, and unpredictable. Unfortunately, local area networks were designed for scalability and robustness, not for sophisticated traffic monitoring. This paper introduces a performance measurement method based on widely used IP protocol elements, which allows measurement of network performance criteria to predict the voice transmission feasibility of a given local area network. The measurement does neither depend on special VoIP equipment, nor does it need network monitoring hardware. Rather it uses special payload samples to detect unloaded network conditions to receive reference values. These samples are followed by typical VoIP application payload to obtain real-world measurement conditions. The validation of our method was done within a local area network and showed convincing results",2001,0, 317,TPS precision & accuracy,"In this paper we discuss the benefits of a Test Program Set (TPS) that can precisely isolate down to the absolute lowest component of a UUT (Unit Under Test) versus a TPS's ability to accurately detect AND isolate a fault to some higher level of the UUT. We touch on the idea of not isolating down to this absolute level just because we can, but only when we need to as an optimum characteristic of the support system. As in all things, the competing ideals of theory, need and desire slam up against the boundaries of reality. Essentially, we try to view the environmental realities that a system must deploy in. The cost of complete, precise diagnostics and fault isolation is high. 
In this paper we discuss various trade-offs that developers and decision makers make to design, develop and deliver a TPS that supports the user in the most optimal and cost effective way",2001,0, 318,An application of diagnostic inference modeling to vehicle health management,"We discuss the approach we have taken to applying diagnostic modeling and associated reasoning techniques to the problem of diagnosing and prognosing faults as part of vehicle health management systems. We present a brief background of diagnostic fault modeling based on lessons learned from ongoing research as part of the NASA/FAA Aviation Safety Program. We discuss the application of these techniques and possible implementation scenarios to commercial aircraft health management. We identify information sources available on a typical commercial transport and discuss methods for evaluating them, either singly or in combination, to establish knowledge of the current or predicted health state of the aircraft",2001,0, 319,Fault prognosis using dynamic wavelet neural networks,"Prognostic algorithms for condition based maintenance of critical machine components are presenting major challenges to software designers and control engineers. Predicting time-to-failure accurately and reliably is absolutely essential if such maintenance practices are to find their way into the industrial floor. Moreover, means are required to assess the performance and effectiveness of these algorithms. This paper introduces a prognostic framework based upon concepts from dynamic wavelet neural networks and virtual sensors and demonstrates its feasibility via a bearing failure example. Statistical methods to assess the performance of prognostic routines are suggested that are intended to assist the user in comparing candidate algorithms. The prognostic and assessment methodology proposed here may be combined with diagnostic and maintenance scheduling methods and implemented on a conventional computing platform to serve the needs of industrial and other critical processes",2001,0, 320,Reliability modeling incorporating error processes for Internet-distributed software,"The paper proposes several improvements to conventional software reliability growth models (SRGMs) to describe actual software development processes by eliminating an unrealistic assumption that detected errors are immediately corrected. A key part of the proposed models is the ""delay-effect factor"", which measures the expected time lag in correcting the detected faults during software development. To establish the proposed model, we first determine the delay-effect factor to be included in the actual correction process. For the conventional SRGMs, the delay-effect factor is basically non-decreasing. This means that the delayed effect becomes more significant as time moves forward. Since this phenomenon may not be reasonable for some applications, we adopt a bell-shaped curve to reflect the human learning process in our proposed model. Experiments on a real data set for Internet-distributed software have been performed, and the results show that the proposed new model gives better performance in estimating the number of initial faults than previous approaches",2001,0, 321,A multi-sensor based temperature measuring system with self-diagnosis,"A new multi-sensor based temperature measuring system with self-diagnosis is developed to replace a conventional system that uses only a single sensor.
Controlled by a 16-bit microprocessor, each sensor output from the sensor array is compared with a randomly selected quantised reference voltage at a voltage comparator and the result is a binary ""one"" or ""zero"". The number of ""ones"" and ""zeroes"" is counted and the temperature can be estimated using statistical estimation and successive approximation. A software diagnostic algorithm was developed to detect and isolate the faulty sensors that may be present in the sensor array and to recalibrate the system. Experimental results show that temperature measurements obtained are accurate with acceptable variances. With the self-diagnostic algorithm, the accuracy of the system in the presence of faulty sensors is significantly improved and a more robust measuring system is produced",2001,0, 322,Establishing enterprise communities,"One of the most important challenges facing the builders of enterprise software is the reliable implementation of the policies that are supposed to govern the various communities operating within an enterprise. Such policies are widely considered fundamental to enterprise modeling, and their specification was the subject of several recent investigations. But specification of the policy that is to govern a given community is only the first step towards its implementation; the second, and more critical, step is to ensure that all members of the community actually conform to the specified policy. The conventional approach to the implementation of a policy is to build it into all members of the community subject to it. But if the community in question is large and heterogeneous, and if its members are dispersed throughout a distributed enterprise, then such ""manual"" implementation of its policy would be too laborious and error-prone to be practical. Moreover, a policy implemented in this manual manner would be very unstable with respect to the evolution of the system, because it can be violated by a change in the code of any member of the community subject to it. It is our thesis that the only reliable way for ensuring that a heterogeneous distributed community of software modules and people conforms to a given policy is for this policy to be strictly enforced. A mechanism for establishing enterprise communities by formally specifying their policies, and by having these policies enforced is the subject of the paper",2001,0, 323,Management indicators model to evaluate performance of IT organizations,"There is no arguing nowadays about the importance of IT for the growth and competitive edge of organizations. But if technology is to be a true asset for a company, it must be aligned with the business strategic goals by means of a formalized system of strategic planning, maturity of development process, technology management and corporative quality vision. The accrued benefits can be manifold: the development of training and learning environments for an effective improvement of procedures and product quality, efficient use of assets and resources, opportunity for innovation and technological advancement, an approach to problem solving in areas critical to the organization, among others. Many companies make use of these practices, but find it hard to evaluate how effective they are and what is the final quality of the achieved results at diverse customer levels both in project vision and the continuity of service.
One cause of these drawbacks is failure to apply measurement models which provide objective pointers to assess how effective the IT strategies used have actually been considering the strategic business goals. To incorporate models of measures is no easy task because it entails working on several aspects: technical, processes, products and the peculiar culture of each organization. This paper presents a model of indicators to evaluate IT performance using three well known methods: balanced scorecard, GQM and PSM",2001,0, 324,In search of efficient reliable processor design,"In this paper, we investigate an efficient reliable processor which can detect and recover from transient faults. There are two driving forces to study fault-tolerant techniques for microprocessors. One is deep submicron fabrication technology. Future semiconductor technologies could become more susceptible to alpha particles and other cosmic radiation. The other is increasing popularity of mobile platforms. Recently cell phones are used for applications which are critical to our financial security, such as flight ticket reservation, mobile banking, and mobile trading. In such applications, it is expected that computer systems will always work correctly. From these observations, we have proposed a mechanism which is based on instruction reissue technique for incorrect data speculation recovery and utilizes time redundancy. In order to mitigate overhead caused by including fault-tolerant facility, we evaluate some alternative designs and find that speculatively updating branch predictors and removing redundant memory accesses are very effective.",2001,0, 325,Documentation as a software process capability indicator,"Summary form only given. In a small software organization, a close and intense relationship with its customers is often a substitute for documentation along the software processes. Nevertheless, according to the quality standards, the inadequacy of the required documentation will retain the assessed capability of software processes on the lowest level. This article describes the interconnections between software process documentation and the maturity of the organization. The data is collected from the SPICE assessment results of small and medium sized software organizations in Finland. The aim of the article is to visualise the necessity of documentation throughout the software engineering processes in order to achieve a higher capability level. In addition we point out that processes with insufficient documentation decrease the chance to improve the quality of the processes, as it is impossible to track and analyse them",2001,0, 326,Management of the cross media impacts of municipal landfill sites: the Delphi technique,"Most of the existing solid waste disposal sites in Malaysia are practising either open dumping or controlled tipping because the technology of proper sanitary landfill practice is not totally implemented. The environmental conditions from these sites are thus expected to be bad especially in terms of the contamination of soil, air, surface and underground water, and also impacts on flora and fauna including human. The contamination associated with solid waste disposal sites involved three major environmental compartments or media, i.e. the atmosphere, water and soil. This 'Cross media' or 'Multimedia' impacts phenomenon has been recognised in various countries as being of potential importance and complicated. 
This study discusses the development of simple evaluation systems by using the Delphi Approach, which emphasises the development of weightage for different parameters selected in the evaluation procedures. Environmental conditions of all closed and active disposal sites in the study area from 9 different points of view (water quality, social, gas emissions, landuse, hydrology, geology, ecotoxicology, plant ecology and chemical constituents) were assessed, taking into consideration 59 selected parameters. The Landfill Pollution Index (LPI) was introduced and made into software, which is more user friendly and requires minimum inputs from the user. The LPI incorporates 4 other subindices, i.e. the Environmental Degradation Index (EDI) for water quality, gas emission, chemicals in surface water and chemicals in groundwater. The results of assessments indicated that most of the solid waste disposal sites in the study area showed relatively bad environmental conditions, especially the operating or active site, i.e. Taman Beringin landfill site. Taman Beringin was the most polluted landfill with the LPI of 719.56, followed by Jinjang Utara (383.51), Paka 1 (197.66), Brickfields (128.90), Paka 2 (113.72), Sri Petaling (30.81) and Sungei Besi (17.87)",2001,0, 327,On systematic design of protectors for employing OTS items,"Off-the-shelf (OTS) components are increasingly used in application areas with stringent dependability requirements. Component wrapping is a well known structuring technique used in many areas. We propose a general approach to developing protective wrappers that assist in integrating OTS items with a focus on the overall system dependability. The wrappers are viewed as redundant software used to detect errors or suspicious activity and to execute appropriate recovery when possible; wrapper development is considered as a part of system integration activities. Wrappers are to be rigorously specified and executed at run time as a means of protecting OTS items against faults in the rest of the system, and the system against the OTS item's faults. Possible symptoms of erroneous behaviour to be detected by a protective wrapper and possible actions to be undertaken in response are listed and discussed. The information required for wrapper development is provided by traceability analysis. Possible approaches to implementing protectors in the standard current component technologies are briefly outlined",2001,0, 328,Object oriented metrics useful in the prediction of class testing complexity,"Adequate metrics of object-oriented software enable one to determine the complexity of a system and estimate the effort needed for testing already in the early stage of system development. The metrics values enable one to locate parts of the design that could be error prone. Changes in these parts could significantly improve the quality of the final product and decrease testing complexity. Unfortunately, only a few of the existing Computer Aided Software Engineering tools (CASE) calculate object metrics. In this paper methods allowing proper calculation of class metrics for some commercial CASE tool have been developed. New metrics, calculable on the basis of information kept in the CASE repository and useful in the estimation of testing effort, have also been proposed.
The evaluation of all discussed metrics does not depend on object design method and on the implementation language",2001,0, 329,Using reading techniques to focus inspection performance,"Software inspection is a quality assurance method to detect defects early during the software development process. For inspection planning there are defect detection techniques, so-called reading techniques, which let the inspection planner focus the effectiveness of individual inspectors on specific sets of defects. For realistic planning it is important to use empirically evaluated defect detection techniques. We report on the replication of a large-scale experiment in an academic environment. The experiment evaluated the effectiveness of defect detection for inspectors who use a checklist or focused scenarios on individual and team level. A main finding of the experiments is that the teams were effective to find defects: In both experiments the inspection teams found on average more than 70% of the defects in the product. The checklist consistently was overall somewhat more effective on individual level, while the scenarios traded overall defect detection effectiveness for much better effectiveness regarding their target focus, in our case specific parts of the documents. Another main result of the study is that scenario-based reading techniques can be used in inspection planning to focus individual performance without significant loss of effectiveness on team level",2001,0, 330,Mobile database procedures in MDBAS,"MDBAS is a prototype of a multidatabase management system based on mobile agents. The system integrates a set of autonomous databases distributed over a network, enables users to create a global database scheme, and manages transparent distributed execution of user requests and procedures including distributed transactions. The paper highlights the issues related to mobile database procedures, especially the MDBAS execution strategy. In order to adequately assess MDBAS's qualities and bottlenecks, we have carried out complex performance evaluation with real databases distributed in a real Internet. The evaluation included a comparison to a commercial database with distributed database capabilities. The most interesting results are presented and commented",2001,0, 331,Avoiding faulty privileges in fast stabilizing rings,"Most conventional studies on self-stabilization have been indifferent to the safety under convergence. This paper investigates how mutual exclusion property can be achieved in self-stabilizing rings even for illegitimate configurations. We present a new method which uses a state with a large state space to detect faults. If some faults are detected, every process is reset and not given a privilege. Even if the reset values are different between processes, our protocol mimics the behavior of Dijkstra's K-state protocol (1974). Then we have a fast and safe mutual exclusion protocol. A simulation study also shows its performance",2001,0, 332,Probabilistic model for segmentation based word recognition with lexicon,"We describe the construction of a model for off-line word recognizers based on over-segmentation of the input image and recognition of segment combinations as characters in a given lexicon word. One such recognizer, the Word Model Recognizer (WMR), is used extensively. 
Based on the proposed model it was possible to improve the performance of WMR",2001,0, 333,Vertical bar detection for gauging text similarity of document images,"A new method for gauging text similarity of image-based documents using word shape recognition is proposed in this paper. Image features are directly extracted instead of using OCR (optical character recognition). The proposed method forms so-called vertical bar patterns by detecting local extrema points in word units extracted by segmenting the document images. These vertical bar patterns form the feature vector of a document. The pair-wise similarity of document images is measured by calculating the scalar product of two document feature vectors. The proposed method is robust to changing fonts and styles, and is less affected by degradation of document qualities. To test the validity of the method, four corpora of document images were used and the ability of the method to retrieve relevant documents is reported",2001,0, 334,An object-oriented organic architecture for next generation intelligent reconfigurable mobile networks,"Next generation mobile networks have great potential in providing personalised and efficient quality of service by using re-configurable platforms. The foundation is the concept of software radio where both the mobile terminal and the serving network can be re-configurable. This approach becomes more effective when combined with historic-based prediction strategies that enable the system to learn about application behaviour and predict its resource consumption. We extend that concept by proposing the use of an object-oriented intelligent decision making architecture, which supports general and large-scale applications. The proposed architecture applies the principles of business intelligence and data warehousing, together with the concept of organic viable systems. The architecture is applied to the CAST (configurable radio with advanced software technology) platform",2001,0, 335,Online rotor bar breakage detection of three phase induction motors by wavelet packet decomposition and artificial neural network,An online detection algorithm for induction motor rotor bar breakage is presented using a multi-layer perceptron network (MLP) and wavelet packet decomposition (WPD). New features of rotor bar faults are obtained by wavelet packet decomposition of the stator current. These features are of multiple frequency resolutions and obviously differentiate the healthy and faulty conditions. Features with different frequency resolutions are used together with the speed slip as the input sets of a 4-layer perceptron network. The algorithm is evaluated on a small three-phase induction motor with experiments. The laboratory results show that the proposed method is able to detect the faulty conditions with high accuracy. This algorithm is also applicable to the detection of other electrical faults of induction motors,2001,0, 336,Rational interpolation examples in performance analysis,"The rational interpolation approach has been applied to performance analysis of computer systems previously. In this paper, we demonstrate the effectiveness of the rational interpolation technique in the analysis of randomized algorithms and the fault probability calculation for some real-time systems",2001,0, 337,Supporting usability through software architecture,"Software engineers should consider usability as a quality attribute in their architectural designs.
Usability determines how effectively and comfortably an end-user can achieve the goals that gave rise to an interactive system. It is an important attribute to consider during all phases of software design, but especially during architectural design because of the expense involved in adding usability aspects after users have tested the system. Since the 1980s, ongoing work on supporting usability through software architectural constructs has focused on the iterative design process for the user interface, which involves initial design, user testing, re-design to correct detected flaws, re-testing, and so on. The traditional software architectural response to repeated and expected modifications to the user interface is to use separation, encapsulation and information hiding to localize the user interface",2001,0, 338,Evaluating meta-programming mechanisms for ORB middleware,"Distributed object computing middleware, such as CORBA, COM+, and Java RMI, shields developers from many tedious and error-prone aspects of programming distributed applications. It is hard to evolve distributed applications after they are deployed, however, without adequate middleware support for meta-programming mechanisms, such as smart proxies, interceptors, and pluggable protocols. These mechanisms can help improve the adaptability of distributed applications by allowing their behavior to be modified without changing their existing software designs and implementations significantly. This article examines and compares common meta-programming mechanisms supported by DOC middleware. These mechanisms allow applications to adapt more readily to changes in requirements and runtime environments throughout their lifecycles. Some of these meta-programming mechanisms are relatively new, whereas others have existed for decades. This article provides a systematic evaluation of these mechanisms to help researchers and developers determine which are best suited to their application needs",2001,0, 339,A short circuit current study for the power supply system of Taiwan railway,"The western Taiwan railway transportation system consists mainly of a mountain route and an ocean route. The Taiwan Railway Administration (TRA) has conducted a series of experiments on the ocean route in recent years to identify the possible causes of unknown events which cause the trolley contact wires to melt down frequently. The conducted tests include the short circuit fault test within the power supply zone of the Ho Long substation (Zhu Nan to Tong Xiao) that had the highest probability for the melt down events. Those test results based on the actual measured maximum short circuit current provide a valuable reference for TRA when comparing against the said events. The Le Blanc transformer is the main transformer of the Taiwan railway electrification system. The Le Blanc transformer mainly transforms the Taiwan Power Company (TPC) generated three phase alternating power supply system (69 kV, 60 Hz) into a two single-phase alternating power distribution system (M phase and T phase) (26 kV, 60 Hz) needed for the trolley traction. Because the Le Blanc transformer has a unique winding connection, conventional fault analysis software cannot simulate its internal currents and the phase differences between the phase currents. Therefore, besides extracts of the short circuit test results, this work presents an EMTP model based on a Taiwan Railway substation equivalent circuit model with the Le Blanc transformer.
The proposed circuit model can simulate the same short circuit test to verify the actual fault current and accuracy of the equivalent circuit model. Moreover, the maximum short circuit current is further evaluated with reference to the proposed equivalent circuit. Preliminary inspection of trolley contact wire reveals the possible causes of melt down events based on the simulation results",2001,0, 340,Modeling and prediction of distribution system voltage distortion caused by nonlinear residential loads,"Electric utilities have expressed concern over increased nonlinear loading of residential power distribution systems. The number and variety of power electronic products found in the typical home continues to grow rapidly, imposing a burden on local power companies to supply reliable, distortion-free service. In order to adequately prepare for the future, utilities must be able to predict the harmonic impact of new power electronic equipment and evaluate the ability of existing power systems to accommodate these nonlinear loads. This paper describes a modeling methodology that uses SPICE simulation software to predict the voltage distortion caused by nonlinear residential loads. Results of applying the methodology to a specific distribution system containing either of two different types of harmonic-rich loads, i.e., variable speed air conditioners or electric vehicle battery chargers, are presented as well",2001,0, 341,A novel compression algorithm for cell animation images,"We propose a novel compression algorithm for cell animation images. Conventional software algorithms include Cinepak, MPEG-1, Indeo5, etc., but these are for natural images, not for cell animation images. At low bit-rates, these provide relatively very poor image quality. In the proposed method, for intra frame coding, octree-based color quantization, ADPCM, and Golomb-Rice code are used. And for inter frame coding, block-to-block difference information is classified and coded. Therefore computational complexity is relatively low. The proposed methods outperform MPEG-1, FLC, and Indeo 5",2001,0, 342,Adaptive algorithms for variable-complexity video coding,"Variable-complexity algorithms provide a means of managing the computational complexity of a software video CODEC. The reduction in computational complexity provided by existing variable-complexity algorithms depends on the video scene characteristics and is difficult to predict. A new approach to variable-complexity encoding is proposed. A variable-complexity DCT algorithm is adaptively updated in order to maintain a near-constant computational complexity. The adaptive update algorithm is shown to be capable of providing a significant, predictable, reduction in computational complexity with only a small loss of video quality. The proposed approach may be particularly useful for software-only video encoding, in applications where processing resources are limited",2001,0, 343,Word shape recognition for image-based document retrieval,"We propose a word shape recognition method for retrieving image-based documents. Document images are segmented at the word level first. Then the proposed method detects local extrema points in word segments to form so-called vertical bar patterns. These vertical bar patterns form the feature vector of a document. The scalar product of two document feature vectors is calculated to measure the pairwise similarity of document images. 
The proposed method is robust to changing fonts and styles, and is less affected by degradation of document qualities. Three groups of words in different fonts and image qualities were used to test the validity of our method. Real-life document images were also used to test the method's ability to retrieve relevant documents",2001,0, 344,Investigating reinspection decision accuracy regarding product-quality and cost-benefit estimates,"After a software inspection the project manager has to decide whether a product has sufficient quality to pass on to the next software development stage or whether a second inspection cycle, a reinspection, is likely to sufficiently improve its quality. Recent research on the reinspection decision focused on the estimation of product quality after inspection, which does not take into account the effect of a reinspection. Thus we propose to use estimation models for the quality improvement during reinspection and the cost and benefit of a reinspection as the basis for the reinspection decision. We evaluate the reinspection decision correctness of these models with time-stamped defect data from a large-scale controlled experiment on the inspection and reinspection of a software requirements document. The main finding of the investigation is that the product quality criterion is likely to force products to be reinspected, if a large number of defects were detected in the first inspection. Further, the product-quality criterion is especially sensitive to an underestimation of the number of defects in the product and will let bad products pass as good. The cost-benefit criterion is less sensitive to estimation error than the product-quality criterion and should in practice be used as a second opinion, if a product-quality criterion is applied",2001,0, 345,A memory-based reasoning approach for assessing software quality,"Several methods have been explored for assuring the reliability of mission critical systems (MCS), but no single method has proved to be completely effective. This paper presents an approach for quantifying the confidence in the probability that a program is free of specific classes of defects. The method uses memory-based reasoning techniques to admit a variety of data from a variety of projects for the purpose of assessing new systems. Once a sufficient amount of information has been collected, the statistical results can be applied to programs that are not in the analysis set to predict their reliabilities and guide the testing process. The approach is applied to the analysis of Y2K defects based on defect data generated using fault-injection simulation",2001,0, 346,Information theoretic metrics for software architectures,"Because it codifies best practices, and because it supports various forms of software reuse, the discipline of software architecture is emerging as an important branch of software engineering research and practice. Because architectural-level decisions are prone to have a profound impact on finished software products, it is important to apprehend their quality attributes and to quantify them (as much as possible). In this paper, we discuss an information-theoretic approach to the definition and validation of architectural metrics, and illustrate our approach on a sample example",2001,0, 347,Automated generation of statistical test cases from UML state diagrams,"The adoption of the object-oriented (OO) technology for the development of critical software raises important testing issues.
This paper addresses one of these issues: how to create effective tests from OO specification documents? More precisely, the paper describes a technique that adapts a probabilistic method, called statistical functional testing, to the generation of test cases from UML state diagrams, using transition coverage as the testing criterion. Emphasis is put on defining an automatic way to produce both the input values and the expected outputs. The technique is automated with the aid of the Rational Software Corporation's Rose RealTime tool. An industrial case study from the avionics domain, formally specified and implemented in Java, is used to illustrate the feasibility of the technique at the subsystem level. Results of first test experiments are presented to exemplify the fault revealing power of the created statistical test cases",2001,0, 348,"The Exu approach to safe, transparent and lightweight interoperability","Exu is a new approach to automated support for safe, transparent and lightweight interoperability in multilanguage software systems. The approach is safe because it enforces appropriate type compatibility across language boundaries. It is transparent since it shields software developers from the details inherent in low-level language-based interoperability mechanisms. It is lightweight for developers because it eliminates tedious and error-prone coding (e.g., JNI) and lightweight at run-time since it does not unnecessarily incur the performance overhead of distributed, IDL-based approaches. The Exu approach exploits and extends the object-oriented concept of meta-object, encapsulating interoperability implementation in meta-classes so that developers can produce interoperating code by simply using meta-inheritance. An example application of Exu to the development of Java/C++ (i.e., multilanguage) programs illustrates the safety and transparency advantages of the approach. Comparing the performance of the Java/C++ programs produced by Exu to the same set of programs developed using IDL-based approaches provides preliminary evidence of the performance advantages of Exu",2001,0, 349,Availability requirement for fault management server,"In this paper, we examine the availability requirement for the fault management server in high-availability communication systems. According to our study, we find that the availability of the fault management server does not need to be 99.999% in order to guarantee a 99.999% system availability as long as the fail-safe ratio (the probability that the failure of the fault management server will not bring the system down) and the fault coverage ratio (the probability that the failure in the system can be detected and recovered by the fault management server) are sufficiently high. Tradeoffs can be made among the availability of the fault management server, the fail-safe ratio and the fault coverage ratio to optimize system availability. A cost-effective design for the fault management server is proposed in this paper",2001,0, 350,Scenario-based functional regression testing,"Regression testing has been a popular quality-assurance technique. Most regression testing techniques are based on code or software design. This paper proposes a scenario-based functional regression testing, which is based on end-to-end (E2E) integration test scenarios. The test scenarios are first represented in a template model that embodies both test dependency and traceability. 
By using test dependency information, one can obtain a test slicing algorithm to detect the scenarios that are affected and are thus candidates for regression testing. By using traceability information, one can find affected components and their associated test scenarios and test cases for regression testing. With the same dependency and traceability information one can use the ripple effect analysis to identify all directly or indirectly affected scenarios, and thus the set of test cases can be selected for regression testing. This paper also provides several alternative test-case selection approaches and a hybrid approach to meet various requirements. A web-based tool has been developed to support these regression testing tasks",2001,0, 351,Coupling of design patterns: common practices and their benefits,"Object-oriented (OO) design patterns define collections of interconnected classes that serve a particular purpose. A design pattern is a structural unit in a system built out of patterns, not unlike the way a function is a structural unit in a procedural program or a class is a structural unit in an OO system designed without patterns. When designers treat patterns as structural units, they become concerned with issues such as coupling and cohesion at a new level of abstraction. We examine the notion of pattern coupling to classify how designs may include coupled patterns. We find many examples of coupled patterns; this coupling may be ""tight"" or ""loose"", and provides both benefits and costs. We qualitatively assess the goodness of pattern coupling in terms of effects on maintainability, factorability, and reusability when patterns are coupled in various ways",2001,0, 352,Calculation of deadline missing probability in a QoS capable cluster interconnect,"The growing use of clusters in diverse applications, many of which have real-time constraints, requires Quality-of-Service (QoS) support from the underlying cluster interconnect. In this paper we propose an analytical model that captures the characteristics of a QoS capable wormhole router which is the basic building block of cluster networks. The model captures the behavior of integrated traffic in a cluster and computes the average deadline missing probability for real-time traffic. The cluster interconnect, considered here, is a hypercube network. Comparison of Deadline Missing Probability (DMP) using the proposed model with that of the simulation shows that our analytical model is accurate and useful",2001,0, 353,Dependability under malicious agreement in N-modular redundancy-on-demand systems,"In a multiprocessor under normal loading conditions, idle processors offer a natural spare capacity. Previous work attempted to utilize this redundancy to overcome the limitations of classic diagnosability and modular redundancy techniques while providing significant fault tolerance. A common approach is task duplexing. The usefulness of this approach for critical applications, unfortunately, is seriously undermined by its susceptibility to agreement on faulty outcomes (malicious agreement). To assess dependability of duplexing under malicious agreement, we propose a stochastic model which dynamically profiles behavior in the presence of malicious faults. The model uses a policy referred to as NMR on demand (NMROD). Each task in a multiprocessor is duplicated, with additional processors allocated for recovery as needed.
NMROD relies on a fault model favoring response correctness over actual fault status, and integrates online repair to provide non-stop operation over an extended period",2001,0, 354,A fault-tolerant approach to network security,"Summary form only given. The increasing use of the Internet, especially for internal and business-to-business applications, has resulted in the need for increased security for all networked systems to avoid unauthorized access and use. A failure of network security can effectively close the business; its availability is vital to operations. Vital functions such as firewalls and VPNs must remain in operation without loss of time for failover, without loss of data and must be able to be placed even at remote locations where support personnel may not be readily available. Network firewalls are the first, and often are the only, line of defense against an attack. However, the firewall can be a double-edged sword. In operation, the firewall protects the network from everything from Denial of Service attacks to the entry of known viruses and unauthorized intrusion. If the firewall fails, there are generally only two options: Leave the network open to all or shut down access by anyone. The default condition is to close everything off, but this can be as disastrous as leaving the network open. Due to the importance of the firewall, most leading firewall software provides some method of establishing a form of fail-over redundancy for high availability. Yet in most cases this means some form of clustering using a secondary system as a backup with specialty software to detect and respond to a failure of the primary firewall. Such a clustered approach introduces additional complexity when establishing and configuring the firewall and additional complexity when upgrading. It also adds dramatically to the cost, not only in the hardware for the firewall, but in additional software copies and in the expertise for clustering support software required to establish and maintain the cluster. The approach we will discuss examines the creation of network security based on a hardware approach to fault tolerance. This approach will dramatically reduce the system complexity, simultaneously eliminating the need for special clustering software and special expertise for configuring the system for the kind of continuous availability that is the objective of the network security application. In addition, because the hardware approach is something that is designed in from the inception of the system, there are additional advantages. The fault tolerance is not an afterthought, but rather the purpose of the hardware, meaning that the system can be made to function very smoothly with very little administration. Failure of a part of the system is seamlessly recovered by the redundant elements, without loss of data in memory or loss of state for the system. In sum, this paper discusses the ability to create network security that reaches the standard of being continuously available, what is often referred to as the ""Holy Grail of reliability,"" 99.999% uptime",2001,0, 355,Software cost estimation with incomplete data,"The construction of software cost estimation models remains an active topic of research. The basic premise of cost modeling is that a historical database of software project cost data can be used to develop a quantitative model to predict the cost of future projects.
One of the difficulties faced by workers in this area is that many of these historical databases contain substantial amounts of missing data. Thus far, the common practice has been to ignore observations with missing data. In principle, such a practice can lead to gross biases and may be detrimental to the accuracy of cost estimation models. We describe an extensive simulation where we evaluate different techniques for dealing with missing data in the context of software cost modeling. Three techniques are evaluated: listwise deletion, mean imputation, and eight different types of hot-deck imputation. Our results indicate that all the missing data techniques perform well with small biases and high precision. This suggests that the simplest technique, listwise deletion, is a reasonable choice. However, this will not necessarily provide the best performance. Consistent best performance (minimal bias and highest precision) can be obtained by using hot-deck imputation with Euclidean distance and a z-score standardization",2001,0, 356,Prioritizing test cases for regression testing,"Test case prioritization techniques schedule test cases for execution in an order that attempts to increase their effectiveness at meeting some performance goal. Various goals are possible; one involves rate of fault detection, a measure of how quickly faults are detected within the testing process. An improved rate of fault detection during testing can provide faster feedback on the system under test and let software engineers begin correcting faults earlier than might otherwise be possible. One application of prioritization techniques involves regression testing, the retesting of software following modifications; in this context, prioritization techniques can take advantage of information gathered about the previous execution of test cases to obtain test case orderings. We describe several techniques for using test execution information to prioritize test cases for regression testing, including: 1) techniques that order test cases based on their total coverage of code components; 2) techniques that order test cases based on their coverage of code components not previously covered; and 3) techniques that order test cases based on their estimated ability to reveal faults in the code components that they cover. We report the results of several experiments in which we applied these techniques to various test suites for various programs and measured the rates of fault detection achieved by the prioritized test suites, comparing those rates to the rates achieved by untreated, randomly ordered, and optimally ordered suites",2001,0, 357,"On comparisons of random, partition, and proportional partition testing","Early studies of random versus partition testing used the probability of detecting at least one failure as a measure of test effectiveness and indicated that partition testing is not significantly more effective than random testing. More recent studies have focused on proportional partition testing because a proportional allocation of the test cases (according to the probabilities of the subdomains) can guarantee that partition testing will perform at least as well as random testing. We show that this goal for partition testing is not a worthwhile one. Guaranteeing that partition testing has at least as high a probability of detecting a failure comes at the expense of decreasing its relative advantage over random testing. 
We then discuss other problems with previous studies and show that failure to include important factors (cost, relative effectiveness) can lead to misleading results",2001,0, 358,A probability-based approach of transaction consistency in mobile environments,"In mobile distributed databases, the communications between sites only provide weak connectivity. To improve the efficiency of transaction processing in mobile computers, lazy replication is used extensively. But this approach either doesn't guarantee serializability and consistency as needed by applications or imposes restrictions on placement of data and which data objects can be updated. The shortcomings make it difficult for lazy replication to be adaptive to the dynamic changes of network connection and configuration in mobile environments. In the paper, we propose a probability-based approach, which guarantees serializability and consistency. We adopt the quality of service specification and achieve transaction consistency dynamically through the collaboration between applications and systems. The probability-based approach is flexible and adaptive to mobile computing environments. The experimental results suggest that the probability-based approach outperforms ordinary lazy replication",2001,0, 359,The development of security system and visual service support software for on-line diagnostics,"Hitachi's CD-SEM achieves the highest tool availability in the industry. However, efforts to further our performance are continuously underway. The proposed on-line diagnostics system can allow senior technical staff to monitor and investigate tool status by connecting the equipment supplier and the device manufacturer sites through the Internet. The advanced security system ensures confidentiality by firewalls, digital certification, and advanced encryption algorithms to protect device manufacturer data from unauthorized access. Service support software, called DDS (defective part diagnosis support system), will analyze the status of mechanical, evacuation, and optical systems. Its advanced overlay function on a timing chart identifies failed components in the tool and allows on-site or remote personnel to predict potential failures prior to their occurrence. Examples of application show that the proposed system is expected to reduce repair time, improve availability and lower cost of ownership",2001,0, 360,Top 10 list [software development],"Software's complexity and accelerated development schedules make avoiding defects difficult. We have found, however, that researchers have established objective and quantitative data, relationships, and predictive models that help software developers avoid predictable pitfalls and improve their ability to predict and control efficient software projects. The article presents 10 techniques that can help reduce the flaws in your code",2001,0, 361,Effect of code coverage on software reliability measurement,"Existing software reliability-growth models often over-estimate the reliability of a given program. Empirical studies suggest that the over-estimations exist because the models do not account for the nature of the testing. Every testing technique has a limit to its ability to reveal faults in a given system. Thus, as testing continues in its region of saturation, no more faults are discovered and inaccurate reliability-growth phenomena are predicted from the models.
This paper presents a technique intended to solve this problem, using both time and code coverage measures for the prediction of software failures in operation. Coverage information collected during testing is used only to consider the effective portion of the test data. Execution time between test cases, which neither increases code coverage nor causes a failure, is reduced by a parameterized factor. Experiments were conducted to evaluate this technique, on a program created in a simulated environment with simulated faults, and on two industrial systems that contained tens of ordinary faults. Two well-known reliability models, Goel-Okumoto and Musa-Okumoto, were applied to both the raw data and to the data adjusted using this technique. Results show that over-estimation of reliability is properly corrected in the cases studied. This new approach has potential, not only to achieve more accurate applications of software reliability models, but to reveal effective ways of conducting software testing",2001,0, 362,Signal processing techniques for Diacoustic(R) analysis of mechanical systems,"It has long been understood that mechanical failures, both extant and incipient, can often be detected by the human ear. A good mechanic can spot many kinds of trouble just by listening and will have a stethoscope in his or her tool box. Diacoustic(R) analysis is a method of achieving the same result, more reliably, using advanced signal processing algorithms. An overall diagnostic system consists of both hardware and software. Depending on the application, the Diacoustic(R) functions will be integrated with other sensors and software for their analysis. For example, temperatures and pressures are frequently indicative of machine health. Modern vehicles use gas sensors (such as the oxygen sensor in an automobile) to ascertain combustion efficiency. All other such indicia must be included with Diacoustic(R) techniques to comprise a complete failure detection and warning system. In this paper the author addresses the Diacoustic(R) functions only",2001,0, 363,An analysis of the gap between the knowledge and skills learned in academic software engineering course projects and those required in real: projects,"This paper describes how the Software Engineering Body of Knowledge (SWEBOK) can be used as a guide to assess and improve software engineering courses. A case study is presented in which the guide is applied to a typical undergraduate software engineering course. The lessons learned are presented, which the authors believe are generalizable to comparable courses taught at many academic institutions. A novel approach involving large-scale software project simulation is also presented as a way to overcome some of the course deficiencies identified by the guide",2001,0, 364,Automated video chain optimization,"Video processing algorithms found in complex video appliances such as television sets and set top boxes exhibit an interdependency that makes it difficult to predict the picture quality of an end product before it is actually built. This quality is likely to improve when algorithm interaction is explicitly considered. Moreover, video algorithms tend to have many programmable parameters, which are traditionally tuned in manual fashion. Tuning these parameters automatically rather than manually is likely to speed up product development.
We present a methodology that addresses these issues by means of a genetic algorithm that, driven by a novel objective image quality metric, finds high-quality configurations of the video processing chain of complex video products",2001,0,365 365,Automated video chain optimization,"Video processing algorithms found in complex video appliances such as television sets and set top boxes exhibit an interdependency that makes it difficult to predict the picture quality of an end product before it is actually built. This quality is likely to improve when algorithm interaction is explicitly considered. Moreover, video algorithms tend to have many programmable parameters, which are traditionally tuned in manual fashion. Tuning these parameters automatically rather than manually is likely to speed up product development. We present a methodology that addresses these issues by means of a genetic algorithm that, driven by a novel objective image quality metric, finds high-quality configurations of the video processing chain of complex video products.",2001,0, 366,A comparison of algorithm-based fault tolerance and traditional redundant self-checking for SEU mitigation,"The use of an algorithmic, checksum-based ""EDAC"" (error detection and correction) technique for matrix multiply operations is compared with the more traditional redundant self-checking hardware and retry approach for mitigating single event upset (SEU) or transient errors in soft, radiation tolerant signal processing hardware. Compared with the self-checking approach, the check-sum based EDAC technique offers a number of advantages including lower size, weight, power, and cost. In a manner similar to the SECDED (single error correction/double error detection) EDAC technique commonly used on memory systems, the checksum-based technique can detect and correct errors on the same processing cycle, reducing transient error recovery latency and significantly improving system availability. The paper compares the checksum-based technique with the self-checking technique in terms of failure rates, upset rates, coverage, percentage overhead, detection latency, recovery latency, size, weight, power, and cost. The paper also looks at the percentage overhead of the checksum-based technique, which decreases as the size of the matrix increases",2001,0, 367,Case study: medical Web service for the automatic 3D documentation for neuroradiological diagnosis,"The case study presents a medical Web service for the automatic analysis of CTA (computer tomography angiography) datasets. It aims at the detection and evaluation of intracranial aneurysms which are malformations of cerebral blood vessels. To obtain a standardized 3D visualization, digital videos are automatically generated. The time-consuming video production caused by the manual delineation of structures, software based volume rendering, and the interactive definition of an optimized camera path is considerably improved with a fully automatic strategy. Therefore, a previously suggested approach (C. Rezk-Salama, 2000) is applied which uses an optimized transfer function as a template and automatically adapts it to an individual dataset. Furthermore, we introduce hardware-accelerated morphologic filtering in order to detect the location of mid-size and giant aneurysms. The actual generation of the video is finally integrated into a hardware accelerated off-screen rendering process based on 3D texture mapping, ensuring fast visualization of high quality.
Overall, clinical routine can be considerably assisted by providing a Web-based service combining automatic detection and standardized visualization.",2001,0, 368,Analysis of power system transients using wavelets and Prony method,"Transients resulting from switching of capacitor banks in electrical distribution systems affect power quality. The transient overvoltages can theoretically reach peak phase to earth values in the order of 2.0 p.u. High current transients can reach values up to ten times the capacitor nominal current with a duration of several milliseconds. Another severe operating condition is the switching on a second capacitor bank connected to the same bus. In this work, the characteristics of the transients are analyzed. The time of the beginning of a transient process is detected using a wavelet transform. The frequencies of transient components have been investigated applying the Fourier technique and the Prony model. The investigations show the advantages of the methods based on the Prony model over the Fourier technique. A distribution system was simulated using the EMTP software",2001,0, 369,Electrical integration assessment of wind turbines into industrial power systems: the case of a mining unit,"Onsite diesel-based power generation is a common practice within the industry, not just for supplying private demands but also as a solution to power quality problems. Although diesel generation has benefits, the cost of running such systems is rather high. In favoured locations, especially geographically-remote sites, wind-powered generation is an attractive, cost-effective option. This paper focusses on the development of individual generator models and some simulation results to assess the electrical integration of wind turbines into industrial power systems. An insight into the assessment procedure and the proposed case study are also introduced",2001,0, 370,Design and evaluation of context-dependent protective relaying approach,"This paper introduces a new concept of robust protective relays based on a unique type of self-organized neural network. An advanced approach for protective relay testing and evaluation is presented as well. The proposed relaying solution detects and subsequently classifies the faults. A new interactive simulation environment based on MATLAB is selected as the main software environment for synthesis and evaluation of complex protection algorithms. Other application programs may be connected with MATLAB and used for simulation of specific power system faults and events",2001,0, 371,Dynamic simulator for studying WCDMA based hierarchical cell structures,"A dynamic radio network simulator is implemented for studying WCDMA based hierarchical cell structures. The simulator allows estimation of capacity and quality of service related issues in a two-layer network (microcells and macrocells). The input to the simulator is base station and mobile station information and its output is presented as the blocking and dropping probabilities, handoff rate and capacity of the assumed network. Both uplink and downlink are considered. As an example, the impact of between-layer handover on the capacity is investigated. The whole simulator is based entirely on Visual C++ software",2001,0, 372,Accelerating learning from experience: avoiding defects faster,"All programmers learn from experience. A few are rather fast at it and learn to avoid repeating mistakes after once or twice. Others are slower and repeat mistakes hundreds of times.
Most programmers' behavior falls somewhere in between: They reliably learn from their mistakes, but the process is slow and tedious. The probability of making a structurally similar mistake again decreases slightly during each of some dozen repetitions. Because of this, a programmer often takes years to learn a certain rule, positive or negative, about his or her behavior. As a result, programmers might turn to the personal software process (PSP) to help decrease mistakes. We show how to accelerate this process of learning from mistakes for an individual programmer, no matter whether learning is currently fast, slow, or very slow, through defect logging and defect data analysis (DLDA) techniques",2001,0, 373,A dynamic buffer management scheme based on rate estimation in packet-switched networks,"While traffic volume of real-time applications is rapidly increasing, current routers do not guarantee minimum QoS values of fairness and they drop packets in a random fashion. If routers provide a minimum QoS, resulting in less delay, reduced delay-jitter, more fairness, and smooth sending rates, TCP-friendly rate control (TFRC) can be adopted for real-time applications. We propose a dynamic buffer management scheme that meets the requirements described above, and can be applied to TCP flows and to data flows for the transfer of real-time applications. The proposed scheme consists of a virtual threshold function, an accurate and stable per-flow rate estimation, and a per-flow exponential drop probability. We discuss how this scheme motivates real-time applications to adopt TCP-friendly rate control",2001,0, 374,Measuring voice readiness of local area networks,"It is well known that company intranets are growing into ubiquitous communications media for everything. As a consequence, network traffic is notoriously dynamic and unpredictable. In most scenarios, the data network requires tuning to achieve acceptable quality for voice integration. This paper introduces a performance measurement method based on widely used IP protocol elements, which allows measurement of network performance criteria to predict the voice transmission feasibility of a given local area network. The measurement neither depends on special VoIP (Voice over IP) equipment nor requires network monitoring hardware. Rather, it uses special payload samples to detect unloaded network conditions to receive reference values. These samples are followed by a typical VoIP application payload to obtain real-world measurement conditions. We successfully validate our method within a local area network and present all captured values that describe important aspects of voice quality",2001,0, 375,FedEx - a fast bridging fault extractor,"Test pattern generation and diagnosis algorithms that target realistic bridging faults must be provided with a realistic fault list. In this work we describe FedEx, a bridging fault extractor that extracts a circuit from the mask layout, identifies the two-node bridges that can occur, their locations, layers, and relative probability of occurrence. Our experimental results show that FedEx is memory efficient and fast",2001,0, 376,A validation fault model for timing-induced functional errors,"The violation of timing constraints on signals within a complex system can create timing-induced functional errors which alter the value of output signals. These errors are not detected by traditional functional validation approaches because functional validation does not consider signal timing.
Timing-induced functional errors are also not detected by traditional timing analysis approaches because the errors may affect output data values without affecting output signal timing. A timing fault model, the Mis-Timed Event (MTE) fault model, is proposed to model timing-induced functional errors. The MTE fault model formulates timing errors in terms of their effects on the lifespans of the signal values associated with the fault. We use several examples to evaluate the MTE fault model. MTE fault coverage results show that it efficiently captures an important class of errors that are not targeted by other metrics",2001,0, 377,A software methodology for detecting hardware faults in VLIW data paths,"The proposed methodology aims at providing concurrent hardware fault detection properties in data paths for VLIW processor architectures. The approach, carried out on the application software, consists of the introduction of additional instructions for checking the correctness of the computation with respect to failures in one of the data path functional units. The paper presents the methodology and its application to a set of media benchmarks",2001,0,956 378,Procedure call duplication: minimization of energy consumption with constrained error detection latency,"This paper presents a new software technique for detecting transient hardware errors. The objective is to guarantee data integrity in the presence of transient errors and to minimize energy consumption at the same time. Basically, we duplicate computations and compare their results to detect errors. There are three choices for duplicate computations: (1) duplicating every statement in the program and comparing their results, (2) re-executing procedures with duplicated procedure calls and comparing the results, (3) re-executing the whole program and comparing the final results. Our technique is the combination of (1) and (2): Given a program, our technique analyzes the procedure call behavior of the program and determines which procedures should have duplicated statements (choice (1)) and which procedure calls should be duplicated (choice (2)) to minimize energy consumption while controlling error detection latency constraints. Then, our technique transforms the original program into a program that is able to detect errors with reduced energy consumption by re-executing the statements or procedures. In benchmark program simulation, we found that our technique saves over 25% of the required energy on average compared to previous techniques that do not take energy consumption into consideration",2001,0, 379,Development of the special software tools for the defect/fault analysis in the complex gates from standard cell library,"The development of a special software tool named FIESTA (Faults Identification and Estimation of Test Ability) for the defect/fault analysis in complex gates from an industrial cell library is considered. This software tool is intended for test developers and IC designers and is aimed at: a) probabilistic-based analysis of CMOS physical defects in VLSI circuits; b) facilitation of the work on development of hierarchical probabilistic automatic generation of test patterns; c) improvement of the layout in order to decrease the influence of spot defects on IC manufacturability. We consider the principal concepts of the FIESTA development.
They are based on the developed approaches to 1) the identification and estimation of the probability of actual faulty functions resulting from shorts and opens caused by spot defects in the conductive layers of the IC layout, and to 2) the evaluation of the effectiveness/usefulness of the test vector components in fault detection",2001,0, 380,Genetic programming model for software quality classification,"We apply genetic programming techniques to build a software quality classification model based on the metrics of software modules. The model we built attempts to distinguish the fault-prone modules from non-fault-prone modules using genetic programming (GP). These GP experiments were conducted with a random subset selection for GP in order to avoid overfitting. We then use the whole fit data set as the validation data set to select the best model. We demonstrate through two case studies that the GP technique can achieve good results. Also, we compared GP modeling with logistic regression modeling to verify the usefulness of GP",2001,0, 381,The SASHA architecture for network-clustered web servers,"We present the Scalable, Application-Space, Highly-Available (SASHA) architecture for network-clustered web servers that demonstrates high performance and fault tolerance using application-space software and Commercial-Off-The-Shelf (COTS) hardware and operating systems. Our SASHA architecture consists of an application-space dispatcher, which performs OSI layer 4 switching using layer 2 or layer 3 address translation; application-space agents that execute on server nodes to provide the capability for any server node to operate as the dispatcher; a distributed state-reconstruction algorithm; and a token-based communications protocol that supports self-configuring, detecting and adapting to the addition or removal of servers. The SASHA architecture of clustering offers a flexible and cost-effective alternative to kernel-space or hardware-based network-clustered servers with performance comparable to kernel-space implementations",2001,0, 382,A probabilistic constructive approach to optimization problems,"We propose a new optimization paradigm for solving intractable combinatorial problems. The technique, named Probabilistic Constructive (PC), combines the advantages of both constructive and probabilistic algorithms. The constructive aspect provides relatively short runtime and makes the technique amenable to the inclusion of insights through heuristic rules. The probabilistic nature facilitates a flexible trade-off between runtime and the quality of the solution. In addition to presenting the generic technique, we apply it to the Maximal Independent Set problem. Extensive experimentation indicates that the new approach provides very attractive trade-offs between the quality of the solution and runtime, often outperforming the best previously published approaches.",2001,0, 383,Probability and agents,"To make sense of the information that agents gather from the Web, they need to reason about it. If the information is precise and correct, they can use engines such as theorem provers to reason logically and derive correct conclusions. Unfortunately, the information is often imprecise and uncertain, which means they will need a probabilistic approach. More than 150 years ago, George Boole presented the logic that bears his name. There is concern that classical logic is not sufficient to model how people do or should reason.
Adopting a probabilistic approach in constructing software agents and multiagent systems simplifies some thorny problems and exposes some difficult issues that you might overlook if you used purely logical approaches or (worse!) let procedural matters monopolize design concerns. Assessing the quality of the information received from another agent is a major problem in an agent system. The authors describe Bayesian networks and illustrate how you can use them for information quality assessment",2001,0, 384,"Edge, junction, and corner detection using color distributions","For over 30 years (1970-2000) researchers in computer vision have been proposing new methods for performing low-level vision tasks such as detecting edges and corners. One key element shared by most methods is that they represent local image neighborhoods as constant in color or intensity with deviations modeled as noise. Due to computational considerations that encourage the use of small neighborhoods where this assumption holds, these methods remain popular. The research presented models a neighborhood as a distribution of colors. The goal is to show that the increase in accuracy of this representation translates into higher-quality results for low-level vision tasks on difficult, natural images, especially as neighborhood size increases. We emphasize large neighborhoods because small ones often do not contain enough information. We emphasize color because it subsumes gray scale as an image range and because it is the dominant form of human perception. We discuss distributions in the context of detecting edges, corners, and junctions, and we show results for each",2001,0, 385,A framework for assessing the use of third-party software quality assurance standards to meet FDA medical device software process control guideline's,"The proliferation of medical device software (MDS) potentially increases the risks of patient injury from software defects. The US Food and Drug Administration (FDA) in 1998 updated its MDS regulations, moving away from a product-based regulatory approach toward one more focused on quality assurance processes. However, what constituted acceptable software quality assurance (SQA) processes and whether regulations could be met by the use of third-party standards was not specified. The FDA has implicitly sanctioned using third-party SQA audits in submissions for accelerated review of modifications of existing MDS, but it has neither approved nor rejected their use in submissions for new MDS approval. Suppliers must assess whether adopting a third-party SQA standard assures full or only partial conformance with FDA requirements because they remain potentially liable for damages resulting from software defects. However, substantial differences in the philosophy and organization of FDA requirements and third-party standards make this assessment difficult. This research develops a framework to assess whether third-party SQA standards can meet FDA requirements and then employs the framework to determine if ISO 9000-3 or the Software Engineering Institute's Capability Maturity Model is sufficient to meet such requirements. The authors' research analyzes four SQA categories specified by the FDA guidelines: process management, requirements specification, design control, and change control. 
Analysis indicates that while neither third-party SQA standard by itself fully meets FDA requirements, either standard is worth adopting and is approximately equivalent in its usefulness",2001,0, 386,Detecting heap smashing attacks through fault containment wrappers,"Buffer overflow attacks are a major cause of security breaches in modern operating systems. Not only are overflows of buffers on the stack a security threat, overflows of buffers kept on the heap can be too. A malicious user might be able to hijack the control flow of a root-privileged program if the user can initiate an overflow of a buffer on the heap when this overflow overwrites a function pointer stored on the heap. The paper presents a fault-containment wrapper which provides effective and efficient protection against heap buffer overflows caused by C library functions. The wrapper intercepts every function call to the C library that can write to the heap and performs careful boundary checks before it calls the original function. This method is transparent to existing programs and does not require source code modification or recompilation. Experimental results on Linux machines indicate that the performance overhead is small",2001,0, 387,Assessing inter-modular error propagation in distributed software,"With the functionality of most embedded systems based on software (SW), interactions amongst SW modules arise, resulting in error propagation across them. During SW development, it would be helpful to have a framework that clearly demonstrates the error propagation and containment capabilities of the different SW components. In this paper, we assess the impact of inter-modular error propagation. Adopting a white-box SW approach, we make the following contributions: (a) we study and characterize the error propagation process and derive a set of metrics that quantitatively represents the inter-modular SW interactions, (b) we use a real embedded target system used in an aircraft arrestment system to perform fault-injection experiments to obtain experimental values for the metrics proposed, (c) we show how the set of metrics can be used to obtain the required analytical framework for error propagation analysis. We find that the derived analytical framework establishes a very close correlation between the analytical and experimental values obtained. The intent is to use this framework to be able to systematically develop SW such that inter-modular error propagation is reduced by design",2001,0, 388,Why is it so hard to predict software system trustworthiness from software component trustworthiness?,"When software is built from components, nonfunctional properties such as security, reliability, fault-tolerance, performance, availability, safety, etc. are not necessarily composed. The problem stems from our inability to know a priori, for example, that the security of a system composed of two components can be determined from knowledge about the security of each. This is because the security of the composite is based on more than just the security of the individual components. There are numerous reasons for this. The article considers only the factors of component performance and calendar time. 
It is concluded that no properties are easy to compose and some are much harder than others",2001,0, 389,An extension of Integrated Services with active networking for providing quality of service in networks with long-range dependent traffic,"Although today's network capacity is increasing exponentially, new applications are demanding higher and higher bandwidth. The available bandwidth always seems to be less than the new applications require. This tendency results in congested networks and packet losses, and we can expect this to continue into the foreseeable future. Congestion can be caused by several factors. The most dangerous cause of congestion is the burstiness of the network traffic. Recent results make it evident that high-speed network traffic is more bursty, and its variability cannot be predicted, as was assumed previously. It has been shown that network traffic has similar statistical properties on many time scales. Traffic that is bursty on many or all time scales can be described statistically using the notion of long-range dependency. Long-range-dependent traffic has observable bursts on all time scales. Factors such as traffic burstiness make providing quality of service (QoS) in high-speed networks increasingly important. QoS implies mechanisms to avoid congestion by allocating network resources optimally, rather than continually increasing network capacities. The objective of our paper is to present an extension of a QoS mechanism called Integrated Services (IntServ) with active networking in networks with long-range-dependent traffic",2001,0, 390,A neural network based fault detection and identification scheme for pneumatic process control valves,"This paper outlines a method for detection and identification of actuator faults in a pneumatic process control valve using a neural network. First, the valve signature and dynamic error band tests, used by specialists to determine valve performance parameters, are carried out for a number of faulty operating conditions. A commercially available software package is used to carry out the diagnostic tests, thus eliminating the need for additional instrumentation of the valve. Next, the experimentally determined valve performance parameters are used to train a multilayer feedforward network to successfully detect and identify incorrect supply pressure, actuator vent blockage, and diaphragm leakage faults",2001,0, 391,Wavelet transform approach to distance protection of transmission lines,"An application of the wavelet transform to digital distance protection for transmission lines is presented in this paper. Fault simulation is carried out using the Power System Computer Aided Design program (PSCAD). The simulation results are used as an input to the proposed wavelet transform protection-relaying technique. The technique is based on decomposing the voltage and current signals at the relay location using wavelet filter banks (WFB). From the decomposed signals, faults can be detected and classified. Also, the fundamental voltage and current phasors, which are needed to calculate the impedance to the fault point, can be estimated. Results demonstrate that wavelets have high potential in distance relaying.",2001,0, 392,Dynamic networking: architecture and prototype systems,"In this paper, we propose a new architecture for global communication networks, the dynamic networking architecture.
The dynamic functions enhance the capabilities of communication networks to deal with various changes detected by human users, applications, and the networked environment. In this architecture, a new functional layer called the flexible network layer (FNL) is introduced between the application layer and the transport layer of the global communication networks. To realize the FNL, we adopt an agent framework to develop and manage the various components and related knowledge of the agent-based middleware of the FNL. We explain the experimental applications of the FNL to discuss the characteristics of the proposed architecture",2001,0, 393,Combined use of intelligent partial discharge analysis in evaluating high voltage dielectric condition,"This paper describes the results of synthesised high voltage impulse tests, conducted on surrogate dielectric samples. The tests, conducted under laboratory conditions, were performed using contoured electrodes submersed under technical grade insulating oil. An escalating level of artificial degradation within surrogate samples was assessed and correlated against the magnitude and frequency of events. Withstand of partial discharge activity up to the point of insulation breakdown was observed using a conventional elliptical display partial discharge detector. Measurements of PD activity were simultaneously captured by a virtual scope relaying data array captures to a desktop computer. The captured data arrays were duly processed by an artificial neural network program, the net result of which indicated harmony between human-guided opinion and the software aptitude. This paper describes work currently being undertaken for the identification and diagnosis of faults in high voltage dielectrics in furthering development of AI techniques",2001,0, 394,Predictive distribution reliability analysis considering post fault restoration and coordination failure,"Calculation of predicted distribution reliability indexes can be implemented using a distribution analysis model and the algorithms defined by Distribution System Reliability Handbook, EPRI Project 1356-1 Final Report. The calculation of predicted reliability indexes is fairly straightforward until post fault restoration and coordination failure are included. This paper presents the methods used to implement predictive reliability with consideration for post fault restoration and coordination failure into a distribution analysis software model",2001,0,467 395,"Design of integrated software for reconfiguration, reliability, and protection system analysis","Interdependencies among software components for distribution network reconfiguration, reliability and protection system analysis are considered. Software interface specifications are presented. Required functionalities of reconfiguration for restoration are detailed. Two algorithms for reconfiguration for restoration are reviewed and compared. Use of outage analysis data to locate circuit sections in need of reliability improvements and to track predicted improvements in reliability is discussed",2001,0, 396,A case study on reliability improvement of 10 worst performing feeders in Niagara Mohawk Power Corp. (NMPC) service territory,"This case study demonstrates the reliability improvement initiative taken by Niagara Mohawk Power Corporation (NMPC) to analyze 10 of their worst performing feeders. The CYMDIST-RAM (Reliability Assessment Module) from CYME International Inc. was used for the analysis. The program computes system indices (SAIFI, SAIDI, CAIDI etc.)
and load point indices (interruption frequency, outage duration etc.) for each zone on the feeder, based on the failure rates and repair times input by the user. Load indices of a zone reflect the trouble areas within a feeder, and are useful for micro-analysis. Variation of different indices along a feeder may be displayed as a color code. Indices may also be reported on the one-line diagram, and as reports in various formats (spreadsheet, Excel, ASCII etc.). Corrective measures such as device addition/relocation, tree-trimming, and fault locators were attempted, and the reliability improvement was assessed in terms of saved CHI (customer-hrs interrupted). Different projects were ranked according to the cost-benefit factor (investment/CHI saved)",2001,0, 397,Causal reasoning for human supervised process reconfiguration,"As safety is becoming an essential concern in industrial automation, an emerging area in automatic control is fault tolerant control. Within the various techniques, reconfiguration employs the redundancy in the plant and its control to make intelligent software that monitors the behavior of the whole. This paper analyses the reconfiguration problem from the point of view of large-scale processes under the responsibility of human operators. As analytical models cannot be envisaged for representing a process with hundreds of variables and several unpredictable operating modes, reconfiguration is proposed to rely on a simple qualitative model. This model represents the cause-effect relations between variables under the form of a directed graph. The graph is backward searched off line to find the action means on a variable. Then the most relevant remedial actions are selected online after a fault is detected. An example of a nuclear process is used to illustrate the method",2001,0, 398,Flow analysis to detect blocked statements,"In the context of software quality assessment, the paper proposes two new kinds of data which can be extracted from source code. The first, definitely blocked statements, can never be executed because preceding code prevents the execution of the program. The other data, called possibly blocked statements, may be blocked by blocking code. The paper presents original flow equations to compute definitely and possibly blocked statements in source code. The experimental context is described and results are shown and discussed. Suggestions for further research are also presented",2001,0, 399,Using code metrics to predict maintenance of legacy programs: a case study,"The paper presents an empirical study on the correlation of simple code metrics and maintenance necessities. The goal of the work is to provide a method for the estimation of maintenance in the initial stages of outsourcing maintenance projects, when the maintenance contract is being prepared and there is very little available information on the software to be maintained. The paper shows several positive results related to the mentioned goal",2001,0, 400,Defect prevention through defect prediction: a case study at Infosys,"This paper is an experience report of a software process model which will help in preventing defects through defect prediction. The paper gives a vivid description of how the model aligns itself to business goals and also achieves various quality and productivity goals by predicting the number and type of defects well in advance and taking corresponding preventive action to reduce the occurrence of defects.
Data have been collected from the case study of a live project at INFOSYS Technologies Limited, India. A project team always aims at zero-defect software, or a quality product with as few defects as possible. To deliver defect-free software, it is imperative that the maximum number of defects is captured and fixed during development, before delivery to the customer. In other words, our process model should help us detect the maximum number of defects possible through various Quality Control activities. Also, the process model should be able to predict defects and should help us to detect them quite early. Defects can be reduced in two ways: (i) by detecting them at each and every stage of the project life cycle, or (ii) by preventing them from occurring",2001,0, 401,Modeling clones evolution through time series,"The actual effort to evolve and maintain a software system is likely to vary depending on the amount of clones (i.e., duplicated or slightly different code fragments) present in the system. This paper presents a method for monitoring and predicting clone evolution across subsequent versions of a software system. Clones are first identified using a metric-based approach; then they are modeled in terms of time series, identifying a predictive model. The proposed method has been validated with an experimental activity performed on 27 subsequent versions of mSQL, a medium-size software system written in C. The time span of the analyzed mSQL releases covers four years, from May 1995 (mSQL 1.0.6) to May 1999 (mSQL 2.0.10). For any given software release, the identified model was able to predict the clone percentage of the subsequent release with an average error below 4%. A higher prediction error was observed only in correspondence with a major system redesign",2001,0, 402,Summary of dynamically discovering likely program invariants,"The dissertation dynamically discovering likely program invariants introduces dynamic detection of program invariants, presents techniques for detecting such invariants from traces, assesses the techniques' efficacy, and points the way for future research. Invariants are valuable in many aspects of program development, including design, coding, verification, testing, optimization, and maintenance. They also enhance programmers' understanding of data structures, algorithms, and program operation. Unfortunately, explicit invariants are usually absent from programs, depriving programmers and automated tools of their benefits. The dissertation shows how invariants can be dynamically detected from program traces that capture variable values at program points of interest. The user runs the target program over a test suite to create the traces, and an invariant detector determines which properties and relationships hold over both explicit variables and other expressions. Properties that hold over the traces and also satisfy other tests, such as being statistically justified, not being over unrelated variables, and not being implied by other reported invariants, are reported as likely invariants. Like other dynamic techniques such as testing, the quality of the output depends in part on the comprehensiveness of the test suite. If the test suite is inadequate, then the output indicates how, permitting its improvement. Dynamic analysis complements static techniques, which can be made sound but for which certain program constructs remain beyond the state of the art.
Experiments demonstrate a number of positive qualities of dynamic invariant detection and of a prototype implementation, Daikon. Invariant detection is accurate-it rediscovers formal specifications-and useful-it assists programmers in programming tasks. It runs quickly and produces output of modest size. Test suites found in practice tend to be adequate for dynamic invariant detection",2001,0, 403,Bayesian analysis of software cost and quality models,"Due to the pervasive nature of software, software-engineering practitioners have continuously expressed their concerns over their inability to accurately predict the cost, schedule and quality of a software product under development. Thus, one of the most important objectives of the software engineering community has been to develop useful models that constructively explain the software development lifecycle and accurately predict the cost, schedule and quality of developing a software product. Most of the existing parametric models have been empirically calibrated to actual data from completed software projects. The most commonly used technique for empirical calibration has been the popular classical multiple regression approach. This approach imposes a few restrictions often violated by software engineering data and has resulted in the development of inaccurate empirical models that do not perform very well. The focus of this dissertation is to explain the drawbacks of the multiple regression approach for software engineering data and discuss the Bayesian approach which alleviates a few of the problems faced by the multiple regression approach",2001,0, 404,Dynamic and static views of software evolution,"In addition to managing day-to-day maintenance, information system managers need to be able to predict and plan the longer-term evolution of software systems on an objective, quantified basis. Currently this is a difficult task, since the dynamics of software evolution, and the characteristics of evolvable software are not clearly understood. In this paper we present an approach to understanding software evolution. The approach looks at software evolution from two different points of view. The dynamic viewpoint investigates how to model software evolution trends and the static viewpoint studies the characteristics of software artefacts to see what makes software systems more evolvable. The former will help engineers to foresee the actions to be taken in the evolution process, while the latter provides an objective, quantified basis to evaluate the software with respect to its ability to evolve and will help to produce more evolvable software systems",2001,0, 405,A graphical class representation for integrated black- and white-box testing,"Although both black- and white-box testing have the same objective, namely detecting faults in a program, they are often conducted separately. In our opinion, the reason is the lack of techniques and tools integrating both strategies, although an integration can substantially decrease testing costs. Specifically, an integrated technique can generate a reduced test suite, as single test cases can cover both specification and implementation at the same time. The paper proposes a new graphical representation of classes, which can be used for integrated class-level black-and white-box testing. Its distinguishing feature from existing representations is that each method of a class is shown from two perspectives, namely the specification and implementation view. 
Both the specification of a method and its implementation are represented as control flow graphs, which allows black- and white-box testing by structural techniques. Moreover, a test suite reduction technique has been developed for adjusting white-box test cases to black-box testing",2001,0, 406,Model integrated computing in robot control to synthesize real-time embedded code,"Manufacturing robots present a class of embedded systems with hard real-time constraints. On the one hand, controller software has to satisfy tight timing constraints and rigorous memory requirements. Especially nonlinear dynamics and kinematics models are vital to modern model-based controllers and trajectory planning algorithms. Often this is still realized by manually coding and optimizing the software, a labor-intensive and error-prone repetitive process. On the other hand, shorter design cycles and a growing number of customer-specific robots demand more flexibility, not just in modeling. This paper presents a model-integrated computing approach to automated code synthesis of dynamics models that satisfies the harsh demands by including domain- and problem-specific constraints prescribed by the robotics application. It is shown that the use of such tailored formalisms leads to very efficient embedded software, competitive with the hand-optimized alternative. At the same time, it combines flexibility in model specification and usage with the potential for dynamic adaptation and reconfiguration of the model",2001,0, 407,"Framework for modeling software reliability, using various testing-efforts and fault-detection rates","This paper proposes a new scheme for constructing software reliability growth models (SRGM) based on a nonhomogeneous Poisson process (NHPP). The main focus is to provide an efficient parametric decomposition method for software reliability modeling, which considers both testing efforts and fault detection rates (FDR). In general, the software fault detection/removal mechanisms depend on previously detected/removed faults and on how testing efforts are used. From practical field studies, it is likely that we can estimate the testing-effort consumption pattern and predict the trends of FDR. A set of time-variable, testing-effort-based FDR models were developed that have the inherent flexibility of capturing a wide range of possible fault detection trends: increasing, decreasing, and constant. This scheme has a flexible structure and can model a wide spectrum of software development environments, considering various testing efforts. The paper describes the FDR, which can be obtained from historical records of previous releases or other similar software projects, and incorporates the related testing activities into this new modeling approach. The applicability of our model and the related parametric decomposition methods are demonstrated through several real data sets from various software projects. The evaluation results show that the proposed framework to incorporate testing efforts and FDR for SRGM has a fairly accurate prediction capability and it depicts the real-life situation more faithfully. This technique can be applied to a wide range of software systems",2001,0, 408,Dependability analysis of fault-tolerant multiprocessor systems by probabilistic simulation,"The objective of this research is to develop a new approach for evaluating the dependability of fault-tolerant computer systems. Dependability has traditionally been evaluated through combinatorial and Markov modelling.
These analytical techniques have several limitations, which can restrict their applicability. Simulation avoids many of the limitations, allowing for more precise representation of system attributes than feasible with analytical modelling. However, the computational demands of simulating a system in detail, at a low abstraction level, currently prohibit evaluation of high-level dependability metrics such as reliability and availability. The new approach abstracts a system at the architectural level, and employs life testing through simulated fault-injection to accurately and efficiently measure dependability. The simulation models needed to implement this approach are derived, in part, from the published results of computer performance studies and low-level fault-injection experiments. The developed probabilistic models of processor, memory and fault-tolerant mechanisms take into account such properties of real systems as error propagation, different failure modes, event dependency and concurrency. They have been integrated with a workload model and a statistical analysis module into a generalised software tool. The effectiveness of such an approach was demonstrated through the analysis of several multiprocessor architectures",2001,0, 409,To a problem on components operation of a distributed system for image processing and analysis,"The modern tasks of processing and analysis of large-scale arrays of graphics information demand real-time operation and considerable computational resources. The current approach to developing such systems does not meet the increasing requirements and does not exploit the capabilities of these computational resources. Classifications of such systems and their architectures are presented. The advantages and disadvantages of existing solutions are identified. New goals and tasks are considered, and research directions in the field are pointed out. An alternative system structure for large distributed graphical information array processing, modes of implementation of data exchange processes between separate components of this system, and methods of integration with existing information storage and visualization tools are proposed. The main component of the proposed system architecture is the intelligent computing core realizing the principle of an expert system. A distinctive feature of its operation is the use of a self-learning mode. This allows not only improving the quality of automatic processing and using expert knowledge, but also accumulating its own experience. Possible application areas of such a system are considered",2001,0, 410,Integration of remote sensing and geographic information system technology for monitoring changes in the Northwestern Blue Nile Region of Ethiopia,"Environmental degradation has been identified as a major problem in Ethiopia today. Inappropriate use of land management practices has decreased the country's arable and forest lands, drastically deteriorated soil and water quality and severely affected the biodiversity within the environment. Desertification, deforestation, and urbanization are believed to be the primary causes of the loss in biodiversity and global climate change. It is therefore necessary to assess, take inventory, and determine the effect of land use land cover (LULC) change on the environment in this region. Multi-date satellite imagery was obtained to quantify the changes that have occurred.
Integration of the results of the imagery analysis and GIS was used to define policies that encourage intelligent use of natural resources. The study site was the northwest part of Ethiopia surrounding the Blue Nile Region of Lake Tana. The primary objective of this project was to use remotely sensed data (i) to quantify the LULC change that has occurred over a 12-year period; (ii) identify the nature and spatial distribution of the change; and (iii) define a management approach that will prevent further environmental degradation. Landsat TM-5 and 7 imagery from 1987 and 1999, respectively, was acquired and each scene was georeferenced and radiometrically corrected. The images were processed using ERDAS Imagine 8.4 (ERDAS Inc., Atlanta, GA) Image Processing software. Comparing results of the unsupervised classification for 1987 and 1999, we observed a major loss of riparian forest along the bank of the Blue Nile River. It was also evident that a considerable amount of land was deforested, which may have contributed to the continuing soil loss from the highlands of Ethiopia",2001,0, 411,"A versatile C++ toolbox for model based, real time control systems of robotic manipulators","Model based technologies form the core of advanced robotic applications such as model predictive control and feedback linearization. More sophisticated models result in higher quality, but their use in embedded real-time control systems imposes strict requirements on timing, memory allocation, and robustness. To satisfy these constraints, the model implementation is often optimized by manual coding, an unwieldy and error-prone process. The paper presents an approach that exploits code synthesis from high-level, intuitive and convenient multi-body system (MBS) model descriptions. It relies on an object-oriented C++ library of MBS components tailored to the computations required in robot control such as forward and inverse kinematics, inverse dynamics, and Jacobians. Efficient model evaluation algorithms are developed that apply to multi-body tree structures as well as kinematic loops that are solved analytically for a certain class of loop structures",2001,0, 412,Software architecture for modular self-reconfigurable robots,"Modular, self-reconfigurable robots show the promise of great versatility, robustness and low cost. However, programming such robots for specific tasks, with hundreds of modules, each of which has multiple actuators and sensors, can be tedious and error-prone. The extreme versatility of the modular systems requires a new paradigm in programming. We present a software architecture for this type of robot, in particular the PolyBot, which has been developed through its third generation. The architecture, based on the properties of the PolyBot electro-mechanical design, features a multi-master/multi-slave structure in a multi-threaded environment, with three layers of communication protocols. The architecture is currently being implemented for the Motorola PowerPC using VxWorks",2001,0, 413,"The application of remote sensing, geographic information systems, and Global Positioning System technology to improve water quality in northern Alabama","Recently, the water quality status in northern Alabama has been declining due to urban and agricultural growth. Throughout the years, the application of remote sensing and geographic information system technology has undergone numerous modifications and revisions to enhance their ability to control, reduce, and estimate the origin of non-point source pollution.
Yet, there is still a considerable amount of uncertainty surrounding the use of this technology as well as its modifications. This research demonstrates how the application of remote sensing, geographic information system, and global positioning system technologies can be used to assess water quality in the Wheeler Lake watershed. In an effort to construct a GIS-based water quality database of the study area for future use, a land use cover map of the study area will be derived from LANDSAT Thematic Mapper (TM) imagery using ERDAS IMAGINE image processing software. A Digital Elevation Model of the Wheeler Lake watershed was also obtained from an Environmental Protection Agency BASINS database. Physical and chemical properties of water samples, including pH, Total Suspended Solids (TSS), Total Fecal Coliform (TC), Total Nitrogen (TN), Total Phosphorus (TP), Biological Oxygen Demand (BOD), Dissolved Oxygen (DO), and selected metal concentrations, were measured",2001,0, 414,Quantitative analysis of myocardial perfusion and regional left ventricular function from contrast-enhanced power modulation images,"Our goal was to test the feasibility of using power modulation, a new echocardiographic imaging technique, for combined quantitative assessment of myocardial perfusion and regional LV function. Coronary balloon occlusions were performed in 18 anesthetized pigs. Images were obtained during iv contrast infusion at baseline, during coronary occlusion and reperfusion, and analyzed using custom software. At each phase, regional myocardial perfusion was assessed by calculating mean pixel intensity and the rate of contrast replenishment following high-power ultrasound impulses. LV function was assessed by calculating regional fractional area change. All ischemic episodes caused detectable and reversible changes in perfusion and function. Perfusion defects were visualized in real time and confirmed by a significant decrease in pixel intensity in the LAD territory following balloon inflation and a reduced rate of contrast replenishment. Fractional area change significantly decreased in ischemic segments, and was restored with reperfusion. Power modulation allows simultaneous on-line assessment of myocardial perfusion and regional LV wall motion",2001,0, 415,Reliability of fault tolerant control systems: Part I,"The reliability analysis of fault-tolerant control systems is performed using Markov models. Reliability properties peculiar to fault-tolerant control systems are emphasized. As a consequence, coverage of failures through redundancy management can be severely limited. It is shown that in the early life of a system composed of highly reliable subsystems, the reliability of the overall system is affine with respect to coverage, and inadequate coverage induces dominant single-point failures. The utility of some existing software tools for assessing the reliability of fault tolerant control systems is also discussed",2001,0, 416,Abstracting from failure probabilities,"In fault-tolerant computing, dependability of systems is usually demonstrated by abstracting from failure probabilities (under simplifying assumptions on failure occurrences). In the specification framework Focus, we show under which conditions and to which extent this is sound: We use a specification language that is interpreted in the usual abstract model and in a probabilistic model. We give probability bounds showing the degree of faithfulness of the abstract model wrt. the probabilistic one.
These include cases where the usual assumptions are not fulfilled",2001,0, 417,Mathematical foundations of minimal cutsets,"Since their introduction in the reliability field, binary decision diagrams have proved to be the most efficient tool to assess Boolean models such as fault trees. Their success increases the need for sound mathematical foundations for the notions that are involved in reliability and dependability studies. This paper clarifies the mathematical status of the notion of minimal cutsets, which have a central role in fault-tree assessment. Algorithmic issues are discussed. Minimal cutsets are distinct from prime implicants, and they are of great interest from both a computational-complexity and a practical viewpoint. Implementation of BDD algorithms is explained. All of these algorithms are implemented in the Aralia software, which is widely used. These algorithms and their mathematical foundations were designed to efficiently assess a very large noncoherent fault tree that models the emergency shutdown system of a nuclear reactor",2001,0, 418,A high-level notation for developing network management applications: resource description and manipulation language,"The increasing size, distribution and heterogeneity of today's networks make their management very costly, inefficient and error-prone. We are seeking to replace labor-intensive network management with one that is software-intensive. This paper proposes a high-level notation, Resource Description and Manipulation Language (RDML), for facilitating the development of network management applications. As an important aspect of the Architecture of Self-Management Distributed System (ASMDS), RDML is defined as a purely declarative language and serves to describe a Virtual Managed Object Class (VMOC), including its attributes and available methods. A VMOC can unite object attributes from different sources and methods enforced on the object attributes into a virtual managed class. RDML bridges the gap between management applications and management data from a variety of resources. Because of the simple syntax and the strong descriptive ability of RDML, it is easy to define a VMOC in terms of management goals",2001,0, 419,Mobile location by time advance for GSM,"A method that employs the time advance for locating the position of a mobile phone is proposed. The advantage of the proposed technique is that it can work for the current system, such as GSM, without any change in hardware equipment. In this paper, we first add software to the mobile phone for detecting the time advance, power intensity, quality factor and so on. Then, these parameters can be used to estimate the position of the mobile handset. This raw data will be transmitted to the Operator Maintenance Center (OMC) for further processing and applications. The experimental results show that the proposed method can provide an accurate position to trace the mobile phone in GSM (Global System for Mobile).",2001,0, 420,Combinatorial designs in multiple faults localization for battlefield networks,We present an application of combinatorial designs and variance analysis to correlating events in the midst of multiple network faults. The network fault model is based on the probabilistic dependency graph that accounts for the uncertainty about the state of network elements. Orthogonal arrays help reduce the exponential number of failure configurations to a small subset on which further analysis is performed.
The preliminary results show that statistical analysis can pinpoint the probable causes of the observed symptoms with high accuracy and a significant level of confidence. An example demonstrates how multiple soft link failures are localized in MIL-STD 188-220's datalink layer to explain the end-to-end connectivity problems in the network layer. This technique can be utilized for networks operating in an unreliable environment such as wireless and/or military networks.,2001,0, 421,"RHIC insertion region, shunt power supply current errors",The Relativistic Heavy Ion Collider (RHIC) was commissioned in 1999 and 2000. RHIC requires power supplies to supply currents to highly inductive superconducting magnets. The RHIC Insertion Region contains many shunt power supplies to trim the current of different magnet elements in a large superconducting magnet circuit. Power supply current error measurements were performed during the commissioning of RHIC. Models of these power supply systems were produced to predict and improve these power supply current errors using the circuit analysis program MicroCap V by Spectrum Software (TM). Results of the power supply current errors are presented from the models and from the measurements performed during the commissioning of RHIC,2001,0, 422,Development of a dynamic power system load model,"The paper addresses the issue of measurement-based power system load model development. The majority of power system loads respond dynamically to voltage disturbances and as such contribute to overall system dynamics. Induction motors represent a major portion of system loads that exhibit dynamic behaviour following the disturbance. In this paper, the dynamic behaviours of an induction motor and a combination of induction motor and static load were investigated under different disturbances and operating conditions in the laboratory. A first-order generic dynamic load model is developed based on the test results. The model proposed is in a transfer function form and it is suitable for direct inclusion in existing power system stability software. The robustness of the proposed model is also assessed.",2001,0, 423,Reliability estimation for a software system with sequential independent reviews,"Suppose that several sequential test and correction cycles have been completed for the purpose of improving the reliability of a given software system. One way to quantify the success of these efforts is to estimate the probability that all faults are found by the end of the last cycle. We describe how to evaluate this probability both prior to and after observing the numbers of faults detected in each cycle, and we show when these two evaluations would be the same",2001,0, 424,A controlled experiment in maintenance: comparing design patterns to simpler solutions,"Software design patterns package proven solutions to recurring design problems in a form that simplifies reuse. We are seeking empirical evidence on whether using design patterns is beneficial. In particular, one may prefer using a design pattern even if the actual design problem is simpler than that solved by the pattern, i.e., if not all of the functionality offered by the pattern is actually required. Our experiment investigates software maintenance scenarios that employ various design patterns and compares designs with patterns to simpler alternatives. The subjects were professional software engineers.
In most of our nine maintenance tasks, we found positive effects from using a design pattern: either its inherent additional flexibility was achieved without requiring more maintenance time or maintenance time was reduced compared to the simpler alternative. In a few cases, we found negative effects: the alternative solution was less error-prone or required less maintenance time. Overall, we conclude that, unless there is a clear reason to prefer the simpler solution, it is probably wise to choose the flexibility provided by the design pattern because unexpected new requirements often appear. We identify several questions for future empirical research",2001,0, 425,An approach to higher reliability using software components,"The general belief that component reuse improves software reliability is based on the assumption that the prior usage has exposed the potential software faults. In reality, this is not necessarily true due to the inherent differences in the environments and usage of the component. To achieve a high reliability for a component-based software system, we need reliable components that interoperate properly in the new environment. In this paper, we present a unified approach to evaluating the interoperability of components. This involves a generic and systematic capture of the component behavior that expresses the various assumptions made by the designers about components and their interconnections explicitly. With the information captured at a semantic level, this approach can detect potential mismatches between components in the new environment and give guidance on how to resolve the mismatches to fit components in the new context. The capture of this information in an appropriate format and an automated analysis can show serious exposures to reliability in a component-based system, before it is integrated.",2001,0, 426,A Bayesian approach to reliability prediction and assessment of component based systems,"It is generally believed that component-based software development leads to improved application quality, maintainability and reliability. However, most software reliability techniques model integrated systems. These models disregard the system's internal structure, taking into account only the failure data and interactions with the environment. We propose a novel approach to reliability analysis of component-based systems. The reliability prediction algorithm allows system architects to analyze the reliability of the system before it is built, taking into account component reliability estimates and their anticipated usage. Fully integrated with the UML, this step can guide the process of identifying critical components and analyze the effect of replacing them with the more/less reliable ones. The reliability assessment algorithm, applicable in the system test phase, utilizes these reliability predictions as prior probabilities. In the Bayesian estimation framework, the posterior probability of failure is calculated from the priors and test failure data.",2001,0, 427,An application of zero-inflated Poisson regression for software fault prediction,"The Poisson regression model is widely used in software quality modeling. When the response variable of a data set includes a large number of zeros, the Poisson regression model will underestimate the probability of zeros. A zero-inflated model changes the mean structure of the pure Poisson model. The predictive quality is therefore improved.
In this paper, we examine a full-scale industrial software system and develop two models, Poisson regression and zero-inflated Poisson regression. To our knowledge, this is the first study that introduces the zero-inflated Poisson regression model in software reliability. Comparing the predictive qualities of the two competing models, we conclude that for this system, the zero-inflated Poisson regression model is more appropriate in theory and practice.",2001,0, 428,A fault model for subtype inheritance and polymorphism,"Although program faults are widely studied, there are many aspects of faults that we still do not understand, particularly about OO software. In addition to the simple fact that one important goal during testing is to cause failures and thereby detect faults, a full understanding of the characteristics of faults is crucial to several research areas. The power that inheritance and polymorphism brings to the expressiveness of programming languages also brings a number of new anomalies and fault types. This paper presents a model for the appearance and realization of OO faults and defines and discusses specific categories of inheritance and polymorphic faults. The model and categories can be used to support empirical investigations of object-oriented testing techniques, to inspire further research into object-oriented testing and analysis, and to help improve the design and development of object-oriented software.",2001,0, 429,Fault tolerant distributed information systems,"Critical infrastructures provide services upon which society depends heavily; these applications are themselves dependent on distributed information systems for all aspects of their operation and so survivability of the information systems is an important issue. Fault tolerance is a mechanism by which survivability can be achieved in these information systems. We outline a specification-based approach to fault tolerance, called RAPTOR, that enables structuring of fault-tolerance specifications and an implementation partially synthesized from the formal specification. The RAPTOR approach uses three specifications describing the fault-tolerant system, the errors to be detected, and the actions to take to recover from those errors. The system specification utilizes an object-oriented database to store the descriptions associated with these large, complex systems. The error detection and recovery specifications are defined using the formal specification notation Z. We also describe an implementation architecture and explore our solution with a case study.",2001,0, 430,Analysis of hypergeometric distribution software reliability model,"The article gives detailed mathematical results on the hypergeometric distribution software reliability model (HGDSRM) proposed by Y. Tohma et al. (1989; 1991). In the above papers, Tohma et al. developed the HGDSRM as a discrete-time stochastic model and derived a recursive formula for the mean cumulative number of software faults detected up to the i-th (>0) test instance in the testing phase. Since their model is based on only the mean value of the cumulative number of faults, it is impossible to estimate not only the software reliability but also the other probabilistic dependability measures. We introduce the concept of cumulative trial processes, and describe the dynamic behavior of the HGDSRM exactly.
In particular, we derive the probability mass function of the number of software faults newly detected at the i-th test instance and its mean, as well as the software reliability defined as the probability that no faults are detected up to an arbitrary time. In numerical examples with real software failure data, we compare several HGDSRMs with different model parameters in terms of the least squared sum and show that the mathematical results obtained here are very useful to assess the software reliability with the HGDSRM.",2001,0, 431,Modelling the fault correction process,"In general, software reliability models have focused on modeling and predicting failure occurrence and have not given equal priority to modeling the fault correction process. However, there is a need for fault correction prediction, because there are important applications that fault correction modeling and prediction support. These are the following: predicting whether reliability goals have been achieved, developing stopping rules for testing, formulating test strategies, and rationally allocating test resources. Because these factors are related, we integrate them in our model. Our modeling approach involves relating fault correction to failure prediction, with a time delay between failure detection and fault correction, represented by a random variable whose distribution parameters are estimated from observed data.",2001,0, 432,Efficient deadlock analysis of clients/server systems with two-way communication,"Deadlocks are a common type of fault in distributed programs. To detect deadlocks in a distributed program P, one approach is to construct the reachability graph (RG) of P, which contains all possible states of P. Since the size of RG(P) is an exponential function of the number of processes in P, the use of RGs for deadlock detection has limited success. The authors present an efficient technique for deadlock analysis of client/server programs with two-way communication, where the server and clients communicate through channels supporting synchronous message-passing. We consider client/server programs in which the server saves the IDs of some clients for future communication. For such a program, we describe how to construct its abstract client/server reachability graph (ACSRG), which contains a significantly smaller number of global states than the corresponding RG. One example is that for a solution to the gas station problem with one pump and six customers, its RG has 25394 states and its ACSRG 74 states. We show that the use of ACSRGs not only greatly reduces the effort for deadlock analysis but also provides a basis for proving freedom from deadlocks for any number of clients.",2001,0, 433,Feedback control of the software test process through measurements of software reliability,"A closed-loop feedback control model of the software test process (STP) is described. The model is grounded in the well-established theory of automatic control. It offers a formal and novel procedure for using product reliability or failure intensity as a basis for closed-loop control of the STP. The reliability or the failure intensity of the product is compared against the desired reliability at each checkpoint and the difference fed back to a controller. The controller uses this difference to compute changes necessary in the process parameters to meet the reliability or failure intensity objective at the terminal checkpoint (the deadline). The STP continues beyond a checkpoint with a revised set of parameters.
This procedure is repeated at each checkpoint until the termination of the STP. The procedure accounts for the possibility of changes (during testing) in the reliability or failure intensity objective, the checkpoints, and the parameters that characterize the STP. The effectiveness of this procedure was studied using commercial data available in the public domain and also data generated through simulation. In all cases, the use of feedback control produces adequate results allowing the achievement of the objectives.",2001,0, 434,An empirical evaluation of statistical testing designed from UML state diagrams: the flight guidance system case study,"This paper presents an empirical study of the effectiveness of test cases generated from UML state diagrams using transition coverage as the testing criterion. The test case production is mainly based on an adaptation of a probabilistic method, called statistical testing based on testing criteria. This technique was automated with the aid of the Rational Software Corporation's Rose RealTime tool. The test strategy investigated combines statistical test cases with a few deterministic test cases focused on domain boundary values. Its feasibility is exemplified on a research version of an avionics system implemented in Java: the Flight Guidance System case study (14 concurrent state diagrams). Then, the results of an empirical evaluation of the effectiveness of the created test cases are presented. The evaluation was performed using mutation analysis to assess the error detection power of the test cases on more than 1500 faults seeded one by one in the Java source code (115 classes, 6500 LOC). A detailed analysis of the test results allows us to draw first conclusions on the expected strengths and weaknesses of the proposed test strategy.",2001,0, 435,Instantiating and detecting design patterns: putting bits and pieces together,"Design patterns ease the designing, understanding, and re-engineering of software. Achieving a well-designed piece of software requires a deep understanding and a good practice of design patterns. Understanding existing software relies on the ability to identify architectural forms resulting from the implementation of design patterns. Maintaining software involves spotting places that can be improved by using better design decisions, like those advocated by design patterns. Nevertheless, there is a lack of tools automating the use of design patterns to achieve well-designed pieces of software, to identify recurrent architectural forms, and to maintain software. We present a set of tools and techniques to help OO software practitioners design, understand, and re-engineer a piece of software using design patterns. A first prototype tool, PATTERNS-BOX, provides assistance in designing the architecture of a new piece of software, while a second prototype tool, PTIDEJ, identifies design patterns used in an existing one. These tools, in combination, support maintenance by highlighting defects in an existing design, and by suggesting and applying corrections based on widely-accepted design pattern solutions.",2001,0, 436,Connectors synthesis for deadlock-free component based architectures,"Nowadays, component-based technologies offer straightforward ways of building applications from existing components. Although these technologies might differ in terms of the level of heterogeneity among components they support, e.g. CORBA or COM versus J2EE, they all suffer from the problem of dynamic integration.
That is, once components are successfully integrated in a uniform context, how is it possible to check, control and assess that the dynamic behavior of the resulting application will not deadlock? The authors propose an architectural, connector-based approach to this problem. We compose a system in such a way that it is possible to check whether and why the system deadlocks. Depending on the kind of deadlock, we have a strategy that automatically operates on the connector part of the system architecture in order to obtain a suitably equivalent version of the system which is deadlock-free.",2001,0, 437,"AGATE, access graph based tools for handling encapsulation","Encapsulation and modularity are supported by various static access control mechanisms that manage implementation hiding and define interfaces adapted to different client profiles. Programming languages use numerous and very different mechanisms, the cumulative application of which is sometimes confusing and hard to predict. Furthermore, understanding and reasoning about access control independently from the programming languages is quite difficult. Tools based on a language-independent model of access control are presented to address these issues. These tools support access control handling via visualisation of access, checking of design requirements on access and source code generation. We believe in the contribution of such tools for improving understanding and enhancing use of access control from design to implementation.",2001,0, 438,Automated conversion from a requirements document to an executable formal specification,"Many formal specification languages have been developed to engineer complex systems. However, natural language (NL) has remained the choice of domain experts to specify the system because formal specification languages are not easy to master. Therefore, NL requirements documentation must be reinterpreted by software engineers into a formal specification language. When the system is very complicated, which is mostly the case when one chooses to use formal specification, this conversion is both non-trivial and error-prone, if not implausible. This challenge comes from many factors such as miscommunication between domain experts and engineers. However, the major bottleneck of this conversion stems from the inherent ambiguity of NL and the different levels of formalism between the two domains of NL and the formal specification. This is why there have been very few attempts to automate the conversion from requirements documentation to a formal specification language. This research project is developed as an application of formal specification and linguistic techniques to automate the conversion from a requirements document written in NL to a formal specification language. Contextual Natural Language Processing (CNLP) is used to handle the ambiguity problem in NL and Two Level Grammar (TLG) is used to deal with the different formalism level between NL and formal specification languages to achieve automated conversion from NL requirements documentation into a formal specification (in our case the Vienna Development Method - VDM++).
A knowledge base is built from the NL requirements documentation using CNLP by parsing the documentation and storing the syntactic, semantic, and contextual information.",2001,0, 439,Analysis and implementation method of program to detect inappropriate information leak,"For a program which handles secret information, it is very important to prevent inappropriate information leaks from a program with secret data. D.E. Denning (1976) proposed a mechanism to certify the security of a program by statically analyzing information flow, and S. Kuninobu et al. (2000) proposed a more practical analysis framework including recursive procedure handling, although no implementation has yet been made. We propose a method of security analysis implementation, and show a security analysis tool implemented for a procedural language. We extend Kuninobu's algorithm by devising various techniques for analysis of practical programs that have recursive calls and global variables. This method is validated by applying our tools to a simple credit card program, and we confirm that the validation of program security is very useful",2001,0, 440,Exception analysis for multithreaded Java programs,"This paper presents a static analysis that estimates uncaught exceptions in multithreaded Java programs. In Java, throwing exceptions across threads is deprecated because of the safety problem. Instead of restricting programmers' freedom, we extend the Java language to support multithreaded exception handling and propose a tool to detect uncaught exceptions in the input programs. Our analysis consists of two steps. The analysis firstly estimates concurrently evaluated expressions of the multithreads in Java programs by the synchronization relation among the threads. Using this concurrency information, the program's exception flow is derived as set-constraints, whose least model is our analysis result. Both of these two steps are proved safe",2001,0, 441,Assurance of conceptual data model quality based on early measures,"The increasing demand for quality information systems (IS) has made quality the most pressing challenge facing IS development organisations. In the IS development field it is generally accepted that the quality of an IS is highly dependent on decisions made early in its development. Given the relevant role that data itself plays in an IS, conceptual data models are a key artifact of the IS design. Therefore, in order to build """"better quality"""" IS it is necessary to assess and to improve the quality of conceptual data models based on quantitative criteria. It is in this context where software measurement can help IS designers to make better decisions during design activities. We focus this work on the empirical validation of the metrics proposed by Genero et al. for measuring the structural complexity of entity relationship diagrams (ERDs). Through a controlled experiment we will demonstrate that these metrics seem to be heavily correlated with three of the sub-factors that characterise the maintainability of an ERD, such as understandability, analysability and modifiability",2001,0, 442,On prediction of cost and duration for risky software projects based on risk questionnaire,"The paper proposes a new approach that can discriminate risky software development projects from smoothly or satisfactorily progressing projects and give an explanation for the risk. We have already developed a logistic regression model which predicts whether a project becomes risky or not (O. Mizuno et al., 2000).
However, the model returned the decision with a calculated probability only. Additionally, a formula was constructed based on the risk questionnaire which includes 23 questions. We therefore try to improve the previous method with respect to accountability and feasibility. In the new approach, we firstly construct a new risk questionnaire including only 9 questions (or risk factors), each of which is concerned with project management. We then apply multiple regression analysis to the actual project data, and clarify a set of factors which contribute essentially to estimating the relative cost error and the relative duration error, respectively. We then apply the constructed formulas to another set of project data. The analysis results show that both the cost and duration of risky projects are estimated fairly well by the formulas. We can thus confirm that our new approach is applicable to software development projects in order to discriminate risky projects from appropriate projects and give reasonable explanations for the risk",2001,0, 443,Incremental fault-tolerant design in an object-oriented setting,"With the increasing emphasis on dependability in complex, distributed systems, it is essential that system development can be done gradually and at different levels of detail. We propose an incremental treatment of faults as a refinement process on object-oriented system specifications. An intolerant system specification is a natural abstraction from which a fault-tolerant system can evolve. With each refinement step a fault and its treatment are introduced, so the fault-tolerance of the system increases during the design process. Different kinds of faults are identified and captured by separate refinement relations according to how the tolerant system relates to abstract properties of the intolerant one in terms of safety and liveness. The specification language utilized is object-oriented and based upon first-order predicates on communication traces. Fault-tolerance refinement relations are formalized within this framework",2001,0, 444,A new tool to analyze ER-schemas,"Cardinality constraints as well as key constraints and functional dependencies are among the most popular classes of constraints in database models. While each constraint class is now well understood, little has been done about their interaction. Today, cardinality constraints and key constraints are embedded in most CASE tools, which are usually based on the entity-relationship model. However, these tools do not offer intelligent consistency checking routines for cardinality constraints and they do not consider the global coherence. Conflicts among the constraints are not detected. Our aim, then, is to propose a tool for reasoning about a set of cardinality constraints, key and certain functional dependencies in order to help in database design. We treat the global coherence of cardinality constraints. We propose two steps: a syntactical analysis according to our ER Meta-schema and a semantic analysis in order to verify the cardinality constraints and their interactions",2001,0, 445,QUIM: a framework for quantifying usability metrics in software quality models,"The paper examines current approaches to usability metrics and proposes a new approach for quantifying software quality in use, based on modelling the dynamic relationships of the attributes that affect software usability.
The Quality in Use Integrated Map (QUIM) is proposed for specifying and identifying quality in use components, which brings together different factors, criteria, metrics and data defined in different human computer interface and software engineering models. The Graphical Dynamic Quality Assessment (GDQA) model is used to analyse the interaction of these components within a systematic structure. The paper first introduces a new classification scheme into a graphical logic based framework using QUIM components (factors, criteria, metrics and data) to assess the quality in use of interactive systems. Then we illustrate how QUIM and GDQA may be used to assess software usability using subjective measures of quality characteristics as defined in ISO/IEC 9126",2001,0, 446,Using abstraction to improve fault tolerance,"Software errors are a major cause of outages and they are increasingly exploited in malicious attacks. Byzantine fault tolerance allows replicated systems to mask some software errors but it is expensive to deploy. The paper describes a replication technique, BFTA, which uses abstraction to reduce the cost of Byzantine fault tolerance and to improve its ability to mask software errors. BFTA reduces cost because it enables reuse of off-the-shelf service implementations. It improves availability because each replica can be repaired periodically using an abstract view of the state stored by correct replicas, and because each replica can run distinct or non-deterministic service implementations, which reduces the probability of common mode failures. We built an NFS service that allows each replica to run a different operating system. This example suggests that BFTA can be used in practice; the replicated file system required only a modest amount of new code, and preliminary performance results indicate that it performs comparably to the off-the-shelf implementations that it wraps.",2001,0, 447,Self-tuned remote execution for pervasive computing,"Pervasive computing creates environments saturated with computing and communication capability, yet gracefully integrated with human users. Remote execution has a natural role to play in such environments, since it lets applications simultaneously leverage the mobility of small devices and the greater resources of large devices. In this paper, we describe Spectra, a remote execution system designed for pervasive environments. Spectra monitors resources such as battery energy and file cache state, which are especially important for mobile clients. It also dynamically balances energy use and quality goals with traditional performance concerns to decide where to locate functionality. Finally, Spectra is self-tuning: it does not require applications to explicitly specify intended resource usage. Instead, it monitors application behavior, learns functions predicting their resource usage, and uses the information to anticipate future behavior.",2001,0, 448,The case for resilient overlay networks,"This paper makes the case for Resilient Overlay Networks (RONs), an application-level routing and packet forwarding service that gives end-hosts and applications the ability to take advantage of network paths that traditional Internet routing cannot make use of, thereby improving their end-to-end reliability and performance. Using RON, nodes participating in a distributed Internet application configure themselves into an overlay network and cooperatively forward packets for each other.
Each RON node monitors the quality of the links in the underlying Internet and propagates this information to the other nodes; this enables a RON to detect and react to path failures within several seconds rather than several minutes, and allows it to select application-specific paths based on performance. We argue that RON has the potential to substantially improve the resilience of distributed Internet applications to path outages and sustained overload.",2001,0, 449,Dimension recognition and geometry reconstruction in vectorization of engineering drawings,"This paper presents a novel approach for recognizing and interpreting dimensions in engineering drawings. It starts by detecting potential dimension frames, each comprising only the line and text components of a dimension, then verifies them by detecting the dimension symbols. By removing the prerequisite of symbol recognition from detection of dimension sets, our method is capable of handling low quality drawings. We also propose a reconstruction algorithm for rebuilding the drawing entities based on the recognized dimension annotations. A coordinate grid structure is introduced to represent and analyze two-dimensional spatial constraints between entities; this simplifies and unifies the process of rectifying deviations of entity dimensions induced during scanning and vectorization.",2001,0, 450,Quality-assuring scheduling-using stochastic behavior to improve resource utilization,"We present a unified model for admission and scheduling, applicable for various active resources such as CPU or disk to assure a requested quality in situations of temporary overload. The model allows us to predict and control the behavior of applications based on given quality requirements. It uses the variations in the execution time, i.e., the time any active resource is needed. We split resource requirements into a mandatory part which must be available and an optional part which should be available as often as possible but at least with a certain percentage. In combination with a given distribution for the execution time we can move away from worst-case reservations and drastically reduce the amount of reserved resources for applications which can tolerate occasional deadline misses. This increases the number of admittable applications. For example, with negligible loss of quality our system can admit more than twice the disk bandwidth of a system based on the worst case. Finally, we validated the predictions of our model by measurements using a prototype real-time system and observed close agreement between predicted and measured values.",2001,0, 451,Mixture of principal axes registration for change analysis in computer-aided diagnosis,"Non-rigid image registration is a prerequisite for many medical image analysis applications, such as image fusion of multi-modality images and quantitative change analysis of a temporal sequence in computer-aided diagnosis. By establishing the point correspondence of the extracted feature points, it is possible to recover the deformation using nonlinear interpolation methods such as the thin-plate-spline approach. However, it is a difficult task to establish an exact point correspondence due to the high complexity of the nonlinear deformation existing in medical images. In this paper, a mixture of principal axes registration (mPAR) method is proposed to resolve the correspondence problem through a neural computational approach.
The novel feature of mPAR is to align two point sets without needing to establish an explicit point correspondence. Instead, it aligns the two point sets by minimizing the relative entropy between their probability distributions, resulting in a maximum likelihood estimate of the transformation matrix. The registration process consists of: (1) a finite mixture scheme to establish an improved point correspondence and (2) a multilayer perceptron (MLP) neural network to recover the nonlinear deformation. The neural computation for registration used a committee machine to obtain a mixture of piecewise rigid registrations, which gives a reliable point correspondence using multiple extracted objects in a finite mixture scheme. Then the MLP was used to determine the coefficients of a polynomial transform using extracted cross-points of elongated structures as control points. We have applied our mPAR method to a temporal sequence of mammograms from a single patient. The experimental results show that mPAR not only improves the accuracy of the point correspondence but also results in a desirable error-resilience property for control point selection errors",2001,0, 452,Information flow analysis of component-structured applications,"Software component technology facilitates the cost-effective development of specialized applications. Nevertheless, due to the high number of principals involved in a component-structured system, it introduces special security problems which have to be tackled by a thorough security analysis. In particular, the diversity and complexity of information flows between components hold the danger of leaking information. Since information flow analysis, however, tends to be expensive and error-prone, we apply our object-oriented security analysis and modeling approach. It employs UML-based object-oriented modeling techniques and graph rewriting in order to make the analysis easier and to assure its quality even for large systems. Information flow is modeled based on the decentralized label model (Myers and Liskov, 1997) combining label-based read access policy models and declassification of information with static analysis. We report on the principles of information flow analysis of component-based systems, clarify its application by means of an example, and outline the corresponding tool-support.",2001,0, 453,Practical automated filter generation to explicitly enforce implicit input assumptions,"Vulnerabilities in distributed applications are being uncovered and exploited faster than software engineers can patch the security holes. All too often these weaknesses result from implicit assumptions made by an application about its inputs. One approach to defending against their exploitation is to interpose a filter between the input source and the application that verifies that the application's assumptions about its inputs actually hold. However, ad hoc design of such filters is nearly as tedious and error-prone as patching the original application itself. We have automated the filter generation process based on a simple formal description of a broad class of assumptions about the inputs to an application. Focusing on the back-end server application case, we have prototyped an easy-to-use tool that generates server-side filtering scripts.
These can then be quickly installed on a front-end web server (either in concert with the application or when a vulnerability is uncovered), thus shielding the server application from a variety of existing and exploited attacks, as solutions requiring changes to the applications are developed and tested. Our measurements suggest that input filtering can be done efficiently and should not be a performance concern for moderately loaded web servers. The overall approach may be generalizable to other domains, such as firewall filter generation and API wrapper filter generation.",2001,0, 454,Life cycle process knowledge - application during product design,"The demand for high quality, cost-effective and at the same time environmentally conscious products throughout the entire product life has led to a high complexity of the activities involved in the design phase. More and more, design partners need to be integrated in the value chain, resulting in a tremendous increase in the scope of information captured during this phase. Only through measures taken at particular stages of the design process is it possible to cope with this situation. A current research project at Technical University Darmstadt, """"SFB 392. Design for Environment-Methods and Tools"""", aims at supporting the product designer in minimizing environmental impacts of his products throughout their entire life cycle. A system environment enveloping several tools was developed in order to assess environmental (and economic) aspects of a product during the design stage. Consequences of design decisions on the product life cycle can also be viewed and manipulated, making it possible to forecast product life trends and influence them",2001,0, 455,The linguistic approach to the natural language requirements quality: benefit of the use of an automatic tool,"Natural language (NL) requirements are widely used in the software industry, at least as the first level of description of a system. Unfortunately, they are often prone to errors and this is partially caused by interpretation problems due to the use of NL itself. The paper presents a methodology for the analysis of natural language requirements based on a quality model addressing a relevant part of the interpretation problems that can be approached at the linguistic level. To provide automatic support for this methodology, a tool called QuARS (quality analyzer of requirement specification) has been implemented. The methodology and the underlying quality model have been validated by analyzing with QuARS several industrial software NL requirement documents, showing interesting results",2001,0, 456,Modeling the dependability of N-modular redundancy on demand under malicious agreement,"In a multiprocessor under normal loading conditions, idle processors naturally offer spare capacity. Previous work attempted to utilize this redundancy to overcome the limitations of classic diagnosability and modular redundancy techniques while providing significant fault tolerance. A popular approach is task duplexing. The usefulness of this approach for critical applications, unfortunately, is seriously undermined by its susceptibility to agreement on faulty outcomes (malicious agreement). To assess the dependability of duplexing under malicious agreement, we propose a stochastic model which dynamically profiles behavior in the presence of malicious faults. The model uses a more or less typical policy we call NMR on demand (NMROD).
Each task in a multiprocessor is duplicated, with additional processors allocated for recovery as needed. NMROD relies on a fault model favoring response correctness over actual fault status, and integrates online repair to provide nonstop operation over an extended period",2001,0, 457,Rejuvenation and failure detection in partitionable systems,"Certain gateways (e.g., some cable or DSL modems) are known to have low reliability and low availability. Most failures of these devices can however be """"fixed"""" by rejuvenating the device after a failure has been detected. Such a detection based rejuvenation strategy permits increasing the availability of these gateways. In the considered scenario, rejuvenation is non-trivial since a failure of such a gateway will leave it partitioned away from the network. In particular, network operators that want to rejuvenate these gateways are in a different network partition, and can therefore not initiate a remote rejuvenation. In this paper we propose a failure detection based rejuvenation service and a remote detection service. The rejuvenation service detects and fixes """"soft"""" failures automatically (in one partition), and the detection service detects (in another partition) all rejuvenations exactly once, within a bounded amount of time, even when the gateway is rejuvenated consecutively. The detection service also allows the detection of """"hard"""" failures, and filtering of notifications of soft failures",2001,0, 458,Evaluating low-cost fault-tolerance mechanism for microprocessors on multimedia applications,"We evaluate a low-cost fault-tolerance mechanism for microprocessors, which can detect and recover from transient faults, using multimedia applications. There are two driving forces to study fault-tolerance techniques for microprocessors. One is deep submicron fabrication technologies. Future semiconductor technologies could become more susceptible to alpha particles and other cosmic radiation. The other is the increasing popularity of mobile platforms. Recently cell phones have been used for applications which are critical to our financial security, such as flight ticket reservation, mobile banking, and mobile trading. In such applications, it is expected that computer systems will always work correctly. From these observations, we propose a mechanism which is based on an instruction reissue technique for incorrect data speculation recovery which utilizes time redundancy. Unfortunately, we found significant performance loss when we evaluated the proposal using the SPEC2000 benchmark suite. We evaluate it using MediaBench which contains more practical mobile applications than SPEC2000",2001,0, 459,"A case study: validation of guidance control software requirements for completeness, consistency and fault tolerance","We discuss a case study performed for validating a natural language (NL) based software requirements specification (SRS) in terms of completeness, consistency, and fault-tolerance. A partial verification of the Guidance and Control Software (GCS) Specification is provided as a result of analysis using three modeling formalisms. Zed was applied first to detect and remove ambiguity from the GCS partial SRS. Next, Statecharts and Activity-charts were constructed to visualize the Zed description and make it executable. The executable model was used for the specification testing and fault injection to probe how the system would perform under normal and abnormal conditions. 
Finally, a Stochastic Activity Networks (SANs) model was built to analyze how fault coverage impacts the overall performability of the system. In this way, the integrity of the SRS was assessed. We discuss the significance of this approach and propose approaches for improving performability/fault tolerance",2001,0, 460,An efficient QoS routing algorithm for quorumcast communication,"This paper extends the concept of multicast to quorumcast, a generalized form of multicast communication. The need for quorumcast communication arises in a number of distributed applications. Little work has been done on routing quorumcast messages. The objective of previous research was to construct a minimum cost tree spanning the source and the quorumcast group members. We further consider the path quality of a constructed spanning tree in terms of delay constraints required by applications that use the tree. As the delay-constrained quorumcast routing problem is NP-complete, we propose an efficient heuristic QoS routing algorithm. We also consider how a loop is detected and removed in the course of tree construction and how to deal with members joining/leaving the quorumcast pool. Our simulation study shows that the proposed algorithm performs well and constructs a quorumcast tree whose cost is close to that of the """"optimal"""" routing tree.",2001,0, 461,A simulation based approach for estimating the reliability of distributed real-time systems,"Designers of safety-critical real-time systems are often mandated by requirements on reliability as well as timing guarantees. For guaranteeing timing properties, the standard practice is to use various analysis techniques provided by hard real-time scheduling theory. The paper presents an analysis based on simulation that considers the effects of faults and timing parameter variations on schedulability analysis, and its impact on the reliability estimation of the system. We look at a wider set of scenarios than just the worst case considered in hard real-time schedulability analysis. The ideas have general applicability, but the method has been developed with modelling the effects of external interferences on the controller area network (CAN) in mind. We illustrate the method by showing that a CAN interconnected distributed system, subjected to external interference, may be proven to satisfy its timing requirements with a sufficiently high probability, even in cases when the worst-case analysis has deemed it non-schedulable.",2001,0, 462,Comparing software prediction techniques using simulation,"The need for accurate software prediction systems increases as software becomes much larger and more complex. We believe that the underlying characteristics: size, number of features, type of distribution, etc., of the data set influence the choice of the prediction system to be used. For this reason, we would like to control the characteristics of such data sets in order to systematically explore the relationship between accuracy, choice of prediction system, and data set characteristic. It would also be useful to have a large validation data set. Our solution is to simulate data allowing both control and the possibility of large (1000) validation cases. The authors compare four prediction techniques: regression, rule induction, nearest neighbor (a form of case-based reasoning), and neural nets. The results suggest that there are significant differences depending upon the characteristics of the data set.
Consequently, researchers should consider prediction context when evaluating competing prediction systems. We observed that the more ""messy"" the data and the more complex the relationship with the dependent variable, the more variability in the results. In the more complex cases, we observed significantly different results depending upon the particular training set that has been sampled from the underlying data set. However, our most important result is that it is more fruitful to ask which is the best prediction system in a particular context rather than which is the ""best"" prediction system.",2001,1, 463,Evaluating capture-recapture models with two inspectors,"Capture-recapture (CR) models have been proposed as an objective method for controlling software inspections. CR models were originally developed to estimate the size of animal populations. In software, they have been used to estimate the number of defects in an inspected artifact. This estimate can be another source of information for deciding whether the artifact requires a reinspection to ensure that a minimal inspection effectiveness level has been attained. Little evaluative research has been performed thus far on the utility of CR models for inspections with two inspectors. We report on an extensive Monte Carlo simulation that evaluated capture-recapture models suitable for two inspectors assuming a code inspection context. We evaluate the relative error of the CR estimates as well as the accuracy of the reinspection decision made using the CR model. Our results indicate that the most appropriate capture-recapture model for two inspectors is an estimator that allows for inspectors with different capabilities. This model always produces an estimate (i.e., does not fail), has a predictable behavior (i.e., works well when its assumptions are met), will have a relatively high decision accuracy, and will perform better than the default decision of no reinspections. Furthermore, we identify the conditions under which this estimator will perform best.",2001,1, 464,Assessing multi-version systems through fault injection,"Multi-version design (MVD) has been proposed as a method for increasing the dependability of critical systems beyond current levels. However, a major obstacle to large-scale commercial usage of this approach is the lack of quantitative characterizations available. We seek to help answer this problem using fault injection. This approach has the potential for yielding highly useful metrics with regard to MVD systems, as well as giving developers a greater insight into the behaviour of each channel within the system. In this research, we develop an automatic fault injection system for multi-version systems called FITMVS. We use this system to test a multi-version system, and then analyze the results produced. We conclude that this approach can yield useful metrics, including metrics related to channel sensitivity, code scope sensitivity, and the likelihood of common-mode failure occurring within a system",2002,0, 465,Configurable services for mobile users,"Mobile devices, such as cellular phones, personal digital assistants (PDAs), and organizers, are becoming increasingly popular. Due to the high volatility of those devices, the achievable quality-of-service (QoS) for mobile services can hardly be predicted. Even for one particular type of device - say a PDA - the implementation of a mobile service may use different communication interfaces over time (i.e., wireless LAN, IrDA).
Within this paper, we present a new approach towards the configuration of component-based services for mobile systems. Starting from an XML-based configuration language, which defines a set of rules for component configuration depending on a number of environmental parameters, our approach allows for instantiation and configuration of components. In contrast to many other approaches targeting distributed multimedia-style applications on PC-class computers, our framework focuses on the extension of distributed services onto mobile devices. As a proof-of-concept scenario, we have implemented a configurable distributed video surveillance application on the basis of the Microsoft Distributed Component Object Model on Windows 2000 and on the Windows CE-based Pocket PC platform",2002,0, 466,Using simulation to facilitate effective workflow adaptation,"In order to support realistic real-world processes, workflow systems need to be able to adapt to changes. Detecting the need to change and deciding what changes to carry out are very difficult. Simulation analysis can play an important role in this. It can be used in tuning quality of service metrics and exploring """"what-if"""" questions. Before a change is actually made, its possible effects can be explored with simulation. To facilitate rapid feedback, the workflow system (METEOR) and simulation system (JSIM) need to interoperate. In particular, workflow specification documents need to be translated into simulation model specification documents so that the new model can be executed/animated on-the-fly. Fortunately, modern Web technology (e.g., XML, DTD, XSLT) makes this relatively straightforward. The utility of using simulation in adapting a workflow is illustrated with an example from a genome workflow.",2002,0, 467,Predictive distribution reliability analysis considering post fault restoration and coordination failure,"The calculation of predicted distribution reliability indexes can be implemented using a distribution analysis model and the algorithms defined by the """"Distribution System Reliability Handbook"""", EPRI Project 1356-1 Final Report. The calculation of predicted reliability indexes is fairly straightforward until post fault restoration and coordination failure are included. This paper presents the methods used to implement predictive reliability with consideration for post fault restoration and coordination failure into a distribution analysis software model",2002,0, 468,Estimation of parametric sensitivity for defects size distribution in VLSI defect/fault analysis,The parametric sensitivity of defect size distribution in VLSI defect/fault analysis is evaluated. The use of the special software tool FIESTA for the computational experiment aimed at estimation of the significance of parameters in expressions approximating the actual defect distribution is considered. The obtained experimental results and their usefulness have been analysed,2002,0, 469,Towards self-configuring networks,"Current networks require ad-hoc operating procedures by expert administrators to handle changes. These configuration management operations are costly and error prone. Active networks involve particularly fast dynamics of change that cannot depend on operators and must be automated. This paper describes an architecture called NESTOR that seeks to replace labor-intensive configuration management with one that is automated and software-intensive. Network element configuration state is represented in a unified object-relationship model.
Management is automated via policy rules that control change propagation across model objects. Configuration constraints assure the consistency of model transactions. Model objects are stored in a distributed repository supporting atomicity and recovery of configuration change transactions. Element adapters are responsible for populating the repository with configuration objects, and for pushing committed changes to the underlying network elements. NESTOR has been implemented in two complementary versions and is now being applied to automate several configuration management scenarios of increasing complexity, with encouraging results",2002,0, 470,FPGA resource and timing estimation from Matlab execution traces,"We present a simulation-based technique to estimate area and latency of an FPGA implementation of a Matlab specification. During simulation of the Matlab model, a trace is generated that can be used for multiple estimations. For estimation, the user provides some design constraints such as the rate and bit width of data streams. In our experience, the runtime of the estimator is approximately only 1/10 of the simulation time, which is typically fast enough to generate dozens of estimates within a few hours and to build cost-performance trade-off curves for a particular algorithm and input data. In addition, the estimator reports on the scheduling and resource binding used for estimation. This information can be utilized not only to assess the estimation quality, but also as a first starting point for the final implementation",2002,0, 471,Modeling the impact of preflushing on CTE in proton irradiated CCD-based detectors,"A software model is described that performs a """"real world"""" simulation of the operation of several types of charge-coupled device (CCD)-based detectors in order to accurately predict the impact that high-energy proton radiation has on image distortion and modulation transfer function (MTF). The model was written primarily to predict the effectiveness of vertical preflushing on the custom full frame CCD-based detectors intended for use on the proposed Kepler Discovery mission, but it is capable of simulating many other types of CCD detectors and operating modes as well. The model keeps track of the occupancy of all phosphorous-silicon (P-V), divacancy (V-V) and oxygen-silicon (O-V) defect centers under every CCD electrode over the entire detector area. The integrated image is read out by simulating every electrode-to-electrode charge transfer in both the vertical and horizontal CCD registers. A signal level dependency on the capture and emission of signal is included and the current state of each electrode (e.g., barrier or storage) is considered when distributing integrated and emitted signal. Options for performing preflushing, preflashing, and including mini-channels are available on both the vertical and horizontal CCD registers. In addition, dark signal generation and image transfer smear can be selectively enabled or disabled. A comparison of the charge transfer efficiency (CTE) data measured on the Hubble Space Telescope imaging spectrometer (STIS) CCD with the CTE extracted from model simulations of the STIS CCD shows good agreement",2002,0, 472,Reactive objects,"Object-oriented, concurrent, and event-based programming models provide a natural framework in which to express the behavior of distributed and embedded software systems.
However, contemporary programming languages still base their I/O primitives on a model in which the environment is assumed to be centrally controlled and synchronous, and interactions with the environment are carried out through blocking subroutine calls. The gap between this view and the natural asynchrony of the real world has made event-based programming a complex and error-prone activity, despite recent focus on event-based frameworks and middleware. In this paper we present a consistent model of event-based concurrency, centered around the notion of reactive objects. This model relieves the object-oriented paradigm from the idea of transparent blocking, and naturally enforces reactivity and state consistency. We illustrate our point by a program example that offers substantial improvements in size and simplicity over a corresponding Java-based solution",2002,0, 473,End-to-end latency of a fault-tolerant CORBA infrastructure,"This paper presents measured probability density functions (pdfs) for the end-to-end latency of two-way remote method invocations from a CORBA client to a replicated CORBA server in a fault-tolerance infrastructure. The infrastructure uses a multicast group-communication protocol based on a logical token-passing ring imposed on a single local-area network. The measurements show that the peaks of the pdfs for the latency are affected by the presence of duplicate messages for active replication, and by the position of the primary server replica on the ring for semi-active and passive replication. Because a node cannot broadcast a user message until it receives the token, up to two complete token rotations can contribute to the end-to-end latency seen by the client for synchronous remote method invocations, depending on the server processing time and the interval between two consecutive client invocations. For semi-active and passive replication, careful placement of the primary server replica is necessary to alleviate this broadcast delay to achieve the best possible end-to-end latency. The client invocation patterns and the server processing time must be considered together to determine the most favorable position for the primary replica. Assuming that an effective sending-side duplicate suppression mechanism is implemented, active replication can be more advantageous than semi-active and passive replication because all replicas compete for sending and, therefore, the replica at the most favorable position will have the opportunity to send first",2002,0, 474,Enhancing Real-Time Event Service for synchronization in object oriented distributed systems,"Distributed object computing middleware such as CORBA, RMI, and DCOM has gained wide acceptance and has shielded programmers from many tedious and error-prone aspects of distributed programming. In particular, the CORBA event service has been used extensively in embedded systems. We propose an aspect-oriented approach to developing synchronization code for distributed systems that use the event service as the underlying communication middleware. Our approach is to factor out synchronization as a separate aspect, synthesize synchronization code and then compose it with the functional code. We use high-level """"global invariants"""" to specify the synchronization policies which are then automatically translated into synchronization code for the underlying event service. To implement synchronization efficiently using the event service, we propose enhancements to the semantics of the event service.
Specifically, we define the notion of condition events and exactly k semantics. Given these enhancements, we describe a synthesis procedure to translate global invariants into synchronization code based on events. We describe the implementation of the enhancements on TAO's Real-Time Event Service. We present experimental results to demonstrate that the enhanced event service leads to more efficient implementation of synchronization. We feel that our methodology and the enhanced Real-Time Event Service will lead to more confident use of sophisticated synchronization policies in distributed object oriented systems",2002,0, 475,Wait-free objects for real-time systems?,"The aim of this position paper is to promote the use of wait-free implementations for real-time shared objects. Such implementations allow the nonfaulty processes to progress despite the fact that the other processes are slow, fast or have crashed. This is a noteworthy property for shared real-time objects. To assess its claim, the paper considers wait-free implementations of three objects: a renaming object, an efficient store/collect object, and a consensus object. On another side, the paper can also be seen as an introductory survey of wait-free protocols",2002,0, 476,Experiences with evaluating network QoS for IP telephony,"Successful deployment of networked multimedia applications such as IP telephony depends on the performance of the underlying data network. QoS requirements of these applications are different from those of traditional data applications. For example, while IP telephony is very sensitive to delay and jitter, traditional data applications are more tolerant of these performance metrics. Consequently, assessing a network to determine whether it can accommodate the stringent QoS requirements of IP telephony becomes critical. We describe a technique for evaluating a network for IP telephony readiness. Our technique relies on the data collection and analysis support of our prototype tool, ExamiNetTM. It automatically discovers the topology of a given network and collects and integrates network device performance and voice quality metrics. We report the results of assessing the IP telephony readiness of a real network of 31 network devices (routers/switches) and 23 hosts via ExamiNetTM. Our evaluation identified links in the network that were over utilized to the point at which they could not handle IP telephony.",2002,0, 477,Software quality analysis with the use of computational intelligence,"Effectiveness and clarity of software objects, their adherence to coding standards and programming habits of programmers are important features of overall quality of software systems. This paper proposes an approach towards a quantitative software quality assessment with respect to extensibility, reusability, clarity and efficiency. It exploits techniques of Computational Intelligence (CI) that are treated as a consortium of granular computing, neural networks and evolutionary techniques. In particular, we take advantage of self-organizing maps to gain a better insight into the data, and study genetic decision trees-a novel algorithmic framework to carry out classification of software objects with respect to their quality. Genetic classifiers serve as a """"quality filter"""" for software objects. Using these classifiers, a system manager can predict quality of software objects and identify low quality objects for review and possible revision.
The approach is applied to an object-oriented visualization-based software system for biomedical data analysis",2002,0, 478,A note on current approaches to extending software engineering with fuzzy logic,"In this paper, we have attempted a study of current approaches carried out in the confluence of the two technologies: fuzzy set theory and software engineering, that could provide a powerful tool for requirements engineering, formal specifications, software quality prediction, object-oriented modeling, etc. Various requirements analysis and specifications modeling technologies that utilize fuzzy theory are identified, and works related to the use of fuzzy logic for predicting software quality are also outlined",2002,0, 479,Software quality prediction using median-adjusted class labels,Software metrics aid project managers in predicting the quality of software systems. A method is proposed using a neural network classifier with metric inputs and subjective quality assessments as class labels. The labels are adjusted using fuzzy measures of the distances from each class center computed using robust multivariate medians,2002,1, 480,Goal-oriented software assessment,"Companies that engage in multi-site, multi-project software development continually face the problem of how to understand and improve their software development capabilities. We have defined and applied a goal-oriented process that enables such a company to assess the strengths and weaknesses of those capabilities. Our goals are to help (a) to decrease the time and cost to develop software, (b) to decrease the time needed to make changes to existing software, (c) to improve software quality, (d) to attract and retain a talented engineering staff, and (e) to facilitate more predictable management of software projects. In response to the variety of product requirements, market needs and development environments, we selected a goal-oriented process, rather than a criteria-oriented process, to advance our strategy and ensure relevance of the results. We describe the design of the process, discuss the results achieved and present vulnerabilities of the methodology. The process includes both interviews with projects' personnel and analysis of change data. Several common issues have emerged from the assessments across multiple projects, enabling strategic investments in software technology. Teams report satisfaction with the outcome in that they act on the recommendations, ask for additional future assessments, and recommend the process to sibling organizations.",2002,0, 481,An empirical evaluation of fault-proneness models,"Planning and allocating resources for testing is difficult and it is usually done on an empirical basis, often leading to unsatisfactory results. The possibility of early estimation of the potential faultiness of software could be of great help for planning and executing testing activities. Most research concentrates on the study of different techniques for computing multivariate models and evaluating their statistical validity, but we still lack experimental data about the validity of such models across different software applications. The paper reports on an empirical study of the validity of multivariate models for predicting software fault-proneness across different applications.
It shows that suitably selected multivariate models can predict fault-proneness of modules of different software packages.",2002,0, 482,Experiences in assessing product family software architecture for evolution,"Software architecture assessments are a means to detect architectural problems before the bulk of development work is done. They facilitate planning of improvement activities early in the lifecycle and allow limiting the changes on any existing software. This is particularly beneficial when the architecture has been planned to (or already does) support a whole product family, or a set of products that share common requirements, architecture, components or code. As the family requirements evolve and new products are added, the need to assess the evolvability of the existing architecture is vital. The author illustrates two assessment case studies in the mobile telephone software domain: the Symbian operating system platform and the network resource access control software system. By means of simple experimental data, evidence is shown of the usefulness of architectural assessment as rated by the participating stakeholders. Both assessments have led to the identification of previously unknown architectural defects, and to the consequent planning of improvement initiatives. In both cases, stakeholders noted that a number of side benefits, including improvement of communication and architectural documentation, were also of considerable importance. The lessons learned and suggestions for future research and experimentation are outlined.",2002,0, 483,"Recognizing and responding to """"bad smells"""" in extreme programming","The agile software development process called Extreme Programming (XP) is a set of best practices which, when used, promises swifter delivery of quality software than one finds with more traditional methodologies. In this paper, we describe a large software development project that used a modified XP approach, identifying several unproductive practices that we detected over its two-year life that threatened the swifter project completion we had grown to expect. We have identified areas of trouble in the entire life cycle, including analysis, design, development, and testing. For each practice we identify, we discuss the solution we implemented to correct it and, more importantly, examine the early symptoms of those poor practices (""""bad smells"""") that project managers, analysts, and developers need to look out for in order to keep an XP project on its swifter track.",2002,0, 484,Research abstract for semantic anomaly detection in dynamic data feeds with incomplete specifications,"Everyday software must be dependable enough for its intended use. Because this software is not usually mission-critical, it may be cost-effective to detect improper behavior and notify the user or take remedial action. Detecting improper behavior requires a model of proper behavior. Unfortunately, specifications of everyday software are often incomplete and imprecise. The situation is exacerbated when the software incorporates third-party elements such as commercial-off-the-shelf software components, databases, or dynamic data feeds from online data sources. We want to make the use of dynamic data feeds more dependable. We are specifically interested in semantic problems with these feeds-cases in which the data feed is responsive, it delivers well-formed results, but the results are inconsistent, out of range, incorrect, or otherwise unreasonable. 
We focus on a particular facet of dependability: availability or readiness for usage, and change the fault model from the traditional """"fail-silent"""" (crash failures) to """"semantic"""". We investigate anomaly detection as a step towards increasing the semantic availability of dynamic data feeds.",2002,0, 485,Data mining technology for failure prognostic of avionics,Adverse environmental conditions have combined cumulative effects leading to performance degradation and failures of avionics. Classical reliability addresses statistically-generic devices and is less suitable for the situations when failures are not traced to manufacturing but rather to unique operational conditions of particular hardware units. An approach aimed at the accurate assessment of the probability of failure of any avionics unit utilizing the known history-of-abuse from environmental and operational factors is presented herein. The suggested prognostic model utilizes information downloaded from dedicated monitoring systems of flight-critical hardware and stored in a database. Such a database can be established from the laboratory testing of hardware and supplemented with real operational data. This approach results in a novel knowledge discovery from data technology that can be efficiently used in a wide area of applications and provide a quantitative basis for the modern maintenance concept known as service-when-needed. An illustrative numerical example is provided,2002,0, 486,Two controlled experiments assessing the usefulness of design pattern documentation in program maintenance,Using design patterns is claimed to improve programmer productivity and software quality. Such improvements may manifest both at construction time (in faster and better program design) and at maintenance time (in faster and more accurate program comprehension). The paper focuses on the maintenance context and reports on experimental tests of the following question: does it help the maintainer if the design patterns in the program code are documented explicitly (using source code comments) compared to a well-commented program without explicit reference to design patterns? Subjects performed maintenance tasks on two programs ranging from 360 to 560 LOC including comments. The experiments tested whether pattern comment lines (PCL) help during maintenance if patterns are relevant and sufficient program comments are already present. This question is a challenge for the experimental methodology: A setup leading to relevant results is quite difficult to find. We discuss these issues in detail and suggest a general approach to such situations. A conservative analysis of the results supports the hypothesis that pattern-relevant maintenance tasks were completed faster or with fewer errors if redundant design pattern information was provided. The article provides the first controlled experiment results on design pattern usage and it presents a solution approach to an important class of experiment design problems for experiments regarding documentation,2002,0, 487,CASCADE - configurable and scalable DSP environment,"As the complexity of embedded systems grows rapidly, it is common to accelerate critical tasks with hardware. Designers usually use off-the-shelf components or licensed IP cores to shorten the time to market, but the hardware/software interfacing is tedious, error-prone and usually not portable. 
Besides, the existing hardware seldom matches the requirements perfectly. As an alternative, CASCADE, the proposed design environment, generates coprocessing datapaths from the executing algorithms specified in C/C++ and attaches these datapaths to the embedded processor with an auto-generated software driver. The number of datapaths and their internal parallel functional units are scaled to fit the application. It seamlessly integrates the design tools of the embedded processor to reduce the re-training/design efforts and maintains a short product development time, as in pure software approaches. A JPEG encoder is built in CASCADE successfully with an auto-generated four-MAC accelerator to achieve a 623% performance boost for our video application.",2002,0, 488,Software-based weighted random testing for IP cores in bus-based programmable SoCs,"Presents a software-based weighted random pattern scheme for testing delay faults in IP cores of programmable SoCs. We describe a method for determining static and transition probabilities (profiles) at the inputs of circuits with full-scan using testability metrics based on the targeted fault model. We use a genetic algorithm (GA) based search procedure to determine optimal profiles. We use these optimal profiles to generate a test program that runs on the processor core. This program applies test patterns to the target IP cores in the SoC and analyzes the test responses. This provides the flexibility of applying multiple profiles to the IP core under test to maximize fault coverage. This scheme does not incur the hardware overhead of logic BIST, since the pattern generation and analysis is done by software. We use a probabilistic approach to finding the profiles. We describe our method on transition and path-delay fault models, for both enhanced full-scan and normal full-scan circuits. We present experimental results using the ISCAS 89 benchmarks as IP cores.",2002,0, 489,An industrial environment for high-level fault-tolerant structures insertion and validation,"When designing VLSI circuits, most of the efforts are now performed at levels of abstraction higher than gate. Corresponding to this clear trend, there is a growing demand to tackle safety-critical issues directly at the RT-level. This paper presents a complete environment for considering safety issues at the RT level. The environment was implemented and tested by an industry for devising a sample safety-critical device. Designers were permitted to assess the effects of transient faults, automatically add fault-tolerant structures, and validate the results working on the same circuit descriptions and acting in a coherent framework. The evaluation showed the effectiveness of the proposed environment.",2002,0, 490,Investigating the influence of inspector capability factors with four inspection techniques on inspection performance,"We report on a controlled experiment with over 170 student subjects to investigate the influence of inspection process, i.e., the defect detection technique applied, and inspector capability factors on the effectiveness and efficiency of inspections on individual and team level. The inspector capability factors include measures on the inspector's experience, as well as a pre-test with a mini-inspection. We use sampling to quantify the gain of defects detected from selecting the best inspectors according to the pre-test results compared to the performance of an average team of inspectors.
Main findings are that inspector development and quality assurance capability and experience factors do not significantly distinguish inspector groups with different inspection performance. On the other hand the mini-inspection pre-test has considerable correlation to later inspection performance. The sampling of teams shows that selecting inspectors according to the mini-inspection pretest considerably improves average inspection effectiveness by up to one third.",2002,0, 491,A generic model and tool support for assessing and improving Web processes,"We discuss a generic quality framework, based on a generic model, for evaluating Web processes. The aim is to perform assessment and improvement of web processes by using techniques from empirical software engineering. A web development process can be broadly classified into two almost independent sub-processes: the authoring process (AUTH process) and the process of developing the infrastructure (INF process). The AUTH process concerns the creation and management of the contents of a set of nodes and the way they are linked to produce a web application, whereas the INF development process provides technological support and involves creation of databases, integration of the web application to legacy systems etc. In this paper, we instantiate our generic quality model to the AUTH process and present a measurement framework for this process. We also present tool support to provide effective guidance to software personnel including developers, managers and quality assurance engineers.",2002,0, 492,An empirical study of the impact of count models predictions on module-order models,"Software quality prediction models are used to achieve high software reliability. A module-order model (MOM) uses an underlying quantitative prediction model to predict the rank-order of modules. This paper compares performances of module-order models of two different count models which are used as the underlying prediction models. They are the Poisson regression model and the zero-inflated Poisson regression model. It is demonstrated that improving a count model for prediction does not ensure a better MOM performance. A case study of a full-scale industrial software system is used to compare performances of module-order models of the two count models. It was observed that improving prediction of the Poisson count model by using zero-inflated Poisson regression did not yield module-order models with better performance. Thus, it was concluded that the degree of prediction accuracy of the underlying model did not influence the results of the subsequent module-order model. Module-order modeling is proven to be a robust and effective method even though both underlying prediction models may sometimes lack acceptable prediction accuracy.",2002,0, 493,Tree-based software quality estimation models for fault prediction,"Complex high-assurance software systems depend highly on reliability of their underlying software applications. Early identification of high-risk modules can assist in directing quality enhancement efforts to modules that are likely to have a high number of faults. Regression tree models are simple and effective as software quality prediction models, and timely predictions from such models can be used to achieve high software reliability. This paper presents a case study from our comprehensive evaluation (with several large case studies) of currently available regression tree algorithms for software fault prediction.
These are CART-LS (least squares), S-PLUS, and CART-LAD (least absolute deviation). The case study presented comprises software design metrics collected from a large network telecommunications system consisting of almost 13 million lines of code. Tree models using design metrics are built to predict the number of faults in modules. The algorithms are also compared based on the structure and complexity of their tree models. Performance metrics, average absolute and average relative errors, are used to evaluate fault prediction accuracy.",2002,1, 494,Experience from replicating empirical studies on prediction models,"When conducting empirical studies, replications are important contributors to investigating the generality of the studies. By replicating a study in another context, we investigate what impact the specific environment has, related to the effect of the studied object. In this paper, we define different levels of replication to characterise the similarities and differences between an original study and a replication, with particular focus on prediction models for the identification of fault-prone software components. Further, we derive a set of issues and concerns which are important in order to enable replication of an empirical study and to enable practitioners to use the results. To illustrate the importance of the issues raised, a replication case study is presented in the domain of prediction models for fault-prone software components. It is concluded that the results are very divergent, depending on how different parameters are chosen, which demonstrates the need for well-documented empirical studies to enable replication and use",2002,0, 495,Software-implemented fault-tolerance and separate recovery strategies enhance maintainability [substation automation],"This paper describes a novel approach to software-implemented fault tolerance for distributed applications. This new approach can be used to enhance the flexibility and maintainability of the target applications in a cost-effective way. This is reached through a framework-approach including: (1) a library of fault tolerance functions; (2) a middleware application coordinating these functions; and (3) a language for the expression of nonfunctional services, including configuration, error recovery and fault injection. This framework-approach increases the availability and reliability of the application at a justifiable cost, also thanks to the re-usability of the components in different target systems. This framework-approach further increases the maintainability due to the separation of the functional behavior from the recovery strategies that are executed when an error is detected, because the modifications to functional and nonfunctional behavior are, to some extent, independent, and hence less complex to deal with. The resulting tool matches well, e.g., with current industrial requirements for embedded distributed systems, calling for adaptable and reusable software components. The integration of this approach in an automation system of a substation for electricity distribution is reported as a case study. This case study shows in particular the ability of the configuration-and-recovery language ARIEL to allow adaptability to changes in the environment.
This framework-approach is also useful in the context of distributed automation systems that are interconnected via a nondedicated network",2002,0, 496,"An integrated approach to flow, thermal and mechanical modeling of electronics devices","The future success of many electronics companies will depend to a large extent on their ability to initiate techniques that bring schedules, performance, tests, support, production, life-cycle-costs, reliability prediction and quality control into the earliest stages of the product creation process. Earlier papers have discussed the benefits of an integrated analysis environment for system-level thermal, stress and EMC prediction. This paper focuses on developments made to the stress analysis module and presents results obtained for an SMT resistor. Lifetime predictions are made using the Coffin-Manson equation. Comparison with the creep strain energy based models of Darveaux (1997) shows the shear strain based method to underestimate the solder joint life. Conclusions are also made about the capabilities of both approaches to predict the qualitative and quantitative impact of design changes.",2002,0, 497,Hole analysis for functional coverage data,"One of the main goals of coverage tools is to provide the user with informative presentation of coverage information. Specifically, information on large, cohesive sets of uncovered tasks with common properties is very useful. This paper describes methods for discovering and reporting large uncovered spaces (holes) for cross-product functional coverage models. Hole analysis is a presentation method for coverage data that is both succinct and informative. Using case studies, we show how hole analysis was used to detect large uncovered spaces and improve the quality of verification.",2002,0, 498,Formal approaches to software testing,"The process of testing software is an important technique for checking and validating the correctness of software. Unfortunately, it is usually difficult, expensive, time consuming and often error prone to achieve both an effective and efficient testing process. Formal methods are techniques for specifying and verifying software systems using mathematical and logical approaches. This allows the analysis and reasoning of software systems with precision and rigor. Formal methods target the verification and the proving of correctness, while testing can only show the presence of errors. The use of formal methods can also automate the generation of test cases from formal specifications, which can lead to a less expensive and less error-prone testing process.",2002,0, 499,Release date prediction for telecommunication software using Bayesian Belief Networks,"Many techniques are used for cost, quality and schedule estimation in the context of software risk management. Application of Bayesian Belief Networks (BBN) in this area permits process metrics and product metrics (static code metrics) to be considered in a causal way (i.e. each variable within the model has a cause-effect relationship with other variables) and, in addition, current observations can be used to update estimates based on historical data. However, the real situation that researchers face is that process data is often inadequately, or inappropriately, collected and organized by the development organization.
In this paper, we explore whether BBN could be used to predict appropriate release dates for a new set of products from a telecommunication company based on static code metrics data and limited process information collected from an earlier set of the same products. Two models are evaluated, with different methods involved in analyzing the available metrics data.",2002,0, 500,Asymptotic efficiency of two-stage disjunctive testing,"We adapt methods originally developed in information and coding theory to solve some testing problems. The efficiency of two-stage pool testing of n items is characterized by the minimum expected number E(n, p) of tests for the Bernoulli p-scheme, where the minimum is taken over a matrix that specifies the tests that constitute the first stage. An information-theoretic bound implies that the natural desire to achieve E(n, p) = o(n) as n → ∞ can be satisfied only if p(n) → 0. Using random selection and linear programming, we bound some parameters of binary matrices, thereby determining up to positive constants how the asymptotic behavior of E(n, p) as n → ∞ depends on the manner in which p(n) → 0. In particular, it is shown that for p(n) = n^(-β+o(1)), where 0 < β < 1, the asymptotic efficiency of two-stage procedures cannot be improved upon by generalizing to the class of all multistage adaptive testing algorithms",2002,0, 501,Real-time MPEG video encoder with embedded scene change detection and telecine inverse,"This paper describes very cost-effective algorithms to detect scene changes and field repetition in video sequences for a real-time MPEG encoder. It also provides a scheme to support a dynamic GOP structure reflecting the detection outcome on the fly. With these features, the encoder can encode video more efficiently in either quality or bitrate aspect. The proposed detection methods only utilize the existing information, field motion vectors and picture coding type, from MPEG coding and are very suitable for (but not limited to) software-based encoders.",2002,0, 502,Model-based configuration of VPNs,"The design of suitable configurations for virtual private networks (VPNs) is usually difficult and error-prone. The abstract objectives of design are given by high level policies representing various requirements and the designers are often faced with conflicting requirements. Moreover, it is difficult to find a suitable mapping of high level policies to those low level network configurations which correctly and completely implement the abstract objectives. We apply the approach of model-based management where the system itself as well as the management objectives are represented by graphical object instance diagrams. A combination of tool and libraries supports their interactive construction and automated analysis. The implementation of the approach focuses on VPNs which are based on the Linux IPsec software FreeS/WAN.",2002,0, 503,Heaps and stacks in distributed shared memory,"Software-based distributed shared memory (DSM) systems usually do not provide any means to use shared memory regions as stacks or via an efficient heap memory allocator. Instead DSM users are forced to work with very rudimentary and coarse-grain memory (de-)allocation primitives. As a consequence most DSM applications have to """"reinvent the wheel"""", that is to implement simple stack or heap semantics within the shared regions. Obviously, this has several disadvantages. It is error-prone, time-consuming and inefficient. This paper presents an all-in-software DSM that does not suffer from these drawbacks.
Stack and heap organization is adapted to the changed requirements in DSM environments and both stacks and heaps are transparently placed in DSM space by the operating system.",2002,0, 504,Bond and electron beam welding quality control of the aluminum stabilized and reinforced CMS conductor by means of ultrasonic phased-array technology,"The Compact Muon Solenoid (CMS) is one of the general-purpose detectors to be provided for the LHC project at CERN. The design field of the CMS superconducting magnet is 4 T, the magnetic length is 12.5 m and the free bore is 6 m. The coils for CMS are wound of aluminum-stabilized Rutherford type superconductors reinforced with high-strength aluminum alloy. For optimum performance of the conductor a void-free metallic bonding between the high-purity aluminum and the Rutherford type cable as well as between the electron beam welded reinforcement and the high-purity aluminum must be guaranteed. It is the main task of this development work to assess continuously the bond quality over the whole width and the total length of the conductors during manufacture. To achieve this goal we use the ultrasonic phased-array technology. The application of multi-element transducers allows an electronic scanning perpendicular to the direction of production. Such testing is sufficiently fast in order to allow a continuous analysis of the complete bond. Highly sophisticated software allows the on-line monitoring of the bond and weld quality.",2002,0, 505,Predicting TCP throughput from non-invasive network sampling,"In this paper, we wish to derive analytic models that predict the performance of TCP flows between specified endpoints using routinely observed network characteristics such as loss and delay. The ultimate goal of our approach is to convert network observables into representative user and application relevant performance metrics. The main contributions of this paper are in studying which network performance data sources are most reflective of session characteristics, and then in thoroughly investigating a new TCP model based on Padhye et al. (2000) that uses non-invasive network samples to predict the throughput of representative TCP flows between given end-points.",2002,0, 506,Using version control data to evaluate the impact of software tools: a case study of the Version Editor,"Software tools can improve the quality and maintainability of software, but are expensive to acquire, deploy, and maintain, especially in large organizations. We explore how to quantify the effects of a software tool once it has been deployed in a development environment. We present an effort-analysis method that derives tool usage statistics and developer actions from a project's change history (version control system) and uses a novel effort estimation algorithm to quantify the effort savings attributable to tool usage. We apply this method to assess the impact of a software tool called VE, a version-sensitive editor used in Bell Labs. VE aids software developers in coping with the rampant use of certain preprocessor directives (similar to #if/#endif in C source files). Our analysis found that developers were approximately 40 percent more productive when using VE than when using standard text editors.",2002,0, 507,Assessing the applicability of fault-proneness models across object-oriented software projects,"A number of papers have investigated the relationships between design metrics and the detection of faults in object-oriented software.
Several of these studies have shown that such models can be accurate in predicting faulty classes within one particular software product. In practice, however, prediction models are built on certain products to be used on subsequent software development projects. How accurate can these models be, considering the inevitable differences that may exist across projects and systems? Organizations typically learn and change. From a more general standpoint, can we obtain any evidence that such models are economically viable tools to focus validation and verification effort? This paper attempts to answer these questions by devising a general but tailorable cost-benefit model and by using fault and design data collected on two mid-size Java systems developed in the same environment. Another contribution of the paper is the use of a novel exploratory analysis technique - MARS (multivariate adaptive regression splines) to build such fault-proneness models, whose functional form is a priori unknown. The results indicate that a model built on one system can be accurately used to rank classes within another system according to their fault proneness. The downside, however, is that, because of system differences, the predicted fault probabilities are not representative of the system predicted. However, our cost-benefit model demonstrates that the MARS fault-proneness model is potentially viable, from an economical standpoint. The linear model is not nearly as good, thus suggesting a more complex model is required.",2002,0, 508,Software measurement: uncertainty and causal modeling,"Software measurement can play an important risk management role during product development. For example, metrics incorporated into predictive models can give advance warning of potential risks. The authors show how to use Bayesian networks, a graphical modeling technique, to predict software defects and perform """"what if"""" scenarios.",2002,0, 509,Adaptive parameter tuning for relevance feedback of information retrieval,"Relevance feedback is an effective way to improve the performance of an information retrieval system. In practice, the parameters for feedback were usually determined manually without the consideration of the quality of the query. We propose a new concept (adaptiveness) to measure the quality of the query. We built two models to predict the adaptiveness of the query. The parameters for feedback were then determined by the quality of the query. Our experiments on TREC data showed that the performance was improved significantly when compared with blind relevance feedback.",2002,0, 510,The application of a distributed system-level diagnosis algorithm in dynamic positioning system,"This paper introduces the application of a distributed system-level fault diagnosis algorithm for detecting and diagnosing faulty processors in dynamic positioning system (DPS) of an offshore vessel. The system architecture of DPS is a loosely coupled distributed multiprocessor system, which adopts the technique of Intel's MULTIBUS II and develops a software application on the platform of iRMX OS. In this paper a new approach to the diagnosis problem is presented, including an adaptive PMC model, distributed diagnosis including self-diagnosis and interactive-diagnosis, and system graph-theoretic model. The self-diagnosis fully utilises the individual results of built-in self-tests as a part of diagnosis work.
Interactive-diagnosis means that the fault-free units in the system perform simple periodic tests on one another by communicating interactively under the direction of the graph-theoretic model, and every unit sends diagnosis information only to the units it considers fault-free. Finally, we illustrate the procedure of diagnosis verification. The results obtained show that the adaptive PMC model is applicable, the distributed system-level diagnosis algorithm is proper, and the applications of diagnosis and verification are reliable and practicable.",2002,0, 511,Solving the consensus problem in a dynamic group: an approach suitable for a mobile environment,"It is now well recognised that the consensus problem is a fundamental problem when one has to implement fault-tolerant distributed services. We extend the consensus paradigm to asynchronous distributed mobile systems prone to disconnection and process crash failures. The paper first shows that a consensus problem between mobile hosts is reducible to two agreement problems (a consensus problem and a group membership problem) between fixed hosts. Then, following an approach investigated by Guerraoui and Schiper (see IEEE Transactions on Software Engineering, vol.27, no.1, p.29-41, 2001), the paper uses a generic consensus service as a basic building block to construct a modular and simple solution.",2002,0, 512,Reliability assessment of network elements using black box testing,"In this paper, we outline a procedure for quality assurance of network elements before their deployment. Software reliability is assessed using two models: a process-centric model (Musa's (1987) basic model) and a product-centric model (proposed by Hoeflin (2000)). Simultaneous use of both approaches is for sensitivity analysis of the results. In addition, we introduce the concept of deployability to measure the degree of confidence on the decision to deploy the equipment in the field.",2002,0, 513,Formally verified Byzantine agreement in presence of link faults,"This paper shows that deterministic consensus in synchronous distributed systems with link faults is possible, despite the impossibility result of Gray (1978). Instead of using randomization, we overcome this impossibility by moderately restricting the inconsistency that link faults may cause system-wide. Relying upon a novel hybrid fault model that provides different classes of faults for both nodes and links, we provide a formally verified proof that the (m+1)-round Byzantine agreement algorithm OMH (Lincoln and Rushby (1993)) requires n > 2fls + flr + flra + 2(fa + fs) + fo + fm + m nodes for transparently masking at most fls broadcast and flr receive link faults (including at most flra arbitrary ones) per node in each round, in addition to at most fa, fs, fo, fm arbitrary, symmetric, omission, and manifest node faults, provided that m ≥ fa + fo + 1. Our approach to modeling link faults is justified by a number of theoretical results, which include tight lower bounds for the required number of nodes and an analysis of the assumption coverage in systems where links fail independently with some probability p.",2002,0, 514,Assessing the quality of Web-based applications via navigational structures,"We study the link validity of a Web site's navigational structure to enhance Web quality. Our approach employs the principle of statistical usage testing to develop an efficient and effective testing mechanism.
Some advantages of our approach include generating test scripts systematically, providing coverage metrics, and executing hyperlinks only once.",2002,0, 515,Calibration and estimation of redundant signals,"This paper presents an adaptive filter for real-time calibration of redundant signals consisting of sensor data and/or analytically derived measurements. The measurement noise covariance matrix is adjusted as a function of the a posteriori probabilities of failure of the individual signals. An estimate of the measured variable is obtained as a weighted average of the calibrated signals. The weighting matrix is recursively updated in real time instead of being fixed a priori. The filter software is presently hosted on a Pentium platform and is portable to other commercial platforms. The filter can be used to enhance the Instrumentation and Control System Software in large-scale dynamical systems.",2002,0, 516,Structural geological study of Southern Apennine (Italy) using Landsat 7 imagery,"A structural geological study has been carried out by automatic and visual interpretation of Landsat 7 imagery. The new improved features of ETM+ imagery are tested in the southern part of the Apennines mountain chain, which is characterized by several inverse faults and overthrusting. Spatial information is crucial for structure detection; nevertheless spectral data can also help in the geological interpretation of optical images. In order to combine the spatial and spectral information, panchromatic and multispectral images were fused in synergetic imagery. A lineament analysis was accomplished by visual interpretation and additional processing techniques such as edge detection and morphologic filtering. The combination of different analytical techniques enables the production of a lineament map of the study area. A spatial statistical analysis of the lineaments was performed to analyze their frequency and main direction. The structural geological interpretation of remotely sensed data was compared to the field data collected over some sample areas and structural geological studies carried out by different authors. The features of geological interest detected during the interpretation process were digitized using raster-based GIS software. A preliminary vector structural geological map was produced.",2002,0, 517,Stability analysis for reconfigurable systems with actuator saturation,"Discusses a combined analytic and simulation-based approach to assessing the stability of a control law in a system that may be subject to actuator saturation due to failures and subsequent reconfiguration. The analysis is based on linearized plant dynamics, a linearized state-feedback description of the nonlinear controller dynamics, and a nonlinear actuator model. For systems of this type, a method has previously been developed that provides less conservative estimates of the domain of attraction than other available methods. The domain of attraction estimates are used to guide simulation based stability analysis. The combined analytic and simulation based stability assessment approach is implemented in RASCLE, a software package designed to interface with an arbitrary C, C++, or FORTRAN simulation.
Through the combination of analytic stability estimates and automated simulation-based analysis, RASCLE can efficiently provide information about the stability of the full nonlinear system under a wide range of conditions for the purpose of validating a reconfigurable controller.",2002,0, 518,Numerical methods for beautification of reverse engineered geometric models,"Boundary representation models reconstructed from 3D range data suffer from various inaccuracies caused by noise in the data and the model building software. The quality of such models can be improved in a beautification step, which finds geometric regularities approximately present in the model and tries to impose a consistent subset of these regularities on the model. A framework for beautification and numerical methods to select and solve a consistent set of constraints deduced from a set of regularities are presented. For the initial selection of consistent regularities likely to be part of the model's ideal design, priorities and rules indicating simple inconsistencies between the regularities are employed. By adding regularities consecutively to an equation system and trying to solve it by using quasi-Newton optimization methods, inconsistencies and redundancies are detected. The results of experiments are encouraging and show potential for an expansion of the methods based on degree of freedom analysis.",2002,0, 519,Automatic detection and exploitation of branch constraints for timing analysis,"Predicting the worst-case execution time (WCET) and best-case execution time (BCET) of a real-time program is a challenging task. Though much progress has been made in obtaining tighter timing predictions by using techniques that model the architectural features of a machine, significant overestimations of WCET and underestimations of BCET can still occur. Even with perfect architectural modeling, dependencies on data values can constrain the outcome of conditional branches and the corresponding set of paths that can be taken in a program. While branch constraint information has been used in the past by some timing analyzers, it has typically been specified manually, which is both tedious and error prone. This paper describes efficient techniques for automatically detecting branch constraints by a compiler and automatically exploiting these constraints within a timing analyzer. The result is significantly tighter timing analysis predictions without requiring additional interaction with a user.",2002,0, 520,A software-reliability growth model for N-version programming systems,"This paper presents an NHPP-based SRGM (software reliability growth model) for NVP (N-version programming) systems (NVP-SRGM) based on the NHPP (nonhomogeneous Poisson process). Although many papers have been devoted to modeling NVP-system reliability, most of them consider only the stable reliability, i.e., they do not consider the reliability growth in NVP systems due to continuous removal of faults from software versions. The model in this paper is the first reliability-growth model for NVP systems which considers the error-introduction rate and the error-removal efficiency. During testing and debugging, when a software fault is found, a debugging effort is devoted to remove this fault. Due to the high complexity of the software, this fault might not be successfully removed, and new faults might be introduced into the software.
By applying a generalized NHPP model to the NVP system, a new NVP-SRGM is established, in which the multi-version coincident failures are well modeled. A simplified software control logic for a water-reservoir control system illustrates how to apply this new software reliability model. The s-confidence bounds are provided for system-reliability estimation. This software reliability model can be used to evaluate the reliability and to predict the performance of NVP systems. More application is needed to validate fully the proposed NVP-SRGM for quantifying the reliability of fault-tolerant software systems in a general industrial setting. As the first model of its kind in NVP reliability-growth modeling, the proposed NVP-SRGM can be used to overcome the shortcomings of the independent reliability model. It predicts the system reliability more accurately than the independent model and can be used to help determine when to stop testing, which is a key question in the testing and debugging phase of the NVP system-development life cycle",2002,0, 521,Process modelling to support dependability arguments,"Reports work to support dependability arguments about the future reliability of a product before there is direct empirical evidence. We develop a method for estimating the number of residual faults at the time of release from a """"barrier model"""" of the development process, where in each phase faults are created or detected. These estimates can be used in a conservative theory in which a reliability bound can be obtained or can be used to support arguments of fault freeness. We present the work done to demonstrate that the model can be applied in practice. A company that develops safety-critical systems provided access to two projects as well as data over a wide range of past projects. The software development process as enacted was determined and we developed a number of probabilistic process models calibrated with generic data from the literature and from the company projects. The predictive power of the various models was compared.",2002,0, 522,Experimental evaluation of time-redundant execution for a brake-by-wire application,"This paper presents an experimental evaluation of a brake-by-wire application that tolerates transient faults by temporal error masking. A specially designed real-time kernel that masks errors by triple time-redundant execution and voting executes the application on a fail-stop computer node. The objective is to reduce the number of node failures by masking errors at the computer node level. The real-time kernel always executes the application twice to detect errors, and ensures that a fail-stop failure occurs if there is not enough CPU-time available for a third execution and voting. Fault injection experiments show that temporal error masking reduced the number of fail-stop failures by 42% compared to executing the brake-by-wire task without time redundancy.",2002,0, 523,Reliability and availability analysis for the JPL Remote Exploration and Experimentation System,"The NASA Remote Exploration and Experimentation (REE) Project, managed by the Jet Propulsion Laboratory, has the vision of bringing commercial supercomputing technology into space, in a form which meets the demanding environmental requirements, to enable a new class of science investigation and discovery. Dependability goals of the REE system are 99% reliability over 5 years and 99% availability. In this paper we focus on the reliability/availability modeling and analysis of the REE system.
We carry out this task using fault trees, reliability block diagrams, stochastic reward nets and hierarchical models. Our analysis helps to determine the ranges of parameters for which the REE dependability goal will be met. The analysis also allows us to assess different hardware and software fault-tolerance techniques.",2002,0, 524,A versatile and modular consensus protocol,"Investigates a modular and versatile approach to solve the consensus problem in asynchronous distributed systems in which up to f processes may crash (fa and Pb of P which are calculating the diverse functions fa and fb in sequence. If no error occurs in the process of designing and executing Pa and Pb, then f = fa = fb holds. A fault in the underlying processor hardware is likely to be detected by the deviation of the results, i.e. fa(i) ≠ fb(i) for input i. Normally, VDSs are generated by manually applying different diversity techniques. This paper, in contrast, presents a new method and a tool for the automated generation of VDSs with a high detection probability for hardware faults. Moreover, for the first time the diversity techniques are selected by an optimization algorithm rather than chosen intuitively. The generated VDSs are investigated extensively by means of software implemented processor fault injection.",2002,0, 526,Soft error sensitivity characterization for microprocessor dependability enhancement strategy,"This paper presents an empirical investigation on the soft error sensitivity (SES) of microprocessors, using the picoJava-II as an example, through software simulated fault injections in its RTL model. Soft errors are generated under a realistic fault model during program run-time. The SES of a processor logic block is defined as the probability that a soft error in the block causes the processor to behave erroneously or enter into an incorrect architectural state. The SES is measured at the functional block level. We have found that highly error-sensitive blocks are common for various workloads. At the same time soft errors in many other logic blocks rarely affect the computation integrity. Our results show that a reasonable prediction of the SES is possible by deduction from the processor's microarchitecture. We also demonstrate that the sensitivity-based integrity checking strategy can be an efficient way to improve fault coverage per unit redundancy.",2002,0, 527,Modeling and quantification of security attributes of software systems,"Quite often failures in network based services and server systems may not be accidental, but rather caused by deliberate security intrusions. We would like such systems to either completely preclude the possibility of a security intrusion or design them to be robust enough to continue functioning despite security attacks. Not only is it important to prevent or tolerate security intrusions, it is equally important to treat security as a QoS attribute on par with, if not more important than other QoS attributes such as availability and performability. This paper deals with various issues related to quantifying the security attribute of an intrusion tolerant system, such as the SITAR system. A security intrusion and the response of an intrusion tolerant system to the attack is modeled as a random process. This facilitates the use of stochastic modeling techniques to capture the attacker behavior as well as the system's response to a security intrusion. This model is used to analyze and quantify the security attributes of the system.
The security quantification analysis is first carried out for steady-state behavior leading to measures like steady-state availability. By transforming this model to a model with absorbing states, we compute a security measure called the """"mean time (or effort) to security failure"""" and also compute probabilities of security failure due to violations of different security attributes.",2002,0, 528,Impact of fault management server and its failure-related parameters on high-availability communication systems,"In this paper, we investigate the impact of a fault management server and its failure-related parameters on high-availability communication systems. The key point is that, to achieve high overall availability of a communication system, the availability of the fault management server itself is not as important as its fail-safe ratio and fault coverage. In other words, in building fault management servers, more attention should be paid to improving the server's ability of detecting faults in functional units and its own isolation under failure from the functional units. Tradeoffs can be made between the availability of the fault management server, the fail-safe ratio and the fault coverage ratio to optimize system availability. A cost-effective design for the fault management server is proposed in this paper.",2002,0, 529,A compositional approach to monitoring distributed systems,"This paper proposes a specification-based monitoring approach for automatic run-time detection of software errors and failures of distributed systems. The specification is assumed to be expressed in communicating finite state machines based formalism. The monitor observes the external I/O and partial state information of the target distributed system and uses them to interpret the specification. The approach is compositional as it achieves global monitoring by combining the component-level monitoring. The core of the paper describes the architecture and operations of the monitor. The monitor includes several independent mechanisms, each tailored to detecting specific kinds of errors or failures. Their operations are described in detail using illustrative examples. Techniques for dealing with nondeterminism and concurrency issues in monitoring a distributed system are also discussed with respect to the considered model and specification. A case study describing the application of the prototype monitor to an embedded system is presented.",2002,0, 530,Automated algorithm to delineate Z-bands in electron microscopic images of the human skeletal muscle,"The effects of exercise, nutrition, and aging on the development of human skeletal muscles can be observed from the morphological changes of the Z-band under the electron microscope. Quantification of the Z-band damage has provided useful information to exercise physiology research but is usually a labor-intensive process. In this study, an automated image-processing algorithm has been developed to delineate the Z-band with a given start point. The algorithm detects the borders of the Z-band in an incremental fashion along the long axis. At each step of the iteration local border points are detected along radial directions and the centerline is extended toward both ends of the Z-band. The process iterates itself until a stopping criterion is met. The algorithm has been coded in C++ and used in our laboratory for exercise science research.
The software has significantly reduced the processing time and provided reliable high-quality data for the study of Z-band damage.",2002,0, 531,Random testing of multi-port static random access memories,"This paper presents the analysis and modeling of random testing for its application to multi-port memories. Ports operate to simultaneously test the memory and detecting multi-port related faults. The state of the memory under test in the presence of inter-port faults has been modeled using Markov state diagrams. In the state diagrams, transition probabilities are established by considering the effects of the memory operations (read and write), the lines involved in the fault (bit and word-lines) as well as the types and number of ports. Test lengths per cell at 99.9% coverage are given.",2002,0, 532,Multi-level fault injection experiments based on VHDL descriptions: a case study,"The probability of transient faults increases with the evolution of technologies. There is a corresponding increased demand for an early analysis of erroneous behaviors. This paper reports on results obtained with SEU-like fault injections in VHDL descriptions of digital circuits. Several circuit description levels are considered, as well as several fault modeling levels. These results show that an analysis performed at a very early stage in the design process can actually give a helpful insight into the response of a circuit when a fault occurs.",2002,0, 533,Analysis of SEU effects in a pipelined processor,"Modern processors embed features such as pipelined execution units and cache memories that can hardly be controlled by programmers through the processor instruction set. As a result, software-based fault injection approaches are no longer suitable for assessing the effects of SEUs in modern processors, since they are not able to evaluate the effects of SEUs affecting pipelines and caches. In this paper we report an analysis of a commercial processor core where the effects of SEUs located in the processor pipeline and cache memories are studied. Moreover the obtained results are compared with those software-based approaches provide. Experimental results show that software-based approaches may lead to errors during the failure rate estimation of up to 400%.",2002,0, 534,Error rate estimation for a flight application using the CEU fault injection approach,This paper aims at validating the efficiency of a fault injection approach to predict error rate on applications devoted to operate in radiation environment. Soft error injection experiments and radiation ground testing were performed on software modules using a digital board built on a digital signal processor which is included in a satellite instrument. The analysis of experimental results put in evidence the potentialities offered by the used methodology to predict the error rate of complex applications.,2002,0, 535,Predictable instruction caching for media processors,"The determinism of instruction cache performance can be considered a major problem in multimedia devices which hope to maximise their quality of service. If instructions are evicted from the cache by competing blocks of code, the running application will take significantly longer to execute than if the instructions were present. 
Since it is difficult to predict when this interference will occur, the performance of the algorithm at a given point in time is unclear. We propose the use of an automatically configured partitioned cache to protect regions of the application code from each other and hence minimise interference. As well as being specialised to the purpose of providing predictable performance, this cache can be specialised to the application being run, rather than for the average case, using simple compiler algorithms.",2002,0, 536,"Mathematical modeling, performance analysis and simulation of current Ethernet computer networks","This work describes an object oriented software, to be portable among different types of computer platforms, able to perform the simulation of different components of Ethernet local and long distance area networks by using the TCP/IP protocol. It is also able to detect problems in projects and operation of communication networks through the separate or joint analysis of these elements. The functions of the system are as follows: (a) analysis of elements from different layers and protocols of the simulated network (reference to OSI model); (b) analysis and efficiency measurement (quality of transmission, transfer rate, error rate) of the information transmitted on the network; (c) network performance evaluation and link capability analysis; (d) analysis of error detection and further correction capability as well as analysis of network failure tolerance. The software works using mathematical models that represent elements of different layers (reference to OSI model) of the network to be simulated as well as the performance of the abovementioned joint elements.",2002,0, 537,Optimizing test strategies during PCB design for boards with limited ICT access,"Engineers have used past experience or subjective preference as a means for assigning test strategies to new products without analyzing the benefits and weaknesses of various different test approaches in a quantitative manner. DFT (design for test) software tools that enable testability analysis during board design allow test engineers to work concurrently with designers. Case study results demonstrate that coverage predicted by DFT software is realistic when compared to actual fault coverage achieved in production. Using DFT software during PCB design to model the fault coverage of different test strategies and make ICT (in-circuit test) access tradeoffs can significantly reduce cost and improve quality. Defect capture rates more than doubled when using alternate test strategies and production line beat rates varied significantly depending on the test strategy chosen. When DFT software enables these decisions early in the product life cycle, both OEMs and EMS providers can win by driving cost reductions through the entire product life cycle from NPI (new product introduction) through manufacturing and warranty.",2002,0, 538,Application of ANN to power system fault analysis,"This paper presents the computer architecture development using Artificial Neural Network (ANN) as an approach for predicting fault in a large interconnected transmission system. Transmission line faults can be classified using the bus voltage and line fault current. Monitoring the performance of these two factors is very useful for power system protection devices. The ANN is designed to be incorporated with a matrix based software tool MATLAB Version 6.0, which deals with fault diagnosis in power system.
In MATLAB software modules, the balanced and unbalanced fault can be simulated. The data generated from this software are to be used as training and testing sets in the Neural Ware Simulator.",2002,0, 539,Image processing application in seismic reflection to evaluate geohazard region,"GeoJava is GUI (graphical user interface) software that is powerful, yet very simple to use, providing the means to filter color images in RGB (red-green-blue) color space and YUV color space, to produce high quality filtered images. The YUV color space model has been proved to filter images smoothly without losing any data, only enhancing image structure. Different images were used, and satisfactory results were obtained of multi-application objectives. Image processing enhances the images of the CDP (common depth midpoint) seismic reflection sections using different stacking filters to provide supplementary results that are useful for assessing geohazard zones. The results in images clarify the exact location and geometry of cavities and sinkholes, beside localizing the areas that are under stress, and also delineate weak zones.",2002,0, 540,Strategy to improve the indoor coverage for mobile station,"This paper presents an evaluation of whether the indoor signal strength for commercial buildings fulfills the cell planning requirement. In order to provide high quality cellular service, it is necessary to place an array of distributed antennas connected using feeder cable within the building. With the feeder cable approach, splitters and computer software, we develop using Visual Basic 6.0. The effective radiated power (ERP) at the distributed antenna can be calculated and it can also be predicted how far the signal can go with the calculated ERP. This design process can be used to obtain an estimated indoor system requirement. All of the requirements can be achieved by applying the method in this paper.",2002,0, 541,Neighborhood selection for IDDQ outlier screening at wafer sort,"To screen defective dies, IDDQ tests require a reliable estimate of each die's defect-free measurement. The nearest-neighbor residual (NNR) method provides a straightforward, data-driven estimate of test measurements for improved identification of die outliers",2002,0, 542,"QoS tradeoffs for guidance, navigation, and control","Future space missions will require onboard autonomy to reduce data, plan activities, and react appropriately to complex dynamic events. Software to support such behaviors is computationally-intensive but must execute with sufficient speed to accomplish mission goals. The limited processing resources onboard spacecraft must be split between the new software and required guidance, navigation, control, and communication tasks. To-date, control-related processes have been scheduled with fixed execution period, then autonomy processes are fit into remaining slack time slots. We propose the use of quality-of-service (QoS) negotiation to explicitly trade off the performance of all processing tasks, including those related to spacecraft control. We characterize controller performance based on exhaustive search and a Lyapunov optimization technique and present results that analytically predict worst-case performance degradation characteristics. 
The results are illustrated by application to a second-order linear system with a linear state feedback control law.",2002,0, 543,Fault injection experiment results in space borne parallel application programs,"Development of the REE Commercial-Off-The-Shelf (COTS) based space-borne supercomputer requires a detailed knowledge of system behavior in the presence of Single Event Upset (SEU) induced faults. When combined with a hardware radiation fault model and mission environment data in a medium grained system model, experimentally obtained fault behavior data can be used to: predict system reliability, availability and performance; determine optimal fault detection methods and boundaries; and define high ROI fault tolerance strategies. The REE project has developed a fault injection suite of tools and a methodology for experimentally determining system behavior statistics in the presence of application level SEU induced transient faults. Initial characterization of science data application code for an autonomous Mars Rover geology application indicates that this code is relatively insensitive to SEUs and thus can be made highly immune to application level faults with relatively low overhead strategies.",2002,0, 544,A new audio skew detection and correction algorithm,"The lack of synchronisation between a sender clock and a receiver audio clock in an audio application results in an undesirable effect known as """"audio skew"""". This paper proposes and implements a new approach to detecting and correcting audio skew, focusing on the accuracy of measurements and on the algorithm's effect on the audio experience of the listener. The algorithms presented are shown to remove audio skew successfully, thus reducing delay and loss and hence improving audio quality.",2002,0, 545,Upgrading engine test cells for improved troubleshooting and diagnostics,"Upgrading military engine test cells with advanced diagnostic and troubleshooting capabilities will play a critical role in increasing aircraft availability and test cell effectiveness while simultaneously reducing engine operating and maintenance costs. Sophisticated performance and mechanical anomaly detection and fault classification algorithms utilizing thermodynamic, statistical, and empirical engine models are now being implemented as part of a United States Air Force Advanced Test Cell Upgrade Initiative. Under this program, a comprehensive set of realtime and post-test diagnostic software modules, including sensor validation algorithms, performance fault classification techniques and vibration feature analysis are being developed. An automated troubleshooting guide is also being implemented to streamline the troubleshooting process for both inexperienced and experienced technicians. This artificial intelligence based tool enhances the conventional troubleshooting tree architecture by incorporating probability of occurrence statistics to optimize the troubleshooting path. This paper describes the development and implementation of the F404 engine test cell upgrade at the Jacksonville Naval Air Station.",2002,0, 546,A test station health monitoring system [military aircraft],"This paper presents a process to monitor test station health using the Weibull method and statistical patterns. The methodology is currently being applied to the F-16 automated test equipment (ATE) at the Ogden, Utah Air Logistic Center (OO-ALC) maintenance depot. 
An automated stream of test data collected from ATEs is used to process test results and to identify improvements necessary to increase the failure forecast accuracy. The paper discusses solutions to identify causes of 're-test OK' (RTOK) due to discrepancies between software testing procedures in the line and shop repairable units. The process includes a decision support system that uses artificial intelligence methods, such as expert system and neural networks, and a knowledge database to improve the troubleshooting capability. The paper also discusses a prototype development that collects malfunction codes (MFL) originated by the aircraft bus monitoring system. The MFL information is correlated with test results to detect RTOK causes.",2002,0, 547,Using SPIN model checking for flight software verification,"Flight software is the central nervous system of modern spacecraft. Verifying spacecraft flight software to assure that it operates correctly and safely is presently an intensive and costly process. A multitude of scenarios and tests must be devised, executed and reviewed to provide reasonable confidence that the software will perform as intended and not endanger the spacecraft. Undetected software defects on spacecraft and launch vehicles have caused embarrassing and costly failures in recent years. Model checking is a technique for software verification that can detect concurrency defects that are otherwise difficult to discover. Within appropriate constraints, a model checker can perform an exhaustive state-space search on a software design or implementation and alert the implementing organization to potential design deficiencies. Unfortunately, model checking of large software systems requires an often-too-substantial effort in developing and maintaining the software functional models. A recent development in this area, however, promises to enable software-implementing organizations to take advantage of the usefulness of model checking without hand-built functional models. This development is the appearance of ""model extractors"". A model extractor permits the automated and repeated testing of code as built rather than of separate design models. This allows model checking to be used without the overhead and perils involved in maintaining separate models. We have attempted to apply model checking to legacy flight software from NASA's Deep Space One (DS1) mission. This software was implemented in C and contained some known defects at launch that are detectable with a model checker. We describe the model checking process, the tools used, and the methods and conditions necessary to successfully perform model checking on the DS1 flight software.",2002,0, 548,A fuzzy sets approach to new product portfolio management,"The evaluation of R&D projects in a high technology firm is very important. A lot of them quite often do not lead to new products as management did not take into consideration indexes such as probability of commercial success, technological success, strategic fit, etc which cannot be expressed in a quantitative form. An efficient and reliable approach for evaluating R&D projects capable of handling simultaneously the quantitative and qualitative criteria involved based on the theory of fuzzy logic is presented and a software model of the approach has been developed and tested in a real environment.
It is a multiple criteria decision-making method where all projects are rated according to a number of quantitative and qualitative criteria capturing possibilities of technical and commercial success and the consistency of the projects with business strategy. We report on the criteria used for the evaluation of the projects and on the operation of the software model.",2002,0, 549,Data-based adviser to operators of complex processes,"A probabilistic advisory tool for operators of complex processes is being developed in the framework of the ProDaCTool international project. The project was motivated by the need to maintain the highest possible quality of the product - metal strip processed on a cold rolling mill - under various conditions. Even though all particular rolling mill controllers are tuned properly, there is a lot of possible settings of manually adjusted parameters which influence the quality of production. When, in addition, the rolling mill processes a variety of material types, it is difficult to find out the causes of potential slight deviations in quality.",2002,0, 550,Automated identification of single nucleotide polymorphisms from sequencing data,"Single nucleotide polymorphisms (SNPs) provide abundant information about genetic variation. Large scale discovery of high frequency SNPs is being undertaken using various methods. However, the publicly available SNP data are not always accurate, and therefore should be verified. If only a particular gene locus is concerned, locus-specific polymerase chain reaction amplification may be useful. Problem of this method is that the secondary peak has to be measured. We have analyzed trace data from conventional sequencing equipment and found an applicable rule to discern SNPs from noise. We have developed software that integrates this function to automatically identify SNPs. The software works accurately for high quality sequences and also can detect SNPs in low quality sequences. Further, it can determine allele frequency, display this information as a bar graph and assign corresponding nucleotide combinations. It is very useful for identifying de novo SNPs in a DNA fragment of interest.",2002,0, 551,Quality-based tuning of cell downlink load target and link power maxima in WCDMA,"The objective of the paper is to validate the feasibility of auto-tuning WCDMA link power maxima and adjust cell downlink load level targets based on quality of service. The downlink cell load level is measured using total wideband transmission power. The quality indicators used are call-blocking probability, packet queuing probability and downlink link power outage. The objective is to improve performance and operability of the network with control software aiming for a specific quality of service. The downlink link maxima in each cell are regularly adjusted with a control method in order to improve performance under different load level targets. The approach is validated using a dynamic WCDMA system simulator. The conducted simulations support the assumption that the downlink performance can be managed and improved by the proposed cell-based automated optimization.",2002,0, 552,Managing software projects with business-based requirements,"For many organizations that are neither software product companies nor system integrators, the expense and cultural change required for full process rollout can be prohibitive. Proponents of agile processes/methods (such as extreme programming) suggest that these """"lightweight"""" approaches are extremely effective. 
I would agree that there are many powerful aspects within these approaches. I suggest, however, that by taking an objective-based business requirements approach to project management, software projects have a high probability of running on time, and remaining in scope and within budget. Addressing requirement challenges, independent of adopting a full process, can offer many of the benefits of full process adoption while avoiding most of the expense and human issues involved with full process rollout. A business-based requirements approach is an easy-to-adopt, risk-free entry point that offers tangible quality improvements. This approach suits any project scope. Whether building a complex system for enterprise resource planning or customer relationship management, or developing small, single-user software programs, defining business requirements improves any system delivery.",2002,0, 553,First experiments relating behavior selection architectures to environmental complexity,"Assessing the performance of behavior selection architectures for autonomous robots is a complex task that depends on many factors. This paper reports a study comparing four motivated behavior-based architectures in different worlds with varying degrees and types of complexity, and analyzes performance results (in terms of viability, life span, and global life quality) relating architectural features to environmental complexity.",2002,0, 554,Static analysis of SEU effects on software applications,"Control flow errors have been widely addressed in literature as a possible threat to the dependability of computer systems, and many clever techniques have been proposed to detect and tolerate them. Nevertheless, it has never been discussed if the overheads introduced by many of these techniques are justified by a reasonable probability of incurring control flow errors. This paper presents a static executable code analysis methodology able to compute, depending on the target microprocessor platform, the upper-bound probability that a given application incurs in a control flow error.",2002,0, 555,Application of high-quality built-in test to industrial designs,"This paper presents an approach for high-quality built-in test using a neighborhood pattern generator (NPG). The proposed NPG is practically acceptable because (a) its structure is independent of circuit under test, (b) it requires low area overhead and no performance degradation, and (c) it can encode deterministic test cubes, not only for stuck-at faults but also transition faults, with high probability. Experimental results for large industrial circuits illustrate the efficiency of the proposed approach.",2002,0, 556,Measuring Web application quality with WebQEM,"This article discusses using WebQEM, a quantitative evaluation strategy to assess Web site and application quality. Defining and measuring quality indicators can help stakeholders understand and improve Web products. An e-commerce case study illustrates the methodology's utility in systematically assessing attributes that influence product quality",2002,0, 557,The use of Kohonen self-organizing maps in process monitoring,"Process monitoring and fault diagnosis have been studied widely in recent years, and the number of industrial applications with encouraging results has grown rapidly. In the case of complex processes a computer aided monitoring enhances operators' possibilities to run the process economically. 
In this paper a fault diagnosis system is described and some application results from the Outokumpu Harjavalta smelter are discussed. The system monitors process states using neural networks (Kohonen self-organizing maps, SOM) in conjunction with heuristic rules, which are also used to detect equipment malfunctions.",2002,0, 558,Fuzzy logic system for fuzzy event tree computing,"The paper presents the authors' contribution in developing a fuzzy logic system for event-tree analysis. The fuzzy event-tree method can be used for the protection and automation of power systems independent safety analysis. The main contribution of the proposed analysis is the evaluation of the general fuzzy conclusion named ""general safety"" associated to all the paths in the tree. A complex software tool named ""Fuzzy Event Tree Analysis"" had to be elaborated. The program allows ""general safety"" fuzzy parameter computing and also protected power system-protection system critical analysis.",2002,0, 559,Error detection by selective procedure call duplication for low energy consumption,"As commercial off-the-shelf (COTS) components are used in system-on-chip (SoC) design technique that is widely used from cellular phones to personal computers, it is difficult to modify hardware design to implement hardware fault-tolerant techniques and improve system reliability. Two major concerns of this paper are to: (a) improve system reliability by detecting transient errors in hardware, and (b) reduce energy consumption by minimizing error-detection overhead. The objective of this new technique, selective procedure call duplication (SPCD), is to keep the system fault-secured (preserve data integrity) in the presence of transient errors, with minimum additional energy consumption. The basic approach is to duplicate computations and then to compare their results to detect errors. There are 3 choices for duplicate computation: (1) duplicating every statement in the program and comparing results, (2) re-executing procedures through duplicated procedure calls, and comparing results, and (3) re-executing the whole program, and comparing the final results. SPCD combines choices (1) and (2). For a given program, SPCD analyzes procedure-call behavior of the program, and then determines which procedures can have duplicated statements [choice (1)] and which procedure calls can be duplicated [choice (2)] to minimize energy consumption with reasonable error-detection latency. Then, SPCD transforms the original program into a new program that can detect errors with minimum additional energy consumption by re-executing the statements or procedures. SPCD was simulated with benchmark programs; it requires less than 25% additional energy for error detection than previous techniques that do not consider energy consumption.",2002,0, 560,Software reliability growth with test coverage,"Software test-coverage measures quantify the degree of thoroughness of testing. Tools are now available that measure test-coverage in terms of blocks, branches, computation-uses, predicate-uses, etc. that are covered. This paper models the relations among testing time, coverage, and reliability. An LE (logarithmic-exponential) model is presented that relates testing effort to test coverage (block, branch, computation-use, or predicate-use).
The model is based on the hypothesis that the enumerable elements (like branches or blocks) for any coverage measure have various probabilities of being exercised; just like defects have various probabilities of being encountered. This model allows relating a test-coverage measure directly with defect-coverage. The model is fitted to 4 data-sets for programs with real defects. In the model, defect coverage can predict the time to next failure. The LE model can eliminate variables like test-application strategy from consideration. It is suitable for high reliability applications where automatic (or manual) test generation is used to cover enumerables which have not yet been tested. The data-sets used suggest the potential of the proposed model. The model is simple and easily explained, and thus can be suitable for industrial use. The LE model is based on the time-based logarithmic software-reliability growth model. It considers that: at 100% coverage for a given enumerable, all defects might not yet have been found.",2002,0, 561,Using regression trees to classify fault-prone software modules,"Software faults are defects in software modules that might cause failures. Software developers tend to focus on faults, because they are closely related to the amount of rework necessary to prevent future operational software failures. The goal of this paper is to predict which modules are fault-prone and to do it early enough in the life cycle to be useful to developers. A regression tree is an algorithm represented by an abstract tree, where the response variable is a real quantity. Software modules are classified as fault-prone or not, by comparing the predicted value to a threshold. A classification rule is proposed that allows one to choose a preferred balance between the two types of misclassification rates. A case study of a very large telecommunications systems considered software modules to be fault-prone, if any faults were discovered by customers. Our research shows that classifying fault-prone modules with regression trees and the using the classification rule in this paper, resulted in predictions with satisfactory accuracy and robustness.",2002,0, 562,Miro - middleware for mobile robot applications,"Developing software for mobile robot applications is a tedious and error-prone task. Modern mobile robot systems are distributed systems, and their designs exhibit large heterogeneity in terms of hardware, operating systems, communications protocols, and programming languages. Vendor-provided programming environments have not kept pace with recent developments in software technology. Also, standardized modules for certain robot functionalities are beginning to emerge. Furthermore, the seamless integration of mobile robot applications into enterprise information processing systems is mostly an open problem. We suggest the construction and use of object-oriented robot middleware to make the development of mobile robot applications easier and faster, and to foster portability and maintainability of robot software. With Miro, we present such a middleware, which meets the aforementioned requirements and has been ported to three different mobile platforms with little effort. Miro also provides generic abstract services like localization or behavior engines, which can be applied on different robot platforms with virtually no modifications.",2002,0, 563,Application of hazard analysis to software quality modelling,"Quality is a fundamental concept in software and information system development. 
It is also a complex and elusive concept. A large number of quality models have been developed for understanding, measuring and predicting quality of software and information systems. It has been recognised that quality models should be constructed in accordance to the specific features of the application domain. This paper proposes a systematic method for constructing quality models of information systems. A diagrammatic notation is devised to represent quality models that enclose application specific features. Techniques of hazard analysis for the development and deployment of safety related systems are adapted for deriving quality models from system architectural designs. The method is illustrated by a part of Web-based information systems.",2002,0, 564,Rejection strategies and confidence measures for a k-NN classifier in an OCR task,"In handwritten character recognition, the rejection of extraneous patterns, like image noise, strokes or corrections, can improve significantly the practical usefulness of a system. In this paper a combination of two confidence measures defined for a k-nearest neighbors (NN) classifier is proposed. Experiments are presented comparing the performance of the same system with and without the new rejection rules.",2002,0, 565,Edge color distribution transform: an efficient tool for object detection in images,"Object detection in images is a fundamental task in many image analysis applications. Existing methods for low-level object detection always perform the color-similarity analyses in the 2D image space. However, the crowded edges of different objects make the detection complex and error-prone. The paper proposes to detect objects in a new edge color distribution space (ECDS) rather than in the image space. In the 3D ECDS, the edges of different objects are segregated and the spatial relation of a same object is kept as well, which make the object detection easier and less error-prone. Since uniform-color objects and textured objects have different distribution characteristics in ECDS, the paper gives a 3D edge-tracking algorithm for the former and a cuboid-growing algorithm for the latter. The detection results are correct and noise-free, so they are suitable for the high-level object detection. The experimental results on a synthetic image and a real-life image are included.",2002,0, 566,Private information retrieval in the presence of malicious failures,"In the application domain of online information services such as online census information, health records and real-time stock quotes, there are at least two fundamental challenges: the protection of users' privacy and the assurance of service availability. We present a fault-tolerant scheme for private information retrieval (FT-PIR) that protects users' privacy and ensure service provision in the presence of malicious server failures. An error detection algorithm is introduced into this scheme to detect the corrupted results from servers. The analytical and experimental results show that the FT-PIR scheme can tolerate malicious server failures effectively and prevent any information of users front being leaked to attackers. This new scheme does not rely on any unproven cryptographic premise and the availability of tamperproof hardware. 
An implementation of the FT-PIR scheme on a distributed database system suggests just a modest level of performance overhead.",2002,0, 567,A structured approach to handling on-line interface upgrades,"The integration of complex systems out of existing systems is an active area of research and development. There are many practical situations in which the interfaces of the component systems, for example belonging to separate organisations, are changed dynamically and without notification. In this paper we propose an approach to handling such upgrades in a structured and disciplined fashion. All interface changes are viewed as abnormal events and general fault tolerance mechanisms (exception handling, in particular) are applied to dealing with them. The paper outlines general ways of detecting such interface upgrades and recovering after them. An Internet Travel Agency is used as a case study",2002,0, 568,Tool support for distributed inspection,"Software inspection is one of the best practices for detecting and removing defects early in the software development process. We present a tool to support geographically distributed inspection teams. The tool adopts a reengineered inspection process to minimize synchronous activities and coordination problems, and a lightweight architecture to maximize easy of use and deployment.",2002,0, 569,Maintenance in joint software development,"The need to combine several efforts in software development has become critical because of the software development requirements and geographically dispersed qualified human resources, background skills, working methods, and software tools among autonomous software enterprises - joint software development. We know that the organization and the development of software depend largely on human initiatives. All human initiatives are subject to change and perpetual evolution. A variety of studies have contributed to highlight the problems arising from the change and the perpetual evolution of the software development. These studies revealed that the majority of the effort spent on the software process is spent on maintenance. To reduce the efforts of software maintenance in joint software development, it is therefore necessary to detect the features of software maintenance processes susceptible to be automatized. The software maintenance problems in joint software development are caused by the changes and perpetual evolutions at the organizational level of an enterprise or at the software development level. The complex nature of changes and perpetual evolutions at the organizational and the development levels includes the following factors: the maintenance contracts among enterprises, the abilities to develop, experiences and background in software development, methodologies of software development, tools available and localization of the organizations. This paper looks into the new software maintenance problems in joint software development by geographically dispersed virtual enterprises",2002,0, 570,Improving the robustness of MPEG-4 video communications over wireless/3G mobile networks,"Two major issues in providing true end-to-end wireless/mobile video capabilities are: interoperability among network platforms and robustness of video compression algorithms in error-prone environments. In this paper, we mainly focus on the second issue and show how error resilience techniques can be used to improve the video quality. 
We argue that the error resilient tools provided within the MPEG-4 standard are not sufficient to provide acceptable quality in wireless/mobile networks, but that this quality can be significantly improved by the inclusion of hierarchical MPEG-4 video coding techniques. We present a novel hierarchical MPEG-4 video scheme particularly designed for video communications over QoS-capable wireless/mobile networks.",2002,0, 571,Automated software robustness testing - static and adaptive test case design methods,"Testing is essential in the development of any software system. Testing is required to assess a system's functionality and quality of operation in its final environment. This is especially of importance for systems being assembled from many self-contained software components. In this article, we focus on automated testing of software component robustness, which is a component's ability to handle invalid input data or environmental conditions. We describe how large numbers of test cases can effectively and automatically be generated from small sets of test values. However, there is a great demand on ways to efficiently reduce this mass of test cases as actually executing them on a data processing machine would be too time consuming and expensive. We discuss static analytic methods for test case reduction and some of the disadvantages they bring. Finally a more intelligent and efficient approach is introduced, the Adaptive Test Procedure for Software Robustness Testing developed at ABB Corporate Research in Ladenburg. Along with these discussions the need for intelligent test approaches is illustrated by the Ballista methodology for automated robustness testing of software component interfaces. An object-oriented approach based on parameter data types rather than component functionality essentially eliminates the need for function-specific test scaffolding.",2002,0, 572,Towards an impact analysis for component based real-time product line architectures,"In this paper we propose a method for predicting the consequences of adding new components to an existing product line in the real-time systems domain. We refer to such a prediction as an impact analysis. New components are added as new features are introduced in the product line. Adding components to a real-time system may affect the temporal correctness of the system. In our approach to product line architectures, products are constructed by assembling components. By having a prediction enabled component technology as the underlying component technology, we can predict the behavior of an assembly of components. We demonstrate our approach by an example in which temporal correctness and consistency between versions of components is predicted.",2002,0, 573,Assessing CBD - what's the difference?,"The use of pre-built software components has increased remarkably in the last few years and a lot of companies are now developing software with a component-based approach. However, software developers as well as software process assessors have experienced problems when using existing models in a component-based context. This paper analyses the need for and proposes a set of processes suitable for CBD development and assessment methods and how and in what way these processes differ from those defined in the ISO/IEC 15504, the international standard for software process assessment.
The paper focuses on the processes needed to assemble components in to a larger application and explains the need for a changed process reference model for CBD, which triggers the need for a new software process assessment methodology.",2002,0, 574,SPiCE in action - experiences in tailoring and extension,"Today the standard ISO/IEC TR 15504: software process assessment commonly known as SPiCE has been in use for more than 5 years, with hundreds of software process assessments performed in organizations around the world. The success of the ISO 15504 approach is demonstrated by its application in and extension to all sectors featuring software development, in particular, space, automotive, finance, healthcare, and electronics. As the current Technical Report makes the transition into an international standard, many initiatives are underway to expand the application of process assessment to areas even outside of software development. This paper reports on experiences in the use of ISO 15504 both in tailoring the standard for particular industrial sectors and in expanding the process assessment approach into new domains. In particular, three projects are discussed: SPiCE for SPACE, a ISO/IEC TR 15504 conformant method of software process assessment developed for the European space industry; SPiCE-9000 for SPACE, an assessment method for space quality management systems, based on ISO 9001:2000; and NOVE-IT, a project of the Swiss federal government to establish and assess processes covering IT procurement, development, operation, and service provision.",2002,0, 575,What has culture to do with SPI?,"This paper addresses cross-cultural issues in software process improvement (SPI). Cultural factors, which may have a bearing on successful adoption and implementation of software quality management systems, were identified during a field-study in five countries. A self-assessment model, called CODES, has been developed for use by organisations developing software in different parts of the world. The CODES model includes two sub-models. One of the sub-models, called the C.HI.D.DI typology tries to identify the national culture and the second sub-model called the top-down bottom-up model tries to identify the organisational culture and structure. The CODES model investigates to what degree there is a fit between the organisational and the national culture and aims to predict a suitable software quality management system.",2002,0, 576,Teletraffic simulation of cellular networks: modeling the handoff arrivals and the handoff delay,The paper presents an analysis of teletraffic variables in cellular networks. The variables studied are the time between two consecutive handoff arrivals and the handoff delay. These teletraffic variables are characterized by means of an advanced software simulator that models several scenarios assuming fixed channel allocation. Information about the quality of service is also provided. A large set of scenarios has been simulated and the characterization results derived from its study have been presented and analyzed.,2002,0, 577,A comprehensive and practical approach for power system security assessment,"This paper proposes a new methodology of the power system dynamic security assessment. It automatically and successively scans contingencies of a power system; furthermore, based on the concept of stability margin, the severity of the contingencies is ranked. Its complement application is on the base of the combination with a dynamic simulation program. 
The authors demonstrate how to apply this new method to assess the stability security of the real-world network. In the assessment, two types of contingencies (N-1 and N-2) are applied on transformer, generator, bus or line. Assessment results help researchers and operators to make proper adjustment of system operation to ensure system security. It is shown that the new methodology is a comprehensive and practical approach to assess the power system security.",2002,0, 578,Fault location using wavelet packets,"A technique using wavelet packets is presented for accurate fault location on power lines with branches. It relies on detecting fault-generated transient traveling waves and identifies some waves reflected back from discontinuities and the fault point. Wavelet packets analysis is used to decompose and reconstruct high-frequency fault signals. An eigenvector matrix consists of local energies of high-frequency content of the fault signal; the faulty section is determined by comparing local energies in the eigenvector matrix with a given threshold. With the faulty section determined, the time ranges of two reflected waves related with fault point would be found out, then the fault point is located. The paper shows the theoretical development of the algorithm, and together with the results obtained using EMTP simulation software modeling, a simple 10 kV overhead line circuit.",2002,0, 579,Binarization of low quality text using a Markov random field model,"Binarization techniques have been developed in the document analysis community for over 30 years and many algorithms have been used successfully. On the other hand, document analysis tasks are more and more frequently being applied to multimedia documents such as video sequences. Due to low resolution and lossy compression, the binarization of text included in the frames is a non-trivial task. Existing techniques work without a model of the spatial relationships in the image, which makes them less powerful. We introduce a new technique based on a Markov random field model of the document. The model parameters (clique potentials) are learned from training data and the binary image is estimated in a Bayesian framework. The performance is evaluated using commercial OCR software.",2002,0, 580,Reducing No Fault Found using statistical processing and an expert system,"This paper describes a method for capturing avionics test failure results from Automated Test Equipment (ATE) and statistically processing this data to provide decision support for software engineers in reducing No Fault Found (NFF) cases at various testing levels. NFFs have plagued the avionics test and repair environment for years at enormous cost to readiness and logistics support. The costs in terms of depot repair and user exchange dollars that are wasted annually for unresolved cases are graphically illustrated. A diagnostic data model is presented, which automatically captures, archives and statistically processes test parameters and failure results which are then used to determine if an NFF at the next testing level resulted from a test anomaly. The model includes statistical process methods, which produce historical trend patterns for each part and serial numbered unit tested. An Expert System is used to detect statistical pattern changes and stores that information in a knowledge base. A Decision Support System (DSS) provides advisories for engineers and technicians by combining the statistical test pattern with unit performance changes in the knowledge base. 
Examples of specific F-16 NFF reduction results are provided.",2002,0, 581,"Text localization, enhancement and binarization in multimedia documents","The systems currently available for content based image and video retrieval work without semantic knowledge, i.e. they use image processing methods to extract low level features of the data. The similarity obtained by these approaches does not always correspond to the similarity a human user would expect. A way to include more semantic knowledge into the indexing process is to use the text included in the images and video sequences. It is rich in information but easy to use, e.g. by key word based queries. In this paper we present an algorithm to localize artificial text in images and videos using a measure of accumulated gradients and morphological post processing to detect the text. The quality of the localized text is improved by robust multiple frame integration. Anew technique for the binarization of the text boxes is proposed. Finally, detection and OCR results for a commercial OCR are presented.",2002,0, 582,AGORA: attributed goal-oriented requirements analysis method,"This paper presents an extended version of the goal-oriented requirements analysis method called AGORA, where attribute values, e.g. contribution values and preference matrices, are added to goal graphs. An analyst attaches contribution values and preference values to edges and nodes of a goal graph respectively during the process for refining and decomposing the goals. The contribution value of an edge stands for the degree of the contribution of the sub-goal to the achievement of its parent goal, while the preference matrix of a goal represents the preference of the goal for each stakeholder. These values can help an analyst to choose and adopt a goal from the alternatives of the goals, to recognize the conflicts among the goals, and to analyze the impact of requirements changes. Furthermore the values on a goal graph and its structural characteristics allow the analyst to estimate the quality of the resulting requirements specification, such as correctness, unambiguity, completeness etc. The estimated quality values can suggest which goals should be improved and/or refined. In addition, we have applied AGORA to a user account system and assessed it.",2002,0, 583,Application of linguistic techniques for Use Case analysis,"The Use Case formalism is an effective way of capturing both business process and functional system requirements in a very simple and easy-to-learn way. Use Cases may be modeled in a graphical way (e.g. using the UML notation), mainly serving as a table of content for Use Cases. System behavior can more effectively be specified by structured natural language (NL) sentences. The use of NL as a way to specify the behavior of a system is however a critical point, due to the inherent ambiguity originating from different interpretations of natural language descriptions. We discuss the use of methods, based on a linguistic approach, to analyze functional requirements expressed by means of textual (NL) Use Cases. The aim is to collect quality metrics and detect defects related to such inherent ambiguity. In a series of preliminary experiments, we applied a number of tools for quality evaluation of NL text (and, in particular, of NL requirements documents) to an industrial Use Cases document. The result of the analysis is a set of metrics that aim to measure the quality of the NL textual description of Use Cases. 
We also discuss the application of selected linguistic analysis techniques that are provided by some of the tools to semantic analysis of NL expressed Use Case.",2002,0, 584,Engineering real-time behavior,"This article presents a process that evaluates an application for real-time correctness throughout development and maintenance. It allows temporal correctness to be designed-in during development, rather than the more typical effort to test-in timing performance at the end of development. It avoids the costly problems that can arise when timing faults are found late in testing or, worse still, after deployment.",2002,0, 585,Investigating the influence of software inspection process parameters on inspection meeting performance,"The question of whether inspection meetings justify their cost has been discussed in several studies. However, it is still open as to how modern defect detection techniques and team size influence meeting performance, particularly with respect to different classes of defect severity. The influence of software inspection process parameters (defect detection technique, team size, meeting effort) on defect detection effectiveness is investigated, i.e. the number of defects found for 31 teams which inspected a requirements document, to shed light on the performance of inspection meetings. The sets of defects reported by each team after the individual preparation phase (nominal-team performance) and after the team meeting (real-team performance) are compared. The main findings are that nominal teams perform significantly more effectively than real teams for all defect classes. This implies that meeting losses are on average higher than meeting gains. Meeting effort was positively correlated with meeting gains, indicating that synergy effects can only be realised if enough time is available. With regard to meeting losses, existing reports are confirmed that for a given defect, the probability of being lost in a meeting decreases with an increase in the number of inspectors who detected this defect during individual preparation.",2002,0, 586,Requirements in the medical domain: Experiences and prescriptions,"Research shows that information flow in health care systems is inefficient and prone to error. Data is lost, and physicians must repeat tests and examinations because the results are unavailable at the right place and time. Cases of erroneous medication - resulting from misinterpreted, misunderstood, or missing information - are well known and have caused serious health problems and even death. We strongly believe that through effective use of information technology, we can improve both the quality and efficiency of the health sector's work. Introducing a new system might shift power from old to young, from doctor to nurse, or from medical staff to administration. Few people appreciate loss of power, but even fewer will admit that the loss of power is why they resist the new system. Thus, we must work hard to bring this into the open and help people realize that a new system doesn't have to threaten their positions. Again, knowledge and understanding of a hospital's organizational structure, both official and hidden, is necessary if the system's introduction is to be successful.",2002,0, 587,Disaggregating and calibrating the CASE tool variable in COCOMO II,"CASE (computer aided software engineering) tools are believed to have played a critical role in improving software productivity and quality by assisting tasks in software development processes since the 1970s. 
Several parametric software cost models adopt ""use of software tools"" as one of the environmental factors that affects software development productivity. Several software cost models assess the productivity impacts of CASE tools based only on breadth of tool coverage without considering other productivity dimensions such as degree of integration, tool maturity, and user support. This paper provides an extended set of tool rating scales based on the completeness of tool coverage, the degree of tool integration, and tool maturity/user support. Those scales are used to refine the way in which CASE tools are effectively evaluated within COCOMO (constructive cost model) II. In order to find the best fit of weighting values for the extended set of tool rating scales in the extended research model, a Bayesian approach is adopted to combine two sources of (expert-judged and data-determined) information to increase prediction accuracy. The extended model using the three TOOL rating scales is validated by using the cross-validation methodologies, data splitting, and bootstrapping. This approach can be used to disaggregate other parameters that have significant impacts on software development productivity and to calibrate the best-fit weight values based on data-determined and expert-judged distributions. It results in an increase in the prediction accuracy in software parametric cost estimation models and an improvement in insights on software productivity investments.",2002,0, 588,Timed Wp-method: testing real-time systems,"Real-time systems interact with their environment using time constrained input/output signals. Examples of real-time systems include patient monitoring systems, air traffic control systems, and telecommunication systems. For such systems, a functional misbehavior or a deviation from the specified time constraints may have catastrophic consequences. Therefore, ensuring the correctness of real-time systems becomes necessary. Two different techniques are usually used to cope with the correctness of a software system prior to its deployment, namely, verification and testing. In this paper, we address the issue of testing real-time software systems specified as a timed input output automaton (TIOA). TIOA is a variant of timed automaton. We introduce the syntax and semantics of TIOA. We present the potential faults that can be encountered in a timed system implementation. We study these different faults based on TIOA model and look at their effects on the execution of the system using the region graph. We present a method for generating timed test cases. This method is based on a state characterization technique and consists of the following three steps: First, we sample the region graph using a suitable granularity, in order to construct a subautomaton easily testable, called grid automaton. Then, we transform the grid automaton into a nondeterministic timed finite state machine (NTFSM). Finally, we adapt the generalized Wp-method to generate timed test cases from NTFSM. We assess the fault coverage of our test cases generation method and prove its ability to detect all the possible faults.
Throughout the paper, we use examples to illustrate the various concepts and techniques used in our approach.",2002,0, 589,Scenario-based specification and evaluation of architectures for health monitoring of aerospace structures,"HUMS (Health and Usage Monitoring Systems) have been an area of increased research in the recent times due to two main reasons: (a) increase in the occurrences of accidents in the aerospace, and (b) stricter FAA regulations on aircraft maintenance There are several problems associated with the maintenance of aircraft that the HUMS systems can solve through the use of several monitoring technologies. Currently, a variety of maintenance programs are institutionalized by the aircraft carriers that mostly involve visual inspections and hence are error-prone Automatic, continuous health monitoring systems could simplify the maintenance tasks as well as improve the efficiency of the operation, thereby enhancing the safety of air travel and also lowering the total lifecycle costs of aircraft. This paper documents our methodology of employing scenarios in the specification and evaluation of architecture for HUMS. It investigates related works that use scenarios in software development and describes how we use scenarios in our work. Finally, a demonstration of our methods in the development of HUMS is presented.",2002,0, 590,Validation of mission critical software design and implementation using model checking [spacecraft],"Over the years, the complexity of space missions has dramatically increased with more of the critical aspects of a spacecraft's design being implemented in software. With the added functionality and performance required by the software to meet system requirements, the robustness of the software must be upheld. Traditional software validation methods of simulation and testing are being stretched to adequately cover the needs of software development in this growing environment. It is becoming increasingly difficult to establish traditional software validation practices that confidently confirm the robustness of the design in balance with cost and schedule needs of the project. As a result, model checking is emerging as a powerful validation technique for mission critical software. Model checking conducts an exhaustive exploration of all possible behaviors of a software system design and as such can be used to detect defects in designs that are typically difficult to discover with conventional testing approaches.",2002,0, 591,Integrating reliability and timing analysis of CAN-based systems,"This paper presents and illustrates a reliability analysis method developed with a focus on controller-area-network-based automotive systems. The method considers the effect of faults on schedulability analysis and its impact on the reliability estimation of the system, and attempts to integrate both to aid system developers. The authors illustrate the method by modeling a simple distributed antilock braking system, and showing that even in cases where the worst case analysis deems the system unschedulable, it may be proven to satisfy its timing requirements with a sufficiently high probability. 
From a reliability and cost perspective, this paper underlines the tradeoffs between timing guarantees, the level of hardware and software faults, and per-unit cost.",2002,0,116 592,Automatic test vector generation for bridging faults detection in combinational circuits using false Boolean functions,"The paper presents an automatic test vector generation program for detecting bridging faults in combinational circuits using false Boolean functions. These functions are used for solving the system of equations of the circuit concerning controllability, observability and interconnectivity concepts. The presented model applies to bridging faults that do not change the circuit nature i.e. the combinational free-fault circuit with these interconnectivity faults remains a combinational one.",2002,0, 593,An approach to rapid prototyping of large multi-agent systems,"Engineering individual components of a multi-agent system and their interactions is a complex and error-prone task in urgent need of methods and tools. Prototyping is a valuable technique to help software engineers explore the design space while gaining insight and a """"feel"""" for the dynamics of the system; prototyping also allows engineers to learn more about the relationships among design features and the desired computational behaviour. In this paper we describe an approach to building prototypes of large multi-agent systems with which we can experiment and analyse results. We have implemented an environment embodying our approach. This environment is supported by a distributed platform that helps us achieve controlled simulations.",2002,0, 594,No Java without caffeine: A tool for dynamic analysis of Java programs,"To understand the behavior of a program, a maintainer reads some code, asks a question about this code, conjectures an answer, and searches the code and the documentation for confirmation of her conjecture. However, the confirmation of the conjecture can be error-prone and time-consuming because the maintainer has only static information at her disposal. She would benefit from dynamic information. In this paper, we present Caffeine, an assistant that helps the maintainer in checking her conjecture about the behavior of a Java program. Our assistant is a dynamic analysis tool that uses the Java platform debug architecture to generate a trace, i.e., an execution history, and a Prolog engine to perform queries over the trace. We present a usage scenario based on the n-queens problem, and two real-life examples based on the Singleton design pattern and on the composition relationship.",2002,0, 595,System testing for object-oriented frameworks using hook technology,"An application framework provides a reusable design and implementation for a family of software systems. If the framework contains defects, the defects will be passed on to the applications developed from the framework. Framework defects are hard to discover at the time the framework is instantiated. Therefore, it is important to remove all defects before instantiating the framework. The problem addressed in this paper is developing an automated state-based test suite generator technique that uses hook technology to produce test suites to test frameworks at the system level. A case study is reported and its results show that the proposed technique is reasonably effective at detecting faults. 
A supporting tool that automatically produces framework test cases, executes them, and evaluates the results is presented.",2002,0, 596,What makes finite-state models more (or less) testable?,"This paper studies how details of a particular model can effect the efficacy of a search for detects. We find that if the test method is fixed, we can identity classes of software that are more or less testable. Using a combination of model mutators and machine learning, we find that we can isolate topological features that significantly change the effectiveness of a defect detection tool. More specifically, we show that for one defect detection tool (a stochastic search engine) applied to a certain representation (finite state machines), we can increase the average odds of finding a defect from 69% to 91%. The method used to change those odds is quite general and should apply to other defect detection tools being applied to other representations.",2002,0, 597,Combining and adapting software quality predictive models by genetic algorithms,"The goal of quality models is to predict a quality factor starting from a set of direct measures. Selecting an appropriate quality model for a particular software is a difficult, non-trivial decision. In this paper, we propose an approach to combine and/or adapt existing models (experts) in such way that the combined/adapted model works well on the particular system. Test results indicate that the models perform significantly better than individual experts in the pool.",2002,0, 598,Predicting software stability using case-based reasoning,"Predicting stability in object-oriented (OO) software, i.e., the ease with which a software item can evolve while preserving its design, is a key feature for software maintenance. We present a novel approach which relies on the case-based reasoning (CBR) paradigm. Thus, to predict the chances of an OO software item breaking downward compatibility, our method uses knowledge of past evolution extracted from different software versions. A comparison of our similarity-based approach to a classical inductive method such as decision trees, is presented which includes various tests on large datasets from existing software.",2002,0, 599,Asymptotics of quickest change detection procedures under a Bayesian criterion,"The optimal detection procedure for detecting changes in independent and identically distributed sequences (i.i.d.) in a Bayesian setting was derived by Shiryaev in the nineteen sixties. However, the analysis of the performance of this procedure in terms of the average detection delay and false alarm probability has been an open problem. In this paper, we investigate the performance of Shiryaev's procedure in an asymptotic setting where the false alarm probability goes to zero. The asymptotic study is performed not only in. the i.d.d. case where the Shiryaev's procedure is optimal but also in a general, non-i.i.d. case. In the latter case, we show that Shiryaev's procedure is asymptotically optimum under mild conditions. We also show that the two popular non-Bayesian detection procedures, namely the Page and Shiryaev-Roberts-Pollak procedures, are not optimal (even asymptotically) under the Bayesian criterion. 
The results of this study are shown to be especially important in studying the asymptotics of decentralized quickest change detection procedures.",2002,0, 600,Handling preprocessor-conditioned declarations,"Many software systems are developed with configurable functionality, and for multiple hardware platforms and operating systems. This can lead to thousands of possible configurations, requiring each configuration-dependent programming entity or variable to have different types. Such configuration-dependent variables are often declared inside preprocessor conditionals (e.g., C language). Preprocessor-conditioned declarations may be a source of problems. Commonly used configurations are type-checked by repeated compilation. Rarely used configurations are unlikely to be recently type checked, and in such configurations a variable may have a type not compatible to its use or it may contains uses of variables never defined. This paper proposes an approach to identify all possible types each variable declared in a software system can assume, and under which conditions. Inconsistent variable usages can then be detected for all possible configurations. Impacts of preprocessor-conditioned declaration in 17 different open source software systems are also reported.",2002,0, 601,Predictive detection methods as the next era in biological signal processing: a case study of ECG analysis,"An idea of a detection method in biological signal processing to predict the possibility of proneness to a disease is described. The goal of this study is to introduce a new methodology in better understanding of biological signals and create new tools for predictive diagnosis rather than detection of the existing defects, so that, make it possible to give a proper preventative procedure/treatment to the patient in advance. Although the complete achievement of such a goal is very far, it is proposed by a prospective study on the ECG signals that the development of such a method is possible. The preliminary results have been encouraging enough to justify our idea. Such a new generation of predictive detection methods would have a profound impact on the medical diagnosis.",2002,0, 602,The design and implementation of the intel real-time performance analyzer,"Modern PCs support growing numbers of concurrently active independently authored real-time software applications and device drivers. The non realtime nature of PC OSes (Linux, Microsoft Windows, etc.) means that robust real-time software must cope with hold-offs without degradation in user perceivable application quality of service. The open nature of the PC platform necessitates measuring OS interrupt and thread latencies under concurrent load in order to determine with how much hold-off the application must cope. The Intel Real-Time Performance Analyzer is a toolkit for PCs running Microsoft Windows. The toolkit statistically characterizes thread and interrupt latencies plus Windows Deferred Procedure Call (DPC) and kernel Work Item latencies. The toolkit also has facilities for analyzing the causes of long latencies. These latencies can then be incorporated as additional blocking times in a real-time schedulability analysis. 
An isochronous workload tool is included to model thread and DPC based computation and detect missed deadlines.",2002,0, 603,A study on a garbage collector for embedded applications,"In general, embedded systems, such as cellular phones and PDAs are provided with small amounts of memory and a low power processor that is slower than desktop ones. Despite these limited resources, present technology allows designers to integrate, in a single chip, an entire system. In this scenario, software development for embedded systems is an error-prone operation. In order to develop better code in less time, Java technology has gained a lot of interest from developers of embedded systems in the last few years, mainly because of its portability, code reuse, and object-oriented paradigm. On the other hand, Java requires an automatic memory management system in Java processors. This paper presents a garbage collection technique based on a software approach for an embedded Java processor. This technique is targeted for applications that are used in portable embedded systems. This paper discusses the most suited algorithm for such applications, showing also some performance overhead results.",2002,0, 604,Performance management in component-oriented systems using a Model Driven ArchitectureTM approach,"Developers often lack the time or knowledge to profoundly understand the performance issues in largescale component-oriented enterprise applications. This situation is further complicated by the fact that such applications are often built using a mix of in-house and commercial-off-the-shelf (COTS) components. This paper presents a methodology for understanding and predicting the performance of component-oriented distributed systems both during development and after they have been built. The methodology is based on three conceptually separate parts: monitoring, modelling and performance prediction. Performance predictions are based on UML models created dynamically by monitoring-and-analysing a live or under-development system. The system is monitored using non-intrusive methods and run-time data is collected. In addition, static data is obtained by analysing the deployment configuration of the target application. UML models enhanced with performance indicators are created based on both static and dynamic data, showing performance hot spots. To facilitate the understanding of the system, the generated models are traversable both horizontally at the same abstraction level between transactions, and vertically between different layers of abstraction using the concepts defined by the Model Driven Architecture. The system performance is predicted and performance-related issues are identified in different scenarios by generating workloads and simulating the performance models. Work is under way to implement a framework for the presented methodology with the current focus on the Enterprise Java Beans technology.",2002,0, 605,On the evaluation of JavaSymphony for cluster applications,"In the past few years, increasing interest has been shown in using Java as a language for performance-oriented distributed and parallel computing. Most Java-based systems that support portable parallel and distributed computing either require the programmer to deal with intricate low level details of Java which can be a tedious, time-consuming and error-prone task, or prevent the programmer from controlling locality of data. 
In contrast to most existing systems, JavaSymphony - a class library written entirely in Java - allows to control parallelism, load balancing and locality at a high level. Objects can be explicitly distributed and migrated based on virtual architectures which impose a virtual hierarchy on a distributed/parallel system of physical computing nodes. The concept of blocking/nonblocking remote method invocation is used to exchange data among distributed objects and to process work by remote objects. We evaluate the JavaSymphony programming API for a variety of distributed/parallel algorithms which comprises backtracking, N-body, encryption/decryption algorithms and asynchronous nested optimization algorithms. Performance results are presented for both homogeneous and heterogeneous cluster architectures. Moreover, we compare JavaSymphony with an alternative well-known semi-automatic system.",2002,0, 606,Using a pulsed supply voltage for delay faults testing of digital circuits in a digital oscillation environment,"High-performance digital circuits with aggressive timing constraints are usually very susceptible to delay faults. Much research done on delay fault detection needs a rather complicated test setup together with precise test clock requirements. In this paper, we propose a test technique based on the digital oscillation test method. The technique, which was simulated in software, consists of sensitizing a critical path in the digital circuit under test and incorporating the path into an oscillation ring. The supply voltage to the oscillation ring is then varied to detect delay and stuck-at faults in the path.",2002,0, 607,Structural assessment of cost of quality,"Evolving accreditation standards require that software engineering programs, and perhaps even individual courses, demonstrate that students acquire the knowledge and skills necessary to participate effectively in professional practice. To this end, we must be able to assess our students to determine if we have achieved these goals. More problematic, in the assessment realm, is the difference between """"knowledge in the head"""" and """"knowledge in practice"""". We need assessment methods to help us account for not only what is known, but how it is known. Structural assessment may represent a valuable resource in this endeavor. This method assesses a student's knowledge of the relationships among concepts, methodologies, and problems within a particular domain, and may well illuminate the issues that require our attention. Through the use of concept maps, a particular method for representing and conveying structural knowledge, the assessment can be based upon the differences between learner's and expert's structural knowledge. In this paper, we detail our plan to develop and use structural assessment of cost of quality in undergraduate software engineering.",2002,0, 608,Virtual center for renal support: technological approach to patient physiological image,"The patient physiological image (PPI) is a novel concept which manages the knowledge of the virtual center for renal support (VCRS), currently being developed by the Biomedical Engineering Group of the University of Seville. PPI is a virtual """"replica"""" of the patient, built by means of a mathematical model, which represents several physiological subsystems of a renal patient. From a technical point of view, PPI is a component-oriented software module based on cutting-edge modeling and simulation technology. 
This paper provides a methodological and technological approach to the PPI. Computational architecture of PPI-based VCRS is also described. This is a multi-tier and multi-protocol system. Data are managed by several ORDBMS instances. Communications design is based on the virtual private network (VPN) concept. Renal patients have a minimum reliable access to the VCRS through a public switch telephone network-X.25 gateway. Design complies with the universal access requirement, allowing an efficient and inexpensive connection even in rural environments and reducing computational requirements in the patient's remote access unit. VCRS provides support for renal patients' healthcare, increasing the quality and quantity of monitored biomedical signals, predicting events as hypotension or low dialysis dose, assisting further to avoid them by an online therapy modification and easing diagnostic tasks. An online therapy adjustment experiment simulation is presented. Finally, the presented system serves as a computational aid for research in renal physiology. This is achieved by an open and reusable modeling and simulation architecture which allows the interaction among models and data from different scales and computer platforms, and a faster transference of investigation models toward clinical applications.",2002,0, 609,Review of condition assessment of power transformers in service,"As transformers age, their internal condition degrades, which increases the risk of failure. To prevent these failures and to maintain transformers in good operating condition is a very important issue for utilities. Traditionally, routine preventative maintenance programs combined with regular testing were used. The change to condition-based maintenance has resulted in the reduction, or even elimination, of routine time-based maintenance. Instead of doing maintenance at a regular interval, maintenance is only carried out if the condition of the equipment requires it. Hence, there is an increasing need for better nonintrusive diagnostic and monitoring tools to assess the internal condition of the transformers. If there is a problem, the transformer can then be repaired or replaced before it fails. An extensive review is given of diagnostic and monitoring tests, and equipment available that assess the condition of power transformers and provide an early warning of potential failure.",2002,0, 610,An implementation of a distributed algorithm for detection of local knots and cycles in directed graphs based on the CSP model and Java,"Cycles and knots in directed graphs are problems that can be associated with deadlocks in database and communication systems. Many algorithms to detect cycles and knots in directed graphs were proposed. Boukerche and Tropper (1998) have proposed a distributed algorithm that solves the problem in a efficient away. Their algorithm has a message complexity of 2 m vs. (at least) 4 m for the Chandy and Misra algorithm, where m is the number of links in the graph, and requires O (n log n) bits of memory, where n is the number of nodes. We have implemented Boukerche and Tropper's algorithm according to the construction of processes of the CSP model. Our implementation was done using JCSP, an implementation of CSP for Java, and the results are presented.",2002,0, 611,Is prior knowledge of a programming language important for software quality?,"Software engineering is human intensive. 
Thus, it is important to understand and evaluate the value of different types of experiences, and their relation to the quality of the developed software. Many job advertisements focus on requiring knowledge of specific programming languages. This may seem sensible at first sight, but maybe it is sufficient to have general knowledge in programming and then it is enough to learn a specific language within the new job. A key question is whether prior knowledge actually does improve software quality. This paper presents an empirical study where the programming experience of students is assessed using a survey at the beginning of a course on the Personal Software Process (PSP), and the outcome of the course is evaluated, for example, using the number of defects and development time. Statistical tests are used to analyse the relationship between programming experience and the performance of the students in terms of software quality. The results are mostly unexpected, for example, we are unable to show any significant relation between experience in the programming language used and the number of defects detected.",2002,0, 612,An approach to experimental evaluation of software understandability,"Software understandability is an important characteristic of software quality because it can influence cost or reliability of software evolution in reuse or maintenance. However, it is difficult to evaluate software understandability in practice because understanding is an internal process of humans. This paper proposes """"software overhaul"""" as a method for externalizing the process of understanding and presents a probability model to use process data of overhauling to estimate software understandability. An example describes an overhaul tool and its application.",2002,0, 613,An approach for estimation of software aging in a Web server,"A number of recent studies have reported the phenomenon of """"software aging"""", characterized by progressive performance degradation or a sudden hang/crash of a software system due to exhaustion of operating system resources, fragmentation and accumulation of errors. To counteract this phenomenon, a proactive technique called """"software rejuvenation"""" has been proposed. This essentially involves stopping the running software, cleaning its internal state and then restarting it. Software rejuvenation, being preventive in nature, begs the question as to when to schedule it. Periodic rejuvenation, while straightforward to implement, may not yield the best results. A better approach is based on actual measurement of system resource usage and activity that detects and estimates resource exhaustion times. Estimating the resource exhaustion times makes it possible for software rejuvenation to be initiated or better planned so that the system availability is maximized in the face of time-varying workload and system behavior. We propose a methodology based on time series analysis to detect and estimate resource exhaustion times due to software aging in a Web server while subjecting it to an artificial workload. We first collect and log data on several system resource usage and activity parameters on a Web server. Time-series ARMA models are then constructed from the data to detect aging and estimate resource exhaustion times. The results are then compared with previous measurement-based models and found to be more efficient and computationally less intensive. 
These models can be used to develop proactive management techniques like software rejuvenation which are triggered by actual measurements.",2002,0, 614,How much information is needed for usage-based reading? A series of experiments,"Software inspections are regarded as an important technique to detect faults throughout the software development process. The individual preparation phase of software inspections has enlarged its focus from only comprehension to also include fault searching. Hence, reading techniques to support the reviewers on fault detection are needed. Usage-based reading (UBR) is a reading technique, which focuses on the important parts of a software document by using prioritized use cases. This paper presents a series of three UBR experiments on design specifications, with focus on the third. The first experiment evaluates the prioritization of UBR and the second compares UBR against checklist-based reading. The third experiment investigates the amount of information needed in the use cases and whether a more active approach helps the reviewers to detect more faults. The third study was conducted at two different places with a total of 82 subjects. The general result from the experiments is that UBR works as intended and is efficient as well as effective in guiding reviewers during the preparation phase of software inspections. Furthermore, the results indicate that use cases developed in advance are preferable compared to developing them as part of the preparation phase of the inspection.",2002,0, 615,An experimental comparison of checklist-based reading and perspective-based reading for UML design document inspection,"This paper describes an experimental comparison of two reading techniques, namely Checklist-based reading (CBR) and Perspective-based reading (PBR) for Object-Oriented (OO) design inspection. Software inspection is an effective approach to detect defects in the early stages of the software development process. However inspections are usually applied for defect detection in software requirement documents or software code modules, and there is a significant lack of information how inspections should be applied to OO design documents. The comparison was performed in a controlled experiment with 59 subject students. The results of individual data analysis indicate that (a) defect detection effectiveness using both inspection techniques is similar (PBR: 69%, CBR: 70%); (b) reviewers who use PBR spend less time on inspection than reviewers who use CBR; (c) cost per defect of reviewers who use CBR is smaller. The results of 3-person virtual team analysis show that CBR technique is more effective than PBR technique.",2002,0, 616,The detection of faulty code violating implicit coding rules,"In the field of legacy software maintenance, there unexpectedly arises a large number of implicit coding rules, which we regard as a cancer in software evolution. Since such rules are usually undocumented and each of them is recognized only by a few members in a maintenance team, a person who is not aware of a rule often violates it while doing various maintenance activities such as adding a new functionality or repairing faults. The problem here is not only such a violation introduces a new fault but also the same kind of fault will be generated again and again in the future by different maintainers. This paper proposes a method for detecting code fragments that violate implicit coding rules. 
In the method, an expert maintainer, firstly, investigates the cause of each failure, described in the past failure reports, and identifies all the implicit coding rules that lie behind the faults. Then, the code patterns violating the rules (which we call """"faulty code patterns"""") are described in a pattern description language. Finally, the potential faulty code fragments are automatically detected by a pattern matching technique. The result of a case study with large legacy software showed that 32.7% of the failures, which have been reported during a maintenance process, were due to the violation of implicit coding rules. Moreover, 152 faults existed in 772 code fragments detected by the prototype matching system, while 111 of them were not reported.",2002,0, 617,Elimination of crucial faults by a new selective testing method,"Recent software systems contain a lot of functions to provide various services. According to this tendency, software testing becomes more difficult than before and cost of testing increases so much, since many test items are required. In this paper we propose and discuss such a new selective software testing method that is constructed from previous testing method by simplifying testing specification. We have presented, in the previous work, a selective testing method to perform highly efficient software testing. The selective testing method has introduced an idea of functional priority testing and generated test items according to their functional priorities. Important functions with high priorities are tested in detail, and functions with low priorities are tested less intensively. As a result, additional cost for generating testing instructions becomes relatively high. In this paper in order to reduce its cost, we change the way of giving information, with respect to priorities. The new method gives the priority only rather than generating testing instructions to each test item, which makes the testing method quite simple and results in cost reduction. Except for this change, the new method is essentially the same as the previous method. We applied this new method to actual development of software tool and evaluated its effectiveness. From the result of the application experiment, we confirmed that many crucial faults can be detected by using the proposed method.",2002,0, 618,Empirical validation of class diagram metrics,"As a key early artefact in the development of OO software, the quality of class diagrams is crucial for all later design work and could be a major determinant for the quality of the software product that is finally delivered. Quantitative measurement instruments are useful to assess class diagram quality in an objective way, thus avoiding bias in the quality evaluation process. This paper presents a set of metrics - based on UML relationships $which measure UML class diagram structural complexity following the idea that it is related to the maintainability of such diagrams. Also summarized are two controlled experiments carried out in order to gather empirical evidence in this sense. As a result of all the experimental work, we can conclude that most of the metrics we proposed (NAssoc, NAgg, NaggH, MaxHAgg, NGen, NgenH and MaxDIT) are good indicators of class diagram maintainability. 
We cannot, however, draw such firm conclusions regarding the NDep metric.",2002,0, 619,Modeling the cost-benefits tradeoffs for regression testing techniques,"Regression testing is an expensive activity that can account for a large proportion of the software maintenance budget. Because engineers add tests into test suites as software evolves, over time, increased test suite size makes revalidation of the software more expensive. Regression test selection, test suite reduction, and test case prioritization techniques can help with this, by reducing the number of regression tests that must be run and by helping testers meet testing objectives more quickly. These techniques, however can be expensive to employ and may not reduce overall regression testing costs. Thus, practitioners and researchers could benefit from cost models that would help them assess the cost-benefits of techniques. Cost models have been proposed for this purpose, but some of these models omit important factors, and others cannot truly evaluate cost-effectiveness. In this paper, we present new cost-benefits models for regression test selection, test suite reduction, and test case prioritization, that capture previously omitted factors, and support cost-benefits analyses where they were not supported before. We present the results of an empirical study assessing these models.",2002,0, 620,An integrated failure detection and fault correction model,"In general, software reliability models have focused an modeling and predicting failure occurrence and have not given equal priority to modeling the fault correction process. However, there is a need for fault correction prediction, because there are important applications that fault correction modeling and prediction support. These are the following: predicting whether reliability goals have been achieved, developing stopping rules for testing, formulating test strategies, and rationally allocating test resources. Because these factors are related, we integrate them in our model. Our modeling approach involves relating fault correction to failure prediction, with a time delay estimated from a fault correction queuing model.",2002,0, 621,Testing Web applications,"The rapid diffusion of Internet and open standard technologies is producing a significant growth of the demand of Web sites and Web applications with more and more strict requirements of usability, reliability, interoperability and security. While several methodological and technological proposals for developing Web applications are coining both from industry and academia, there is a general lack of methods and tools to carry out the key processes that significantly impact the quality of a Web application (WA), such as the validation & verification (V&V), and quality assurance. Some open issues in the field of Web application testing are addressed in this paper. The paper exploits an object-oriented model of a WA as a test model, and proposes a definition of the unit level for testing the WA. Based on this model, a method to test the single units of a WA and for the integration testing is proposed. Moreover, in order to experiment with the proposed technique and strategy, an integrated platform of tools comprising a Web application analyzer, a repository, a test case generator and a test case executor, has been developed and is presented in the paper. 
A case study, carried out with the aim of assessing the effectiveness of the proposed method and tools, produced interesting and encouraging results.",2002,0, 622,Combining software quality predictive models: an evolutionary approach,"During the last ten years, a large number of quality models have been proposed in the literature. In general, the goal of these models is to predict a quality factor starting from a set of direct measures. The lack of data behind these models makes it hard to generalize, cross-validate, and reuse existing models. As a consequence, for a company, selecting an appropriate quality model is a difficult, non-trivial decision. In this paper, we propose a general approach and a particular solution to this problem. The main idea is to combine and adapt existing models (experts) in such a way that the combined model works well on the particular system or in the particular type of organization. In our particular solution, the experts are assumed to be decision tree or rule-based classifiers and the combination is done by a genetic algorithm. The result is a white-box model: for each software component, not only does the model give a prediction of the software quality factor, it also provides the expert that was used to obtain the prediction. Test results indicate that the proposed model performs significantly better than individual experts in the pool.",2002,0, 623,Change-oriented requirements traceability. Support for evolution of embedded systems,"Planning of requirements changes is often inaccurate and implementation of changes is time consuming and error prone. One reason for these problems is imprecise and inefficient approaches to analyze the impact of changes. This thesis proposes a precise and efficient impact analysis approach that focuses on functional system requirements changes of embedded control systems. It consists of three parts: (1) a fine-grained conceptual trace model, (2) process descriptions of how to establish traces and how to analyze the impact of changes, and (3) supporting tools. Empirical investigation shows that the approach has a beneficial effect on the effectiveness and efficiency of impact analyses and that it supports a more consistent implementation of changes.",2002,0, 624,Relating expectations to automatically recovered design patterns,"At MITRE we are developing tools to aid analysts in assessing the operational usability and quality of object-oriented code. Our tools statically examine source code, automatically recognize the use of design patterns and relate pattern use to software qualities, coding goals, and system engineering expectations about the source code. Thus, through the use of automated design pattern analysis, we are working to reveal originating software design decisions.",2002,0, 625,Java quality assurance by detecting code smells,"Software inspection is a known technique for improving software quality. It involves carefully examining the code, the design, and the documentation of software and checking these for aspects that are known to be potentially problematic based on past experience. Code smells are a metaphor to describe patterns that are generally associated with bad design and bad programming practices. Originally, code smells are used to find the places in software that could benefit from refactoring. In this paper we investigate how the quality of code can be automatically assessed by checking for the presence of code smells and how this approach can contribute to automatic code inspection. 
We present an approach for the automatic detection and visualization of code smells and discuss how this approach can be used in the design of a software inspection tool. We illustrate the feasibility of our approach with the development of jCOSMO, a prototype code smell browser that detects and visualizes code smells in JAVA source code. Finally, we show how this tool was applied in a case study.",2002,0, 626,Verifying provisions for post-transaction user input error correction through static program analysis,Software testing is a time-consuming and error-prone process. Automated software verification is an important key to improve software testing. This paper presents a novel approach for the automated approximate verification of provisions of transactions for correcting effects that result from executing database transactions with wrong user inputs. The provision is essential in any database application. The approach verifies the provision through analyzing the source codes of transactions in a database application. It is based on some patterns that in all likelihood exist between the control flow graph of a transaction and the control flow graphs of transactions for correcting some post-transaction user input errors of the former transaction. We have validated the patterns statistically.,2002,0, 627,Open source software research activities in AIST towards secure open systems,"National Research Institutes of Advanced Industrial Science and Technology (AIST) is governed by the Ministry of Economy Trade and Industry of Japanese government. The Information Technology Research Institute of AIST has noticed that the open source software approaches are important issues to have high quality and secure software. In this paper, after we have shown four projects of open source software carried out at AIST, we show a typical and simple security problem named """"cross site scripting"""" of Web servers. If the application software for the Web server were opened, this security hole would be quickly fixed because the problem is very simple and the way to fix is quite easy. Then we show several reports on Linux operating system of using governmental computer network infrastructures. We see that a lot of countries are considering using Linux and its application software as their infrastructures. Because of the national securities and the deployment costs AIST is now planning to use Linux office applications in order to assess the feasibility of using open source software as an important infrastructure.",2002,0, 628,Cost-sensitive boosting in software quality modeling,"Early prediction of the quality of software modules prior to software testing and operations can yield great benefits to the software development teams, especially those of high-assurance and mission-critical systems. Such an estimation allows effective use of the testing resources to improve the modules of the software system that need it most and achieve high reliability. To achieve high reliability, by the means of predictive methods, several tools are available. Software classification models provide a prediction of the class of a module, i.e., fault-prone or not fault-prone. Recent advances in the data mining field allow to improve individual classifiers (models) by using the combined decision from multiple classifiers. This paper presents a couple of algorithms using the concept of combined classification. The algorithms provided useful models for software quality modeling. 
A comprehensive comparative evaluation of the boosting and cost-boosting algorithms is presented. We demonstrate how the use of boosting algorithms (original and cost-sensitive) meets many of the specific requirements for software quality modeling. C4.5 decision trees and decision stumps were used to evaluate these algorithms with two large-scale case studies of industrial software systems.",2002,0, 629,An overview of industrial software documentation practice,"A system documentation process maturity model and assessment procedure were developed and used to assess 91 projects at 41 different companies over a seven year period. During this time the original version evolved into a total of four versions based on feedback from industry and the experience gained from the assessments. This paper reports the overall results obtained from the assessments which strongly suggest that the practice of documentation is not getting a passing grade in the software industry. The results show a clear maturity gap between documentation practices concerned with defining policy and practices concerned with adherence to those policies. The results further illustrate the need to recognize the importance of improving the documentation process, and to transform the good intentions into explicit policies and actions.",2002,0, 630,"Automatic failure detection, logging, and recovery for high-availability Java servers","Many systems and techniques exist for detecting application failures. However, previously known generic failure detection solutions are only of limited use for Java applications because they do not take into consideration the specifics of the Java language and the Java execution environment. In this article, we present the application-independent Java Application Supervisor (JAS). JAS can automatically detect, log, and resolve a variety of execution problems and failures in Java applications. In most cases, JAS requires neither modifications nor access to the source code of the supervised application. A set of simple user-specified policies guides the failure detection, logging, and recovery process in JAS. A JAS configuration manager automatically generates default policies from the bytecode of an application. The user can modify these default policies as needed. Our experimental studies show that JAS typically incurs little execution time and memory overhead for the target application. We describe an experiment with a Web proxy that exhibits reliability and performance problems under heavy load and demonstrate an increase in the rate of successful requests to the server by almost 33% and a decrease in the average request processing time by approximately 22% when using JAS.",2002,0, 631,Dependability analysis of a client/server software system with rejuvenation,"Long running software systems are known to experience an aging phenomenon called software aging, one in which the accumulation of errors during the execution of software leads to performance degradation and eventually results in failure. To counteract this phenomenon an active fault management approach, called software rejuvenation, is particularly useful. It essentially involves gracefully terminating an application or a system and restarting it in a clean internal state. We deal with dependability analysis of a client/server software system with rejuvenation. 
Three dependability measures in the server process, steady-state availability, loss probability of requests and mean response time on tasks, are derived from the well-known hidden Markovian analysis under the time-based software rejuvenation scheme. In numerical examples, we investigate the sensitivity of some model parameters to the dependability measures.",2002,0, 632,Data coverage testing of programs for container classes,"For the testing of container classes and the algorithms or programs that operate on the data in a container, these data have the property of being homogeneous throughout the container. We have developed an approach for this situation called data coverage testing, where automated test generation can systematically generate increasing test data size. Given a program and a test model, it can be theoretically shown that there exists a sufficiently large test data set size N, such that testing with a data set size larger than N does not detect more faults. A number of experiments have been conducted using a set of C++ STL programs, comparing data coverage testing with two other testing strategies: statement coverage and random generation. These experiments validate the theoretical analysis for data coverage, confirming the predicted sufficiently large N for each program.",2002,0, 633,Genes and bacteria for automatic test cases optimization in the .NET environment,"The level of confidence in a software component is often linked to the quality of its test cases. This quality can in turn be evaluated with mutation analysis: faulty components (mutants) are systematically generated to check the proportion of mutants detected (""""killed"""") by the test cases. But while the generation of basic test cases set is easy, improving its quality may require prohibitive effort. We focus on the issue of automating the test optimization. We looked at genetic algorithms to solve this problem and modeled it as follows: a test case can be considered as a predator while a mutant program is analogous to a prey. The aim of the selection process is to generate test cases able to kill as many mutants as possible. To overcome disappointing experimentation results on the studied .NET system, we propose a slight variation on this idea, no longer at the """"animal"""" level (lions killing zebras) but at the bacteriological level. The bacteriological level indeed better reflects the test case optimization issue: it introduces a memorization function and suppresses the crossover operator. We describe this model and show how it behaves on the case study.",2002,0, 634,Fault detection capabilities of coupling-based OO testing,"Object-oriented programs cause a shift in focus from software units to the way software classes and components are connected. Thus, we are finding that we need less emphasis on unit testing and more on integration testing. The compositional relationships of inheritance and aggregation, especially when combined with polymorphism, introduce new kinds of integration faults, which can be covered using testing criteria that take the effects of inheritance and polymorphism into account. This paper demonstrates, via a set of experiments, the relative effectiveness of several coupling-based OO testing criteria and branch coverage. 
OO criteria are all more effective at detecting faults due to the use of inheritance and polymorphism than branch coverage.",2002,0, 635,Improving usefulness of software quality classification models based on Boolean discriminant functions,"BDF (Boolean discriminant functions) are an attractive technique for software quality estimation. Software quality classification models based on BDF provide stringent rules for classifying not fault-prone modules (nfp), thereby predicting a large number of modules as fp. Such models are practically not useful from software quality assurance and software management points of view. This is because, given the large number of modules predicted as fp, project management will face a difficult task of deploying, cost-effectively, the always-limited reliability improvement resources to all the fp modules. This paper proposes the use of generalized Boolean discriminant functions (GBDF) as a solution for improving the practical and managerial usefulness of classification models based on BDF. In addition, the use of GBDF avoids the need to build complex hybrid classification models in order to improve usefulness of models based on BDF. A case study of a full-scale industrial software system is presented to illustrate the promising results obtained from using the proposed classification technique using GBDF.",2002,0, 636,A case study using the round-trip strategy for state-based class testing,"A number of strategies have been proposed for state-based class testing. An important proposal made by Chow (1978), that was subsequently adapted by Binder (1999), consists in deriving test sequences covering all round-trip paths in a finite state machine (FSMs). Based on a number of (rather strong) assumptions, and for traditional FSMs, it can be demonstrated that all operation and transfer errors in the implementation can be uncovered. Through experimentation, this paper investigates this strategy when used in the context of UML statecharts. Based on a set of mutation operators proposed for object-oriented code we seed a significant number of faults in an implementation of a specific container class. We then investigate the effectiveness of four test teams at uncovering faults, based on the round-trip path strategy, and analyze the faults that seem to be difficult to detect. Our main conclusion is that the round-trip path strategy is reasonably effective at detecting faults (87% average as opposed to 69% for size-equivalent, random test cases) but that a significant number of faults can only exhibit a high detection probability by augmenting the round-trip strategy with a traditional black-box strategy such as category-partition testing. This increases the number of test cases to run -and therefore the cost of testing- and a cost-benefit analysis weighting the increase of testing effort and the likely gain in fault detection is necessary.",2002,0, 637,Worst case reliability prediction based on a prior estimate of residual defects,"In this paper we extend an earlier worst case bound reliability theory to derive a worst case reliability function R(t), which gives the worst case probability of surviving a further time t given an estimate of residual defects in the software N and a prior test time T. The earlier theory and its extension are presented and the paper also considers the case where there is a low probability of any defect existing in the program. For the """"fractional defect"""" case, there can be a high probability of surviving any subsequent time t. 
The implications of the theory are discussed and compared with alternative reliability models.",2002,0, 638,Mutation of Java objects,"Fault insertion based techniques have been used for measuring test adequacy and testability of programs. Mutation analysis inserts faults into a program with the goal of creating mutation-adequate test sets that distinguish the mutant from the original program. Software testability is measured by calculating the probability that a program will fail on the next test input coming from a predefined input distribution, given that the software includes a fault. Inserted faults must represent plausible errors. It is relatively easy to apply standard transformations to mutate scalar values such as integers, floats, and character data, because their semantics are well understood. Mutating objects that are instances of user defined types is more difficult. There is no obvious way to modify such objects in a manner consistent with realistic faults, without writing custom mutation methods for each object class. We propose a new object mutation approach along with a set of mutation operators and support tools for inserting faults into objects that instantiate items from common Java libraries heavily used in commercial software as well as user defined classes. Preliminary evaluation of our technique shows that it should be effective for evaluating real-world software testing suites.",2002,0, 639,Inter-class mutation operators for Java,"The effectiveness of mutation testing depends heavily on the types of faults that the mutation operators are designed to represent. Therefore, the quality of the mutation operators is key to mutation testing. Mutation testing has traditionally been applied to procedural-based languages, and mutation operators have been developed to support most of their language features. Object-oriented programming languages contain new language features, most notably inheritance, polymorphism, and dynamic binding. Not surprisingly; these language features allow new kinds of faults, some of which are not modeled by traditional mutation operators. Although mutation operators for OO languages have previously been suggested, our work in OO faults indicate that the previous operators are insufficient to test these OO language features, particularly at the class testing level. This paper introduces a new set of class mutation operators for the OO language Java. These operators are based on specific OO faults and can be used to detect faults involving inheritance, polymorphism, and dynamic binding, thus are useful for inter-class testing. An initial Java mutation tool has recently been completed, and a more powerful version is currently under construction.",2002,0, 640,Effect of disturbances on the convergence of failure intensity,"We report a study to determine the impact of four types of disturbances on the failure intensity of a software product undergoing system test. Hardware failures, discovery of a critical fault, attrition in the test team, are examples of disturbances that will likely affect the convergence of the failure intensity to its desired value. Such disturbances are modeled as impulse, pulse, step, and white noise. Our study examined, in quantitative terms, the impact of such disturbances on the convergence behavior of the failure intensity. Results from this study reveal that the behavior of the state model, proposed elsewhere, is consistent with what one might predict. 
The model is useful in that it provides a quantitative measure of the delay one can expect when a disturbance occurs.",2002,0, 641,A portable gait analysis and correction system using a simple event detection method,"Microcontrollers are widely used in the area of portable control systems, though they are only beginning to be used for portable, unobtrusive Functional Electrical Stimulation (FES) systems. This paper describes the initial prototyping of such a portable system. This has the intended use of detecting time variant gait anomalies in patients with hemiplegia, and correcting for them. The system is described in two parts. Firstly, the portable hardware implementing two independent communicating microcontrollers for low powered parallel processing and secondly the simplified low power software. Both are designed specifically for long term, stable use and also to communicate with PC based visual software for testing and evaluation. The system operates by using bend sensors to defect the angles of the hip, knee and ankle of both legs. It computes an error signal with which to produce a stimulation wave cycle, that is synchronised and timed for the new gait cycle from that in which the error was observed. This system uses a PID controller to correct for the instability inherent with such a large time delay between observation and correction.",2002,0, 642,Injecting bit flip faults by means of a purely software approach: a case studied,"Bit flips provoked by radiation are a main concern for space applications. A fault injection experiment performed using a software simulator is described in this paper. Obtained results allow us to predict a low sensitivity to soft errors for the studied application, putting in evidence critical memory elements.",2002,0, 643,A research on multi-level networked RAID based on cluster architecture,"Storage networks is a popular solution to constraint servers in storage field. As described by Gibson's metrics, the performance of multi-level networked RAID (redundant arrays of inexpensive disks) based on cluster is almost the same to that of improved 2D-parity. Compared with other schemes, it is lower cost and easier to realize.",2002,0, 644,Composition and decomposition of quality of service parameters in distributed component-based systems,"It is becoming increasingly acceptable that component-based development is an effective, efficient and promising approach to develop distributed systems. With components as the building blocks, it is expected that the quality of the end system can be predicted based on the qualities of components in the system. UniFrame is one such framework that facilitates seamless interoperation of heterogeneous distributed software components. As a part of UniFrame, a catalog of quality of service (QoS) parameters has been created to provide a standard method for quantifying the QoS of software components. In this paper, an approach for composition and decomposition of these QoS parameters is proposed. A case study from the financial domain is indicated to validate this model.",2002,0, 645,Numerical simulation of car crash analysis based on distributed computational environment,"Automobile CAE software is mainly used to assess the performance quality of vehicles. As the automobile is a product of technology intensive complexity, its design analysis involves a broad range of CAE simulation techniques. 
An integrated CAE solution of automobiles can include comfort analysis (vibration and noise analysis), safety analysis (car body collision analysis), process-cycle analysis, structural analysis, fatigue analysis, fluid dynamics analysis, test analysis, material data information system and system integration. We put an emphasis on simulation of a whole automobile collision process, which will bring a breakthrough to the techniques of CAE simulation based on high performance computing. In addition, we carry out simulation for a finite-element car model in a distributed computation environment and accomplish coding-and-programming of DAYN3D. We also provide computational examples and a user handbook. Our research collects almost ten numerical automobile models such as Honda, Ford, etc. Moreover, we also deal with different computational scales for the same auto model and some numerical models of air bags are included. Based on the numerical auto model, referring to different physical parameters and work conditions of the auto model, we can control the physical parameters for the numerical bump simulation and analyze the work condition. The result of our attempt conduces to the development of new auto models.",2002,0, 646,Immune mechanism based computer security design,"Referring to the mechanism of biological immune system, a novel model of computer security system is proposed, which is a dynamic, multi-layered and co-operational system. Through dynamically supervising abnormal behaviors with multi agent immune systems, a two-level defense system is set up for improving the whole performance: one is based on a host and mainly used for. detecting viruses; and the other is based on a network for supervising potential attacks. On the other hand, a pseudo-random technology is adopted for designing the sub-system of data transmission, in order to increase the ability of protecting information against intended interference and monitoring. Simulations on information transmission show that this system has good robustness, error tolerance and self-adaptiveness, although more practice is needed.",2002,0, 647,Race condition and concurrency safety of multithreaded object-oriented programming in Java,"To ensure the reliability and quality, software systems should be safe. The software safety requires the data consistency in the software. In the multithreaded object-oriented programming, the coherency problem, also called a race condition, may destroy the data consistency. In order to overcome the coherency problem Java sets up the """"synchronized"""" mechanism. However, improper use of the """"synchronized"""" mechanism in Java will result in system deadlock, which also violates the requirement of software safety. We find that it is necessary to supplement a new function to the """"synchronized"""" mechanism of Java. Another contribution in the paper is to propose a new approach for detecting system deadlock in Java multithreaded programs with the synchronized mechanism.",2002,0, 648,Exploring visualization of complex telecommunications systems network data,"High-speed broadband telecommunication systems are built with extensive redundancy and complex management systems to ensure robustness. The presence of a fault may be detected by the offending component, its parent component or by other components. This can potentially result in a net effect of a large number of alarm events being raised. There can be a considerable amount of alarm depending on the size and configuration of the network. 
Data visualization can reduce mountains of data to visually insightful representations, which can aid decision-making and identification of faults. The paper explores data visualizations to provide a context to assist in the identification of such faults.",2002,0, 649,HACKER: human and computer knowledge discovered event rules for telecommunications fault management,"Visualization integrated with data mining can offer 'human-assisted computer discovery' and 'computer-assisted human discovery'. Such a visual environment; reduces the time to understand complex data, thus enabling practical solutions to many real world problem to be developed far more rapidly than either humans or computers operating independently. In doing so the remarkable perceptual abilities that humans possess can be utilized, such as the capacity to recognize lanes quickly, and detect the subtlest changes in size, color, shape, movement or texture. One such complex real world problem is fault management in global telecommunication systems. These system have a large amount of built in redundancy to ensure robustness and quality of service. Unfortunately, this means that when a fault does occur, it can trigger a cascade of alarm events as individual parts of the system discover and report fallen making it difficult to locate the origin of the fault. This alarm behavior has been described as appearing to an operator as non-deterministic, yet it does result in a large data mountain that is ideal for data mining. The paper presents a visualization data mining prototype that incorporates the principles of human and computer discovery, the combination of computer-assisted human discovery with human-assisted computer discovery through a three-tier framework. The prototype is specifically designed to assist in the semi-automatic discovery of previously unknown alarm rules that can then be utilized in commercial role based component solutions, """"business rules"""", which are at the heart of many of todays fault management systems.",2002,0, 650,Microarchitectural exploration with Liberty,"To find the best designs, architects must rapidly simulate many design alternatives and have confidence in the results. Unfortunately, the most prevalent simulator construction methodology, hand-writing monolithic simulators in sequential programming languages, yields simulators that are hard to retarget, limiting the number of designs explored, and hard to understand, instilling little confidence in the model. Simulator construction tools have been developed to address these problems, but analysis reveals that they do not address the root cause, the error-prone mapping between the concurrent, structural hardware domain and the sequential, functional software domain. This paper presents an analysis of these problems and their solution, the Liberty Simulation Environment (LSE). LSE automatically constructs a simulator from a machine description that closely resembles the hardware, ensuring fidelity in the model. Furthermore, through a strict but general component communication contract, LSE enables the creation of highly reusable component libraries, easing the task of rapidly exploring ever more exotic designs.",2002,0, 651,Safe virtual execution using software dynamic translation,"Safe virtual execution (SVE) allows a host computer system to reduce the risks associated with running untrusted programs. 
SVE prevents untrusted programs from directly accessing system resources, thereby giving the host the ability to control how individual resources may be used. SVE is used in a variety, of safety-conscious software systems, including the Java Virtual Machine (JVM), software fault isolation (SFI), system call interposition layers, and execution monitors. While SVE is the conceptual foundation for these systems, each uses a different implementation technology. The lack of a unifying framework for building SVE systems results in a variety of problems: many useful SVE systems are not portable and therefore are usable only on a limited number of platforms; code reuse among different SVE systems is often difficult or impossible; and building SVE systems from scratch can be both time consuming and error prone. To address these concerns, we have developed a portable, extensible framework for constructing SVE systems. Our framework, called Strata, is based on software dynamic translation (SDT), a technique for modifying binary programs as they execute. Strata is designed to be ported easily to new platforms and to date has been targeted to SPARC/Solaris, x86/Linux, and MIPS/IRIX. This portability ensures that SVE applications implemented in Strata are available to a wide variety of host systems. Strata also affords the opportunity for code reuse among different SVE applications by establishing a common implementation framework. Strata implements a basic safe virtual execution engine using SDT The base functionality supplied by this engine is easily extended to implement specific SVE systems. In this paper we describe the organization of Strata and demonstrate its extension by building two SVE systems: system call interposition and stack-smashing prevention. To illustrate the use of the system call interposition extensions, the paper presents implementations of several useful security policies.",2002,0, 652,Reaction to errors in robot systems,"The paper analyzes the problem of error (failure) detection and handling in robot programming. First an overview of the subject is provided and later error detection and handling in MRROC++ are described. To facilitate system reaction to the detected failures, the errors are classified and certain suggestions are made as to how to handle those classes of errors.",2002,0, 653,Application of automated mapping system to distribution transformer load management,"This paper develops an application program that is based on the automated mapping and facility management system (AM/FM) to provide load forecasting and power flow calculation capability in distribution systems. First, the database and related data structure used in the Taipower distribution automation pilot system is studied and thoroughly analyzed. Then, our program, developed by the AM/FM ODL software, is integrated into the above pilot system. This program can predict future load growth on distribution feeders, considering the effects of temperature variation, and power needed for air conditioners. In addition, on the basis of load density and diversity factors of typical customers, the saturation load of a new housing zone can be estimated. As for the power flow analysis, it can provide three-phase quantities of voltage drop at each node, the branch current, and the system loss. 
The program developed in this study can effectively aid public electric utilities in distribution system planning and operation.",2002,0, 654,A DSP-based FFT-analyzer for the fault diagnosis of rotating machine based on vibration analysis,"A DSP-based measurement system dedicated to the vibration analysis on rotating machines was designed and realized. Vibration signals are on-line acquired and processed to obtain a continuous monitoring of the machine status. In case of a fault, the system is capable of isolating the fault with a high reliability. The paper describes in detail the approach followed to built up fault and non-fault models together with the chosen hardware and software solutions. A number of tests carried out on small-size three-phase asynchronous motors highlight the excellent promptness in detecting faults, low false alarm rate, and very good diagnostic performance.",2002,0,269 655,The quality of service evaluating tool for WAAS,"In this paper, we introduce a software kit for evaluating the Quality of Service (QoS) provided by the Wide Area Augmentation System (WAAS). This tool, named Service Volume Model (SVM), is a professional software providing types of real-time QoS criteria evaluations for WAAS, such as service accuracy, integrity, continuity, availability and safety. Those QoS evaluations can provide the user accurate safety-critical positioning reference information.",2002,0, 656,A fault-tolerant approach to secure information retrieval,"Several private information retrieval (PIR) schemes were proposed to protect users' privacy when sensitive information stored in database servers is retrieved. However, existing PIR schemes assume that any attack to the servers does not change the information stored and any computational results. We present a novel fault-tolerant PIR scheme (called FT-PIR) that protects users' privacy and at the same time ensures service availability in the presence of malicious server faults. Our scheme neither relies on any unproven cryptographic assumptions nor the availability of tamper-proof hardware. A probabilistic verification function is introduced into the scheme to detect corrupted results. Unlike previous PIR research that attempted mainly to demonstrate the theoretical feasibility of PIR, we have actually implemented both a PIR scheme and our FT-PIR scheme in a distributed database environment. The experimental and analytical results show that only modest performance overhead is introduced by FT-PIR while comparing with PIR in the fault-free cases. The FT-PIR scheme tolerates a variety of server faults effectively. In certain fail-stop fault scenarios, FT-PIR performs even better than PIR. It was observed that 35.82% less processing time was actually needed for FT-PIR to tolerate one server fault.",2002,0, 657,Service time optimal self-stabilizing token circulation protocol on anonymous undirectional rings,"We present a self-stabilizing token circulation protocol on unidirectional anonymous rings. This protocol requires no processor identifiers or distinguished processor (i.e. all processors perform the same algorithm). The protocol is randomized and self-stabilizing, meaning that starting from an arbitrary configuration (in response to an arbitrary perturbation modifying the memory state), it reaches (with probability 1) a legitimate configuration (i.e. a configuration with only one token in the network). 
All previous randomized self-stabilizing token circulation protocols designed to work under unfair distributed schedulers have the same drawback: once stabilized, service time is slow (in the best case, it is bounded by 2N where N is the ring size). Once stabilized, our protocol provides an optimal service: after N computation steps, each processor has obtained the token once. The protocol can be used to implement fair distributed mutual exclusion in any ring topology network.",2002,0, 658,An analysis of fault detection latency bounds of the SNS scheme incorporated into an Ethernet based middleware system,"The supervisor-based network surveillance (SNS) scheme is a semi-centralized network surveillance scheme for detecting the health status of computing components in a distributed real-time (RT) system. An implementation of the SNS scheme in a middleware architecture, named ROAFTS (real-time object-oriented adaptive fault-tolerance support), has been underway in the authors' lab. ROAFTS is a middleware subsystem which is layered above a COTS (commercial-off-the-shelf) operating system (OS), such as Windows XP or UNIX, and functions as the core of a reliable RT execution engine for fault-tolerant (FT) distributed RT applications. The applications supported by ROAFTS are structured as a network of RT objects, named time-triggered message-triggered objects (TMOs). The structure of the prototype implementation of the SNS scheme is discussed first, then a rigorous analysis of the time bounds for fault detection and recovery is provided.",2002,0, 659,Preventing network instability caused by propagation of control plane poison messages,"We present a framework of fault management for a particular type of failure propagation that we refer to as """"poison message failure propagation"""": Some or all of the network elements have a software or protocol 'bug' which is activated on receipt of a certain network control/management message (the poison message). This activated 'bug' will cause the node to fail with some probability. If the network control or management is such that this message is persistently passed among the network nodes, and if the node failure probability is sufficiently high, large-scale instability can result. In order to mitigate this problem. we propose a combination of passive diagnosis and active diagnosis. Passive diagnosis includes protocol analysis of messages received and sent by failed nodes, correlation of messages among multiple failed nodes and analysis of the pattern of failure propagation. This is combined with active diagnosis in which filters are dynamically configured to block suspect protocols or message types. OPNET simulations show the effectiveness of passive diagnosis. Message filtering is formulated as a sequential decision problem, and a heuristic policy is proposed for this problem.",2002,0, 660,Software measurement data analysis using memory-based reasoning,"The goal of accurate software measurement data analysis is to increase the understanding and improvement of software development process together with increased product quality and reliability. Several techniques have been proposed to enhance the reliability prediction of software systems using the stored measurement data, but no single method has proved to be completely effective. One of the critical parameters for software prediction systems is the size of the measurement data set, with large data sets providing better reliability estimates. 
In this paper, we propose a software defect classification method that allows defect data from multiple projects and multiple independent vendors to be combined together to obtain large data sets. We also show that once a sufficient amount of information has been collected, the memory-based reasoning technique can be applied to projects that are not in the analysis set to predict their reliabilities and guide their testing process. Finally, the result of applying this approach to the analysis of defect data generated from fault-injection simulation is presented.",2002,0, 661,Software quality classification modeling using the SPRINT decision tree algorithm,"Predicting the quality of system modules prior to software testing and operations can benefit the software development team. Such a timely reliability estimation can be used to direct cost-effective quality improvement efforts to the high-risk modules. Tree-based software quality classification models based on software metrics are used to predict whether a software module is fault-prone or not fault-prone. They are white box quality estimation models with good accuracy, and are simple and easy to interpret. This paper presents an in-depth study of calibrating classification trees for software quality estimation using the SPRINT decision tree algorithm. Many classification algorithms have memory limitations including the requirement that data sets be memory resident. SPRINT removes all of these limitations and provides a fast and scalable analysis. It is an extension of a commonly used decision tree algorithm, CART, and provides a unique tree-pruning technique based on the minimum description length (MDL) principle. Combining the MDL pruning technique and the modified classification algorithm, SPRINT yields classification trees with useful prediction accuracy. The case study used comprises of software metrics and fault data collected over four releases from a very large telecommunications system. It is observed that classification trees built by SPRINT are more balanced and demonstrate better stability in comparison to those built by CART.",2002,0, 662,Verification of Web service flows with model-checking techniques,"Web service is an emerging software technology to use remote services in the Internet. As it becomes pervasive, some """"language"""" to describe Web service flows is needed to combine existing services flexibly. The flow essentially describes distributed collaborations and is not easy to write and verify, while the fault that the flow description may contain can only be detected at runtime. The faulty flow description is not desirable because a tremendous amount of publicly shared network resources are consumed. The verification of the Web service flow prior to its execution in the Internet is mandatory. This paper proposes to use the software model-checking technology for the verification of the Web service flow descriptions. For a concrete discussion, the paper adapts WSFL (Web Services Flow Language) as the language to describe the Web service flows, and uses the SPIN model-checker for the verification engine. The experiment shows that the software model-checking technology is usable as a basis for the verification of WSFL descriptions.",2002,0, 663,A case for exploiting self-similarity of network traffic in TCP congestion control,"Analytical and empirical studies have shown that self-similar traffic can have a detrimental impact on network performance including amplified queuing delay and packet loss ratio. 
On the flip side, the ubiquity of scale-invariant burstiness observed across diverse networking contexts can be exploited to design better resource control algorithms. We explore the issue of exploiting the self-similar characteristics of network traffic in TCP congestion control. We show that the correlation structure present in long-range dependent traffic can be detected on-line and used to predict future traffic. We then devise an novel scheme, called TCP with traffic prediction (TCP-TP), that exploits the prediction result to infer, in the context of AIMD (additive increase, multiplicative decrease) steady-state dynamics, the optimal operational point for a TCP connection. Through analytical reasoning, we show that the impact of prediction errors on fairness is minimal. We also conduct ns-2 simulation and FreeBSD 4.1-based implementation studies to validate the design and to demonstrate the performance improvement in terms of packet loss ratio and throughput attained by connections.",2002,0, 664,A framework for performability modeling of messaging services in distributed systems,"Messaging services are a useful component in distributed systems that require scalable dissemination of messages (events) from suppliers to consumers. These services decouple suppliers and consumers, and take care of client registration and message propagation, thus relieving the burden on the supplier Recently performance models for the configurable delivery and discard policies found in messaging services have been developed, that can be used to predict response time distributions and discard probabilities under failure-free conditions. However, these messaging service models do not include the effect of failures. In a distributed system, supplier, consumer and messaging services can fail independently leading to different consequences. In this paper we consider the expected loss rate associated with messaging services as a performability measure and derive approximate closed-form expressions for three different quality of service settings. These measures provide a quantitative framework that allows different messaging service configurations to be compared and design trade-off decisions to be made.",2002,0, 665,Fault detection effectiveness of spathic test data,"This paper presents an approach for generating test data for unit-level, and possibly integration-level, testing based on sampling over intervals of the input probability distribution, i.e., one that has been divided or layered according to criteria. Our approach is termed """"spathic"""" as it selects random values felt to be most likely or least likely to occur from a segmented input probability distribution. Also, it allows the layers to be further segmented if additional test data is required later in the test cycle. The spathic approach finds a middle ground between the more difficult to achieve adequacy criteria and random test data generation, and requires less effort on the part of the tester. It can be viewed as guided random testing, with the tester specifying some information about expected input. The spathic test data generation approach can be used to augment """"intelligent"""" manual unit-level testing. 
An initial case study suggests that spathic test sets defect more faults than random test data sets, and achieve higher levels of statement and branch coverage.",2002,0, 666,Syntactic fault patterns in OO programs,"Although program faults are widely studied, there are many aspects of faults that we still do not understand, particularly about OO software. In addition to the simple fact that one important goal during testing is to cause failures and thereby detect faults, a full understanding of the characteristics of faults is crucial to several research areas. The power that inheritance and polymorphism brings to the expressiveness of programming languages also brings a number of new anomalies and fault types. In prior work we presented a fault model for the appearance and realization of OO faults that are specific to the use of inheritance and polymorphism. Many of these faults cannot appear unless certain syntactic patterns are used. The patterns are based on language constructs, such as overriding methods that directly define inherited state variables and non-inherited methods that call inherited methods. If one of these syntactic patterns is used, then we say the software contains an anomaly and possibly a fault. We describe the syntactic patterns for each OO fault type. These syntactic patterns can potentially be found with an automatic tool. Thus, faults can be uncovered and removed early in development.",2002,0, 667,Managing software evolution with a formalized abstraction hierarchy,"Complex computer systems are seldom made from scratch but they contain significant amounts of legacy code, which then is under continuous pressure for evolution. Therefore, a need for a rigorous method for managing evolution in this setting is evident. We propose a management method for reactive and distributed systems. The method is based on creating a formal abstraction hierarchy to model the system with abstractions that exceed those that are used as implementation facilities. This hierarchy is then used to assess the cost of a modification by associating the modification to appropriate abstractions in the hierarchy and by determining the abstractions that need to be revisited to retain the hierarchy consistent.",2002,0, 668,Probabilistic analysis of CAN with faults,"As CANs (controller area networks) are being increasingly used in safety-critical applications, there is a need for accurate predictions of failure probability. In this paper we provide a general probabilistic schedulability analysis technique which is applied specifically to CANs to determine the effect of random network faults on the response times of messages. The resultant probability distribution of response times can be used to provide probabilistic guarantees of real-time behaviour in the presence of faults. The analysis is designed to have as little pessimism as possible but never be optimistic. Through simulations, this is shown to be the case. It is easy to apply and can provide useful evidence for justification of an event-triggered bus in a critical system.",2002,0, 669,An empirical study on the design effort of Web applications,"We study the effort needed for designing Web applications from an empirical point of view. The design phase forms an important part of the overall effort needed to develop a Web application, since the use of tools can help automate the implementation phase. We carried out an empirical study with students of an advanced university class that used W2000 as a Web application design technique. 
Our first goal was to compare the relative importance of each design activity. Second, we tried to assess the accuracy of a priori design effort predictions and the influence of factors on the effort needed for each design activity. Third, we also studied the quality of the designs obtained.",2002,0, 670,Exact computation of maximally dominating faults and its application to n-detection tests,"n-detection test sets for stuck-at faults have been shown to be useful in detecting unmodeled defects. It was also shown that a set of faults, called maximally dominating faults, can play an important role in controlling the increase in the size of an n-detection test set as n is increased. In an earlier work, a superset of the maximally dominating fault set was used. In this work, we propose a method to determine exact sets of maximally dominating faults. We also define a new type of n-detection test sets based on the exact set of maximally dominating faults. We present experimental results to demonstrate the usefulness of this exact set in producing high-quality n-detection test sets.",2002,0, 671,Maximum distance testing,"Random testing has been used for years in both software and hardware testing. It is well known that in random testing each test requires to be selected randomly regardless of the tests previously generated. However, random testing could be inefficient for its random selection of test patterns. This paper, based on random testing, introduces the concept of Maximum Distance Testing (MDT) for VLSI circuits in which the total distance among all test patterns is chosen maximal so that the set of faults detected by one test pattern is as different as possible from that of faults detected by the tests previously applied. The procedure for constructing a Maximum Distance Testing Sequence (MDTS) is described in detail. Experimental results on Benchmark as well as other circuits are also given to evaluate the performances of our new approach.",2002,0, 672,Statistical analysis of time series data on the number of faults detected by software testing,"According to a progress of the software process improvement, the time series data on the number of faults detected by the software testing are collected extensively. In this paper, we perform statistical analyses of relationships between the time series data and the field quality of software products. At first, we apply the rank correlation coefficient to the time series data collected from actual software testing in a certain company, and classify these data into four types of trends: strict increasing, almost increasing, almost decreasing, and strict decreasing. We then investigate, for each type of trend, the field quality of software products developed by the corresponding software projects. As a result of statistical analyses, we showed that software projects having trend of almost or strict decreasing in the number of faults detected by the software testing could produce the software products with high quality.",2002,0, 673,Reliable file transfer in Grid environments,Grid-based computing environments are becoming increasingly popular for scientific computing. One of the key issues for scientific computing is the efficient transfer of large amounts of data across the Grid. In this paper we present a reliable file transfer (RFT) service that significantly improves the efficiency of large-scale file transfer. RFT can detect a variety of failures and restart the file transfer from the point of failure. 
It also has capabilities for improving transfer performance through TCP tuning.,2002,0, 674,VAASAANUBAADA: automatic machine translation of bilingual Bengali-Assamese news texts,"This paper presents a project for translating bilingual Bengali-Assamese news texts using an example-based machine translation technique. The work involves machine translation of bilingual texts at sentence level. In addition, the work also includes preprocessing and post-processing tasks. The work is unique because of the language pair that is chosen for experimentation. We constructed and aligned the bilingual corpus manually by feeding real examples using pseudo code. The longer input sentence is fragmented at punctuations, which resulted in high quality translation. Backtracking is used when an exact match is not found at the sentence/fragment level, leading to further fragmentation of the sentence. Since bilingual Bengali-Assamese languages belong to the Magadha Prakrit group, the grammatical form of sentences is very similar and has no lexical word groups. The results when tested are fascinating with quality translation.",2002,0, 675,An interaction testing technique between hardware and software in embedded systems,"An embedded system is an electronically controlled system combining hardware and software. Many systems used in real life such as power plants, medical instrument systems and home appliances are embedded. However, studies related to embedded system testing are insufficient. In embedded systems, it is necessary to develop a test technique to detect faults in interaction between hardware and software. We propose a test data selection technique using fault injection for the interaction between hardware and software. The proposed test data selection technique first simulates behavior of a software program from requirements specification. Hardware faults, after being converted to software faults, are then injected into the simulated program. We finally select effective test data to detect faults caused by the interactions between hardware and software. We apply our technique to a digital plant protection system and evaluate the effectiveness of selected test data through experiments.",2002,0, 676,Data coverage testing,"Generating test data sets which are sufficiently large to effectively cover all the tests required before a software component can be certified as reliable is a time consuming and error-prone task if carried out manually. A key parameter when testing collections is the size of the collection to be tested: an automatic test generator builds a set of collections containing n elements where n ranges from 0 to ncrit. Data coverage analysis allows us to determine rigorously a collection size such that testing with collections of size > ncrit does not provide any further useful information, i.e. will not uncover any new faults. We conducted a series of experiments on modules from the C++ Standard Template Library which were seeded with errors. Using a test model appropriate to each module, we generated data sets of sizes up to and exceeding the predicted value of ncrit and verified that after all collections of size ncrit have been tested, no further errors are discovered. Data coverage was also compared with statement coverage testing and random test data set generation. The three testing techniques were compared for effectiveness at revealing errors compared to the number of test data sets used. 
Statement coverage testing was confirmed as the cheapest, in the sense that it produces its maximal effect for the smallest number of tests applied, but the least effective technique in terms of numbers of errors uncovered. Data coverage was significantly better than random test generation: it uncovered more faults with fewer tests at every point.",2002,0, 677,Industrial utilization of linguistic equations for defect detection on printed circuit boards,"This paper describes how linguistic equations, an intelligent method derived from fuzzy algorithms, have been used in a decision-helping tool for electronic manufacturing. In our case the company involved in the project, PKC Group, is mainly producing control cards for the automotive industry. In their business, nearly 70 percent of the cost of a product is material cost. Detecting defects and repairing the printed circuit boards is therefore a necessity. With an ever increasing complexity of the products, defects are very likely to occur, no matter how much attention is put into their prevention. That's the reason why the system described in this paper comes into use only during the final testing of the product and is purely oriented towards the detection and localization of defects. Final control is based on functional testing. Using linguistic equations and expert knowledge, the system is able to analyze that data and successfully detect and trace a defect into a small area of the printed circuit board. If sufficient amount of data is provided, self-tuning and self-learning methods can be used. Diagnosis effectiveness can therefore be improved from detection of a functional area towards component level analysis.",2002,0, 678,2D automated visual inspection system for the remote quality control of SMD assembly,"In this paper, we present a general description of a new automated visual inspection system designed with a mechatronic approach, to address the problem of quality control in a SMT assembly process line. The system provides hardware and software facilities to be used as a test bench for new algorithms related to inspection and decision making, and it allows for remote access. We include a description of our first application to detect presence/absence or misplace of surface mounted devices using 2D analysis and 3D reconstruction. Also, we describe the remote access package developed with JiniTM technologies to integrate our system to the JiniTM Virtual Manufacturing Lab. This AVI system was created as a part of the Mexico-USA project in manufacturing research, MANET.",2002,0, 679,Hardware/software co-reliability of configurable digital systems,"This paper investigates the co-effect of hardware and software on the reliability as measured by quality level (or defect level) of configurable multichip module (CMCM) systems. Hardware architecture of CMCM can be configured to accommodate target application design. An application, as provided in a form of software, is partitioned and mapped on the provided configurable hardware. Granularity of an application can be used as a criteria of partitioning and mapping, and can determine the utilization pattern of hardware resources. The utilization pattern of CMCM determines the configuration strategy of available hardware resources based on the application's granularity. Different utilization patterns of an application design on CMCM may result in various impacts on escape tolerance (i.e. the probability to avoid inclusion of hardware resources in the configuration that escaped from testing). 
A quality level model of CMCM is proposed to capture and trace the co-effect of hardware and software, referred to as co-reliability, with respect to escape-tolerance. Various configuration strategies are proposed and evaluated against various criterion granularity and utilization distributions based on the proposed models and evaluation techniques. Extensive analytical and parametric simulation results are shown.",2002,0, 680,Reliability evaluation of multi-state systems subject to imperfect coverage using OBDD,"This paper presents an efficient approach based on OBDD for the reliability analysis of a multi-state system subject to imperfect fault-coverage with combinatorial performance requirements. Since there exist dependencies between combinatorial performance requirements, we apply the multi-state dependency operation (MDO) of OBDD to deal with these dependencies in a multi-state system. In addition, this OBDD-based approach is combined with the conditional probability methods to find solutions for the multi-state imperfect coverage models. Using conditional probabilities, we can also apply this method for modular structures. The main advantage of this algorithm is that it will take computational time that is equivalent to the same problem without assuming imperfect coverage (i.e. with perfect coverage). This algorithm is very important for complex systems such as fault-tolerant computer systems, since it can obtain the complete results quickly and accurately even when there exist a number of dependencies such as shared loads (reconfiguration), degradation and common-cause failures.",2002,0, 681,Diagram-based CBA using DATsys and CourseMaster,"Supporting the assessment of an ever-increasing number of students is an error-prone and resource intensive process. Computer based assessment (CBA) software aids educators by automating aspects of the assessment of student work. Using CBA benefits pedagogically and practically both students and educators. The Learning Technology Group at the University of Nottingham has been actively researching, developing and using software to automatically assess programming coursework for 14 years. Two of the systems developed, Ceilidh and its successor CourseMaster, are being used by an increasing number of academic institutions. Research has resulted in a system for supporting the full lifecycle of free-response CBA that has diagram-based solutions. The system, DATsys, is an authoring environment for developing diagram-based CBA. It has been designed to support the authoring of coursework for most types of diagram notations. Exercises have been developed and tested for circuit diagrams, flowcharts and class diagrams. Future research plans are for authoring exercises in many more diagram notations.",2002,0, 682,Assessment design for mathematics web project-based learning,"Four mathematics project-based learning activities were carried out on a web environment in this study. Several mindtools such as dynamic geometry scratch pad, concept mapping and Powerpoint were also adopted as learning aids. Forty second to fifth grade elementary school students from Tainan and Kaohsiung cities were included. They registered as the collaborative learners for one year. The quality of final reports, collaborating skills and the ability of using mindtools for each group were assessed while they were doing mathematics project learning through Internet collaboration. Based upon the process-oriented assessment design, students' learning progress was discussed in detail. 
Overall, the inter-rater reliability of the three assignment designs is between 0.84 and 0.92. These flexible structured and process oriented assessment designs would be of great potential to monitor the web collaborative learning.",2002,0, 683,An algorithm for dividing ambiguity sets for analog fault dictionary,"A new algorithm for dividing ambiguity sets based on the lowest error probability for analog fault dictionary is proposed. The problem of tolerance affecting diagnostic accuracy in analog circuits is discussed. A statistical approach is used to derive the probability distribution of the tolerances of the output signal characteristics both in the absence and in the presence of faults in the circuit. For example, in this paper, Monte Carlo technique has been applied for the analysis of tolerance. The lowest error probabilities are computed according to Bayesian strategy. Using the PSpice software package, a detailed simulation program was developed to implement the proposed technique. The simulation software was packaged and then integrated with a symbolic analysis program that divides the ambiguity sets and structure the software package for the analysis before testing in the fault dictionary. Furthermore, the proposed approach can be easily extended to select the testing nodes leading to the selection of optimized nodes for the analog fault diagnosis.",2002,0, 684,Connecting physical layer and networks layer QoS in DS-CDMA networks-multiple traffic case,"This paper proposes and evaluates two tier call admission control in DS-CDMA networks which reserves bandwidth for handoff events. Under this scheme outage conditions due to handoffs occur only with small and controlled probability. A two-level admission policy is defined: in tier 1 policy, the network capacity is calculated on the basis of the bound on outage probability. However, this policy does not suffice to prevent outage events upon handoffs for various traffic types, and henceforth, we propose an extension which reserves extra bandwidth for handoff calls, thus ensuring that handoff calls will not violate the outage probability bound. The modified second-tier bandwidth reservation policy is adaptive with respect to the traffic intensity, and we show that it can provide satisfactory call (flow) quality during its lifetime.",2002,0, 685,Application of neural networks and filtered back projection to wafer defect cluster identification,"During an electrical testing stage, each die on a wafer must be tested to determine whether it functions as it was originally designed. In the case of a clustered defect on the wafer, such as scratches, stains, or localized failed patterns, the tester may not detect all of the defective dies in the flawed area. To avoid the defective dies proceeding to final assembly, an existing tool is currently used by a testing factory to detect the defect cluster and mark all the defective dies in the flawed region or close to the flawed region; otherwise, the testing factory must assign five to ten workers to check the wafers and hand mark the defective dies. This paper proposes two new wafer-scale defect cluster identifiers to detect the defect clusters, and compares them with the existing tool used in the industry. 
The experimental results verify that one of the proposed algorithms is very effective in defect identification and achieves better performance than the existing tool.",2002,0, 686,Forward resource reservation for QoS provisioning in OBS systems,"This paper addresses the issue of providing QoS services for optical burst switching (OBS) systems. We propose a linear predictive filter (LPF)-based forward resource reservation method to reduce the burst delay at edge routers. An aggressive reservation method is proposed to increase the successful forward reservation probability and to improve the delay reduction performance. We also discuss a QoS strategy that achieves burst delay differentiation for different classes of traffic by extending the FRR scheme. We analyze the latency reduction improvement gained by our FRR scheme, and evaluate the bandwidth cost of the FRR-based QoS strategy. Our scheme yields significant delay reduction for time-critical traffic, while maintaining the bandwidth overhead within limits.",2002,0, 687,Developing highly dependable application in a distributed system environment,"The aims of the research are to investigate techniques that support the development of highly dependable applications in a distributed system environment. Techniques we are investigating include task allocation and fault-tolerant protocols supporting redundant task allocation, load balance, fault-tolerant computing and communication, error detecting and reconfiguration, test case generation and fault injection. The highly dependable environment co-exists with the original communication and operating system. It is transparent to applications that do not need the highly dependable environment. Applications that wish to use the highly dependable environment need only to specify the level of criticality of their tasks in order for the system to assign the level of redundancy and to activate the relevant fault tolerant protocols. The application we intend to implement in the environment is the firewall application. The firewall is run in redundant mode. Each incoming or outgoing packet is checked by two or more copies of the firewall application. Only when the majority of the firewall copies decide to accept the packet, the packet can go through the firewall. Otherwise, the packet will be rejected: Different decisions from the different firewall copies signify a possible hardware fault or a software error in the underlying system.",2002,0, 688,Optimal age-dependent checkpoint strategy with retry of rollback recovery,"We consider a file system with age-dependent checkpointing, where the rollback recovery operation after a system failure may be imperfect with positive unsuccessful probability. After a few mathematical preliminaries, we formulate the steady-state availability and obtain the age-dependent checkpoint strategy analytically, maximizing the steady-state availability. In a numerical example, we examine the dependence of the unsuccessful probability of rollback recovery on the optimal checkpoint interval, and refer to the effect of imperfect rollback recovery quantitatively.",2002,0, 689,Air quality data remediation by means of ANN,"We present an application of neural networks to air quality time series remediation. The focus has been set on photochemical pollutants, and particularly on ozone, considering statistical correlations between precursors and secondary pollutants. 
After a preliminary study of the phenomenon, we tried to adapt a predictive MLP (multi layer perceptron) network to fulfill data gaps. The selected input was, along with ozone series, ozone precursors (NOx) and meteorological variables (solar radiation, wind velocity and temperature). We then proceeded in selecting the most representative periods for the ozone cycle. We ran all tests for a 80-hours validation set (the most representative gap width in our data base) and an accuracy analysis with respect to gap width as been performed too. In order to maximize the process automation, a software tool has been implemented in the MatlabTM environment. The ANN validation showed generally good results but a considerable instability in data prediction has been found out. The re-introduction of predicted data as input of following simulations generates an uncontrolled error propagation scarcely highlighted by the error autocorrelation analysis usually performed.",2002,0, 690,Proactive detection of software aging mechanisms in performance critical computers,"Software aging is a phenomenon, usually caused by resource contention, that can cause mission critical and business critical computer systems to hang, panic, or suffer performance degradation. If the incipience or onset of software aging mechanisms can be reliably detected in advance of performance degradation, corrective actions can be taken to prevent system hangs, or dynamic failover events can be triggered in fault tolerant systems. In the 1990 's the U.S. Dept. of Energy and NASA funded development of an advanced statistical pattern recognition method called the multivariate state estimation technique (MSET) for proactive online detection of dynamic sensor and signal anomalies in nuclear power plants and Space Shuttle Main Engine telemetry data. The present investigation was undertaken to investigate the feasibility and practicability of applying MSET for realtime proactive detection of software aging mechanisms in complex, multiCPU servers. The procedure uses MSET for model based parameter estimation in conjunction with statistical fault detection and Bayesian fault decision processing. A realtime software telemetry harness was designed to continuously sample over 50 performance metrics related to computer system load, throughput, queue lengths, and transaction latencies. A series of fault injection experiments was conducted using a """"memory leak"""" injector tool with controllable parasitic resource consumption rates. MSET was able to reliably detect the onset of resource contention problems with high sensitivity and excellent false-alarm avoidance. Spin-off applications of this NASA-funded innovation for business critical eCommerce servers are described.",2002,0, 691,A rigorous approach to reviewing formal specifications,"A new approach to rigorously reviewing formal specifications to ensure their internal consistency and validity is forwarded. This approach includes four steps: (1) deriving properties as review targets based on the syntax and semantics of the specification, (2) building a review task tree to present all the necessary review tasks for each property, (3) carrying out reviews based on the review task tree, and (4) analyzing the review results to determine whether faults are detected or not. 
We apply this technique to the SOFL specification language, which is an integrated formalism of VDM, Petri nets, and data flow diagrams to discuss how each step is performed.",2002,0, 692,Software reliability corroboration,"We suggest that subjective reliability estimation from the development lifecycle, based on observed behavior or the reflection of one's belief in the system quality, be included in certification. In statistical terms, we hypothesize that a system failure occurs with the estimated probability. Presumed reliability needs to be corroborated by statistical testing during the reliability certification phase. As evidence relevant to the hypothesis increases, we change the degree of belief in the hypothesis. Depending on the corroboration evidence, the system is either certified or rejected. The advantage of the proposed theory is an economically acceptable number of required system certification tests, even for high assurance systems so far considered impossible to certify.",2002,0, 693,An investigation of the applicability of design of experiments to software testing,"Approaches to software testing based on methods from the field of design of experiments have been advocated as a means of providing high coverage at relatively low cost. Tools to generate all pairs, or higher n-degree combinations, of input values have been developed and demonstrated in a few applications, but little empirical evidence is available to aid developers in evaluating the effectiveness of these tools for particular problems. We investigate error reports from two large open-source software projects, a browser and Web server, to provide preliminary answers to three questions: Is there a point of diminishing returns at which generating all n-degree combinations is nearly as effective as all n+1-degree combinations? What is the appropriate value of n for particular classes of software? Does this value differ for different types of software, and by how much? Our findings suggest that more than 95% of errors in the software studied would be detected by test cases that cover all 4-way combinations of values, and that the browser and server software were similar in the percentage of errors detectable by combinations of degree 2 through 6.",2002,0, 694,An on-line test platform for component-based systems,"One of the most provocative research areas in software engineering field is the testing of modern component based distributed applications in order to assure required quality parameters. Dynamic interactions and structural embedding, run-time loadable configurations, and services that can be deployed in arbitrary executions environments results in an increased complexity. Moreover, that the variety of possible states and behaviors becomes unpredictable. Thus, since testing during the development phase is always applied in simulated environments, it is almost impossible to detect faults, which appear under real condition, during production phase of a system. We therefore aim at concepts and methodologies that achieve on-line testing of distributed component based systems in their production phase. In comparison with off-line testing (i.e. 
testing that takes place during system development), on-line testing addresses particular aspects of the behavior of distributed systems, such as: functionality under limited time and resources available, complex transactions that are performed between components provided by different vendors, deployment, and composition of different services.",2002,0, 695,Packaging and disseminating lessons learned from COTS-based software development,"The appropriate management of experience and knowledge has become a crucially important capability for organizations of all types and software organizations are no exception. We describe an initiative aimed at helping the software engineering community share experience, in the form of lessons learned. The Center for Empirically Based Software Engineering (CeBASE) COTS lessons learned repository (CLLR) is described, including its motivation, its current status and capabilities, and the plans for its evolution. The contribution of this work lies not only in the approach itself and its validation, but also in the creation of a community of interest, which is fundamental in order to ensure the success of such an initiative. The knowledge and experience that are captured, carefully processed, and made available to the software engineering community also form part of this contribution. The community is supported by eWorkshops that bring COTS experts together, letting them discuss, share, and synthesize COTS knowledge. This knowledge is analyzed, refined and shared through the repository, which is designed to be self-monitoring in several ways. It provides several mechanisms for users to provide feedback, both in the form of new lessons learned and additional insight into existing lessons in the repository. This feedback is used to shape the repository contents and capabilities over time. Also, the repository itself tracks its own usage patterns in order to better assess and meet the needs of its users. Although the focus of the CLLR has been on COTS based software development, the technologies and approaches we have employed are applicable to any sub-area of software engineering or any other community of interest.",2002,0, 696,A process for software architecture evaluation using metrics,"Software systems often undergo changes. Changes are necessary not only to fix defects but also to accommodate new features demanded by users. Most of the time, changes are made under schedule and budget constraints and developers lack time to study the software architecture and select the best way to implement the changes. As a result, the code degenerates, making it differ from the planned design. The time spent on the planned design to create architecture to satisfy certain properties is lost, and the systems may not satisfy those properties any more. We describe an approach to systematically detect and correct deviations from the planned design as soon as possible based on architectural guidelines. We also describe a case study, in which the process was applied.",2002,0, 697,Application of neural network for predicting software development faults using object-oriented design metrics,"In this paper, we present the application of neural network for predicting software development faults including object-oriented faults. Object-oriented metrics can be used in quality estimation. In practice, quality estimation means either estimating reliability or maintainability. In the context of object-oriented metrics work, reliability is typically measured as the number of defects. 
Object-oriented design metrics are used as the independent variables and the number of faults is used as dependent variable in our study. Software metrics used include those concerning inheritance measures, complexity measures, coupling measures and object memory allocation measures. We also test the goodness of fit of neural network model by comparing the prediction result for software faults with multiple regression model. Our study is conducted on three industrial real-time systems that contain a number of natural faults that has been reported for three years (Mei-Huei Tang et al., 1999).",2002,0, 698,A SOM-based method for feature selection,"This paper presents a method, called feature competitive algorithm (FCA), for feature selection, which is based on an unsupervised neural network, the self-organising map (SOM). The FCA is capable of selecting the most important features describing target concepts from a given whole set of features via the unsupervised learning. The FCA is simple to implement and fast in feature selection as the learning can be done automatically and no need for training data. A quantitative measure, called average distance distortion ratio, is figured out to assess the quality of the selected feature set. An asymptotic optimal feature set can then be determined on the basis of the assessment. This addresses an open research issue in feature selection. This method has been applied to a real case, a software document collection consisting of a set of UNIX command manual pages. The results obtained from a retrieval experiment based on this collection demonstrated some very promising potential.",2002,0, 699,Assessing harmonic penetration in terms of phase and sequence component indices,"The purpose of this paper is to simulate a balanced and unbalanced network with the DIgSILENT PowerFactory software package and to perform a harmonic penetration study on these networks in terms of phase and sequence component indices and verify the results obtained by hand calculations. The hand calculations were done using the IEEE Working Group Definitions for calculating powers in a system with nonsinusoidal waveforms. Modelling and the correct referencing of harmonic source phase angles are important when calculating power indices. A third case study is conducted to demonstrate two methods of phase angle referencing and modelling of harmonic sources, one used by DIgSILENT and the other used by the ERACS software program. The spectrum of a drive measured in the field is used. The results are compared and they show that the same values for the power indices are obtained provided that the harmonic phase angles are adjusted correctly.",2002,0, 700,The harmonic impact of self-generating in power factor correction equipment of industrial loads: real cases studies,"This paper shows the impact of the self-generating installation in industrial loads, the problems occurred in field and the proposed solutions based from the harmonic point of view. To illustrate these points, the paper describes two facilities that installed self-generating and all the measurements and studies performed to analyze the electrical problems detected. Some studies results are shown and the implemented solutions are also described.",2002,0, 701,Compensation of waveform distortions and voltage fluctuations of DC arc furnaces: the decoupled compensator,"DC arc furnaces are more and more applied in large industrial power systems. 
They represent a source of perturbations for the feeding system, depending on the available short-circuit power at the point of common coupling and on the arc furnace rating. This paper deals with the problem of the compensation of perturbations due to DC arc furnace behavior; in particular, the application of a new device, the decoupled compensator, is investigated. The study is performed with reference to a typical DC arc furnace and by using computer simulations developed by means of the Power System Blockset, a Matlab toolbox. Waveform distortions and voltage fluctuations indices are assessed in the presence and absence of the compensation device.",2002,0, 702,Subjective evaluation of synthetic intonation,"The paper describes a method for evaluating the quality of synthetic intonation using subjective techniques. This perceptual method of assessing intonation not only evaluates the quality of synthetic intonation, but also allows us to compare different models of intonation to know which one is the most natural from a perceptual point of view. This procedure has been used to assess the quality of an implementation of Fujisaki's intonation model (Fujisaki, H. and Hirose, K., 1984) for the Basque language (Navas, E. et al., 2000). The evaluation involved 30 participants and results show that the intonation model developed has introduced a considerable improvement and that the overall quality achieved is good.",2002,0, 703,Test generation for hardware-software covalidation using non-linear programming,"Hardware-software covalidation involves the cosimulation of a system description with a functional test sequence. Functional test generation is heavily dependent on manual interaction, making it a time-consuming and expensive process. We present an automatic test generation technique to detect design errors in hardware-software systems. The design errors targeted are those caused by incorrect synchronization between concurrent tasks/processes whose detection is dependent on event timing. We formulate the test generation problem as a nonlinear program on integer variables and we use a public domain finite domain solver to solve the problem. We present the formulation and show the results of test generation for a number of potential design errors.",2002,0, 704,Development and evaluation of physical properties for digital intra-oral radiographic system,"As a part of the development of a dental digital radiographic (DDR) system using a CMOS sensor, we developed hardware and software based on a graphical user interface (GUI) to acquire and display intra-oral images. The aims of this study were to develop the DDR system and evaluate its physical properties. Electric signals generated from the CMOS sensor were transformed to digital images through a control computer equipped with a USB board. The distance between the X-ray tube and the CMOS sensor was varied between 10 and 40 cm for optimal image quality. To evaluate the image quality according to dose variance, phantom images (60 kVp, 7 mA) were obtained at 0.03, 0.05, 0.08, 0.10, and 0.12 s of exposure time and the signal-to-noise ratio (SNR) was calculated from the phantom image data. The modulation transfer function (MTF) was obtained as the Fourier transform of the line spread function (LSF), a derivative of the edge spread function (ESF) of sharp edge images acquired at exposure conditions of 60 kVp and 0.56 mA. The most compatible contrast and distinct focal point length was recorded at 20 cm.
The resolution of the DDR system was approximately 6.2 line pair per mm. The developed DDR system could be used for clinical diagnosis with improvement of acquisition time and resolution. Measurement of other physical factors such as detected quantum efficiency (DQE) would be necessary to evaluate the physical properties of the DDR system.",2002,0, 705,New directions in measurement for software quality control,"Assessing and controlling software quality is still an immature discipline. One of the reasons for this is that many of the concepts and terms that are used in discussing and describing quality are overloaded with a history from manufacturing quality. We argue in this paper that a quite distinct approach is needed to software quality control as compared with manufacturing quality control. In particular, the emphasis in software quality control is in design to fulfill business needs, rather than replication to agreed standards. We will describe how quality goals can be derived from business needs. Following that, we will introduce an approach to quality control that uses rich causal models, which can take into account human as well as technological influences. A significant concern of developing such models is the limited sample sizes that are available for eliciting model parameters. In the final section of the paper we will show how expert judgment can be reliably used to elicit parameters in the absence of statistical data. In total this provides an agenda for developing a framework for quality control in software engineering that is freed from the shackles of an inappropriate legacy.",2002,0, 706,The importance of life cycle modeling to defect detection and prevention,"In many low mature organizations dynamic testing is often the only defect detection method applied. Thus, defects are detected rather late in the development process. High rework and testing effort, typically under time pressure, lead to unpredictable delivery dates and uncertain product quality. This paper presents several methods for early defect detection and prevention that have been in existence for quite some time, although not all of them are common practice. However, to use these methods operationally and scale them to a particular project or environment, they have to be positioned appropriately in the life cycle, especially in complex projects. Modeling the development life cycle, that is the construction of a project-specific life cycle, is an indispensable first step to recognize possible defect injection points throughout the development project and to optimize the application of the available methods for defect detection and prevention. This paper discusses the importance of life cycle modeling for defect detection and prevention and presents a set of concrete, proven methods that can be used to optimize defect detection and prevention. In particular, software inspections, static code analysis, defect measurement and defect causal analysis are discussed. These methods allow early, low cost detection of defects, preventing them from propagating to later development stages and preventing the occurrence of similar defects in future projects.",2002,0, 707,Modeling nonlinear loads for aerospace power systems,"More and more electronic equipment is being added to airplane electrical systems. The addition of these nonlinear loads could result in a number of power quality problems when the system is fully integrated. 
These disturbances include excessive harmonic distortion, voltage sags, transient voltages outside specified limits, and power transfer problems. We will describe modeling and simulation of nonlinear loads for aerospace power systems using Saber simulation software. These modeling tools help in developing and validating requirements, predicting power quality disturbances before the power system is fully integrated, troubleshooting once the system is fully integrated, and verification of the system. We will describe modeling a suite of nonlinear single-phase and three-phase loads including various types of rectifiers such as 6-pulse, 12-pulse, and 18-pulse circuits. We will also present system simulation studies using these nonlinear models and various types of linear models. System studies include the use of corrective measures such as the use of active power factor correction (PFC) circuits to reduce harmonic distortion.",2002,0, 708,DB2 and Web services,"The World Wide Web offers a tremendous amount of information. Accessing and integrating the available information is a challenge. Screen scraping and reverse template engineering are manual and error-prone integration techniques from the past. The advent of Simple Object Access Protocol (SOAP) from the World Wide Web Consortium (W3C) allowed Web sites to become programmable Web services. W3C SOAP is a lightweight protocol, based on Extensible Markup Language (XML), that provides a service-oriented architecture for applications on the Web. Clients compose requests and send SOAP envelopes to providers, who reply through SOAP responses. In this paper, we describe DB2 and Web services, with techniques for integrating information from multiple Web service providers and exposing the collective information through Web services.",2002,0, 709,ED4I: error detection by diverse data and duplicated instructions,"Errors in computing systems can cause abnormal behavior and degrade data integrity and system availability. Errors should be avoided especially in embedded systems for critical applications. However, as the trend in VLSI technologies has been toward smaller feature sizes, lower supply voltages and higher frequencies, there is a growing concern about temporary errors as well as permanent errors in embedded systems; thus, it is very essential to detect those errors. Software-implemented hardware fault tolerance (SIHFT) is a low-cost alternative to hardware fault-tolerance techniques for embedded processors: It does not require any hardware modification of commercial off-the-shelf (COTS) processors. ED4I (error detection by data diversity and duplicated instructions) is a SIHFT technique that detects both permanent and temporary errors by executing two ""different"" programs (with the same functionality) and comparing their outputs. ED4I maps each number, x, in the original program into a new number x', and then transforms the program so that it operates on the new numbers so that the results can be mapped backwards for comparison with the results of the original program. The mapping in the transformation of ED4I is x' = kx for integer numbers, where k determines the fault detection probability and data integrity of the system. For floating-point numbers, we find a value of kf for the fraction and ke for the exponent separately, and use k = kf · 2^ke for the value of k. We have demonstrated how to choose an optimal value of k for the transformation.
This paper shows that, for integer programs, the transformation with k = -2 was the most desirable choice in six out of seven benchmark programs we simulated. It maximizes the fault detection probability under the condition that the data integrity is highest",2002,0, 710,Test generation and testability alternatives exploration of critical algorithms for embedded applications,"Presents an analysis of the behavioral descriptions of embedded systems to generate behavioral test patterns that are used to perform the exploration of design alternatives based on testability. In this way, during the hardware/software partitioning of the embedded system, testability aspects can be considered. This paper presents an innovative error model for algorithmic (behavioral) descriptions, which allows for the generation of behavioral test patterns. They are converted into gate-level test sequences by using more-or-less accurate procedures based on scheduling information or both scheduling and allocation information. The paper shows, experimentally, that such converted gate-level test sequences provide a very high stuck-at fault coverage when applied to different gate-level implementations of the given behavioral specification. For this reason, our behavioral test patterns can be used to explore testability alternatives, by simply performing fault simulation at the gate level with the same set of patterns, without regenerating them for each circuit. Furthermore, whenever gate-level ATPGs are applied on the synthesized gate-level circuits, they obtain lower fault coverage with respect to our behavioral test patterns, in particular when considering circuits with hard-to-detect faults",2002,0, 711,Modeling the economics of testing: a DFT perspective,"Decision-makers typically make test tradeoffs using models that mainly represent direct costs such as test generation time and tester use. Analyzing a test strategy's impact on other significant factors such as test quality and yield learning requires an understanding of the dynamic nature of the interdomain dependencies of test, manufacturing, and design. Our research centers on modeling the tradeoffs between these domains. To answer the DFT question, we developed the Carnegie Mellon University Test Cost Model, a DFT cost-benefit model, derived inputs to the model for various IC cases with different assumptions about volume, yield, chip size, test attributes, and so forth; and studied DFT's impact on these cases. We used the model to determine the domains for which DFT is beneficial and for which DFT should not be used. The model is a composite of simple cause-and-effect relationships derived from published research. It incorporates many factors affecting test cost, but we don't consider it a complete model. Our purpose is to illustrate the necessity of using such models in assessing the effectiveness of various test strategies",2002,0, 712,Projecting advanced enterprise network and service management to active networks,"Active networks is a promising technology that allows us to control the behavior of network nodes by programming them to perform advanced operations and computations. Active networks are changing considerably the scenery of computer networks and, consequently, affect the way network management is conducted. Current management techniques can be enhanced and their efficiency can be improved, while novel techniques can be deployed. 
This article discusses the impact of active networks on current network management practice by examining network management through the functional areas of fault, configuration, accounting, performance and security management. For each one of these functional areas, the limitations of the current applications and tools are presented, as well as how these limitations can be overcome by exploiting active networks. To illustrate the presented framework, several applications are examined. The contribution of this work is to analyze, classify, and assess the various models proposed in this area, and to outline new research directions",2002,0, 713,Making the (business) case for software reliability,"A business case can be developed to justify the use of higher reliability software to senior management based on the potential profits and improved market position associated with improved software development processes that use software process improvement (SPI) techniques. Results from the literature that demonstrate a positive return on investment (ROI) resulting from SPI program initiatives suggest that the highest potential ROI comes from SPI initiatives aimed at the earliest stages of software development. Established analytical reliability techniques are well-suited to supporting development of inherently reliable software early in the life cycle. A spreadsheet-based model developed by the Data and Analysis Center for Software (DACS) to assess the ROI, as well as secondary benefits and risks resulting from various types and levels of SPI, demonstrates the relationship of improved software reliability to the financial bottom line. Future capabilities of the DACS model will leverage new data to expand and refresh the existing model",2002,0, 714,Utility of popular software defect models,Numerical models can be used to track and predict the number of defects in developmental and operational software. This paper introduces techniques to critically assess the effectiveness of software defect reduction efforts as the data is being gathered. This can be achieved by observing the fundamental shape of the cumulative defect discovery curves and by judging how quickly the various defect models converge to common predictions of long term software performance,2002,0, 715,Validation of guidance control software requirements specification for reliability and fault-tolerance,"A case study was performed to validate the integrity of a software requirements specification (SRS) for guidance control software (GCS) in terms of reliability and fault-tolerance. A partial verification of the GCS specification resulted. Two modeling formalisms were used to evaluate the SRS and to determine strategies for avoiding design defects and system failures. Z was applied first to detect and remove ambiguity from a part of the natural language based (NL-based) GCS SRS. Next, statecharts and activity-charts were constructed to visualize the Z description and make it executable. Using this formalism, the system behavior was assessed under normal and abnormal conditions. Faults were seeded into the model (i.e., an executable specification) to probe how the system would perform. The result of our analysis revealed that it is beneficial to construct a complete and consistent specification using this method (Z-to-statecharts). We discuss the significance of this approach, compare our work with similar studies, and propose approaches for improving fault tolerance.
Our findings indicate that one can better understand the implications of the system requirements using the Z-to-statecharts approach to facilitate their specification and analysis. Consequently, this approach can help to avoid the problems that result when incorrectly specified artifacts (i.e., in this case requirements) force corrective rework",2002,0, 716,On reliability modeling and analysis of highly-reliable large systems,"Modern systems are getting more and more complex, incorporating new technology to meet customers' high expectations. Hardware, software and data communications are integrated to make systems function properly. One example is a critical power system, which provides electrical power to a data center or to semiconductor manufacturing equipment. Reliability techniques of fault-tolerance, true redundancy, multiple grid connections, concurrent maintenance and so forth are applied in design to provide high system reliability. The techniques make the system larger and more complicated in configuration and behavior. Application of modeling tools and analysis methods to such highly reliable, large, complex and repairable systems is discussed in this paper, based on the experience of assessing critical power systems. The use of a reliability block diagram plus simulation is recommended as one of the best engineering practices in planning for such large complex repairable systems",2002,0, 717,Multidimensional modeling of image quality,"In this paper, multidimensional models of image quality are discussed. In such models, alternative images, for instance, obtained through different processing or coding of the same scene, are represented as points in a multidimensional space. The positioning is such that the correlation between geometrical properties of the points and the subjective impressions mediated by the corresponding images is optimized. More specifically, perceived dissimilarities between images are monotonically related to interpoint distances, while the strengths of image quality attributes (such as perceived noise and blur or image quality) are, for instance, monotonically related to point coordinates along specified directions. The goal of multidimensional models is to capture subjective impressions into a single picture that is easy to interpret. We apply multidimensional models to two existing data sets to demonstrate that they indeed account very well for experimental data on image quality. The program XGms is introduced as a new interactive tool for constructing multidimensional models from experimental data. Although XGms is introduced here within the context of image-quality modeling, it is also potentially useful in other applications that rely on multidimensional models",2002,0, 718,Vision-model-based impairment metric to evaluate blocking artifacts in digital video,"In this paper investigations are conducted to simplify and refine a vision-model-based video quality metric without compromising its prediction accuracy. Unlike other vision-model-based quality metrics, the proposed metric is parameterized using subjective quality assessment data recently provided by the Video Quality Experts Group. The quality metric is able to generate a perceptual distortion map for each and every video frame. A perceptual blocking distortion metric (PBDM) is introduced which utilizes this simplified quality metric. The PBDM is formulated based on the observation that blocking artifacts are noticeable only in certain regions of a picture.
A method to segment blocking dominant regions is devised, and perceptual distortions in these regions are summed up to form an objective measure of blocking artifacts. Subjective and objective tests are conducted and the performance of the PBDM is assessed by a number of measures such as the Spearman rank-order correlation, the Pearson correlation, and the average absolute error. The results show a strong correlation between the objective blocking ratings and the mean opinion scores on blocking artifacts",2002,0, 719,Body of knowledge for software quality measurement,"Measuring quality is the key to developing high-quality software. The author describes two approaches that help to identify the body of knowledge software engineers need to achieve this goal. The first approach derives knowledge requirements from a set of issues identified during two standards efforts: the IEEE Std. 1061-1998 for a Software Quality Metrics Methodology and the American National Standard Recommended Practice for Software Reliability (ANSI/AIAA R-013-1992). The second approach ties these knowledge requirements to phases in the software development life cycle. Together, these approaches define a body of knowledge that shows software engineers why and when to measure quality. Focusing on the entire software development life cycle, rather than just the coding phase, gives software engineers the comprehensive knowledge they need to enhance software quality and supports early detection and resolution of quality problems. The integration of product and process measurements lets engineers assess the interactions between them throughout the life cycle. Software engineers can apply this body of knowledge as a guideline for incorporating quality measurement in their projects. Professional licensing and training programs will also find it useful",2002,0, 720,Planning and executing time-bound projects,Looks at how the SPID (statistically planned incremental deliveries) approach combines critical chain planning with incremental development and rate monitoring to help software developers meet project deadlines. SPID focuses on how best to organize a project to guarantee delivery of at least a working product with an agreed subset of the total functionality by the required date,2002,0, 721,Image processing techniques for wafer defect cluster identification,"Electrical testing determines whether each die on a wafer functions as originally designed. But these tests don't detect all the defective dies in clustered defects on the wafer, such as scratches, stains, or localized failed patterns. Although manual checking prevents many defective dies from continuing on to assembly, it does not detect localized failure patterns-caused by the fabrication process-because they are invisible to the naked eye. To solve these problems, we propose an automatic, wafer-scale, defect cluster identifier. This software tool uses a median filter and a clustering approach to detect the defect clusters and to mark all defective dies. Our experimental results verify that the proposed algorithm effectively detects defect clusters, although it introduces an additional 1% yield loss of electrically good dies.
More importantly, it makes automated wafer testing feasible for application in the wafer-probing stage.",2002,0, 722,An approach for intelligent detection and fault diagnosis of vacuum circuit breakers,"In this paper, an approach for intelligent detection and fault diagnosis of vacuum circuit breakers is introduced, by which, the condition of a vacuum circuit breaker can be monitored on-line, and the detectable faults can be identified, located, displayed and saved for the use of analyzing their change tendencies. The main detecting principles and diagnostics are described. Both the hardware structure and software design are also presented.",2002,0, 723,Current status of information technologies used in support of task-oriented collaboration,"Many organizations use information technology (IT) as a way to enable global networking, group negotiations, and expertise sharing amongst end-users in distributed work environments. IT can potentially play a significant role in effective and efficient negotiation and collaboration if it can enhance the quality of communication and coordination between group members asynchronously or synchronously. The paper empirically assesses the pattern of deployment of IT in task-oriented collaboration in US organizations. Data collected from one hundred and nineteen organizations is analyzed to gain insights into adoption and use patterns, and the benefits of seven popular IT approaches that have the capability to support collaboration and negotiation between workgroup members. Our analyses show that e-mail and audio teleconferencing are the most widely adopted and used technologies, while Web-based tools and electronic meeting systems (EMS) have the lowest level of adoption and use. Implications of these findings are discussed, along with some directions for practice and research.",2002,0, 724,Prototype implementations of an architectural model for service-based flexible software,"The need to change software easily to meet evolving business requirements is urgent, and a radical shift is required in the development of software, with a more demand-centric view leading to software which will be delivered as a service, within the framework of an open marketplace. We describe a service architecture and its rationale, in which components may be bound instantly, just at the time they are needed and then the binding may be disengaged. This allows highly flexible software services to be evolved in """"Internet time"""". The paper focuses on early results: some of the aims have been demonstrated and amplified through two experimental implementations, enabling us to assess the strengths and weakness of the approach. It is concluded that some of the key underpinning concepts discovery and late binding - are viable and demonstrate the basic feasibility of the architecture.",2002,0, 725,Monitoring software requirements using instrumented code,"Ideally, software is derived from requirements whose properties have been established as good. However, it is difficult to define and analyze requirements. Moreover derivation of software from requirements is error prone. Finally, the installation and use of compiled software can introduce errors. Thus, it can be difficult to provide assurances about the state of a software's execution. We present a framework to monitor the requirements of software as it executes. The framework is general, and allows for automated support. The current implementation uses a combination of assertion and model checking to inform the monitor. 
We focus on two issues: (1) the expression of """"suspect requirements"""", and (2) the transparency of the software and its environment to the monitor. We illustrate these issues with the widely known problems of the Dining Philosophers and the CCITT X.509 authentication. Each are represented as Java programs which are then instrumented and monitored.",2002,0, 726,Testable design and testing of high-speed superconductor microelectronics,"True software-defined radio cellular base stations require extremely fast data converters, which can not currently be implemented in semiconductor technology. Superconductor niobium-based delta ADCs have shown to be able to perform this task. The problem of testing these devices is a severe task, as very little is known about possible defects in this technology. This paper shows an approach for gaining information on these defects and illustrates how BIST can be a solution of detecting defects in ADCs under extreme conditions",2002,0, 727,Detecting changes in XML documents,"We present a diff algorithm for XML data. This work is motivated by the support for change control in the context of the Xyleme project that is investigating dynamic warehouses capable of storing massive volumes of XML data. Because of the context, our algorithm has to be very efficient in terms of speed and memory space even at the cost of some loss of quality. Also, it considers, besides insertions, deletions and updates (standard in diffs), a move operation on subtrees that is essential in the context of XML. Intuitively, our diff algorithm uses signatures to match (large) subtrees that were left unchanged between the old and new versions. Such exact matchings are then possibly propagated to ancestors and descendants to obtain more matchings. It also uses XML specific information such as ID attributes. We provide a performance analysis of the algorithm. We show that it runs in average in linear time vs. quadratic time for previous algorithms. We present experiments on synthetic data that confirm the analysis. Since this problem is NP-hard, the linear time is obtained by trading some quality. We present experiments (again on synthetic data) that show that the output of our algorithm is reasonably close to the optimal in terms of quality. Finally we present experiments on a small sample of XML pages found on the Web",2002,0, 728,Error detection by duplicated instructions in super-scalar processors,"This paper proposes a pure software technique """"error detection by duplicated instructions"""" (EDDI), for detecting errors during usual system operation. Compared to other error-detection techniques that use hardware redundancy, EDDI does not require any hardware modifications to add error detection capability to the original system. EDDI duplicates instructions during compilation and uses different registers and variables for the new instructions. Especially for the fault in the code segment of memory, formulas are derived to estimate the error-detection coverage of EDDI using probabilistic methods. These formulas use statistics of the program, which are collected during compilation. EDDI was applied to eight benchmark programs and the error-detection coverage was estimated. Then, the estimates were verified by simulation, in which a fault injector forced a bit-flip in the code segment of executable machine codes. 
The simulation results validated the estimated fault coverage and show that approximately 1.5% of injected faults produced incorrect results in eight benchmark programs with EDDI, while on average, 20% of injected faults produced undetected incorrect results in the programs without EDDI. Based on the theoretical estimates and actual fault-injection experiments, EDDI can provide over 98% fault-coverage without any extra hardware for error detection. This pure software technique is especially useful when designers cannot change the hardware, but they need dependability in the computer system. To reduce the performance overhead, EDDI schedules the instructions that are added for detecting errors such that """"instruction-level parallelism"""" (ILP) is maximized. Performance overhead can be reduced by increasing ILP within a single super-scalar processor. The execution time overhead in a 4-way super-scalar processor is less than the execution time overhead in the processors that can issue two instructions in one cycle",2002,0, 729,Automatic model refinement for fast architecture exploration [SoC design],We present a methodology and algorithms for automatic refinement from a given design specification to an architecture model based on decisions in architecture exploration. An architecture model is derived from the specification through a series of well defined steps in our design methodology. Traditional architecture exploration relies on manual refinement which is painfully time consuming and error prone. The automation of the refinement process provides a useful tool to the system designer to quickly evaluate several architectures in the design space and make the optimal choice. Experiments with the tool on a system design example show the robustness and usefulness of the refinement algorithm,2002,0, 730,Diagnosing quality of service faults in distributed applications,"QoS management refers to the allocation and scheduling of computing resources. Static QoS management techniques provide a guarantee that resources will be available when needed. These techniques allocate resources based on worst-case needs. This is especially important for applications with hard QoS requirements. However, this approach can waste resources. In contrast, a dynamic approach allocates and deallocates resources during the lifetime of an application. In the dynamic approach the application is started with an initial resource allocation. If the application does not meet its QoS requirements, a resource manager attempts to allocate more resources to the application until the application's QoS requirement is met. While this approach offers the opportunity to better manage resources and meet application QoS requirements, it also introduces a new set of problems. In particular, a key problem is detecting why a QoS requirement is not being satisfied and determining the cause and, consequently, which resource needs to be adjusted. This paper investigates a policy-based approach for addressing these problems. An architecture is presented and a prototype described. This is followed by a case study in which the prototype is used to diagnose QoS problems for a web application based on Apache",2002,0, 731,Adapting extreme programming for a core software engineering course,"Over a decade ago, the manufacturing industry determined it needed to be more agile to thrive and prosper in a changing, nonlinear, uncertain and unpredictable business environment The software engineering community has come to the same realization. 
A group of software methodologists has created a set of software development processes, termed agile methodologies, that have been specifically designed to respond to the demands of the turbulent software industry. Each of the processes in the set of agile processes comprises a set of practices. As educators, we must assess the emerging agile practices, integrate them into our courses (carefully), and share our experiences and results from doing so. The paper discusses the use of extreme programming, a popular agile methodology, in a senior software engineering course at North Carolina State University. It then provides recommendations for integrating agile principles into a core software engineering course",2002,0, 732,A versatile family of consensus protocols based on Chandra-Toueg's unreliable failure detectors,"This paper is on consensus protocols for asynchronous distributed systems prone to process crashes, but equipped with Chandra-Toueg's (1996) unreliable failure detectors. It presents a unifying approach based on two orthogonal versatility dimensions. The first concerns the class of the underlying failure detector. An instantiation can consider any failure detector of the class S (provided that at least one process does not crash), or ◇S (provided that a majority of processes do not crash). The second versatility dimension concerns the message exchange pattern used during each round of the protocol. This pattern (and, consequently, the round message cost) can be defined for each round separately, varying from O(n) (centralized pattern) to O(n²) (fully distributed pattern), n being the number of processes. The resulting versatile protocol has nice features and actually gives rise to a large and well-identified family of failure detector-based consensus protocols. Interestingly, this family includes at once new protocols and some well-known protocols (e.g., Chandra-Toueg's ◇S-based protocol). The approach is also interesting from a methodological point of view. It provides a precise characterization of the two sets of processes that, during a round, have to receive messages for a decision to be taken (liveness) and for a single value to be decided (safety), respectively. Interestingly, the versatility of the protocol is not restricted to failure detectors: a simple timer-based instance provides a consensus protocol suited to partially synchronous systems",2002,0, 733,Predicting fault-proneness using OO metrics. An industrial case study,"Software quality is an important external software attribute that is difficult to measure objectively. In this case study, we empirically validate a set of object-oriented metrics in terms of their usefulness in predicting fault-proneness, an important software quality indicator. We use a set of ten software product metrics that relate to the following software attributes: the size of the software, coupling, cohesion, inheritance, and reuse. Eight hypotheses on the correlations of the metrics with fault-proneness are given. These hypotheses are empirically tested in a case study, in which the client side of a large network service management system is studied. The subject system is written in Java and it consists of 123 classes. The validation is carried out using two data analysis techniques: regression analysis and discriminant analysis",2002,0, 734,Architecture-centric software evolution by software metrics and design patterns,"It is shown how software metrics and architectural patterns can be used for the management of software evolution.
In the presented architecture-centric software evolution method the quality of a software system is assured in the software design phase by computing various kinds of design metrics from the system architecture, by automatically exploring instances of design patterns and anti-patterns from the architecture, and by reporting potential quality problems to the designers. The same analysis is applied in the implementation phase to the software code, thus ensuring that it matches the quality and structure of the reference architecture. Finally, the quality of the ultimate system is predicted by studying the development history of previous projects with a similar composition of characteristic software metrics and patterns. The architecture-centric software evolution method is supported by two integrated software tools, the metrics and pattern-mining tool Maisa and the reverse-engineering tool Columbus",2002,0, 735,WARE: a tool for the reverse engineering of Web applications,"The development of Web sites and applications is increasing dramatically to satisfy the market requests. The software industry is facing the new demand under the pressure of a very short time-to-market and an extremely high competition. As a result, Web sites and applications are usually developed without a disciplined process: Web applications are directly coded and no, or poor, documentation is produced to support the subsequent maintenance and evolution activities, thus compromising the quality of the applications. This paper presents a tool for reverse engineering Web applications. UML diagrams are used to model a set of views that depict several aspects of a Web application at different abstraction levels. The recovered diagrams ease the comprehension of the application and support its maintenance and evolution. A case study, carried out with the aim of assessing the effectiveness of the proposed tool, allowed relevant information about some real Web applications to be successfully recovered and modeled by UML diagrams",2002,0, 736,Reliability test of MESFETs in presence of hot electrons,"Temperature profiles of hot electrons were modeled in MESFETs undergoing stress tests, where the gate voltage was close to pinch-off and the drain voltage was slightly lower than breakdown. These profiles were compared with the results of degradation during the stress. We present the results of two-terminal hot electron stress on MESFETs, and discuss the probability of various defect formations resulting from this stress.",2002,0, 737,Testing of analogue circuits via (standard) digital gates,"The possibility of using window comparators for on-chip (and potentially on-line) response evaluation of analogue circuits is investigated. No additional analogue test inputs are required and the additional circuitry can be realised either by means of standard digital gates taken from an available library or by full custom designed gates to obtain an observation window tailored to the application. With this approach, the test overhead can be kept extremely low. Due to the low gate capacitance also the load on the observed nodes is very low. Simulation results for some examples show that 100% of all assumed layout-realistic faults could be detected.",2002,0, 738,VC rating and quality metrics: why bother? [SoC],"System-on-a-chip (SoC) is the paramount challenge of the electronic industry for the next millennium. The semiconductor industry has delivered what we were expecting and what was predicted: silicon availability for over 10 million gates. 
The VSIA (Virtual Socket Initiative Alliance) has defined industry standards and data formats for SoC. The reuse methodology manual, first 'how-to-do' book to create reusable IPs (intellectual properties) for SoC designs has been published. EDA tool providers understand the issues and are proposing new tools and solutions on a quarterly basis. The last stage needs to be run: consolidate the experience and know-how of VSIA and IP OpenMORE rating system into an industry adopted VC (virtual component) quality metrics, and then pursue to tackle the next challenges: formal system specifications and VC transfer infrastructure. The objective of this paper is to set the stage for the final step towards a VC quality metrics effort that the industry needs to adopt, and define the next achievable goals.",2002,0, 739,Improving the efficiency and quality of simulation-based behavioral model verification using dynamic Bayesian criteria,"In order to improve the effectiveness of simulation-based behavioral verification, it is important to determine when to stop the current test strategy and to switch to an expectantly more rewarding test strategy. The location of a stopping point is dependent on the statistical model one chooses to describe the coverage behavior during verification. In this paper, we present dynamic Bayesian (DB) and confidence-based dynamic Bayesian (CDB) stopping rules for behavioral VHDL model verification. The statistical assumptions of the proposed stopping rules are based on experimental evaluation of probability distribution functions and correlation functions. Fourteen behavioral VHDL models were experimented with to determine the high efficiency of the proposed stopping rules over the existing ones. Results show that the DB and the CDB stopping rules outperform all the existing stopping rules with an average improvement of at least 69% in coverage per testing patterns used.",2002,0, 740,Integrated inductors modeling and tools for automatic selection and layout generation,In this work we propose new equivalent circuit models for integrated inductors based on the conventional lumped element model. Automatic tools to assist the designers in selecting and automatically laying-out integrated inductors are also reported. Model development is based on measurements taken from more than 100 integrated spiral inductors designed and fabricated in a standard silicon process. We demonstrate the capacity of the proposed models to accurately predict the integrated inductor behavior in a wider frequency range than the conventional model. Our equations are coded in a set of tools that requests the desired inductance value at a determined frequency and gives back the geometry of the better inductors available in a particular technology.,2002,0, 741,A performance model of a PC based IP software router,"We can define a software router as a general-purpose computer that executes a computer program capable of forwarding IP datagrams among network interface cards attached to its I/O bus. This paper presents a parametrical model of a PC based IP software router. Validation results clearly show that the model accurately estimates the performance of the modeled system at different levels of detail. 
On the other hand, the paper presents experimental results that provide insights about the detailed functioning of such a system and demonstrate the model is valid not only for the characterized systems but also for a reasonable range of CPU, memory and I/O bus operation speeds.",2002,0, 742,Recording and analyzing eye movements during ocular fixation in schizophrenic subjects,"Previous studies have shown that schizophrenic patients, compared to healthy subjects, present abnormalities in eye fixation tasks. But in these studies the evaluations of the eye movement are not objective. They are based on visual inspection of the records. The quality of fixation is assessed in terms of the absence of saccades. By using a predefined scale, the records are rated from best to worst. In this paper, we propose a new method to quantify eye fixation. In this method, our analysis examines the metric properties of each component of eye fixation movement (saccades, square wave jerks, and drift). A computer system is developed to record, stimulate and analyze the eye movement. A variety of software tools are developed to assist a clinician in the analysis of the data",2002,0, 743,Airborne sensor concept to image shallow-buried targets,This paper develops an airborne sensor concept to detect and image shallow-buried targets with a focus on the remote sensing of landmines. Our ongoing ground-based bistatic ground-penetrating radar (GPR) experiments have demonstrated deep penetration and subwavelength resolution. Simulation software (ground penetrating radar processing-GPRP) was developed and validated using experimental results. Extrapolation of the experimental results to higher frequencies using the simulation software indicates the ability to provide high-quality images of shallow-buried targets.,2002,0, 744,Flexible development of dependability services: an experience derived from energy automation systems,"This paper describes an approach for the flexible development of dependable automation services starting from their requirements. The approach is presented through the use of a case study in the field of energy automation systems. The approach is based on the use of a custom compositional recovery language that allows one to achieve, in software, a flexible and dependable solution for the specified requirements. The qualitative and quantitative properties of different configurations of the solution are then assessed by modelling, using stochastic Petri nets",2002,0, 745,On the relation between design contracts and errors: a software development strategy,"When designing a software module or system, a systems engineer must consider and differentiate between how the system responds to external and internal errors. External errors cannot be eliminated and must be tolerated by the system, while the number of internal errors should be minimized and the resulting faults should be detected and removed. This paper presents a development strategy based on design contracts and a case study of an industrial project in which the strategy was successfully applied. The goal of the strategy is to minimize the number of internal errors during the development of a software system while accommodating external errors. A distinction is made between weak and strong contracts. These two types of contracts are applicable to external and internal errors, respectively. According to the strategy, strong contracts should be applied initially to promote the correctness of the system.
Before releasing, the contracts governing external interfaces should be weakened and error management of external errors enabled. This transformation of a strong contract to a weak one is harmless to client modules",2002,0, 746,Automatic QoS control,"User sessions, usually consisting of sequences of consecutive requests from customers, comprise most of an e-commerce site's workload. These requests execute e-business functions such as browse, search, register, login, add to shopping cart, and pay. Once we properly understand and characterize a workload, we must assess its effect on the site's quality of service (QoS), which is defined in terms of response time, throughput, the probability that requests will be rejected, and availability. We can assess an e-commerce site's QoS in many different ways. One approach is by measuring the site's performance, which we can determine from a production site using a real workload or from a test site using a synthetic workload (as in load testing). Another approach consists of using performance models. I look at the approach my colleagues at George Mason and I took that uses performance models in the design and implementation of automatic QoS controller for e-commerce sites.",2003,0, 747,Multiparadigm scheduling for distributed real-time embedded computing,"Increasingly complex requirements, coupled with tighter economic and organizational constraints, are making it hard to build complex distributed real-time embedded (DRE) systems entirely from scratch. Therefore, the proportion of DRE systems made up of commercial-off-the-shelf (COTS) hardware and software is increasing significantly. There are relatively few systematic empirical studies, however, that illustrate how suitable COTS-based hardware and software have become for mission-critical DRE systems. This paper provides the following contributions to the study of real-time quality-of-service (QoS) assurance and performance in COTS-based DRE systems: it presents evidence that flexible configuration of COTS middleware mechanisms, and the operating system (OS) settings they use, allows DRE systems to meet critical QoS requirements over a wider range of load and jitter conditions than statically configured systems; it shows that in addition to making critical QoS assurances, noncritical QoS performance can be improved through flexible support for alternative scheduling strategies; and it presents an empirical study of three canonical scheduling strategies; specifically the conditions that predict success of a strategy for a production-quality DRE avionics mission computing system. Our results show that applying a flexible scheduling framework to COTS hardware, OSs, and middleware improves real-time QoS assurance and performance for mission-critical DRE systems.",2003,0, 748,Model-based programming of intelligent embedded systems and robotic space explorers,"Programming complex embedded systems involves reasoning through intricate system interactions along lengthy paths between sensors, actuators, and control processors. This is a challenging, time-consuming, and error-prone process requiring significant interaction between engineers and software programmers. Furthermore, the resulting code generally lacks modularity and robustness in the presence of failure. Model-based programming addresses these limitations, allowing engineers to program reactive systems by specifying high-level control strategies and by assembling commonsense models of the system hardware and software. 
In executing a control strategy, model-based executives reason about the models ""on the fly"" to track system state, diagnose faults, and perform reconfigurations. This paper develops the reactive model-based programming language (RMPL) and its executive, called Titan. RMPL provides the features of synchronous, reactive languages, with the added ability of reading and writing to state variables that are hidden within the physical plant being controlled. Titan executes an RMPL program using extensive component-based declarative models of the plant to track states, analyze anomalous situations, and generate novel control sequences. Within its reactive control loop, Titan employs propositional inference to deduce the system's current and desired states, and it employs model-based reactive planning to move the plant from the current to the desired state.",2003,0, 749,Assessing the quality of a cross-national e-government Web site: a study of the forum on strategic management knowledge exchange,"As organizations have begun increasingly to communicate and interact with consumers via the Web, so the appropriate design of offerings has become a central issue. Attracting and retaining consumers requires acute understanding of the requirements of users and appropriate tailoring of solutions. Recently, the development of Web offerings has moved beyond the commercial domain to government, both national and international. In this paper, we examine the results of a quality survey of a cross-national e-government Web site provided by the OECD. The site is examined before and after a major redesign process. The instrument, WebQual, draws on previous work in three areas: Web site usability, information quality, and service interaction quality to provide a rounded framework for assessing e-commerce and e-government offerings. The metrics and findings demonstrate not only the strengths and weaknesses of the sites before and after the redesign, but the very different impressions of users in different member countries. These findings have implications for cross-national e-government Web site offerings.",2003,0, 750,Formal behavioural synthesis of Handel-C parallel hardware implementations from functional specifications,"Enormous improvements in efficiency can be achieved through exploiting parallelism and realizing implementations in hardware. On the other hand, conventional methods for achieving these improvements are traditionally costly, complex and error prone. Two significant advances in the past decade have radically changed these perceptions. Firstly, the FPGA, which gives us the ability to reconfigure hardware through software, dramatically reducing the costs of developing hardware implementations. Secondly, the language Handel-C, with primitive explicit parallelism, whose programs can be compiled down to an FPGA. In this paper, we build on these recent technological advances and present a systematic approach to behavioural synthesis. Starting with an intuitive high level functional specification of a problem, given without annotation of parallelism, the approach aims at deriving an efficient parallel implementation in Handel-C, which is subsequently compiled into a circuit implemented on reconfigurable hardware. Algebraic laws are systematically used for exposing implicit parallelism and transforming the specification into a collection of interacting components.
Formal methods based on data refinement and a small library of higher order functions are then used to derive a behavioural description in Handel-C of each component. A small case study illustrates the use of this approach.",2003,0, 751,Identifying extensions required by RUP (rational unified process) to comply with CMM (capability maturity model) levels 2 and 3,"This paper describes an assessment of the rational unified process (RUP) based on the capability maturity model (CMM). For each key practice (KP) identified in each key process area (KPA) of CMM levels 2 and 3, the Rational Unified Process was assessed to determine whether it satisfied the KP or not. For each KPA, the percentage of the key practices supported was calculated, and the results were tabulated. The report includes considerations about the coverage of each key process area, describing the highlights of the RUP regarding its support for CMM levels 2 and 3, and suggests where an organization using it will need to complement it to conform to CMM. The assessment resulted in the elaboration of proposals to enhance the RUP in order to satisfy the key process areas of CMM. Some of these are briefly described in this article.",2003,0, 752,Implementation and control of grid connected AC-DC-AC power converter for variable speed wind energy conversion system,"A 30 kW electrical power conversion system is developed for a variable speed wind turbine system. In the wind energy conversion system (WECS) a synchronous generator converts the mechanical energy into electrical energy. As the voltage and frequency of the generator output vary with the wind speed, a DC-DC boosting chopper is utilized to maintain a constant DC link voltage. The input DC current is regulated to follow the optimized current reference for maximum power point operation of the turbine system. The line-side PWM inverter supplies currents into the utility line by regulating the DC link voltage. The active power is controlled by q-axis current whereas the reactive power can be controlled by d-axis current. The phase angle of utility voltage is detected using a software PLL (phase-locked loop) in the d-q synchronous reference frame. The proposed scheme gives a low cost and high quality power conversion solution for variable speed WECS.",2003,0, 753,Optimal cost-effective design of parallel systems subject to imperfect fault-coverage,"Computer-based systems intended for critical applications are usually designed with sufficient redundancy to be tolerant of errors that may occur. However, under imperfect fault-coverage conditions (such as when the system cannot adequately detect, locate, and recover from faults and errors in the system), system failures can result even when adequate redundancy is in place. Because parallel architecture is a well-known and powerful architecture for improving the reliability of fault-tolerant systems, this paper presents the cost-effective design policies of parallel systems subject to imperfect fault-coverage. The policies are designed by considering (1) cost of components, (2) failure cost of the system, (3) common-cause failures, and (4) performance levels of the system. Three kinds of cost functions are formulated considering that the total average cost of the system is based on: (1) system unreliability, (2) failure-time of the system, and (3) total processor-hours. It is shown that the MTTF (mean time to failure) of the system decreases by increasing the spares beyond a certain limit.
Therefore, this paper also presents optimal design policies to maximize the MTTF of these systems. The results of this paper can also be applied to gracefully degradable systems.",2003,0, 754,Robust reliability design of diagnostic systems,"Diagnostic systems are software-based built-in-test systems which detect, isolate and indicate the failures of the prime systems. The use of diagnostic systems reduces the losses due to the failures of the prime systems and facilitates subsequent repairs. Thus diagnostic systems have found extensive applications in industry. The algorithms performing operations for diagnosis are important parts of the diagnostic systems. If the algorithms are not adequately designed, the systems will be sensitive to noise sources, and commit type I errors (α) and type II errors (β). This paper aims to improve the robustness and reliability of the diagnostic systems through robust design of the algorithms by using reliability as an experimental response. To conduct the design, we define the reliability and robustness of the systems, and propose their metrics. The influences of α and β errors on reliability are evaluated and discussed. The effects of noise factors on robustness are assessed. The classical P-diagram is modified; a generic P-diagram containing both prime and diagnostic systems is created. Based on the proposed dynamic reliability metric, we describe the steps for robust reliability design and develop a method for experimental data analysis. The robustness and reliability of the diagnostic systems are maximized by choosing optimal levels of algorithm parameters. An automobile example is presented to illustrate how the proposed design method is used. The example shows that the method is efficient in defining, measuring and building robustness and reliability.",2003,0, 755,Integrate hardware/software device testing for use in a safety-critical application,"In train and transit applications, the occurrence of a single hazard (fault) may be quite catastrophic, resulting in significant societal costs, ranging from loss of life to major asset damages. The axiomatic safety-critical assessment process (ASCAP) has been demonstrated as a competent method for assessing the risk associated with train and transit systems. ASCAP concurrently simulates the movement of n-trains within a given system from the perspective of the individual trains. During simulation, each train interacts with a series of appliances that are located along the track, within the trains and at a central office. Within ASCAP, each appliance is represented by a probabilistic multistate model, whose state selection is decided using a Monte Carlo process. In lieu of exercising this multistate model for a given appliance, the ASCAP methodology supports the inclusion of actual appliances within the simulation platform. Hence, an appliance can be fault tested in a simulation environment that emulates the actual operational environment to which it will be exposed. The ASCAP software can interface with a given appliance through an input/output (I/O) node contained within its executing platform. This node provides the ASCAP software with the capability of communicating with an external device, such as a track or an onboard appliance. When a train intersects with a particular appliance, the actual appliance can be queried by the ASCAP simulator to ascertain its status. This state information can then be used by ASCAP in lieu of its multi-state model representation of the appliance.
This simulation process provides a mechanism to determine the appliance's ability to perform its intended safety-critical function in the presence of hardware/software design faults within its intended operational environment. By being able to quantify these effects prior to deploying a new appliance, credible and convincing evidence can be prepared to ensure that overall system safety will not be adversely impacted.",2003,0, 756,Reliability Centered Maintenance Maturity Level Roadmap,"Numerous maintenance organizations are implementing various forms of reliability centered maintenance (RCM). Whether it is a classic or a streamlined RCM program, the challenge is to do it fast, but with predictable performance, quality, cost, and schedule. Hence, organizations need guidance to ensure their RCM programs are consistently implemented across the company and to improve their ability to manage key RCM process areas such as analysis, training, and metrics. The RCM Maturity Level Roadmap provides the structure for an organization to assess its RCM maturity and key process area capability. In addition, the Roadmap helps establish priorities for improvement and guide the implementation of these improvements.",2003,0, 757,Sahinoglu-Libby (SL) probability density function-component reliability applications in integrated networks,"The forced outage ratio (FOR) of a hardware (or software) component is defined as the failure rate divided by the sum of the failure and the repair rates. The probability density function (PDF) of the FOR is a three-parameter beta distribution (G3B), renamed to be the Sahinoglu-Libby (SL) probability distribution that was pioneered in 1981. The failure and repair rates are assumed to be the generalized gamma variables where the corresponding shape and scale parameters, respectively, are unequal. A three-parameter beta or G3B PDF, equivalent to an FOR PDF, renamed to be the SL, is shown to default to an ordinary two-parameter beta PDF when the shape parameters are identical. Furthermore, the authors will present a wide perspective of the usability and limitations of the said PDF in theoretical and practical terms, also referring to work done by some other authors in the area. In the new era of quality and reliability, the usage of the SL will assist studies in correctly formulating the PDF of the unavailability or availability random variables to estimate the network reliability and quality indices for engineering and utility considerations. Bayesian methodology is employed to compute small-sample estimators by using informative and noninformative priors for the component failure and repair rates in terms of loss functions, as opposed to the uncontested and erroneous usage of the MLE, regardless of the inadequacy of the historical data. Case studies illustrate a phenomenon of overestimation of the availability index in safety and time critical components as well as in systems, when the MLE is conventionally employed.
This work assists network planners and analysts, such as those of Internet Service Providers, by providing a targeted reliability measure of their integrated computer network in a quality-conscious environment under the pressure of an ever-expanding demand and a risk that needs to be mitigated.",2003,0, 758,Design and implementation of real-time digital video streaming system over IPv6 network using feedback control,"In this paper, we discuss a design of a real-time DV (Digital Video) streaming system, which dynamically adjusts the packet transmission rate from the source host according to feedback information from the network. In our DV streaming system, the destination host continuously notifies the source host of network status (e.g., the end-to-end packet transmission delay and the packet loss probability in the network). The source host dynamically adjusts its packet transmission rate by lowering the quality of the video stream using a feedback-based control mechanism. Our DV streaming system achieves an efficient utilization of network resources, and prevents packet losses in the network. Thus, our DV streaming system realizes high-quality and real-time video streaming services on the Internet. By modifying the existing DVTS (Digital Video Transmission System), we implement a prototype of our real-time DV streaming system. Through several experimental results, we demonstrate the effectiveness of our DV streaming system.",2003,0, 759,Task graph extraction for embedded system synthesis,"Consumer demand and improvements in hardware have caused distributed real-time embedded systems to rapidly increase in complexity. As a result, designers faced with time-to-market constraints are forced to rely on intelligent design tools to enable them to keep up with demand. These tools are continually being used earlier in the design process when the design is at higher levels of abstraction. At the highest level of abstraction are hardware/software co-synthesis tools which take a system specification as input. Although many embedded systems are described in C, the system specifications for many of these tools are often in the form of one or more task graphs. These tools are very effective at solving the co-synthesis problem using task graphs but require that designers manually transform the specification from C code to task graphs, a tedious and error-prone job. The task graph extraction tool described in this paper reduces the potential for error and the time required to design an embedded system by automating the task graph extraction process. Such a tool can drastically improve designer productivity. As far as we know, this is the first tool of its kind. It has been made available on the web.
This model is used in conjunction with the concept of ""paired assessments"" to account for individual projects being of unusually high or low quality and so to evaluate the discrepancy from the ideal marks. The same computer program also has applications in the peer-review or expert-evaluation of research proposals, and any other situation involving subjective assessments by a restricted number of persons.",2003,0, 761,Parametric fault tree for the dependability analysis of redundant systems and its high-level Petri net semantics,"In order to cope efficiently with the dependability analysis of redundant systems with replicated units, a new, more compact fault-tree formalism, called Parametric Fault Tree (PFT), is defined. In a PFT formalism, replicated units are folded and indexed so that only one representative of the similar replicas is included in the model. From the PFT, a list of parametric cut sets can be derived, where only the relevant patterns leading to the system failure are evidenced regardless of the actual identity of the component in the cut set. The paper provides an algorithm to convert a PFT into a class of High-Level Petri Nets, called SWN. The purpose of this conversion is twofold: to exploit the modeling power and flexibility of the SWN formalism, allowing the analyst to include statistical dependencies that could not have been accommodated into the corresponding PFT, and to exploit the capability of the SWN formalism to generate a lumped Markov chain, thus alleviating the state explosion problem. The search for the minimal cut sets (qualitative analysis) can often be performed by a structural T-invariant analysis on the generated SWN. The advantages that can be obtained from the translation of a PFT into a SWN are investigated considering a fault-tolerant multiprocessor system example.",2003,0, 762,"Assessing attitude towards, knowledge of, and ability to apply, software development process","Software development is one of the most economically critical engineering activities. It is unsettling, therefore, that regularly published analyses reveal that the percentage of projects that fail, by coming in far over budget or far past schedule, or by being cancelled with significant financial loss, is considerably greater in software development than in any other branch of engineering. The reason is that successful software development requires expertise in both state of the art (software technology) and state of the practice (software development process). It is widely recognized that failure to follow best practice, rather than technological incompetence, is the cause of most failures. It is critically important, therefore, that (i) computer science departments be able to assess the quality of the software development process component of their curricula and that industry be able to assess the efficacy of SPI (software process improvement) efforts. While assessment instruments/tools exist for knowledge of software technology, none exist for attitude toward, knowledge of, or ability to use, software development process. We have developed instruments for measuring attitude and knowledge, and are working on an instrument to measure ability to use. The current version of ATSE, the instrument for measuring attitude toward software engineering, is the result of repeated administrations to both students and software development professionals, post-administration focus groups, rewrites, and statistical reliability analyses.
In this paper we discuss the development of ATSE, results, both expected and unexpected, of recent administrations of ATSE to students and professionals, the various uses to which ATSE is currently being put and to which it could be put, and ATSE's continuing development and improvement.",2003,0, 763,A frame-level measurement apparatus for performance testing of ATM equipment,"Performance testing of asynchronous transfer mode (ATM) equipment is dealt with here. The attention is principally paid to frame-level metrics, recently proposed by the ATM Forum because of their suitability to reflect user-perceived performance better than traditional cell-level metrics. Following the suggestions of the ATM Forum, more and more network engineers and production managers are interested today in these metrics, thus increasing the need for instruments and measurement solutions appropriate to their estimation. Trying to satisfy this exigency, a new VME extension for instrumentation (VXI) based measurement apparatus is proposed in the paper. The apparatus features suitable software, developed by the authors, which allows the evaluation of the aforementioned metrics by simply making use of common ATM analyzers; only two VXI line interfaces, capable of managing the physical and ATM layers, are, in fact, adopted. Some details concerning ATM technology and its hierarchical structure, as well as the main differences between frames, specific to the ATM adaptation layer, and cells, characterizing the underlying ATM layer, are first given. Both the hardware and software solutions of the measurement apparatus are then described in detail, paying particular attention to the measurement procedures implemented. In the end, the performance of a new ATM device is assessed through the proposed apparatus.",2003,0,271 764,Code optimization for code compression,"With the emergence of software delivery platforms such as Microsoft's .NET, the reduced size of transmitted binaries has become a very important system parameter, strongly affecting system performance. We present two novel pre-processing steps for code compression that explore program binaries' syntax and semantics to achieve superior compression ratios. The first preprocessing step involves heuristic partitioning of a program binary into streams with high auto-correlation. The second preprocessing step uses code optimization via instruction rescheduling in order to improve prediction probabilities for a given compression engine. We have developed three heuristics for instruction rescheduling that explore tradeoffs of the solution quality versus algorithm run-time. The pre-processing steps are integrated with the generic paradigm of prediction by partial matching (PPM) which is the basis of our compression codec. The compression algorithm is implemented for x86 binaries and tested on several large Microsoft applications. Binaries compressed using our compression codec are 18-24% smaller than those compressed using the best available off-the-shelf compressor.",2003,0, 765,Estimating bounds on the reliability of diverse systems,"We address the difficult problem of estimating the reliability of multiple-version software. The central issue is the degree of statistical dependence between failures of diverse versions. Previously published models of failure dependence described what behavior could be expected ""on average"" from a pair of ""independently generated"" versions.
We focus instead on predictions using specific information about a given pair of versions. The concept of ""variation of difficulty"" between situations to which software may be subject is central to the previous models cited, and it turns out to be central for our question as well. We provide new understanding of various alternative imprecise estimates of system reliability and some results of practical use, especially with diverse systems assembled from pre-existing (e.g., ""off-the-shelf"") subsystems. System designers, users, and regulators need useful bounds on the probability of system failure. We discuss how to use reliability data about the individual diverse versions to obtain upper bounds and other useful information for decision making. These bounds are greatly affected by how the versions' probabilities of failure vary between subdomains of the demand space or between operating regimes (it is even possible in some cases to demonstrate, before operation, upper bounds that are very close to the true probability of failure of the system) and by the level of detail with which these variations are documented in the data.",2003,0, 766,A metric-based approach to enhance design quality through meta-pattern transformations,"During the evolution of object-oriented legacy systems, improving the design quality is most often a highly demanded objective. For such systems, which have a large number of classes and are subject to frequent modifications, detection and correction of design defects is a complex task. The use of automatic detection and correction tools can be helpful for this task. Various research approaches have proposed transformations that improve the quality of an object-oriented system while preserving its behavior. This paper proposes a framework where a catalogue of object-oriented metrics can be used as indicators for automatically detecting situations where a particular transformation can be applied to improve the quality of an object-oriented legacy system. The correction process is based on analyzing the impact of various meta-pattern transformations on these object-oriented metrics.",2003,0, 767,A tamper-resistant framework for unambiguous detection of attacks in user space using process monitors,"Replication and redundancy techniques rely on the assumption that a majority of components are always safe and voting is used to resolve any ambiguities. This assumption may be unreasonable in the context of attacks and intrusions. An intruder could compromise any number of the available copies of a service, resulting in a false sense of security. The kernel-based approaches have proven to be quite effective but they cause performance impacts if any code changes are in the critical path. We provide an alternate user space mechanism consisting of process monitors by which such user space daemons can be unambiguously monitored without causing serious performance impacts. A framework that claims to provide such a feature must itself be tamper-resistant to attacks. We theoretically analyze and compare some relevant schemes and show their fallibility. We propose our own framework that is based on some simple principles of graph theory and well-founded concepts in topological fault tolerance, and show that it can not only unambiguously detect any such attacks on the services but is also very hard to subvert.
We also present some preliminary results as a proof of concept.",2003,0, 768,Location-detection strategies in pervasive computing environments,"Pervasive computing environments accommodate interconnected and communicating mobile devices. Mobility is a vital aspect of everyday life, and technology must offer support for moving users, objects, and devices. Their growing number has strong implications on the bandwidth of wireless and wired networks. Network bandwidth becomes a scarce resource and its efficient use is crucial for the quality of service in pervasive computing. In this article we study process models for detecting location changes of moving objects and their effect on the network bandwidth. We simulate a scenario of 10^4 moving objects for a period of 10^7 time cycles while monitoring the quality of service with respect to network bandwidth for different location detection strategies. The simulation shows that the class of strategies implementing a synchronous model offers better quality of service than the timed model. We conclude the article with a set of guidelines for the application of the strategies we have investigated.",2003,0, 769,A simple system for detection of EEG artifacts in polysomnographic recordings,"We present an efficient parametric system for automatic detection of electroencephalogram (EEG) artifacts in polysomnographic recordings. For each of the selected types of artifacts, a relevant parameter was calculated for a given epoch. If any of these parameters exceeded a threshold, the epoch was marked as an artifact. Performance of the system, evaluated on 18 overnight polysomnographic recordings, revealed concordance with decisions of human experts close to the interexpert agreement and the repeatability of expert's decisions, assessed via a double-blind test. Complete software (Matlab source code) for the presented system is freely available from the Internet at http://brain.fuw.edu.pl/artifacts.",2003,0, 770,A pragmatic approach to managing APC FDC in high volume logic production,"At Infineon Technologies, APC fault detection is now implemented in many process areas in its high volume fabs. With the APC Software ""APC-Trend"", process engineers and maintenance can detect and classify anomalies in machine and process parameters and supervise them on the basis of an automated alarming system. An overview of the current usage of APC FDC at Infineon is given.",2003,0, 771,Advanced analysis of dynamic neural control advisories for process optimization and parts maintenance,"This paper details an advanced set of analyses designed to drive specific process variable setpoint adjustments or maintenance actions required for cost effective process control using the Dynamic Neural Controller™ (DNC) wafer-to-wafer advisories for semiconductor manufacturing advanced process control. The new analytic displays and metrics are illustrated using data obtained on a LAM 4520XL at STMicroelectronics as part of a SEMATECH SPIT beta test evaluation. The DNC represents a comprehensive modeling environment that uses as its input extensive process chamber information and history of the time since maintenance actions occurred. The DNC uses a neural network to predict multiple quality output metrics and a closed-loop risk-based optimization to maximize process quality performance while minimizing overall cost of tool operation and machine downtime. The software responds in an advisory mode on a wafer-to-wafer basis as to the optimal actions to be taken.
In this paper, we present three specific instances of patterns arising during wafer processing over time that alert the process or equipment engineer to the need for corrective action: either a process setpoint adjustment or specific maintenance actions. Based on the controller's recommended corrective action set, together with the overall risk reduction predicted by such actions, a metric of corrective action ""urgency"" can be created. The tracking of this metric over time yields different pattern types that signify a quantified need for a specific type of corrective action. Three basic urgency patterns are found: 1. a pattern in a given maintenance action over time showing increasing urgency or ""risk reduction"" capability for the action; 2. a pattern in a process variable specific to a given recipe indicating a chronic request over time to only adjust the variable setpoint either above or below the current target; 3. a pattern in a process variable existing over all recipes processed through the chamber indicating a chronic request to adjust the variable setpoint in either or both directions over time. This pattern is a pointer to the need for a maintenance action that is either corroborated by the urgency graph for that maintenance action, or if no such action has been previously taken, a guide to the source of the equipment malfunction.",2003,0, 772,Software-based erasure codes for scalable distributed storage,"This paper presents a new class of erasure codes, Lincoln Erasure codes (LEC), applicable to large-scale distributed storage that includes thousands of disks attached to multiple networks. A high-performance software implementation that demonstrates the capability to meet these anticipated requirements is described. A framework for evaluation of candidate codes was developed to support in-depth analysis. When compared with erasure codes based on the work of Reed-Solomon and Luby (2000), tests indicate LEC has a higher throughput for encoding and decoding and lower probability of failure across a range of test conditions. Strategies are described for integration with storage-related hardware and software.",2003,0, 773,System health and intrusion monitoring (SHIM): project summary,"Computer systems and networks today are vulnerable to attacks. In addition to preventive strategies, intrusion detection has been used to further improve the security of computers and networks. Nevertheless, current intrusion detection and response systems can detect only known attacks and provide primitive responses. The System Health and Intrusion Monitoring (SHIM) project aims at developing techniques to monitor and assess the health of a large distributed system. SHIM can accurately detect novel attacks and provide strategic information for further correlation, assessments, and response management.",2003,0, 774,An automated method for test model generation from switch level circuits,"Custom VLSI design at the switch level is commonly applied when a chip is required to meet stringent operating requirements in terms of speed, power, or area. ATPG requires gate level models, which are verified for correctness against switch level models. Typically, test models are created manually from the switch level models - a tedious, error-prone process requiring experienced DFT engineers. This paper presents an automated flow for creating gate level test models from circuits at the switch level.
The proposed flow utilizes Motorola's Switch Level Verification (SLV) tool, which employs detailed switch level analysis to model the behavior of MOS transistors and represent them at a higher level of abstraction. We present experimental results, which demonstrate that the automated flow is capable of producing gate models that meet the ATPG requirements and are comparable to manually created ones.",2003,0, 775,Assessing XP at a European Internet company,"Fst, a European Internet services company, has been experimenting with introducing XP in its development work. The article describes the company's experiences with XP, explores its implementation practice by practice, and discusses XP's pros and cons in three key areas: customer relationships, project management, and ISO 9001 quality assurance.",2003,0, 776,Tests and tolerances for high-performance software-implemented fault detection,"We describe and test a software approach to fault detection in common numerical algorithms. Such result checking or algorithm-based fault tolerance (ABFT) methods may be used, for example, to overcome single-event upsets in computational hardware or to detect errors in complex, high-efficiency implementations of the algorithms. Following earlier work, we use checksum methods to validate results returned by a numerical subroutine operating subject to unpredictable errors in data. We consider common matrix and Fourier algorithms which return results satisfying a necessary condition having a linear form; the checksum tests compliance with this condition. We discuss the theory and practice of setting numerical tolerances to separate errors caused by a fault from those inherent in finite-precision floating-point calculations. We concentrate on comprehensively defining and evaluating tests having various accuracy/computational burden tradeoffs, and we emphasize average-case algorithm behavior rather than using worst-case upper bounds on error.",2003,0, 777,On maximizing the fault coverage for a given test length limit in a synchronous sequential circuit,"When storage requirements or limits on test application time do not allow a complete (compact) test set to be used for a circuit, a partial test set that detects as many faults as possible is required. Motivated by this application, we address the following problem. Given a test sequence T of length L for a synchronous sequential circuit and a length M < L, find a subsequence TS of T of length at most M such that the fault coverage of TS is maximal. A similar problem was considered before for combinational and scan circuits, and solved by test ordering. Test ordering is not possible with the single test sequence considered here. We solve this problem by using a vector omission process that allows the length of the sequence T to be reduced while allowing controlled reductions in the number of detected faults. In this way, it is possible to obtain a sequence TS that has the desired length and a maximal fault coverage.",2003,0, 778,Testable design and testing of micro-electro-fluidic arrays,"The testable design and testing of a fully software-controllable lab-on-a-chip, including a fluidic array of FlowFETs, control and interface electronics is presented. Test hardware is included for detecting faults in the DMOS electro-fluidic interface and the digital parts. Multidomain fault modeling and simulation shows the effects of faults in the (combined) fluidic and electrical parts. The fault simulations also reveal important parameters of multi-domain test-stimuli, e.g.
fluid velocity, for detecting both electrical and fluidic defects.",2003,0, 779,An improvement project for distribution transformer load management in Taiwan,"This paper introduces an application program that is based on an automated mapping/facilities management/geographic information system (AM/FM/GIS) to provide information expectation, load forecasting, and power flow calculation capability in distribution systems. First, the database and related data structure used in the Taipower distribution automation pilot system is studied and thoroughly analyzed. Then, our program, developed with the AM/FM FRAMME and Visual Basic software, is integrated into the above pilot system. Moreover, this paper overcomes the weak points of the pilot system, such as difficulty of use, incomplete functionality, nonuniform sampling for billing and dispatch of bills, and inability to simultaneously transfer customer data. This program can enhance the system and can predict future load growth on distribution feeders, considering the effects of temperature variation, and power needed for air-conditioners. In addition, on the basis of load density and diversity factors of typical customers, the saturation load of a new housing zone can be estimated. As for the power flow analysis, it can provide three-phase quantities of voltage drop at each node, the branch current, and the system loss. The program developed in this study can effectively aid public electric utilities in distribution system planning and operation.",2003,0,964 780,A study of the effect of imperfect debugging on software development cost,"It is widely recognized that the debugging processes are usually imperfect. Software faults are not completely removed because of the difficulty in locating them or because new faults might be introduced. Hence, it is of great importance to investigate the effect of imperfect debugging on software development cost, which, in turn, might affect the optimal software release time or operational budget. In this paper, a commonly used cost model is extended to the case of imperfect debugging. Based on this, the effect of imperfect debugging is studied. As the probability of perfect debugging, termed testing level here, is expensive to increase but manageable to a certain extent with additional resources, a model incorporating this situation is presented. Moreover, the problem of determining the optimal testing level is considered. This is useful when the decisions regarding the test team composition, testing strategy, etc., are to be made for more effective testing.",2003,0, 781,Understanding change-proneness in OO software through visualization,"During software evolution, adaptive and corrective maintenance are common reasons for changes. Often such changes cluster around key components. It is therefore important to analyze the frequency of changes to individual classes, but, more importantly, to also identify and show related changes in multiple classes. Frequent changes in clusters of classes may be due to their importance, due to the underlying architecture or due to chronic problems. Knowing where those change-prone clusters are can help focus attention, identify targets for re-engineering and thus provide product-based information to steer maintenance processes. This paper describes a method to identify and visualize classes and class interactions that are the most change-prone. The method was applied to a commercial embedded, real-time software system.
It is object-oriented software that was developed using design patterns.",2003,0, 782,Identifying comprehension bottlenecks using program slicing and cognitive complexity metrics,"Achieving and maintaining high software quality is most dependent on how easily the software engineer least familiar with the system can understand the system's code. Understanding attributes of cognitive processes can lead to new software metrics that allow the prediction of human performance in software development and the assessment and improvement of the understandability of text and code. In this research we present novel metrics based on current understanding of short-term memory performance to predict the location of high frequencies of errors and to evaluate the quality of a software system. We further enhance these metrics by applying static and dynamic program slicing to provide programmers with additional guidance during software inspection and maintenance efforts.",2003,0, 783,Fault-oriented software robustness assessment for multicast protocols,"This paper reports a systematic approach for detecting software defects in multicast protocol implementations. We deploy a fault-oriented methodology and an integrated test system targeting software robustness vulnerabilities. The primary method is to assess protocol implementations by non-traditional interface fault injection that simulates network attacks. The test system includes a novel packet driving engine, a PDU generator based on Strengthened BNF notation and a few auxiliary tools. We apply it to two multicast protocols, IGMP and PIM-DM, and investigate their behaviors under active functional attacks. Our study proves its effectiveness for promoting production of more reliable multicast software.",2003,0, 784,Improving web application testing with user session data,"Web applications have become critical components of the global information infrastructure, and it is important that they be validated to ensure their reliability. Therefore, many techniques and tools for validating web applications have been created. Only a few of these techniques, however, have addressed problems of testing the functionality of web applications, and those that do have not fully considered the unique attributes of web applications. In this paper we explore the notion that user session data gathered as users operate web applications can be successfully employed in the testing of those applications, particularly as those applications evolve and experience different usage profiles. We report results of an experiment comparing new and existing test generation techniques for web applications, assessing both the adequacy of the generated tests and their ability to detect faults on a point-of-sale web application. Our results show that user session data can produce test suites as effective overall as those produced by existing white-box techniques, but at less expense. Moreover, the classes of faults detected differ somewhat across approaches, suggesting that the techniques may be complementary.",2003,0, 785,Improving test suites via operational abstraction,"This paper presents the operational difference technique for generating, augmenting, and minimizing test suites. The technique is analogous to structural code coverage techniques, but it operates in the semantic domain of program properties rather than the syntactic domain of program text. The operational difference technique automatically selects test cases; it assumes only the existence of a source of test cases.
The technique dynamically generates operational abstractions (which describe observed behavior and are syntactically identical to formal specifications) from test suite executions. Test suites can be generated by adding cases until the operational abstraction stops changing. The resulting test suites are as small, and detect as many faults, as suites with 100% branch coverage, and are better at detecting certain common faults. This paper also presents the area and stacking techniques for comparing test suite generation strategies; these techniques avoid bias due to test suite size.",2003,0, 786,Understanding and predicting effort in software projects,"We set out to answer a question we were asked by software project management: how much effort remains to be spent on a specific software project and how will that effort be distributed over time? To answer this question, we propose a model based on the concept that each modification to software may cause repairs at some later time and investigate its theoretical properties and application to several projects in Avaya to predict and plan development resource allocation. Our model presents a novel unified framework to investigate and predict effort, schedule, and defects of a software project. The results of applying the model confirm a fundamental relationship between the new feature and defect repair changes and demonstrate its predictive properties.",2003,0, 787,Automated support for classifying software failure reports,This paper proposes automated support for classifying reported software failures in order to facilitate prioritizing them and diagnosing their causes. A classification strategy is presented that involves the use of supervised and unsupervised pattern classification and multivariate visualization. These techniques are applied to profiles of failed executions in order to group together failures with the same or similar causes. The resulting classification is then used to assess the frequency and severity of failures caused by particular defects and to help diagnose those defects. The results of applying the proposed classification strategy to failures of three large subject programs are reported. These results indicate that the strategy can be effective.,2003,0, 788,Assessing test-driven development at IBM,"In a software development group of IBM Retail Store Solutions, we built a non-trivial software system based on a stable standard specification using a disciplined, rigorous unit testing and build approach based on the test-driven development (TDD) practice. Using this practice, we reduced our defect rate by about 50 percent compared to a similar system that was built using an ad-hoc unit testing approach. The project completed on time with minimal development productivity impact. Additionally, the suite of automated unit test cases created via TDD is a reusable and extendable asset that will continue to improve quality over the lifetime of the software system. The test suite will be the basis for quality checks and will serve as a quality contract between all members of the team.",2003,0, 789,"The impact of pair programming on student performance, perception and persistence","This study examined the effectiveness of pair programming in four lecture sections of a large introductory programming course. We were particularly interested in assessing how the use of pair programming affects student performance and decisions to pursue computer science related majors.
We found that students who used pair programming produced better programs, were more confident in their solutions, and enjoyed completing the assignments more than students who programmed alone. Moreover, pairing students were significantly more likely than non-pairing students to complete the course, and consequently to pass it. Among those who completed the course, pairers performed as well on the final exam as non-pairers, were significantly more likely to be registered as computer science related majors one year later, and to have taken subsequent programming courses. Our findings suggest that not only does pairing not compromise students' learning, but that it may enhance the quality of their programs and encourage them to pursue computer science degrees.",2003,0, 790,"Patterns, frameworks, and middleware: their synergistic relationships","The knowledge required to develop complex software has historically existed in programming folklore, the heads of experienced developers, or buried deep in the code. These locations are not ideal since the effort required to capture and evolve this knowledge is expensive, time-consuming, and error-prone. Many popular software modeling methods and tools address certain aspects of these problems by documenting how a system is designed. However, they only support limited portions of software development and do not articulate why a system is designed in a particular way, which complicates subsequent software reuse and evolution. Patterns, frameworks, and middleware are increasingly popular techniques for addressing key aspects of the challenges outlined above. Patterns codify reusable design expertise that provides time-proven solutions to commonly occurring software problems that arise in particular contexts and domains. Frameworks provide both a reusable product-line architecture [1] - guided by patterns - for a family of related applications and an integrated set of collaborating components that implement concrete realizations of the architecture. Middleware is reusable software that leverages patterns and frameworks to bridge the gap between the functional requirements of applications and the underlying operating systems, network protocol stacks, and databases. This paper presents an overview of patterns, frameworks, and middleware, describes how these technologies complement each other to enhance reuse and productivity, and then illustrates how they have been applied successfully in practice to improve the reusability and quality of complex software systems.",2003,0, 791,Designing software architectures for usability,"Usability is increasingly recognized as a quality attribute that one has to design for. The conventional alternative is to measure usability on a finished system and improve it. The disadvantage of this approach is, obviously, that the costs associated with implementing usability improvements in a fully implemented system are typically very high and prohibit improvements with architectural impact. In this tutorial, we present the insights gained, techniques developed and lessons learned in the EU-IST project STATUS (SofTware Architectures That supports USability). These include a forward-engineering perspective on usability, a technique for specifying usability requirements, a method for assessing software architectures for usability and, finally, for improving software architectures for usability.
The topics are extensively illustrated by examples and experiences from many industrial cases.",2003,0, 792,6th ICSE workshop on component-based software engineering: automated reasoning and prediction,"Component-based technologies and processes have been deployed in many organizations and in many fields over the past several years. However, modeling, reasoning about, and predicting component and system properties remains challenging in theory and in practice. CBSE6 builds on previous workshops in the ICSE/CBSE series, and in 2003 is thematically centered on automated composition theories. Composition theories support reasoning about, and predicting, the runtime properties of assemblies of components. Automation is a practical necessity for applying composition theories in practice. Emphasis is placed in this workshop on composition theories that are well founded theoretically, verifiable or falsifiable, automated by tools, and supported by practical evaluation.",2003,0, 793,Electromagnetic environment analysis of a software park near transmission lines,"The electromagnetic environments (EMEs) of the planned Zhongguancun Software Park near transmission lines, including electrical field, magnetic field and grounding potential rise under three cases of lightning, normal operation and short circuit faults, are assessed by numerical analysis. The power frequency electromagnetic environments of the software park are below the maximum ecologically allowed exposure values for the general public; nevertheless, the power frequency magnetic field may interfere with the sensitive computer display unit. The influence of short-circuit faults in two different cases, remote short-circuit and neighboring short-circuit, on the software park was discussed. The main problem we must pay attention to is the ground potential rise in the software park due to a neighboring short-circuit fault, which would threaten the safe operation of electronic devices in the software park. On the other hand, a lightning strike is a serious threat to the software park, and protective countermeasures should be adopted to improve the electromagnetic environments of the software park.",2003,0,1089 794,Reliable upgrade of group communication software in sensor networks,"Communication is critical between nodes in wireless sensor networks. Upgrades to their communication software need to be done reliably because residual software errors in the new module can cause complete system failure. We present a software architecture, called cSimplex, which can reliably upgrade multicast-based group communication software in sensor networks. Errors in the new module are detected using statistical checks and a stability definition that we propose. Error recovery is done by switching to a well-tested, reliable safety module without any interruption in the functioning of the system. cSimplex has been implemented and demonstrated in a network of acoustic sensors with mobile robots functioning as base stations. Experimental results show that faults in the upgraded software can be detected with an accuracy of 99.71% on average.
The architecture, which can be easily extended to other reliable upgrade problems, will facilitate a paradigm shift in system evolution from static design and extensive testing to reliable upgrades of critical communication components in networked systems, thus also enabling substantial savings in testing time and resources.",2003,0, 795,Software fault tolerance of distributed programs using computation slicing,"Writing correct distributed programs is hard. In spite of extensive testing and debugging, software faults persist even in commercial grade software. Many distributed systems, especially those employed in safety-critical environments, should be able to operate properly even in the presence of software faults. Monitoring the execution of a distributed system, and, on detecting a fault, initiating the appropriate corrective action is an important way to tolerate such faults. This gives rise to the predicate detection problem, which involves finding a consistent cut of a distributed computation, if it exists, that satisfies the given global predicate. Detecting a predicate in a computation is, however, an NP-complete problem. To ameliorate the associated combinatorial explosion problem, we introduce the notion of a computation slice in our earlier papers [5, 10]. Intuitively, a slice is a concise representation of those consistent cuts that satisfy a certain condition. To detect a predicate, rather than searching the state-space of the computation, it is much more efficient to search the state-space of the slice. In this paper we provide efficient algorithms to compute the slice for several classes of predicates. Our experimental results demonstrate that slicing can lead to an exponential improvement over existing techniques in terms of time and space.",2003,0, 796,QoS evaluation of VoIP end-points,"We evaluate the QoS of a number of VoIP end-points, in terms of mouth-to-ear (M2E) delay, clock skew, silence suppression behavior and robustness to packet loss. Our results show that the M2E delay depends mainly on the receiving end-point. Hardware IP phones, when acting as receivers, usually achieve a low average M2E delay (45-90 ms) under low jitter conditions. Software clients achieve an average M2E delay from 65 ms to over 400 ms, depending on the actual implementation. All tested end-points can compensate for clock skew, although some suffer from occasional playout buffer underflow. Only a few of the tested end-points support silence suppression. We find that these silence detectors have a relatively long hangover time (> 0.5 sec), and they may falsely detect music as silence. All hardware IP phones we tested support some form of packet loss concealment better than silence substitution. The concealment generally works well for two to three consecutive losses at 20 ms packet intervals, but voice will quickly deteriorate beyond that.",2003,0, 797,Restoration schemes with differentiated reliability,"Reliability of data exchange is becoming increasingly important. In addition, applications may require multiple degrees of reliability. The concept of differentiated reliability (DiR) was recently introduced in [A. Fumagalli and M. Tacca, January 2001] to provide multiple degrees of reliability in protection schemes that provision spare resources. With this paper, the authors extend the DiR concept to restoration schemes in which network resources for a disrupted connection along secondary paths are sought upon failure occurrence, i.e., they are not provisioned before the fault.
The DiR concept is applied in two dimensions: restoration blocking probability i.e., the probability that the disrupted connection is not recovered due to lack of network resources - and restoration time - i.e., the time necessary to complete the connection recovery procedure. Differentiation in the two dimensions is accomplished by proposing three preemption policies that allow high priority connections to preempt resources allocated to low priority connections. The three policies trade complexity, i.e., number of preempted connections, for better reliability differentiation. Obtained results indicate that by using the proposed preemption policies, it is possible to guarantee a significant differentiation of both restoration blocking probability and restoration time. By carefully choosing the preemption policy, the desired reliability degree can be obtained, while minimizing the number of preempted connections.",2003,0, 798,Impact of rate control on the capacity of an Iub link: single service case,"Universal Mobile Telecommunications System (UMTS) networks are capable of serving packet-switched data applications at bit rates as high as 384 kbps. This paper studies the capacity and utilization of the downlink of the Iub interface, which lies between the radio network controller (RNC) and the base station (NodeB) in the UMTS network. The 3GPP standards define a Node B """"receive window"""" within which a frame should arrive for it to be processed and transmitted to the UE in time. If the frame arrives too late, it will be discarded. Such frame discard event results in some loss in voice/data quality. Via simulations, we evaluate the link capacity for web-browsing traffic at 64 kbps, 128 kbps and 384 kbps, with a frame discard probability target of 0.5%. Our results indicate that the Iub link utilization is very poor due to the highly bursty nature of data traffic. In order to alleviate this problem, we introduce a rate control (RC) scheme where the peak user data rate is temporarily lowered during times of high congestion. This lowering of data rate is done through appropriate selection of the transport block size within the transport format set. As a result of such rate control, the capacity of the Iub link improves.",2003,0, 799,Voltage flicker calculations for single-phase AC railroad electrification systems,"Rapid load variations can cause abrupt changes in the utility voltage, so-called voltage flicker. The voltage flicker may result in light flickering and, in extreme cases, damage to electronic equipment. Electrified railroads are just one example where such rapid load variation occurs as trains accelerate, decelerate, and encounter and leave grades. For balanced loads, the voltage flicker is easily determined using per-phase analysis. AC electrification system substations operating at a commercial frequency, however, are supplied from only two phases of utility three-phase transmission system. In order to calculate the voltage flicker for such an unbalanced system, symmetrical component method needs to be used. In this paper, a procedure is developed for evaluating the effects of short-time traction load variation onto utility system. Applying the symmetrical component method, voltage flicker equations are developed for loads connected to A-B, B-C, and C-A phases of a three-phase utility system. Using a specially-developed software simulating the train and electrification system performance, loads at the traction power substation transformers are calculated in one-second intervals. 
Subsequently, voltages at the utility busbars are calculated for each interval, and the voltage variation from interval to interval is expressed in percent. The calculated voltage flicker is then compared to the utility accepted limits. Based on this comparison, the capability of the utility power system to support the traction loads can be assessed and the suitability of the proposed line taps for the traction power substations evaluated.",2003,0, 800,Automatic document metadata extraction using support vector machines,"Automatic metadata generation provides scalability and usability for digital libraries and their collections. Machine learning methods offer robust and adaptable automatic metadata extraction. We describe a support vector machine classification-based method for metadata extraction from the header part of research papers and show that it outperforms other machine learning methods on the same task. The method first classifies each line of the header into one or more of 15 classes. An iterative convergence procedure is then used to improve the line classification by using the predicted class labels of its neighbor lines in the previous round. Further metadata extraction is done by seeking the best chunk boundaries of each line. We found that discovery and use of the structural patterns of the data and domain based word clustering can improve the metadata extraction performance. An appropriate feature normalization also greatly improves the classification performance. Our metadata extraction method was originally designed to improve the metadata extraction quality of the digital libraries Citeseer [S. Lawrence et al., (1999)] and EbizSearch [Y. Petinot et al., (2003)]. We believe it can be generalized to other digital libraries.",2003,0, 801,Tight bounded localization of facial features with color and rotational independence,"Human face detection plays an important role in applications such as video surveillance, human computer interface, face recognition and face image database management. In this paper, we propose a novel facial feature detection algorithm for various face image types, conditions, invariant rotation, and any appearances. There are three main steps. First, Radon transform is used for face angle detection on rotated image. Subsequently, the feature regions are detected using Neural Visual Model (NVM). Finally, using image dilation and Radon transform, the facial features are extracted from the detected regions. Input parameters are obtained from the face characteristics and the positions of facial features not including any intensity information. Our algorithm is successfully tested with various types of faces which are color images, gray images, binary images, wearing the sunglasses, wearing the scarf, lighting effect, low-quality images, color and sketch images from animated cartoon, rotated face images, and rendered face images.",2003,0, 802,Considering fault removal efficiency in software reliability assessment,"Software reliability growth models (SRGMs) have been developed to estimate software reliability measures such as the number of remaining faults, software failure rate, and software reliability. Issues such as imperfect debugging and the learning phenomenon of developers have been considered in these models. However, most SRGMs assume that faults detected during tests will eventually be removed. Consideration of fault removal efficiency in the existing models is limited. In practice, fault removal efficiency is usually imperfect.
This paper aims to incorporate fault removal efficiency into software reliability assessment. Fault removal efficiency is a useful metric in software development practice and it helps developers to evaluate the debugging effectiveness and estimate the additional workload. In this paper, imperfect debugging is considered in the sense that new faults can be introduced into the software during debugging and the detected faults may not be removed completely. A model is proposed to integrate fault removal efficiency, failure rate, and fault introduction rate into software reliability assessment. In addition to traditional reliability measures, the proposed model can provide some useful metrics to help the development team make better decisions. Software testing data collected from real applications are utilized to illustrate the proposed model for both the descriptive and predictive power. The expected number of residual faults and software failure rate are also presented.",2003,0, 803,Practical code inspection techniques for object-oriented systems: an experimental comparison,"Although inspection is established as an effective mechanism for detecting defects in procedural systems, object-oriented systems have different structural and execution models. This article describes the development and empirical investigation of three different techniques for reading OO code during inspection.",2003,0, 804,A 2-level call admission control scheme using priority queue for decreasing new call blocking & handoff call dropping,"In order to provide a fast moving mobile host (MH) supporting multimedia applications with a consistent quality of service (QoS), an efficient call admission mechanism is in need. This paper proposes the 2-level call admission (2LCAC) scheme based on a call admission scheme using the priority to guarantee the consistent QoS for mobile multimedia applications. The 2LCAC consists of the basic call admission and advanced call admission; the former determines call admission based on bandwidth available in each cell and the latter determines call admission by utilizing delay tolerance time (DTT) and priority queue (PQueue) algorithms. In order to evaluate the performance of our scheme, we measure the metrics such as the blocking probability of new calls, dropping probability of handoff calls and bandwidth utilization. The result shows that the performance of our scheme is superior to that of existing schemes such as complete sharing policy (CSP), guard channel policy (GCP) and adaptive guard channel policy (AGCP).",2003,0, 805,"Modelling a secure, mobile, and transactional system with CO-OPN","Modelling complex concurrent systems is often difficult and error-prone, in particular when new concepts coming from advanced practical applications are considered. These new application domains include dynamicity, mobility, security, and localization dependent computing. In order to fully model and prototype such systems we propose to use several concepts existing in our specification language CO-OPN, like context, dynamicity, mobility, subtyping, and inheritance. CO-OPN (concurrent object oriented Petri net) is a formal specification language for modelling distributed systems; it is based on coordinated algebraic Petri nets. We focus on the use of several basic mechanisms of CO-OPN for modelling mobile systems and the generation of corresponding Java code. 
A significant example of distributors accessible through mobile devices (for example, PDA with Bluetooth) is fully modelled and implemented with our technique.",2003,0, 806,Asymptotic insensitivity of least-recently-used caching to statistical dependency,"We investigate a widely popular least-recently-used (LRU) cache replacement algorithm with semi-Markov modulated requests. Semi-Markov processes provide the flexibility for modeling strong statistical correlation, including the broadly reported long-range dependence in the World Wide Web page request patterns. When the frequency of requesting a page n is equal to the generalized Zipf's law c/n^α, α > 1, our main result shows that the cache fault probability is asymptotically, for large cache sizes, the same as in the corresponding LRU system with i.i.d. requests. This appears to be the first explicit average case analysis of LRU caching with statistically dependent request sequences. The surprising insensitivity of LRU caching performance demonstrates its robustness to changes in document popularity. Furthermore, we show that the derived asymptotic result and simulation experiments are in excellent agreement, even for relatively small cache sizes. The potential of using our results in predicting the behavior of Web caches is tested using actual, strongly correlated, proxy server access traces.",2003,0, 807,Quality-based auto-tuning of cell uplink load level targets in WCDMA,"The objective of this paper was to validate the feasibility of auto-tuning WCDMA cell uplink load level targets based on quality of service. The uplink cell load level was measured with received wideband total power. The quality indicators used were called blocking probability, packet queuing probability and degraded block error ratio probability. The objective was to improve performance and operability of the network with control software aiming for a specific quality of service. The load level targets in each cell were regularly adjusted with a control method in order to improve performance. The approach was validated using a dynamic WCDMA system simulator. The conducted simulations support the assumption that the uplink performance can be managed and improved by the proposed cell-based automated optimization.",2003,0, 808,Model checking for probability and time: from theory to practice,"Probability features increasingly often in software and hardware systems: it is used in distributed coordination and routing problems, to model fault-tolerances and performance, and to provide adaptive resource management strategies. Probabilistic model checking is an automatic procedure for establishing if a desired property holds in a probabilistic specification, such as """"leader election is eventually resolved with probability 1"""", """"the chance of shutdown occurring is at most 0.01%"""", and """"the probability that a message will be delivered within 30ms is at least 0.75"""". A probabilistic model checker calculates the probability of a given temporal logic property being satisfied, as opposed to validity. In contrast to conventional model checkers, which rely on reachability analysis of the underlying transition system graph, probabilistic model checking additionally involves numerical solutions of linear equations and linear programming problems.
This paper reports our experience with implementing PRISM (www.cs.bham.ac.uk/dxp/prism), a probabilistic symbolic model checker, demonstrates its usefulness in analyzing real-world probabilistic protocols, and outlines future challenges for this research direction.",2003,0, 809,Accurate modeling and simulation of SAW RF filters,"The popularity of wireless services and the increasing demand for higher quality, new services, and the need for higher data rates will boost the cellular terminal market. Today, third generation (3G) systems exist in many metropolitan areas. In addition, wireless LAN systems, such as Bluetooth or IEEE 802.11-based systems, are emerging. The key components in the microwave section of the mobile terminals of these systems incorporate - apart from active radio frequency integrated circuits (RFICs) and RF modules - a multitude of passive components. The most unique passive components used in the microwave section are surface acoustic wave (SAW) filters. Due to the progress of integration in the active part of the systems the component count in modern terminals is decreasing. On the other hand, the average number of SAW RF filters per cellular phone is increasing due to multi-band terminals. As a consequence, the passive components outnumber the RFICs by far in today's systems. The market is demanding smaller and smaller terminals and, thus, the size of all components has to be reduced. Further reduction of component count and required PCB area is obtained by integration of passive components and SAW devices using low-temperature co-fired ceramic (LTCC). The trend of reducing the size dramatically while keeping or even improving the performance of the RF filters requires accurate software tools for the simulation of all relevant effects and interactions. In the past it was sufficient to predict the acoustic behavior on the SAW chip, but higher operating frequencies, up to 2.5 GHz, and stringent specifications up to 6 GHz demand to account for electromagnetic (EM) effects, too. The combination of accurate acoustic simulation tools together with 2.5/3D EM simulation software packages allows prediction and optimization of the performance of SAW filters and SAW-based front-end modules.",2003,0, 810,A birth-process approach to Moranda's geometric software-reliability model,"To alleviate some of the objections to the basic Jelinski Moranda (JM) model for software failures, Moranda proposed a geometric de-eutrophication model. This model assumes that the times between failures are statistically-independent exponential random variables with given failure rates. In this model the failure rates decrease geometrically with the detection of a fault. Using an intuitive approach, Musa, Iannino, Okumoto (see also Farr) derived expressions for the mean and the intensity functions of the process N(t) which counts the number of faults detected in the time interval [0, t] for the Moranda geometric de-eutrophication model. N(t) is studied as a pure birth stochastic process; its probability generating function is derived, as well as its mean, intensity and reliability functions. The expressions for the mean and intensity functions derived by MIO are only approximations and can be quite different from the true functions for certain choices of the failure rates.
The exact expressions for the mean function and the intensity function of N (t) are used to find the optimum release time of software based on a cost structure for Moranda's geometric de-eutrophication model.",2003,0, 811,Availability requirement for a fault-management server in high-availability communication systems,"This paper investigates the availability requirement for the fault management server in high-availability communication systems. This study shows that the availability of the fault management server does not need to be 99.999% in order to guarantee a 99.999% system availability, as long as the fail-safe ratio (the probability that the failure of the fault management server does not bring down the system) and the fault coverage ratio (probability that the failure in the system can be detected and recovered by the fault management server) are sufficiently high. Tradeoffs can be made among the availability of the fault management server, the fail-safe ratio, and the fault coverage ratio to optimize system availability. A cost-effective design for the fault management server is proposed.",2003,0, 812,Modeling and measurements of novel high k monolithic transformers,"This paper presents modeling and measurements of a novel monolithic transformer with high coupling k and quality factor Q characteristics. The present transformer utilizes a Z-shaped multilayer metalization to increase k without sacrificing Q. The new transformer has been fabricated using Motorola 0.18 micron copper process. A simple 2-port lumped circuit model is used to model the new design. Experimental data shows a good agreement with predicted data obtained from an HFSS software simulator. An increase of about 10% in mutual coupling and 15% in Q has been achieved. For a modest increase in k of about 5%, Q can be increased by up to 20%.",2003,0, 813,Concurrent fault detection in a hardware implementation of the RC5 encryption algorithm,"Recent research has shown that fault diagnosis and possibly fault tolerance are important features when implementing cryptographic algorithms by means of hardware devices. In fact, some security attack procedures are based on the injection of faults. At the same time, hardware implementations of cryptographic algorithms, i.e. crypto-processors, are becoming widespread. There is however, only very limited research on implementing fault diagnosis and tolerance in crypto-algorithms. Fault diagnosis is studied for the RC5 crypto-algorithm, a recently proposed block-cipher algorithm that is suited for both software and hardware implementations. RC5 is based on a mix of arithmetic and logic operations, and is therefore a challenge for fault diagnosis. We study fault propagation in RC5, and propose and evaluate the cost/performance tradeoffs of several error detecting codes for RC5. Costs are estimated in terms of hardware overhead, and performances in terms of fault coverage. Our most important conclusion is that, despite its nonuniform nature, RC5 can be efficiently protected by using low-cost error detecting codes.",2003,0, 814,A mobile location-based vehicle fleet management service application,"The convergence of multiple technologies, including the Internet, wireless communications, geographic information system, location technologies, and mobile devices, has given rise to new types of information utilities that may be referred as mobile location-based services (LBS). LBS can be described as applications that exploit knowledge about where an information device (user) is located. 
For example, location information can be used to provide automobile drivers with optimal routes to a geographic destination. Specifically, we examine a context where people need to move physically from one location to another via taxis. In this scenario, the user is in control of the location information associated with the mobile device. However, problems arise when a fleet management application uses that dynamic information to provide the best taxi assignment. This paper presents a new approach to the taxi assignment problem in a mobile environment based on optimization and simulation. Our specific interest is to predict the impact of the interaction between the assignment algorithm and fleet management in the mobile environment on the desired quality of service for the mobile users. The paper also discusses the current approach and limitations of the new approach and some simulation results, such as average transport time and unserved mobile users average. Finally, we present some conclusions.",2003,0, 815,The efficient bus arbitration scheme in SoC environment,"This paper presents the dynamic bus arbiter architecture for a system on chip design. The conventional bus-distribution algorithms, such as the static fixed priority and the round robin, exhibit several defects, namely bus starvation and low system performance caused by bus distribution latency within a bus cycle time. The proposed dynamic bus architecture is based on a probability bus distribution algorithm and uses an adaptive ticket value method to solve the impartiality and starvation problems. The simulation results show that the proposed algorithm reduces the buffer size of a master by 11% and decreases the bus latency of a master by 50%.",2003,0, 816,Transparent distributed threads for Java,"Remote method invocation in Java RMI allows the flow of control to pass across local Java threads and thereby span multiple virtual machines. However, the resulting distributed threads do not strictly follow the paradigm of their local Java counterparts for at least three reasons. Firstly, the absence of a global thread identity causes problems when reentering monitors. Secondly, blocks synchronized on remote objects do not work properly. Thirdly, the thread interruption mechanism for threads executing a remote call is broken. These problems make multi-threaded distributed programming complicated and error prone. We present a two-level solution: On the library level, we extend KaRMI (Philippsen et al. (2000)), a fast replacement for RMI, with global thread identities for eliminating problems with monitor reentry. Problems with synchronization on remote objects are solved with a facility for remote monitor acquisition. Our interrupt forwarding mechanism enables the application to get full control over its distributed threads. On the language level, we integrate these extensions with JavaParty's transparent remote objects (Philippsen et al. (1997)) to get transparent distributed threads. We finally evaluate our approach with benchmarks that show costs and benefits of our overall design.",2003,0, 817,"Comments on """"The confounding effect of class size on the validity of object-oriented metrics""""","It has been proposed by El Emam et al. (ibid. vol.27 (7), 2001) that size should be taken into account as a confounding variable when validating object-oriented metrics.
We take issue with this perspective since the ability to measure size does not temporally precede the ability to measure many of the object-oriented metrics that have been proposed. Hence, the condition that a confounding variable must occur causally prior to another explanatory variable is not met. In addition, when specifying multivariate models of defects that incorporate object-oriented metrics, entering size as an explanatory variable may result in misspecified models that lack internal consistency. Examples are given where this misspecification occurs.",2003,0, 818,A class of random multiple bits in a byte error correcting and single byte error detecting (Stb/EC-SbED) codes,"Correcting multiple random bit errors that corrupt a single DRAM chip becomes very important in certain applications, such as semiconductor memories used in computer and communication systems, mobile systems, aircraft, and satellites. This is because, in these applications, the presence of strong electromagnetic waves in the environment or the bombardment of an energetic particle on a DRAM chip is highly likely to upset more than just one bit stored in that chip. On the other hand, entire chip failures are often presumed to be less likely events and, in most applications, detection of errors caused by single chip failures is preferred to correction due to check bit length considerations. Under this situation, codes capable of correcting random multiple bit errors that are confined to a single chip output and simultaneously detecting errors caused by single chip failures are attractive for application in high speed memory systems. This paper proposes a class of codes called Single t/b-error Correcting-Single b-bit byte Error Detecting (Stb/EC-SbED) codes which have the capability of correcting random t-bit errors occurring within a single b-bit byte and simultaneously indicating single b-bit byte errors. For the practical case where the chip data output is 8 bits, i.e., b = 8, the S38/EC-S8ED code proposed in this paper, for example, requires only 12 check bits at information length 64 bits. Furthermore, this S38/EC-S8ED code is capable of correcting errors caused by single subarray data faults, i.e., single 4-bit byte errors, as well. This paper also shows that perfect S(b-t)b/EC-SbED codes, i.e., perfect Stb/EC-SbED codes for the case where t = b - 1, do exist and provides a theorem to construct these codes.",2003,0, 819,Low-cost on-line fault detection using control flow assertions,"A control flow fault occurs when a processor fetches and executes an incorrect next instruction. Executable assertions, i.e., special instructions that check some invariant properties of a program, provide a powerful and low-cost method for on-line detection of hardware-induced control flow faults. We propose a technique called ACFC (Assertions for Control Flow Checking) that assigns an execution parity to a basic block, and uses the parity bit to detect faults. Using a graph model of a program, we classify control flow faults into skip, re-execute and multi-path faults. We derive some necessary conditions for these faults to manifest themselves as execution parity errors. To force a control flow fault to excite a parity error, the target program is instrumented with additional instructions. Special assertions are inserted to detect such parity errors. We have developed a preprocessor that takes a C program as input and inserts ACFC assertions automatically.
We have implemented a software-based fault injection tool SFIG which takes advantage of the GNU debugger. Fault injection experiments show that ACFC incurs less performance overhead (around 47%) and memory overhead (around 30%) than previous techniques, with no significant loss in fault coverage.",2003,0, 820,Introducing SW-based fault handling mechanisms to cope with EMI in embedded electronics: are they a good remedy?,"We summarize a study on the effectiveness of two software-based fault handling mechanisms in terms of detecting conducted electromagnetic interference (EMI) in microprocessors. One of these techniques deals with processor control flow checking. The second one is used to detect errors in code variables. In order to check the effectiveness of such techniques in RF ambient, an IEC 61000-4-29 normative-compliant conducted RF-generator was implemented to inject spurious electromagnetic noise into the supply lines of a commercial off-the-shelf (COTS) microcontroller-based system. Experimental results suggest that the considered techniques present good effectiveness in detecting this type of fault, despite the multiple-fault injection nature of EMI in the processor control and data flows, which in most cases result in a complete system functional loss (the system must be reset).",2003,0, 821,An overview of configurable computing machines for software radio handsets,"The advent of software radios has brought a paradigm shift to radio design. A multimode handset with dynamic reconfigurability has the promise of integrated services and global roaming capabilities. However, most of the work to date has been focused on software radio base stations, which do not have as tight constraints on area and power as handsets. Base station software radio technology progressed dramatically with advances in system design, adaptive modulation and coding techniques, reconfigurable hardware, A/D converters, RF design, and rapid prototyping systems, and has helped bring software radio handsets a step closer to reality. However, supporting multimode radios on a small handset still remains a design challenge. A configurable computing machine, which is an optimized FPGA with application-specific capabilities, shows promise for software radio handsets in optimizing hardware implementations for heterogeneous systems. In this article contemporary CCM architectures that allow dynamic hardware reconfiguration with maximum flexibility are reviewed and assessed. This is followed by design recommendations for CCM architectures for use in software radio handsets.",2003,0, 822,Applications of service curve theory,"In this paper, we study well-known utilization-based admission control schemes which are derived by ad-hoc mathematical manipulations in the previous literature. We show that the same results can easily be achieved by using service curve theory (SCT). We also investigate the problem of providing statistical quality of service (QoS) in wireless networks. With the same idea as the service curve theory, we can find an approximation formula to estimate drop probabilities in the Cruz (1995) and Agarwal et al. (1999) delay system. In general, this analysis methodology is effective and can be used in the delay analysis of other applications.",2003,0, 823,Uncertainty in the output of artificial neural networks,"Analysis of the performance of artificial neural networks (ANNs) is usually based on aggregate results on a population of cases. In this paper, we analyze ANN output corresponding to the individual case.
We show variability in the outputs of multiple ANNs that are trained and """"optimized"""" from a common set of training cases. We predict this variability from a theoretical standpoint on the basis that multiple ANNs can be optimized to achieve similar overall performance on a population of cases, but produce different outputs for the same individual case because the ANNs use different weights. We use simulations to show that the average standard deviation in the ANN output can be two orders of magnitude higher than the standard deviation in the ANN overall performance measured by the Az value. We further show this variability using an example in mammography where the ANNs are used to classify clustered microcalcifications as malignant or benign based on image features extracted from mammograms. This variability in the ANN output is generally not recognized because a trained individual ANN becomes a deterministic model. Recognition of this variability and the deterministic view of the ANN present a fundamental contradiction. The implication of this variability to the classification task warrants additional study.",2003,0, 824,Detection of errors in case hardening processes brought on by cooling lubricant residues,"Life cycle of case hardened steel work pieces depends on the quality of hardening. A large influencing factor on the quality of hardening is the cleanliness of the work pieces. In manufacturing a large amount of auxiliary materials such as cooling lubricants and drawing compounds are used to ensure correct execution of cutting and forming processes. Especially the residues of cooling lubricants are carried into following processes on the surfaces of the machined parts. Stable and controlled conditions cannot be guaranteed for these subsequent processes as the residues' influence on the process performance is known insufficiently, leading to a high uncertainty and consequently high expense factor. Therefore, information is needed about the type and amount of contamination. In practice the influence of these cooling lubricants on case hardening steels is a well-known phenomenon but correlation of the residue volume and resulting hardness are not known. A short overview of the techniques to detect cooling lubricant residues will be given in this paper and a method to detect the influence of the residues on the hardening process of case hardening steels will be shown. An example will be given for case hardening steel 16MnCr5 (1.7131). The medium of contamination is ARAL SAROL 470 EP.",2003,0, 825,Context-based adaptive binary arithmetic coding in the H.264/AVC video compression standard,"Context-based adaptive binary arithmetic coding (CABAC) as a normative part of the new ITU-T/ISO/IEC standard H.264/AVC for video compression is presented. By combining an adaptive binary arithmetic coding technique with context modeling, a high degree of adaptation and redundancy reduction is achieved. The CABAC framework also includes a novel low-complexity method for binary arithmetic coding and probability estimation that is well suited for efficient hardware and software implementations. CABAC significantly outperforms the baseline entropy coding method of H.264/AVC for the typical area of envisaged target applications. 
For a set of test sequences representing typical material used in broadcast applications and for a range of acceptable video quality of about 30 to 38 dB, average bit-rate savings of 9%-14% are achieved.",2003,0, 826,Fault analysis of current-controlled PWM-inverter fed induction-motor drives,"In this paper, the fault-tolerance capability of IM-drives is studied. The discussion on the fault-tolerance of IM drives in the literature has mostly been on the conceptual level without any detailed analysis. Most of these studies are carried out only experimentally. This paper provides an analytical tool to quickly analyze and predict the performance under fault conditions. Also, most of the presented results were machine specific and not general enough to be applicable as an evaluation tool. So, this paper will present a generalized method for predicting the post-fault performance of IM-drives after identifying the various faults that can occur. The fault analysis for IM in the motoring mode will be presented in this paper. The paper includes an analysis for different classifications of drive faults. The faults in an IM-drive that will be studied can be broadly classified as: machine faults (i.e., one of the stator windings is open or short, multiple phase open or short, bearings, and rotor bar is broken) and inverter-converter faults (i.e., phase switch open or short, multiple phase fault, and DC-link voltage drop). Briefly, a general-purpose software package for a variety of IM-drive faults is introduced. This package is very important in IM-fault diagnosis and detection using artificial intelligence techniques, wavelet and signal processing.",2003,0, 827,Automatic communication refinement for system level design,"This paper presents a methodology and algorithms for automatic communication refinement. The communication refinement task in system-level synthesis transforms abstract data-transfer between components to its actual bus level implementation. The input model of the communication refinement is a set of concurrently executing components, communicating with each other through abstract communication channels. The refined model reflects the actual communication architecture. Choosing a good communication architecture in system level design requires sufficient exploration through evaluation of various architectures. However, this would not be possible with manually refining the system model for each communication architecture. For one, manual refinement is tedious and error-prone. Secondly, it wastes a substantial amount of precious designer time. We solve this problem with automatic model refinement. We also present a set of experimental results to demonstrate how the proposed approach works on a typical system level design.",2003,0, 828,A scalable software-based self-test methodology for programmable processors,"Software-based self-test (SBST) is an emerging approach to address the challenges of high-quality, at-speed test for complex programmable processors and systems-on-chip (SoCs) that contain them. While early work on SBST has proposed several promising ideas, many challenges remain in applying SBST to realistic embedded processors. We propose a systematic scalable methodology for SBST that automates several key steps.
The proposed methodology consists of (i) identifying test program templates that are well suited for test delivery to each module within the processor, (ii) extracting input/output mapping functions that capture the controllability/observability constraints imposed by a test program template for a specific module-under-test, (iii) generating module-level tests by representing the input/output mapping functions as virtual constraint circuits, and (iv) automatic synthesis of a software self-test program from the module-level tests. We propose novel RTL simulation-based techniques for template ranking and selection, and techniques based on the theory of statistical regression for extraction of input/output mapping functions. An important advantage of the proposed techniques is their scalability, which is necessitated by the significant and growing complexity of embedded processors. To demonstrate the utility of the proposed methodology, we have applied it to a commercial state-of-the-art embedded processor (Xtensa from Tensilica Inc.). We believe this is the first practical demonstration of software-based self-test on a processor of such complexity. Experimental results demonstrate that software self-test programs generated using the proposed methodology are able to detect most (95.2%) of the functionally testable faults, and achieve significant simultaneous improvements in fault coverage and test length compared with conventional functional test.",2003,0, 829,State-based power analysis for systems-on-chip,"Early power analysis for systems-on-chip (SoC) is crucial for determining the appropriate packaging and cost. This early analysis commonly relies on evaluating power formulas for all cores for multiple configurations of voltage, frequency, technology and application parameters, which is a tedious and error-prone process. This work presents a methodology and algorithms for automating the power analysis of SoCs. Given the power state machines for individual cores, this work defines the product power state machine for the whole SoC and uses formal symbolic simulation algorithms for traversing and computing the minimum and maximum power dissipated by sets of power states in the SoC.",2003,0, 830,Wire length prediction based clustering and its application in placement,"In this paper, we introduce a metric to evaluate proximity of connected elements in a netlist. Compared to connectivity by S. Hauck and G. Borriello (1997) and edge separability by J. Cong and S.K. Lim (2000), our metric is capable of predicting short connections more accurately. We show that the proposed metric can also predict relative wire length in multipin nets. We develop a fine-granularity clustering algorithm based on the new metric and embed it into the Fast Placer Implementation (FPI) framework by B. Hu and M. Marek-Sadowska (2003). Experimental results show that the new clustering algorithm produces better global placement results than the net absorption of Hu and M. Marek-Sadowska (2003) algorithm, connectivity of S. Hauck and G. Borriello (1997), and edge separability of J. Cong and S.K. Lim (2000) based algorithms.
With the new clustering algorithm, FPI achieves up to 50% speedup compared to the latest version of Capo8.5 in http://vlsicad.ucsd.edu/Resources/SoftwareLinks/PDtools/, without placement quality losses.",2003,0, 831,Quality certification based on hierarchical classification of software packages,"With the advance of software and computer technology, COTS (Commercial-Off-The-Shelf) software is radically diversified and its application areas are also being extended. Because software quality evaluation is dependent upon types of software and environments where software is operated, certification organizations need different certification programs that are foundations or basic frames for certifying software. Although we can certify new software products by making new certification programs or referring to previous ones, both of them are somewhat ineffective methods. Therefore, we need to systematically generate certification programs to assess new software, and consider types of software and software environments at the same time. In this paper, we propose a meta model in order to systematically derive certification programs from previous ones. With this model, we can construct certification programs incrementally based on hierarchical classification of software packages. Furthermore, by generating certification programs with quality data on some certified software products, we validate the meta model.",2003,0, 832,Nonscan design for testability for synchronous sequential circuits based on conflict resolution,"A testability measure called conflict, based on conflict analysis in the process of sequential circuit test generation is introduced to guide nonscan design for testability. The testability measure indicates the number of potential conflicts to occur or the number of clock cycles required to detect a fault. A new testability structure is proposed to insert control points by switching the extra inputs to primary inputs, using whichever extra inputs of all control points can be controlled by independent signals. The proposed design for testability approach is economical in delay, area, and pin overheads. The nonscan design for testability method based on the conflict measure can reduce many potential backtracks and make many hard-to-detect faults easy-to-detect; therefore, it can enhance actual testability of the circuit greatly. Extensive experimental results are presented to demonstrate the effectiveness of the method.",2003,0, 833,An experimental comparison of usage-based and checklist-based reading,"Software quality can be defined as the customers' perception of how a system works. Inspection is a method to monitor and control the quality throughout the development cycle. Reading techniques applied to inspections help reviewers to stay focused on the important parts of an artifact when inspecting. However, many reading techniques focus on finding as many faults as possible, regardless of their importance. Usage-based reading helps reviewers to focus on the most important parts of a software artifact from a user's point of view. We present an experiment, which compares usage-based and checklist-based reading. The results show that reviewers applying usage-based reading are more efficient and effective in detecting the most critical faults from a user's point of view than reviewers using checklist-based reading. 
Usage-based reading may be preferable for software organizations that utilize or start utilizing use cases in their software development.",2003,0, 834,Case-base reasoning in vehicle fault diagnostics,"This paper presents our research in case-based reasoning (CBR) with application to vehicle fault diagnosis. We have developed a distributed diagnostic agent system, DDAS, that detects faults of a device based on signal analysis and machine learning. The CBR techniques presented are used to find the root cause of vehicle faults based on the information provided by the signal agents in DDAS. Two CBR methods are presented, one directly uses the diagnostic output from the signal agents and another uses the signal segment features. We present experiments conducted on real vehicle cases collected from auto dealers and the results show that both methods are effective in finding root causes of vehicle faults.",2003,0, 835,A cognitive complexity metric based on category learning,"Software development is driven by software comprehension. Controlling a software development process is dependent on controlling software comprehension. Measures of factors that influence software comprehension are required in order to achieve control. The use of high-level languages results in many different kinds of lines of code that require different levels of comprehension effort. As the reader learns the set of arrangements of operators, attributes and labels particular to an application, comprehension is eased as familiar arrangements are repeated. Elements of cognition that describe the mechanics of comprehension serve as a guide to assessing comprehension demands in the understanding of programs written in high level languages. A new metric, kinds of lines of code identifier density, is introduced and a case study demonstrates its application and importance. Related work is discussed.",2003,0, 836,A framework for distributed fault management using intelligent software agents,"This paper proposes a framework for distributed management of network faults by software agents. Intelligent network agents with advanced reasoning capabilities address many of the issues for the distribution of processing and control in network management. The agents detect, correlate and selectively seek to derive a clear explanation of alarms generated in their domain. The causal relationship between faults and their effects is presented as a Bayesian network. As evidence (alarms) is gathered, the probability of the presence of any particular fault is strengthened or weakened. Agents having a narrower view of the network forward their findings to another with a much broader view of the network. Depending on the network's degree of automation, the agent can carry out local recovery actions. A prototype reflecting the ideas discussed in this paper is under implementation.",2003,0, 837,Identifying effective software metrics using genetic algorithms,"Various software metrics may be used to quantify object-oriented source code characteristics in order to assess the quality of the software. This type of software quality assessment may be viewed as a problem of classification: given a set of objects with known features (software metrics) and group labels (quality rankings), design a classifier that can predict the quality rankings of new objects using only the software metrics. We have obtained a variety of software measures for a Java application used for biomedical data analysis.
A system architect has ranked the quality of the objects as low, medium-low, medium or high with respect to maintainability. A commercial program was used to parse the source code identifying 16 metrics. A genetic algorithm (GA) was implemented to determine which subset of the various software metrics gave the best match to the quality ranking specified by the expert. By selecting the optimum metrics for determining object quality, GA-based feature selection offers an insight into which software characteristics developers should try to optimize.",2003,0, 838,A system for controlling software inspections,"Software inspections are a powerful tool for detecting faults in software during the early phases of the life cycle. Deciding when to stop inspections is an important determinant of inspection effectiveness. Capture-recapture (CR) models can be used to estimate defect content, and hence help to make a reinspection decision. We present Monte Carlo simulations of six CR models. The objective is to find the best CR model. This builds on previous work by simulating the context of high-reliability systems. The results indicate that model MtCh, which underestimates median relative error, gives zero failures and has the best decision accuracy.",2003,0, 839,Evaluating four white-box test coverage methodologies,"This paper presents an illustrative study aimed at evaluating the effectiveness of four white-box test coverage techniques for software programs. In the study, an experimental design was considered which was used to evaluate the chosen testing techniques. The evaluation criteria were determined both in terms of the ability to detect faults and the number of test cases required. Faults were seeded artificially into the program and several faulty-versions of programs (mutants) were generated taking help of mutation operators. Test case execution and coverage measurements were done with the help of two testing tools, Cantata and OCT. Separate regression models relating coverage and effectiveness (fault detection ability and number of test cases required) are developed. These models can be helpful for determining test effectiveness when the coverage levels are known in a problem domain.",2003,0, 840,Quantifying architectural attributes,"Summary form only given. Traditional software metrics are inapplicable to software architectures, because they require information that is not available at the architectural level, and reflect attributes that are not meaningful at this level. We briefly present architecture-relevant quality attributes, then we introduce architecture-enabled quantitative functions, and run an experiment which shows how and to what extent the latter are correlated to (hence can be used to predict) the former.",2003,0, 841,Improving Chinese/English OCR performance by using MCE-based character-pair modeling and negative training,"In the past several years, we've been developing a high performance OCR engine for machine printed Chinese/ English documents. We have reported previously (1) how to use character modeling techniques based on MCE (minimum classification error) training to achieve the high recognition accuracy, and (2) how to use confidence-guided progressive search and fast match techniques to achieve the high recognition efficiency. In this paper, we present two more techniques that help reduce search errors and improve the robustness of our character recognizer. 
They are (1) to use MCE-trained character-pair models to avoid error-prone character-level segmentation for some trouble cases, and (2) to perform a MCE-based negative training to improve the rejection capability of the recognition models on the hypothesized garbage images during recognition process. The efficacy of the proposed techniques is confirmed by experiments in a benchmark test.",2003,0, 842,A segmentation method for bibliographic references by contextual tagging of fields,"In this paper, a method based on part-of-speech tagging (PoS) is used for bibliographic reference structure. This method operates on a roughly structured ASCII file, produced by OCR. Because of the heterogeneity of the reference structure, the method acts in a bottom-up way, without an a priori model, gathering structural elements from basic tags to sub-fields and fields. Significant tags are first grouped in homogeneous classes according to their grammar categories and then reduced in canonical forms corresponding to record fields: """"authors"""", """"title"""", """"conference name"""", """"date"""", etc. Non labelled tokens are integrated in one or another field by either applying PoS correction rules or using a structure model generated from well-detected records. The designed prototype operates with a great satisfaction on different record layouts and character recognition qualities. Without manual intervention, 96.6% words are correctly attributed, and about 75.9% references are completely segmented from 2500 references.",2003,0, 843,Evaluating SEE: a benchmarking system for document page segmentation,"The decomposition of a document into segments such as text regions and graphics is a significant part of the document analysis process. The basic requirement for rating and improvement of page segmentation algorithms is systematic evaluation. The approaches known from the literature have the disadvantage that manually generated reference data (zoning ground truth) are needed for the evaluation task. The effort and cost of the creation of these data are very high. This paper describes the evaluation system SEE and presents an assessment of its quality. The system requires the OCR generated text and the original text of the document in correct reading order (text ground truth) as input. No manually generated zoning ground truth is needed. The implicit structure information that is contained in the text ground truth is used for the evaluation of the automatic zoning. Therefore, an assignment of the corresponding text regions in the text ground truth and those in the OCR generated text (matches) is sought. A fault tolerant string matching algorithm underlies a method, able to tolerate OCR errors in the text. The segmentation errors are determined as a result of the evaluation of the matching. Subsequently, the edit operations which are necessary for the correction of the recognized segmentation errors are computed to estimate the correction costs. Furthermore, SEE provides a version of the OCR generated text, which is corrected from the detected page segmentation errors.",2003,0, 844,Comparison of physical and software-implemented fault injection techniques,"This paper addresses the issue of characterizing the respective impact of fault injection techniques. Three physical techniques and one software-implemented technique that have been used to assess the fault tolerance features of the MARS fault-tolerant distributed real-time system are compared and analyzed. 
After a short summary of the fault tolerance features of the MARS architecture and especially of the error detection mechanisms that were used to compare the erroneous behaviors induced by the fault injection techniques considered, we describe the common distributed testbed and test scenario implemented to perform a coherent set of fault injection campaigns. The main features of the four fault injection techniques considered are then briefly described and the results obtained are finally presented and discussed. Emphasis is put on the analysis of the specific impact and merit of each injection technique.",2003,0, 845,RTGEN-an algorithm for automatic generation of reservation tables from architectural descriptions,"Reservation Tables (RTs) have long been used to detect conflicts between operations that simultaneously access the same architectural resource. Traditionally, these RTs have been specified explicitly by the designer. However, the increasing complexity of modern processors makes the manual specification of RTs cumbersome and error prone. Furthermore, manual specification of such conflict information is infeasible for supporting rapid architectural exploration. In this paper, we present an algorithm to automatically generate RTs from a high-level processor description with the goal of avoiding manual specification of RTs, resulting in more concise architectural specifications and also supporting faster turnaround time in design space exploration. We demonstrate the utility of our approach on a set of experiments using the TI C6201 very long instruction word digital signal processor and DLX processor architectures, and a suite of multimedia and scientific applications.",2003,0, 846,CVS release history data for detecting logical couplings,"The dependencies and interrelations between classes and modules affect the maintainability of object-oriented systems. It is therefore important to capture weaknesses of the software architecture to make necessary corrections. We describe a method for software evolution analysis. It consists of three complementary steps, which form an integrated approach for the reasoning about software structures based on historical data: 1) the quantitative analysis uses version information for the assessment of growth and change behavior; 2) the change sequence analysis identifies common change patterns across all system parts; and 3) the relation analysis compares classes based on CVS release history data and reveals the dependencies within the evolution of particular entities. We focus on the relation analysis and discuss its results; it has been validated based on empirical data collected from a concurrent versions system (CVS) covering 28 months of a picture archiving and communication system (PACS). Our software evolution analysis approach enabled us to detect shortcomings of PACS such as architectural weaknesses, poorly designed inheritance hierarchies, or blurred interfaces of modules.",2003,0, 847,Business rule evolution and measures of business rule evolution,"There is an urgent industrial need to enforce the changes of business rules (BRs) to software systems quickly, reliably and economically. Unfortunately, evolving BRs in most existing software systems is both time-consuming and error-prone. In order to manage, control and improve BR evolution, it is necessary that the software evolution community comes to an understanding of the ways in which BRs are implemented and how BR evolution can be facilitated or hampered by the design of software systems. 
We suggest that new software metrics are needed to allow us to measure the characteristics of BR evolution and to help us to explore possible improvements in a systematic way. A suitable set of BR-related metrics helps us to discover the root causes of the difficulties inherent in BR evolution, evaluate the success of proposed approaches to BR evolution and improve the BR evolution process as a whole.",2003,0, 848,An investigation into formatting and layout errors produced by blind word-processor users and an evaluation of prototype error prevention and correction techniques,"This paper presents the results of an investigation into tools to support blind authors in the creation and checking of word processed documents. Eighty-nine documents produced by 14 blind authors are analyzed to determine and classify common types of layout and formatting errors. Based on the survey result, two prototype tools were developed to assist blind authors in the creation of documents: a letter creation wizard, which is used before the document is produced; and a format/layout checker that detects errors and presents them to the author after the document has been created. The results of a limited evaluation of the tools by 11 blind computer users are presented. A survey of word processor usage by these users is also presented and indicates that: authors have concerns about the appearance of the documents that they produce; many blind authors fail to use word processor tools such as spell checkers, grammar checkers and templates; and a significant number of blind people rely on sighted help for document creation or checking. The paper concludes that document formatting and layout is a problem for blind authors and that tools should be able to assist.",2003,0, 849,E-MAGINE: the development of an evaluation method to assess groupware applications,"This paper describes the development of the evaluation method E-MAGINE. The aim of this evaluation method is to support groups to efficiently assess the groupware applications, which fit them best. The method focuses on knowledge sharing groups and follows a modular structure. E-MAGINE is based on the Contingency Perspective in order to structurally characterize groups in their context. Another main building block of the method forms the new ISO-norm for ICT tools, ""Quality in Use."" The overall method comprises two main phases. The initial phase leads to a first level profile and provides an indication of possible mismatches between group and application. The formulation of this initial profile has the benefit of providing a clear guide for further decisions on what instruments should be applied in the final phase of the evaluation process. It is argued that E-MAGINE fulfills the demand for a more practical and efficient groupware evaluation approach.",2003,0, 850,"High quality statecharts through tailored, perspective-based inspections","In the embedded systems domain, statecharts have become an important technique to describe the dynamic behavior of a software system. In addition, statecharts are an important element of object-oriented design documents and are thus widely used in practice. However, not much is known about how to inspect them. Since their invention by Fagan in 1976, inspections proved to be an essential quality assurance technique in software engineering. Traditionally, inspections were used to detect defects in code documents, and later in requirements documents. We define a defect taxonomy for statecharts.
Using this taxonomy, we present an inspection approach for inspecting statecharts, which combines existing inspection techniques with several new perspective-based scenarios. Moreover, we address the problems of inspecting large documents by using prioritized use cases in combination with perspective-based reading.",2003,0, 851,Object-oriented mutation to assess the quality of tests,The quality of a test suite can be measured using mutation analysis. Groups of OO mutation operators are proposed for testing object-oriented features. The OO operators applied to UML specification and C++ code are illustrated by an example. Experimental results demonstrate effectiveness of different mutation operators and the reduction of functional test suite.,2003,0, 852,Detection and recovery techniques for database corruption,"Increasingly, for extensibility and performance, special purpose application code is being integrated with database system code. Such application code has direct access to database system buffers, and as a result, the danger of data being corrupted due to inadvertent application writes is increased. Previously proposed hardware techniques to protect from corruption require system calls, and their performance depends on details of the hardware architecture. We investigate an alternative approach which uses codewords associated with regions of data to detect corruption and to prevent corrupted data from being used by subsequent transactions. We develop several such techniques which vary in the level of protection, space overhead, performance, and impact on concurrency. These techniques are implemented in the Dali main-memory storage manager, and the performance impact of each on normal processing is evaluated. Novel techniques are developed to recover when a transaction has read corrupted data caused by a bad write and gone on to write other data in the database. These techniques use limited and relatively low-cost logging of transaction reads to trace the corruption and may also prove useful when resolving problems caused by incorrect data entry and other logical errors.",2003,0, 853,Strategies for software reuse: a principal component analysis of reuse practices,"This research investigates the premise that the likelihood of success of software reuse efforts may vary with the reuse strategy employed and, hence, potential reuse adopters must be able to understand reuse strategy alternatives and their implications. We use survey data collected from 71 software development groups to empirically develop a set of six dimensions that describe the practices employed in reuse programs. The study investigates the patterns in which these practices co-occur in the real world, demonstrating that the dimensions cluster into five distinct reuse strategies, each with a different potential for reuse success. The findings provide a means to classify reuse settings and assess their potential for success.",2003,0, 854,Automatic backdoor analysis with a network intrusion detection system and an integrated service checker,"We examine how a network intrusion detection system can be used as a trigger for service checking and reporting. This approach reduces the amount of false alerts (false positives) and raises the quality of the alert report. A sample data over the Christmas period of year 2002 is analyzed as an example and detection of unauthorized SSH servers used as the main application. Unauthorized interactive backdoors to a network belong to the most dangerous class of intrusions (D. 
Zamboni et al., 1998). These backdoors are usually installed by root-kits, to hide the system compromise activity. They are a gateway to launch exploits, gain super-user access to hosts in the internal network and use the attacked network as a stepping stone to attack other networks. In this research, we have developed software and done statistical analysis to assess and prevent such situations.",2003,0, 855,When can we test less?,"When it is impractical to rigorously assess all parts of complex systems, test engineers use defect detectors to focus their limited resources. We define some properties of an ideal defect detector and assess different methods of generating one. In the case study presented here, traditional methods of generating such detectors (e.g. reusing detectors from the literature, linear regression, model trees) were found to be inferior to those found via a PACE analysis.",2003,0, 856,Definition and validation of design metrics for distributed applications,"As distributed technologies become more widely used, the need for assessing the quality of distributed applications correspondingly increases. Despite the rich body of research and practice in developing quality measures for centralised applications, there has been little emphasis on measures for distributed software. The need to understand the complex structure and behaviour of distributed applications suggests a shift in interest from traditional centralised measures to the distributed arena. We tackle the problem of evaluating quality attributes of distributed applications using software measures. Firstly, we present a measures suite to quantify internal attributes of design at an early development phase, embracing structural and behavioural aspects. The proposed measures are obtained from formal models derived from intuitive models of the problem domain. Secondly, since theoretical validation of software measures provides supporting evidence as to whether a measure really captures the internal attributes it purports to measure, we consider this validation as a necessary step before empirical validation takes place. Therefore, these measures are here theoretically validated following a framework proposed in the literature.",2003,0, 857,Dealing with missing software project data,"Whilst there is a general consensus that quantitative approaches are an important part of successful software project management, there has been relatively little research into many of the obstacles to data collection and analysis in the real world. One feature that characterises many of the data sets we deal with is missing or highly questionable values. Naturally this problem is not unique to software engineering, so we explore the application of two existing data imputation techniques that have been used to good effect elsewhere. In order to assess the potential value of imputation we use two industrial data sets. Both are quite problematic from an effort modelling perspective because they contain few cases, have a significant number of missing values and the projects are quite heterogeneous. We compare the quality of fit of effort models derived by stepwise regression on the raw data and on data sets with values imputed by various techniques. In both data sets we find that k-nearest neighbour (k-NN) and sample mean imputation (SMI) significantly improve the model fit, with k-NN giving the best results.
These results are consistent with other recently published results; consequently, we conclude that imputation can assist empirical software engineering.",2003,0, 858,Analyzing the cost and benefit of pair programming,"We use a combination of metrics to understand, model, and evaluate the impact of pair programming on software development. Pair programming is a core technique in the hot process paradigm of extreme programming. At the expense of increased personnel cost, pair programming aims at increasing both the team productivity and the code quality as compared to conventional development. In order to evaluate pair programming, we use metrics from three different categories: process metrics such as the pair speed advantage of pair programming; product metrics such as the module breakdown structure of the software; and project context metrics such as the market pressure. The pair speed advantage is a metric tailored to pair programming and measures how much faster a pair of programmers completes programming tasks as compared to a single developer. We integrate the various metrics using an economic model for the business value of a development project. The model is based on the standard concept of net present value. If the market pressure is strong, the faster time to market of pair programming can balance the increased personnel cost. For a realistic sample project, we analyze the complex interplay between the various metrics integrated in our model. We study for which combinations of the market pressure and pair speed advantage the value of the pair programming project exceeds the value of the corresponding conventional project. When time to market is the decisive factor and programmer pairs are much faster than single developers, pair programming can increase the value of a project, but there also are realistic scenarios where the opposite is true. Such results clearly show that we must consider metrics from different categories in combination to assess the cost-benefit relation of pair programming.",2003,0, 859,An analogy-based approach for predicting design stability of Java classes,"Predicting stability in object-oriented (OO) software, i.e., the ease with which a software item evolves while preserving its design, is a key feature for software maintenance. In fact, a well designed OO software must be able to evolve without violating the compatibility among versions, provided that no major requirement reshuffling occurs. Stability, like most quality factors, is a complex phenomenon and its prediction is a real challenge. We present an approach, which relies on the case-based reasoning (CBR) paradigm and thus overcomes the handicap of insufficient theoretical knowledge on stability. The approach explores structural similarities between classes, expressed as software metrics, to guess their chances of becoming unstable. In addition, our stability model binds its value to the impact of changing requirements, i.e., the degree of class responsibilities increase between versions, quantified as the stress factor. As a result, the prediction mechanism favours the stability values for classes having strong structural analogies with a given test class as well as a similar stress impact. Our predictive model is applied on a testbed made up of the classes from four major versions of the Java API.",2003,0, 860,Using service utilization metrics to assess the structure of product line architectures,"Metrics have long been used to measure and evaluate software products and processes.
Many metrics have been developed that have led to different degrees of success. Software architecture is a discipline in which few metrics have been applied, a surprising fact given the critical role of software architecture in software development. Software product line architectures represent one area of software architecture in which we believe metrics can be of especially great use. The critical importance of the structure defined by a product line architecture requires that its properties be meaningfully assessed and that informed architectural decisions be made to guide its evolution. To begin addressing this issue, we have developed a class of closely related metrics that specifically target product line architectures. The metrics are based on the concept of service utilization and explicitly take into account the context in which individual architectural elements are placed. We define the metrics, illustrate their use, and evaluate their strengths and weaknesses through their application on three example product line architectures.",2003,0, 861,Assessing the maintainability benefits of design restructuring using dependency analysis,"Software developers and project managers often have to assess the quality of software design. A commonly adopted hypothesis is that a good design should cost less to maintain than a poor design. We propose a model for quantifying the quality of a design from a maintainability perspective. Based on this model, we propose a novel strategy for predicting the ""return on investment"" (ROI) for possible design restructurings using procedure level dependency analysis. We demonstrate this approach with two exploratory Java case studies. Our results show that common low level source code transformations change the system dependency structure in a beneficial way, allowing recovery of the initial refactoring investment over a number of maintenance activities.",2003,0, 862,Developing fault predictors for evolving software systems,"Over the past several years, we have been developing methods of predicting the fault content of software systems based on measured characteristics of their structural evolution. In previous work, we have shown there is a significant linear relationship between code churn, a synthesized metric, and the rate at which faults are inserted into the system in terms of number of faults per unit change in code churn. We have begun a new investigation of this relationship with a flight software technology development effort at the Jet Propulsion Laboratory (JPL) and have progressed in resolving the limitations of the earlier work in two distinct steps. First, we have developed a standard for the enumeration of faults. Second, we have developed a practical framework for automating the measurement of these faults. We analyze the measurements of structural evolution and fault counts obtained from the JPL flight software technology development effort. Our results indicate that the measures of structural attributes of the evolving software system are suitable for forming predictors of the number of faults inserted into software modules during their development. The new fault standard also ensures that the model so developed has greater predictive validity.",2003,0, 863,One approach to the metric baselining imperative for requirements processes,"The success of development projects in customer-oriented industries depends on reliable processes for the definition and maintenance of requirements.
With the sustained, severe reduction in the rush to new technology, this widely accepted fact has become increasingly evident in the networking industry. Customers now focus on high product quality as they strive for economy of operation. Enhancing product quality necessitates enhancing processes, which in turn can necessitate applying more accurate (and precise) measures. Finding process deviations and identifying patterns of product deficiencies are critical steps to achieving high quality products. We describe the application of quantitative process control (QPC) during early development phases to establish and maintain baseline distributions characterizing RMCM&T processes, and to monitor their evolutions. Metric baselining as described includes key metric identification, and data normalization, filtering, and categorization. Empirical baselining provides the statistical sensitivity to detect requirements process problems, and to support targeted identification of particular requirements-related patterns in defects.",2003,0, 864,A consumer report on BDD packages,"BDD packages have matured to a state where they are often considered a commodity. Does this mean that all (publicly and commercially) available packages are equally good? Does this preclude any new developments? In this paper, we present a consumer report on 13 BDD packages and thereby try to answer these questions. We argue that there is a substantial spectrum in quality as measured by various metrics and we found that even the better packages do not always deploy the latest technology. We show how various design decisions underlying the studied packages exhibit themselves at the programming interface level, and we claim that this allows us to predict performance to a certain extent.",2003,0, 865,Accurate dependability analysis of CAN-based networked systems,"Computer-based systems where several nodes exchange information via suitable network interconnections are today exploited in many safety-critical applications, like those belonging to the automotive field. Accurate dependability analysis of such a kind of systems is thus a major concern for designers. In this paper, we present an environment we developed in order to assess the effects of faults in CAN-based networks. We developed an IP core implementing the CAN protocol controller, and we exploited it to set-up a network composed of several nodes. Thanks to the approach we adopted, we were able to assess via simulation-based fault injection the effects of faults both in the bus used to carry information and inside each CAN controller as well. In this paper, we report a detailed description of the environment we set-up and we present some preliminary results we gathered to assess the soundness of the proposed approach.",2003,0, 866,A formal experiment comparing extreme programming with traditional software construction,This paper describes an experiment carried out during the Spring/2002 academic semester with computer science students at the University of Sheffield. The aim of the experiment was to assess extreme programming and compare it with a traditional approach. With this purpose the students constructed software for real clients. We observed 20 teams working for 4 clients. Ten teams worked with extreme programming and ten with the traditional approach. In terms of quality and size teams working with extreme programming produced similar final products to traditional teams. 
The major implication for the current practice of traditional software engineering is that in spite of the absence of design and the presence of testing before coding the product obtained still has similar quality and size. The implication for extreme programming is the possibility of growth and maturation given the fact that it provided results that were as good as those from the traditional approach.,2003,0, 867,Towards automatic transcription of Syriac handwriting,"We describe a method implemented for the recognition of Syriac handwriting from historical manuscripts. The Syriac language has been a neglected area for handwriting recognition research, yet is interesting because the preponderance of scribe-written manuscripts offers a challenging yet tractable medium for OCR research between the extremes of typewritten text and free handwriting. Like Arabic, Syriac is written in a cursive form from right-to-left, and letter shape depends on the position within the word. The method described does not need to find character strokes or contours. Both whole words and character shapes were used in recognition experiments. After segmentation using a novel probabilistic method, features of these shapes are found that tolerate variation in formation and image quality. Each shape is recognised individually using a discriminative support vector machine with 10-fold cross-validation. We describe experiments using a variety of segmentation methods and combinations of features on characters and words. Images from scribe-written historical manuscripts are used, and the recognition results are compared with those for images taken from clearer 19th century typeset documents. Recognition rates vary from 61-100%, depending on the algorithms used and the size and source of the data set.",2003,0, 868,Statistical analysis on a case study of load effect on PSD technique for induction motor broken rotor bar fault detection,Broken rotor bars in an induction motor create asymmetries and result in abnormal amplitude of the sidebands around the fundamental supply frequency and its harmonics. Monitoring the power spectral density (PSD) amplitudes of the motor currents at these frequencies can be used to detect the existence of broken rotor bar faults. This paper presents a study on an actual three-phase induction motor using the PSD analysis as a broken rotor bar fault detection technique. The distributions of PSD amplitudes of experimental healthy and faulty motor data sets at these specific frequencies are analyzed statistically under different load conditions. Results indicate that statistically significant conclusions on broken rotor bar detection can vary significantly under different load conditions and under different inspected frequencies. Detection performance in terms of the variation of PSD amplitudes is also investigated as a case study.,2003,0, 869,Policy-guided software evolution,"Ensuring that software systems evolve in a desired manner has thus far been an elusive goal. In a continuing effort towards this objective, in this paper we propose a new approach that monitors an evolving software system, or its evolution process, against evolutionary policies so that any feedback obtained can be used to improve the system or its process. Two key concepts that make this possible are: (1) a mechanism to detect policy violations; and (2) a contextual framework to support activities of evolving a software system beyond the next release. 
Together, they could provide a wide and deep scope for managing software evolution. The benefit of our approach is that it would help in: sustaining the quality of a software system as it evolves; reducing evolutionary costs; and improving evolutionary processes.",2003,0, 870,Application of neural networks for software quality prediction using object-oriented metrics,"The paper presents the application of neural networks in software quality estimation using object-oriented metrics. Quality estimation includes estimating reliability as well as maintainability of software. Reliability is typically measured as the number of defects. Maintenance effort can be measured as the number of lines changed per class. In this paper, two kinds of investigation are performed: predicting the number of defects in a class; and predicting the number of lines changed per class. Two neural network models are used: the Ward neural network and the General Regression neural network (GRNN). Object-oriented design metrics concerning inheritance related measures, complexity measures, cohesion measures, coupling measures and memory allocation measures are used as the independent variables. The GRNN network model is found to predict more accurately than the Ward network model.",2003,1, 871,QuaTrace: a tool environment for (semi-) automatic impact analysis based on traces,"Cost estimation of changes to software systems is often inaccurate and implementation of changes is time consuming, cost intensive, and error prone. One reason for these problems is that relationships between documentation entities (e.g., between different requirements) are not documented at all or only incompletely. In this paper, we describe a constructive approach to support later changes to software systems. Our approach consists of a traceability technique and a supporting tool environment. The tracing approach describes which traces should be established in which way. The proposed tool environment supports the application of the guidelines in a concrete development context. The tool environment integrates two existing tools: a requirements management tool (i.e., RequisitePro) and a CASE tool (i.e., Rhapsody). Our approach allows traces to be established, analyzed, and maintained effectively and efficiently.",2003,0, 872,A framework for understanding conceptual changes in evolving source code,"As systems evolve, they become harder to understand because the implementation of concepts (e.g. business rules) becomes less coherent. To preserve source code comprehensibility, we need to be able to predict how this property will change. This would allow the construction of a tool to suggest what information should be added or clarified (e.g. in comments) to maintain the code's comprehensibility. We propose a framework to characterize types of concept change during evolution. It is derived from an empirical investigation of concept changes in evolving commercial COBOL II files. The framework describes transformations in the geometry and interpretation of regions of source code. We conclude by relating our observations to the types of maintenance performed and suggest how this work could be developed to provide methods for preserving code quality based on comprehensibility.",2003,0, 873,Automatic device configuration and data validation through mobile communication,"The introduction of personal computing and wireless communication technology provides an option for on site device software updating and data retrieving.
This is especially true for any devices sitting in a remote site where a computing network is not accessible. In many advanced computing systems, frequent software updating and configuration profiles refreshing are required. These are clumsy and error-prone procedures when users are not familiar with the operating systems. Suppose all the necessary files and programs are predefined in mobile computing devices such as notebooks, PDAs, or even mobile phones. All necessary files and software can be transferred to the corresponding computing devices and PCs at remote sites through wireless communication links such as Bluetooth, infrared, or general packet radio service (GPRS). This idea helps solve the initial installation cost of a communication network to a remote site.",2003,0, 874,A guaranteed quality of service wireless access scheme for CDMA networks,"Current wireless multimedia applications may require different quality-of-service (QoS) measures such as throughput, packet loss rate, delay, and delay jitter. In this paper, we propose an access scheme for CDMA networks that can provide absolute QoS guarantees for different service classes. The access scheme uses several M/D/1 queues, each representing a different service class, and allocates a transmission rate to each queue so as to satisfy the different QoS requirements. Operation in error-prone channels is enabled by a mechanism that compensates sessions, which experience poor channels. Analysis and simulation results are used to illustrate the viability of the access scheme.",2003,0, 875,Light-weight theorem proving for debugging and verifying units of code,"Software bugs are very difficult to detect even in small units of code. Several techniques to debug or prove correct such units are based on the generation of a set of formulae whose unsatisfiability reveals the presence of an error. These techniques assume the availability of a theorem prover capable of automatically discharging the resulting proof obligations. Building such a tool is a difficult, long, and error-prone activity. In this paper, we describe techniques to build provers which are highly automatic and flexible by combining state-of-the-art superposition theorem provers and BDDs. We report experimental results on formulae extracted from the debugging of C functions manipulating pointers showing that an implementation of our techniques can discharge proof obligations which cannot be handled by Simplify (the theorem prover used in the ESC/Java tool) and perform much better on others.",2003,0, 876,Performance analysis of software rejuvenation,"Cluster-based systems, a combination of interconnected individual computers, have become a popular solution to build the scalable and highly available Web servers. In order to reduce system outages due to the aging phenomenon, software rejuvenation, a proactive fault-tolerance strategy, has been introduced into cluster systems. Compared with clusters of a flat architecture, in which all the nodes share the same functions, we model and analyze the dispatcher-worker based cluster systems, which employ prediction-based rejuvenation both on the dispatcher and the worker pool. To evaluate the effects of rejuvenation, stochastic reward net models are constructed and solved by SPNP (stochastic Petri net package).
Numerical results show that prediction-based software rejuvenation can significantly increase system availability and reduce the expected job loss probability.",2003,0, 877,Using redundancies to find errors,"Programmers generally attempt to perform useful work. If they performed an action, it was because they believed it served some purpose. Redundant operations violate this belief. However, in the past, redundant operations have been typically regarded as minor cosmetic problems rather than serious errors. This paper demonstrates that, in fact, many redundancies are as serious as traditional hard errors (such as race conditions or pointer dereferences). We experimentally test this idea by writing and applying five redundancy checkers to a number of large open source projects, finding many errors. We then show that, even when redundancies are harmless, they strongly correlate with the presence of traditional hard errors. Finally, we show how flagging redundant operations gives a way to detect mistakes and omissions in specifications. For example, a locking specification that binds shared variables to their protecting locks can use redundancies to detect missing bindings by flagging critical sections that include no shared state.",2003,0, 878,Architectural-level risk analysis using UML,"Risk assessment is an essential part in managing software development. Performing risk assessment during the early development phases enhances resource allocation decisions. In order to improve the software development process and the quality of software products, we need to be able to build risk analysis models based on data that can be collected early in the development process. These models will help identify the high-risk components and connectors of the product architecture, so that remedial actions may be taken in order to control and optimize the development process and improve the quality of the product. In this paper, we present a risk assessment methodology which can be used in the early phases of the software life cycle. We use the Unified Modeling Language (UML) and commercial modeling environment Rational Rose Real Time (RoseRT) to obtain UML model statistics. First, for each component and connector in software architecture, a dynamic heuristic risk factor is obtained and severity is assessed based on hazard analysis. Then, a Markov model is constructed to obtain scenarios risk factors. The risk factors of use cases and the overall system risk factor are estimated using the scenarios risk factors. Within our methodology, we also identify critical components and connectors that would require careful analysis, design, implementation, and more testing effort. The risk assessment methodology is applied on a pacemaker case study.",2003,0, 879,Analogy based prediction of work item flow in software projects: a case study,"A software development project coordinates work by using work items that represent customer, tester and developer found defects, enhancements, and new features. We set out to facilitate software project planning by modeling the flow of such work items and using information on historic projects to predict the work flow of an ongoing project. The history of the work items is extracted from problem tracking or configuration management databases. The Web-based prediction tool allows project managers to select relevant past projects and adjust the prediction based on staffing, type, and schedule of the ongoing project. 
We present the workflow model, and briefly describe project prediction of a large software project for customer relationship management (CRM).",2003,0, 880,An experimental evaluation of inspection and testing for detection of design faults,"The two most common strategies for verification and validation, inspection and testing, are in a controlled experiment evaluated in terms of their fault detection capabilities. In previous work, these two techniques have been compared as applied to code. In order to compare the efficiency and effectiveness of these techniques on a higher abstraction level than code, this experiment investigates inspection of design documents and testing of the corresponding program, to detect faults originating from the design document. Usage-based reading (UBR) and usage-based testing (UBT) were chosen for inspections and testing, respectively. These techniques provide similar aid to the reviewers as to the testers. The purpose of both fault detection techniques is to focus the inspection and testing from a user's viewpoint. The experiment was conducted with 51 Master's students in a two-factor blocked design; each student applied each technique once, each application on different versions of the same program. The two versions contained different sets of faults, including 13 and 14 faults, respectively. The general results from this study show that when the two groups of subjects are combined, the efficiency and effectiveness are significantly higher for usage-based reading and that testing tends to require more learning. Rework is not taken into account; thus, the experiment indicates strong support for design inspection over testing.",2003,0, 881,The application of capture-recapture log-linear models to software inspections data,"Re-inspection has been deployed in industry to improve the quality of software inspections. The number of remaining defects after inspection is an important factor affecting whether to re-inspect the document or not. Models based on capture-recapture (CR) sampling techniques have been proposed to estimate the number of defects remaining in the document after inspection. Several publications have studied the robustness of some of these models using software engineering data. Unfortunately, most of the existing studies did not examine the log linear models with respect to software inspection data. In order to explore the performance of the log linear models, we evaluated their performance for three-person inspection teams. Furthermore, we evaluated the models using an inspection data set that was previously used to assess different CR models. Generally speaking, the study provided very promising results. According to our results, the log linear models proved to be more robust than all CR based models previously assessed for three-person inspections.",2003,0, 882,Quantitative studies in software release planning under risk and resource constraints,"Delivering software in an incremental fashion implicitly reduces many of the risks associated with delivering large software projects. However, adopting a process where requirements are delivered in releases means decisions have to be made on which requirements should be delivered in which release. This paper describes a method called EVOLVE+, based on a genetic algorithm and aimed at the evolutionary planning of incremental software development. The method is initially evaluated using a sample project. The evaluation involves an investigation of the tradeoff relationship between risk and the overall benefit.
The link to empirical research is two-fold: firstly, our model is based on interaction with industry and randomly generated data for effort and risk of requirements. The results achieved this way are the first step for a more comprehensive evaluation using real-world data. Secondly, we try to approach uncertainty of data by additional computational effort providing more insight into the problem solutions: (i) effort estimates are considered to be stochastic variables following a given probability function; (ii) instead of offering just one solution, the L-best (L > 1) solutions are determined. This provides support in finding the most appropriate solution, reflecting implicit preferences and constraints of the actual decision-maker. Stability intervals are given to indicate the validity of solutions and to allow the problem parameters to be changed without adversely affecting the optimality of the solution.",2003,0, 883,A reconfigurable Byzantine quorum approach for the Agile Store,"Quorum-based protocols can be used to manage data when it is replicated at multiple server nodes to improve availability and performance. If some server nodes can be compromised by a malicious adversary, Byzantine quorums must be used to ensure correct access to replicated data. This paper introduces reconfigurable Byzantine quorums, which allow various quorum protocol parameters to be adapted based on the behavior of compromised nodes and the performance needs of the system. We present a protocol that generalizes dynamic Byzantine quorums by allowing the system size to change as faulty servers are removed from the system, in addition to adapting the fault threshold. A new architecture and algorithm that provide the capability to detect and remove faulty servers are also described. Finally, simulation results are presented that demonstrate the benefits offered by our approach.",2003,0, 884,An experimental evaluation of correlated network partitions in the Coda distributed file system,"Experimental evaluation is an important way to assess distributed systems, and fault injection is the dominant technique in this area for the evaluation of a system's dependability. For distributed systems, network failure is an important fault model. Physical network failures often have far-reaching effects, giving rise to multiple correlated failures as seen by higher-level protocols. This paper presents an experimental evaluation, using the Loki fault injector, which provides insight into the impact that correlated network partitions have on the Coda distributed file system. In this evaluation, Loki created a network partition between two Coda file servers, during which updates were made at each server to the same replicated data volume. Upon repair of the partition, a client requested directory resolution to converge the diverging replicas. At various stages of the resolution, Loki invoked a second correlated network partition, thus allowing us to evaluate its impact on the system's correctness, performance, and availability.",2003,0, 885,Assessing the dependability of OGSA middleware by fault injection,"This paper presents our research on devising a dependability assessment method for the upcoming OGSA 3.0 middleware using network level fault injection. We compare existing DCE middleware dependability testing research with the requirements of testing OGSA middleware and derive a new method and fault model. 
From this we have implemented an extendable fault injector framework and undertaken some proof of concept experiments with a simulated OGSA middleware system based around Apache SOAP and Apache Tomcat. We also present results from our initial experiments, which uncovered a discrepancy with our simulated OGSA system. We finally detail future research, including plans to adapt this fault injector framework from the stateless environment of a standard Web service to the stateful environment of an OGSA service.",2003,0, 886,Fault tolerance technology for autonomous decentralized database systems,"The Autonomous Decentralized Database System (ADDS) has been proposed in the background of e-business in respect to the dynamic and heterogeneous requirements of the users. With the rapid development of information technology, different companies in the field of e-business are supposed to cooperate in order to cope with the continuous changing demands of services in a dynamic market. In a diversified environment of service provision and service access, the ADDS provides flexibility to integrate heterogeneous and autonomous systems while assuring timeliness and high availability. A loosely-consistency management technology confers autonomy to each site for updating while maintaining the consistency of the whole system. Moreover, a background coordination technology, by utilizing a mobile agent, has been devised to permit the sites to coordinate and cooperate with each other while conferring the online property. The use of a mobile agent, however, is critical and requires reliability with regard to mobile agent failures that may lead to bad response times and hence the availability of the system may be lost. A fault tolerance technology is proposed in order that the system autonomously detect and recover from the fault of the mobile agent due to a failure in a transmission link, site or bug in the software. The effectiveness of the proposition is shown by simulation.",2003,0, 887,Randomized asynchronous consensus with imperfect communications,"We introduce a novel hybrid failure model, which facilitates an accurate and detailed analysis of round-based synchronous, partially synchronous and asynchronous distributed algorithms under both process and link failures. Granting every process in the system up to fl send and receive link failures (with fla arbitrary faulty ones among those) in every round, without being considered faulty, we show that the well-known randomized Byzantine agreement algorithm of (Srikanth & Toueg 1987) needs just n ≥ 4fl + 2fla + 3fa + 1 processes for coping with fa Byzantine faulty processes. The probability of disagreement after R iterations is only 2^-R, which is the same as in the FLP model and thus much smaller than the lower bound Ω(1/R) known for synchronous systems with lossy links. Moreover, we show that 2-stubborn links are sufficient for this algorithm. Hence, contrasting widespread belief, a perfect communications subsystem is not required for efficiently solving randomized Byzantine agreement.",2003,0, 888,The ModelCamera: a hand-held device for interactive modeling,"An important goal of automated modeling is to provide computer graphics applications with high quality models of complex real-world scenes. Prior systems have one or more of the following disadvantages: slow modeling pipeline, applicability restricted to small scenes, no direct color acquisition, and high cost.
We describe a hand-held scene modeling device that operates at five frames per second and that costs $2,000. The device consists of a digital video camera with 16 laser pointers attached to it. As the operator scans the scene, the pointers cast blobs that are detected and triangulated to provide sparse, evenly spaced depth samples. The frames are registered and merged into an evolving model, which is rendered continually to provide immediate operator feedback.",2003,0, 889,An empirical study on groupware support for software inspection meetings,"Software inspection is an effective way to assess product quality and to reduce the number of defects. In a software inspection, the inspection meeting is a key activity to agree on collated defects, to eliminate false positives, and to disseminate knowledge among the team members. However, inspection meetings often require high effort and may lose defects found in earlier inspection steps due to ineffective meeting techniques. Only a few tools are available for this task. We have thus been developing a set of groupware tools to lower the effort of inspection meetings and to increase their efficiency. We conducted an experiment in an academic environment with 37 subjects to empirically investigate the effect of groupware tool support for inspection meetings. The main findings of the experiment are that tool support considerably lowered the meeting effort, supported inspectors in identifying false positives, and reduced the number of true defects lost.",2003,0, 890,Tool-assisted unit test selection based on operational violations,"Unit testing, a common step in software development, presents a challenge. When produced manually, unit test suites are often insufficient to identify defects. The main alternative is to use one of a variety of automatic unit test generation tools: these are able to produce and execute a large number of test inputs that extensively exercise the unit under test. However, without a priori specifications, developers need to manually verify the outputs of these test executions, which is generally impractical. To reduce this cost, unit test selection techniques may be used to help select a subset of automatically generated test inputs. Then developers can verify their outputs, equip them with test oracles, and put them into the existing test suite. In this paper, we present the operational violation approach for unit test selection, a black-box approach without requiring a priori specifications. The approach dynamically generates operational abstractions from executions of the existing unit test suite. Any automatically generated tests violating the operational abstractions are identified as candidates for selection. In addition, these operational abstractions can guide test generation tools to produce better tests. To experiment with this dynamic approach, we integrated the use of Daikon (a dynamic invariant detection tool) and Jtest (a commercial Java unit testing tool). An experiment is conducted to assess this approach.",2003,0, 891,What test oracle should I use for effective GUI testing?,"Test designers widely believe that the overall effectiveness and cost of software testing depends largely on the type and number of test cases executed on the software. In this paper we show that the test oracle used during testing also contributes significantly to test effectiveness and cost. A test oracle is a mechanism that determines whether software executed correctly for a test case.
We define a test oracle to contain two essential parts: oracle information that represents expected output; and an oracle procedure that compares the oracle information with the actual output. By varying the level of detail of oracle information and changing the oracle procedure, a test designer can create different types of test oracles. We design 11 types of test oracles and empirically compare them on four software systems. We seed faults in software to create 100 faulty versions, execute 600 test cases on each version, for all 11 types of oracles. In all, we report results of 660,000 test runs on software. We show (1) the time and space requirements of the oracles, (2) that faults are detected early in the testing process when using detailed oracle information and complex oracle procedures, although at a higher cost per test case, and (3) that employing expensive oracles results in detecting a large number of faults using relatively smaller number of test cases.",2003,0, 892,Predicting fault prone modules by the Dempster-Shafer belief networks,"This paper describes a novel methodology for predicting fault prone modules. The methodology is based on Dempster-Shafer (D-S) belief networks. Our approach consists of three steps: first, building the D-S network by the induction algorithm; second, selecting the predictors (attributes) by the logistic procedure; third, feeding the predictors describing the modules of the current project into the inducted D-S network and identifying fault prone modules. We applied this methodology to a NASA dataset. The prediction accuracy of our methodology is higher than that achieved by logistic regression or discriminant analysis on the same dataset.",2003,1, 893,Augmented reality for programming industrial robots,"Existing practice for programming robots involves teaching it a sequence of waypoints in addition to process-related events, which defines the complete robot path. The programming process is time consuming, error prone and, in most cases, requires several iterations before the program quality is acceptable. By introducing augmented reality technologies in this programming process, the operator gets instant real-time, visual feedback of a simulated process in relation to the real object, resulting in reduced programming time and increased quality of the resulting robot program. This paper presents a demonstrator of a standalone augmented reality pilot system allowing an operator to program robot waypoints and process specific events related to paint applications. During the programming sequence, the system presents visual feedback of the paint result for the operator, allowing him to inspect the process result before the robot has performed the actual task.",2003,0, 894,Cost-effective graceful degradation in speculative processor subsystems: the branch prediction case,"We analyze the effect of errors in branch predictors, a representative example of speculative processor subsystems, to motivate the necessity for fault tolerance in such subsystems. We also describe the design of fault tolerant branch predictors using general fault tolerance techniques. We then propose a fault-tolerant implementation that utilizes the finite state machine (FSM) structure of the pattern history table (PHT) and the set of potential faulty states to predict the branch direction, yet without strictly identifying the correct state. 
The proposed solution provides virtually the same prediction accuracy as general fault tolerant techniques, while significantly reducing the incurred hardware overhead.",2003,0, 895,Static test compaction for multiple full-scan circuits,"Current design methodologies and methodologies for reducing test data volume and test application time for full-scan circuits allow testing of multiple circuits (or subcircuits of the same circuit) simultaneously using the same test data. We describe a static compaction procedure that accepts test sets generated independently for multiple full-scan circuits, and produces a compact test set that detects all the faults detected by the individual test sets. The resulting test set can be used for testing the circuits simultaneously using the same test data. This procedure provides an alternative to test generation procedures that perform test generation for complex circuits made up of multiple circuits. Such procedures also reduce the amount of test data and test application time required for testing all the circuits by testing them simultaneously using the same test data. However, they require consideration of a more complex circuit.",2003,0, 896,"A two layered case based reasoning approach to text summarization, based on summarization pattern","What actually is done in the case of text summarization, in case based reasoning terminology, is that the situation is defined as the ensemble of some consecutive sentences, and the solution is the set of the sentences selected as the outcome of the summarization process. In order to make a quality summary considering the context, a semantic understanding seems to be important. In this respect we propose a two layered CBR approach. Regarding this, we proposed an approach to text summarization based on a two layered case based reasoning framework. Regarding this, the primary CBR cycle tries to make a summary of the source text, and the secondary CBR cycle tries to detect the context, and changes the bias values (fixed values) related to the primary CBR modules.",2003,0, 897,An enhanced passive testing tool for network protocols,We study passive testing on protocols to detect faults in network devices. An enhanced passive testing tool is developed using integer linear programming in determining the ranges of the variables. On-line pruning reveals the current configuration of the system and the transition covered. Network system monitoring is conducted in a formal and fine-granularity way.,2003,0, 898,Accounting for false indication in a Bayesian diagnostics framework,"Accounting for the effects of test uncertainty is a significant problem in test and diagnosis. Specifically, assessment of the level of uncertainty and subsequent utilization of that assessment to improve diagnostics must be addressed. One approach, based on measurement science, is to treat the probability of a false indication (false alarm or missed detection) as the measure of uncertainty. Given the ability to determine such probabilities, a Bayesian approach to diagnosis suggests itself. In the paper, we present a mathematical derivation for false indication and apply it to the specification of Bayesian diagnosis.
We draw from measurement science, reliability theory, and the theory of Bayesian networks to provide an end-to-end probabilistic treatment of the fault diagnosis problem.",2003,0, 899,Effect of tilt angle variations in a halo implant on Vth values for 0.14-μm CMOS devices,"Sensitivity of critical transistor parameters to halo implant tilt angle for 0.14-μm CMOS devices was investigated. Vth sensitivity was found to be 3% per tilt degree. A tilt angle mismatch between two serial ion implanters used in manufacturing was detected by tracking Vth performance for 0.14-μm production lots. Even though individual implanters may be within tool specifications for tilt angle control (0.5° for our specific tool type), the relative mismatch could be as large as 1°, and therefore, result in a Vth mismatch of over 3% from nominal. The Vth mismatch results are in qualitative agreement with simulation results using SUPREM and MEDICI software.",2003,0, 900,Online control design for QoS management,"In this paper we present an approach for QoS management that can be applied for a general class of real-time distributed computation systems. In this paper, the QoS adaptation problem is formulated based on a utility function that measures the relative performance of the system. A limited-horizon online supervisory controller is used for this purpose. The online controller explores a limited region of the state-space of the system at each time step and decides the best action accordingly. The feasibility and accuracy of the online algorithm can be assessed at design time.",2003,0, 901,Uncovering hidden contracts: the .NET example,"Software contracts take the form of routine preconditions, postconditions, and class invariants written into the program itself. The design by contract methodology uses such contracts for building each software element, an approach that is particularly appropriate for developing safety-critical software and reusable libraries. This methodology is a key design element of some existing libraries, especially the Eiffel Software development environment, which incorporates contract mechanisms in the programming language itself. Because the authors see the contract metaphor as inherent to quality software development, they undertook the work reported in the article as a sanity check to determine whether they see contracts everywhere simply because their development environment makes using them natural or whether contracts are intrinsically present, even when other designers don't express or even perceive them. They studied classes from the .NET collections library for implicit contracts and assessed improvements that might result from making them explicit.",2003,0, 902,Automatic detection and diagnosis of faults in generated code for procedure calls,"In this paper, we present a compiler testing technique that closes the gap between existing compiler implementations and correct compilers. Using formal specifications of procedure-calling conventions, we have built a target-sensitive test suite generator that builds test cases for a specific aspect of compiler code generators: the procedure-calling sequence generator. By exercising compilers with these specification-derived target-specific test suites, our automated testing tool has exposed bugs in every compiler tested on the MIPS and one compiler on the SPARC. These compilers include some that have been in heavy use for many years. Once a fault has been detected, the system can often suggest the nature of the problem.
The testing system is an invaluable tool for detecting, isolating, and correcting faults in today's compilers.",2003,0, 903,Tolerance of control-flow testing criteria,"The effectiveness of a testing criterion is its ability to detect failures in a software program. We consider not only the effectiveness of a testing criterion in itself but also the variance in effectiveness among different test sets satisfying the same testing criterion. We name this property the ""tolerance"" of a testing criterion and show that, for the practical use of a criterion, high tolerance is as important as high effectiveness. The results of an empirical evaluation of tolerance for different criteria, types of faults and decisions are presented. As well as quite simple and well-known control-flow criteria, we study more complicated criteria: full predicate coverage, modified condition/decision coverage and reinforced condition/decision coverage criteria.",2003,0, 904,Information quality assessment of a yellow-pages location-based service,"The main problems in assessing the information quality of a particular kind of location-based service are described. Verifying the conformance of query results to the real world is identified as the most expensive problem. An approach from database quality assessment is adapted, by combining it with conventional software reliability testing, so that the verification can be replaced with two significantly easier test obligations. The resulting procedure should achieve a good compromise between the confidence level of the results and the testing effort. The approach seems also applicable to a broader range of systems.",2003,0, 905,An efficient defect estimation method for software defect curves,Software defect curves describe the behavior of the estimate of the number of remaining software defects as software testing proceeds. They are of two possible patterns: single-trapezoidal-like curves or multiple-trapezoidal-like curves. In this paper we present some necessary and/or sufficient conditions for software defect curves of the Goel-Okumoto NHPP model. These conditions can be used to predict the effect of the detection and removal of a software defect on the variations of the estimates of the number of remaining defects. A field software reliability dataset is used to justify the trapezoidal shape of software defect curves and our theoretical analyses. The results presented in this paper may provide useful feedback information for assessing software testing progress and have potential in the emerging area of software cybernetics that explores the interplay between software and control.,2003,0, 906,Next generation application integration: challenges and new approaches,"Integrating multiple heterogeneous data sources into applications is a time-consuming, costly and error-prone engineering task. Relatively mature technologies exist that make integration tractable from an engineering perspective. These technologies, however, have many limitations, and hence present opportunities for breakthrough research. This paper briefly describes some of these limitations, and enumerates a subset of the general open research problems. It then describes the Data Concierge research project and prototype that is attempting to provide solutions to some of these problems.",2003,0, 907,A scheme for dynamic detection of concurrent execution of object-oriented software,"Program testing is the most widely adopted approach for assuring the quality and reliability of software systems.
Despite the popularity of object-oriented programs, their testing is much more challenging than that of conventional programs. We proposed previously a methodology known as TACCLE for testing object-oriented software. It has not, however, addressed the aspects of concurrency and non-determinism. In this paper, we propose a scheme for dynamically detecting and testing concurrency in object-oriented software by executing selected concurrent pairs of operations. The scheme is based on OBJSA nets and addresses concurrency and nondeterminism problems. An experimental case study is reported to show the effectiveness of the scheme in detecting deadlocks, race conditions and other coherence problems. The scheme supplements our previous static approach to detecting deadlock in Java multithreaded programs.",2003,0, 908,An embryonic approach to reliable digital instrumentation based on evolvable hardware,"Embryonics encompasses the capability of self-repair and self-replication in systems. This paper presents a technique based on reconfigurable hardware coupled with a novel backpropagation algorithm for reconfiguration, together referred to as evolvable hardware (EHW), for ensuring reliability in digital instrumentation. The backpropagation evolution is much faster than genetic learning techniques. It uses the dynamic restructuring capabilities of EHW to detect faults in digital systems and reconfigures the hardware to repair or adapt to the error in real-time. An example application is presented of a robust BCD to seven-segment decoder driving a digital display. The results obtained are quite interesting and promise quick and low-cost embryonic schemes for reliability in digital instrumentation.",2003,0, 909,The design of reliable devices for mission-critical applications,"Mission-critical applications require that any failure that may lead to erroneous behavior and computation is detected and signaled as soon as possible in order not to jeopardize the entire system. Totally self-checking (TSC) systems are designed to be able to autonomously detect faults when they occur during normal circuit operation. Based on the adopted TSC design strategy and the goal pursued during circuit realization (e.g., area minimization), the circuit, although TSC, may not promptly detect the fault depending on the actual number of input configurations that serve as test vectors for each fault in the network. If such a number is limited, although TSC it may be improbable that the fault is detected once it occurs, causing detection and aliasing problems. The paper presents a design methodology, based on a circuit re-design approach and an evaluation function, for improving a TSC circuit's promptness in detecting faults' occurrence, a property we will refer to as TSC quality.",2003,0, 910,Improvement of sensor accuracy in the case of a variable surface reflectance gradient for active laser range finders,"In active laser range finders, the computation of the (x, y, z) coordinates of each point of a scene can be performed using the detected centroid p~ of the image spot on the sensor. When the reflectance of the scene under analysis is uniform, the intensity profile of the image spot is a Gaussian and its centroid is correctly detected assuming an accurate peak position detector. However, when a change of reflectance occurs on the scene, the intensity profile of the image spot is no longer Gaussian. This change introduces a deviation p on the detected centroid p~, which will lead to erroneous (x, y, z) coordinates.
This paper presents two heuristic models to improve the sensor accuracy in the case of a variable surface reflectance gradient. Simulation results are presented to show the quality of the correction and the resulting accuracy.",2003,0, 911,Helmet-mounted display image quality evaluation system,"Helmet-mounted displays (HMDs) provide essential pilotage and fire control imagery information for pilots. To maintain system integrity and readiness, there is a need to develop an image quality evaluation system for HMDs. In earlier work, a framework was proposed for an HMD system called the integrated helmet and display sighting system (IHADSS), used with the U.S. Army's Apache helicopter. This paper describes prototype development and interface design and summarizes bench test findings using three IHADSS helmet display units (HDUs). The prototype consists of hardware (cameras, sensors, image capture/data acquisition cards, battery pack, HDU holder, moveable rack and handle, and computer) and software algorithms for image capture and analysis. Two cameras with different-size apertures are mounted in parallel on a rack facing an HDU holder. A handle allows users to position the HDU in front of the two cameras. The HMD test pattern is then captured. Sensors detect the position of the holder and whether the HDU is angled correctly in relation to the camera. Algorithms detect HDU features captured by the two cameras, including focus, orientation, displacement, field-of-view, and number of grayshades. Bench testing of three field-quality HDUs indicates that the image analysis algorithms are robust and able to detect the desired image features. Suggested future directions include development of a learning algorithm to automatically develop or revise feature specifications as the number of inspection samples increases.",2003,0, 912,Sequential Monte Carlo video text segmentation,This paper presents a probabilistic algorithm for segmenting and recognizing text embedded in video sequences. The algorithm approximates the posterior distribution of segmentation thresholds of video text by a set of weighted samples. After initialization the set of samples is recursively refined by random sampling under a temporal Bayesian framework. The proposed methodology allows us to estimate the optimal text segmentation parameters directly in function of the string recognition results instead of segmentation quality. Results on a database of 6944 images demonstrate the validity of the algorithm.,2003,0, 913,Co-histogram and its application in remote sensing image compression evaluation,"Peak signal-to-noise ratio (PSNR) has found its application as an evaluation metric for image coding, but in many instances it provides an inaccurate representation of the image quality. The new tool proposed in this paper is called co-histogram, which is a statistic graph generated by counting the corresponding pixel pairs of two images. For image coding evaluation, the two images are the original image and a compressed and recovered image. The graph is a two-dimensional joint probability distribution of the two images. A co-histogram shows how the pixels are distributed among combinations of two image pixel values. By means of co-histogram, we can have a visual interpretation of PSNR, and the symmetry of a co-histogram is also significant for objective evaluation of remote sensing image compression. 
Our experiments with two SAR images and a TM image using DCT-based JPEG and wavelet-based SPIHT coding methods demonstrate the importance of the co-histogram symmetry.",2003,0, 914,Random access using isolated regions,"Random access is a desirable feature in many video communication systems. Intra pictures are conventionally used as random access points, but correct picture content is recovered gradually within a range of pictures starting from a non-intra random access point. This paper proposes the use of the isolated regions technique for gradual decoder refresh and presents how the proposed method can be used in the upcoming ITU-T recommendation H.264, also known as MPEG-4 part 10 or advanced video coding. The presented simulations reveal that the proposed method outperforms intra-picture-based random access points in error-prone network conditions. It is also shown that the proposed method is more flexible and suits packet-based transmission better compared to progressively located intra-coded slices.",2003,0, 915,Configuration and management of a real-time smart transducer network,"Smart transducer technology supports the composability, configurability, and maintainability of sensor/actuator networks. The configuration and management of such networks, when carried out manually, is an expensive and error-prone task. Therefore, many existing fieldbus systems provide means of ""plug-and-play"" that assist the user in these tasks. In this paper we describe configuration and management aspects in the area of dependable real-time fieldbus systems. We propose a configuration and management framework for a low-cost real-time fieldbus network. The framework uses formal XML descriptions to describe node and network properties in order to enable a data representation that comes with low overhead on nodes and enables the easy integration with software tools. The framework builds on the infrastructure and interfaces defined in the OMG smart transducers interface standard. As a case study, we have implemented a TTP/A configuration tool operating on a basic framework using the described concepts and mechanisms.",2003,0, 916,Computer-aided coordination of power system protection,"In electrical power networks, faults can occur for various reasons. The task of the protection devices in electrical power systems (EPS) is to detect these faults and to eliminate the faulty parts of an EPS. In order to minimize the fault's consequences, different protection devices have to be coordinated. In this article, a computer-aided approach is described that should help project-planning engineers with this task and reduce the very time-consuming procedure of protection-strategy development and documentation preparation to a minimum. The main goal of the proposed concept is to make the tool as user-friendly as possible. The user communicates with a program system and device databases via a graphical user interface (GUI), which enables the visualization of a network. The clear graphical representation of the problem reduces the possibility of human errors. In this paper, the concept of the tool for overcurrent, overload and distance-protection coordination, its main features, and a typical example are presented.",2003,0, 917,Reliable heterogeneous applications,"This paper explores the notion of computational resiliency to provide reliability in heterogeneous distributed applications. This notion provides both software fault-tolerance and the ability to tolerate information-warfare attacks.
This technology seeks to strengthen a military mission, rather than to protect its network infrastructure using static defense measures such as network security, intrusion sensors, and firewalls. Even if a failure or attack is successful and never detected, it should be possible to continue information operations and achieve mission objectives. Computational resiliency involves the dynamic use of replicated software structures, guided by mission policy, to achieve reliable operation. However, it goes further to regenerate, automatically, replication in response to a failure or attack, allowing the level of system reliability to be restored and maintained. This paper examines a prototype concurrent programming technology to support computational resiliency in a heterogeneous distributed computing environment. The performance of the technology is explored through two example applications.",2003,0, 918,Quality of service provision assessment for campus network,"The paper presents a methodology for assessing the quality of service (QoS) provision for a campus network. The author utilizes the Staffordshire University's network communications infrastructure (SUNCI) as a testing platform and discusses a new approach and QoS provision, by adding a component of measurement to the existing model presented by J.L. Walker (see J. Services Marketing, vol.9, no.1, p.5-14, 1995). The QoS provision is assessed in light of users' perception compared with the network traffic measurements and online monitoring reports. The users' perception of the QoS provision of a telecommunications network infrastructure is critical to the successful business management operation of any organization. The computing environment in modern campus networks is complex, employing multiple heterogeneous hardware and software technologies. In support of highly interactive user applications, QoS provision is essential to the users' ever increasing level of expectations. The paper offers a cost effective approach to assessing the QoS provision within a campus network.",2003,0, 919,Yield analysis of compiler-based arrays of embedded SRAMs,This paper presents a detailed analysis of the yield of embedded static random access memories (eSRAM) which are generated using a compiler. Defect and fault analysis inclusive of industrial data are presented for these chips by taking into account the design constructs (referred to as kernels) and the physical properties of the layout. The new tool CAYA (Compiler-based Array Yield Analysis) is based on a characterization of the design process which accounts for fault types and the relation between functional and structural faults; a novel empirical model is proposed to facilitate the yield calculation. Industrial data is provided for the analysis of various configurations with different structures and redundancy. The effectiveness and accuracy as provided by CAYA are assessed with respect to industrial designs.,2003,0, 920,Dependability analysis of CAN networks: an emulation-based approach,"Today many safety-critical applications are based on distributed systems where several computing nodes exchange information via suitable network interconnections. An example of this class of applications is the automotive field, where developers are exploiting the CAN protocol for implementing the communication backbone. The capability of accurately evaluating the dependability properties of such a kind of systems is today a major concern. 
In this paper we present a new environment that can be fruitfully exploited to assess the effects of faults in CAN-based networks. The entire network is emulated via an ad-hoc hardware/software system that allows easily evaluating the effects of faults in all the network components, namely the network nodes, the protocol controllers and the transmission channel. In this paper, we report a detailed description of the environment we set up and we present some preliminary results we gathered to assess the soundness of the proposed approach.",2003,0, 921,An intelligent early warning system for software quality improvement and project management,"One of the main reasons behind unfruitful software development projects is that it is often too late to correct the problems by the time they are detected. It clearly indicates the need for early warning about the potential risks. In this paper, we discuss an intelligent software early warning system based on fuzzy logic using an integrated set of software metrics. It helps to assess risks associated with being behind schedule, over budget, and poor quality in software development and maintenance from multiple perspectives. It handles incomplete, inaccurate, and imprecise information, and resolves conflicts in an uncertain environment in its software risk assessment using fuzzy linguistic variables, fuzzy sets, and fuzzy inference rules. Process, product, and organizational metrics are collected or computed based on solid software models. The intelligent risk assessment process consists of the following steps: fuzzification of software metrics, rule firing, derivation and aggregation of resulting risk fuzzy sets, and defuzzification of linguistic risk variables.",2003,0, 922,Genetic programming-based decision trees for software quality classification,"The knowledge of the likely problematic areas of a software system is very useful for improving its overall quality. Based on such information, a more focused software testing and inspection plan can be devised. Decision trees are attractive for a software quality classification problem which predicts the quality of program modules in terms of risk-based classes. They provide a comprehensible classification model which can be directly interpreted by observing the tree-structure. A simultaneous optimization of the classification accuracy and the size of the decision tree is a difficult problem, and very few studies have addressed the issue. This paper presents an automated and simplified genetic programming (GP) based decision tree modeling technique for the software quality classification problem. Genetic programming is ideally suited for problems that require optimization of multiple criteria. The proposed technique is based on multi-objective optimization using strongly typed GP. In the context of an industrial high-assurance software system, two fitness functions are used for the optimization problem: one for minimizing the average weighted cost of misclassification, and one for controlling the size of the decision tree. The classification performances of the GP-based decision trees are compared with those based on standard GP, i.e., S-expression tree. It is shown that the GP-based decision tree technique yielded better classification models. As compared to other decision tree-based methods, such as C4.5, GP-based decision trees are more flexible and can allow optimization of performance objectives other than accuracy.
Moreover, it provides a practical solution for building models in the presence of conflicting objectives, which is commonly observed in software development practice.",2003,0, 923,Definition of a systematic method for the generation of software test programs allowing the functional verification of system on chip (SoC),We present a novel approach for hardware functional verification of system on chip (SoC). Our approach is based on the use of on-chip programmable processors like CPUs or DSPs to generate test programs for hardware parts of the design. Traditionally test programs are written at a low level using specific functions for hardware accesses. This method is time-consuming and error-prone as tests are hand-written. We introduce a method allowing the use of high level software test programs. The link between hardware and software is achieved by using a custom operating system. We focus on the benefits that are obtained by handling high level test programs.,2003,0, 924,An expression's single fault model and the testing methods,"This paper proposes a single fault model for the faults of the expressions, including operator faults (operator reference fault: an operator is replaced by another, extra or missing operator for single operand), incorrect variable or constant, incorrect parentheses. These types of faults often exist in the software, but some fault classes are hard to detect using traditional testing methods. A general testing method is proposed to detect these types of faults. Furthermore the fault simulation method of the faults is presented which can accelerate the generation of test cases and minimize the testing cost greatly. Our empirical results indicate that our methods require a smaller number of test cases than random testing, while retaining fault-detection capabilities that are as good as, or better than the traditional testing methods.",2003,0, 925,Assessing software implemented fault detection and fault tolerance mechanisms,The problems of hardware fault detection and correction using software techniques are analysed. They relate to our experience with a large class of applications. The effectiveness of these techniques is studied using a special fault injector with various statistical tools. A new error-handling scheme is proposed.,2003,0, 926,Briefing a new approach to improve the EMI immunity of DSP systems,"Hereafter, we present an approach aiming to improve the reliability of digital signal processing (DSP) systems operating in real noisy (electromagnetic interference - EMI) environments. The approach is based on the coupling of two techniques: the ""DSP-oriented signal integrity improvement"" technique aims to increase the signal-to-noise ratio (SNR) and is essentially a modification of the classic Recovery Blocks Scheme. The second technique, named ""SW-based fault handling"", aims to detect data- and control-flow faults in real time through modifications of the processor C-code. When compared to conventional approaches using Fast Fourier Transform (FFT) and Hamming Code, the primary benefit of such an approach is to improve system reliability by means of a considerably low complexity, reasonably low performance degradation and, when implemented in hardware, with reduced area overhead. Aiming to illustrate the proposed approach, we present a case study for a speech recognition system, which was partially implemented in a PC microcomputer and in a COTS microcontroller.
This system was tested under a home-tailored EMI environment according to the international standard IEC 61000-4-29. The obtained results indicate that the proposed approach can effectively improve the reliability of DSP systems operating in real noisy (EMI) environments.",2003,0, 927,Study on the cost/benefit/optimization of software safety test,"Although safety-critical systems place high demands on safety, the cost of software testing must nevertheless be taken into account. In the testing of railway computer interlocking software that we carried out, the safety test for a station's software lasted several months; therefore, in order to reduce the test time, it is practical to choose the functions to test from the function set through optimization. Software safety testing is realized by running test cases at the cost of labor and time. It is expected to detect dangerous function defects and reduce system loss to gain benefit. An optimization strategy is thus the best choice for selecting test cases.",2003,0, 928,Detection or isolation of defects? An experimental comparison of unit testing and code inspection,"Code inspections and white-box testing have both been used for unit testing. One is a static analysis technique, the other, a dynamic one, since it is based on executing test cases. Naturally, the question arises whether one is superior to the other, or, whether either technique is better suited to detect or isolate certain types of defects. We investigated this question with an experiment with a focus on detection of the defects (failures) and isolation of the underlying sources of the defects (faults). The results indicate that there exist significant differences for some of the effects of using code inspection versus testing. White-box testing is more effective, i.e. detects significantly more defects while inspection isolates the underlying source of a larger share of the defects detected. Testers spend significantly more time, hence the difference in efficiency is smaller, and is not statistically significant. The two techniques are also shown to detect and identify different defects, hence motivating the use of a combination of methods.",2003,0, 929,Optimal resource allocation for the quality control process,"A software development project employs some quality control (QC) process to detect and remove defects. The final quality of the delivered software depends on the effort spent on all the QC stages. Given a quality goal, different combinations of efforts for the different QC stages may lead to the same goal. In this paper, we address the problem of allocating resources to the different QC stages, such that the optimal quality is obtained. We propose a model for the cost of the QC process and then view the resource allocation among different QC stages as an optimization problem. We solve this optimization problem using the non-linear optimization technique of sequential quadratic programming. We also give examples to show how a sub-optimal resource allocation may either increase the resource requirement significantly or lower the quality of the final software.",2003,0, 930,Building a requirement fault taxonomy: experiences from a NASA verification and validation research project,"Fault-based analysis is an early lifecycle approach to improving software quality by preventing and/or detecting pre-specified classes of faults prior to implementation. It assists in the selection of verification and validation techniques that can be applied in order to reduce risk.
This paper presents our methodology for requirements-based fault analysis and its application to National Aeronautics and Space Administration (NASA) projects. The ideas presented are general enough to be applied immediately to the development of any software system. We built a NASA-specific requirement fault taxonomy and processes for tailoring the taxonomy to a class of software projects or to a specific project. We examined requirement faults for six systems, including the International Space Station (ISS), and enhanced the taxonomy and processes. The developed processes, preliminary tailored taxonomies for critical/catastrophic high-risk (CCHR) systems, preliminary fault occurrence data for the ISS project, and lessons learned are presented and discussed.",2003,0, 931,A new software testing approach based on domain analysis of specifications and programs,"Partition testing is a well-known software testing technique. This paper shows that partition testing strategies are relatively ineffective in detecting faults related to small shifts in input domain boundary. We present an innovative software testing approach based on input domain analysis of specifications and programs, and propose the principle and procedure of boundary test case selection in functional domain and operational domain. The differences of the two domains are examined by analyzing the set of their boundary test cases. To automatically determine the operational domain of a program, the ADSOD system is prototyped. The system supports not only the determination of input domain of integer and real data types, but also non-numeric data types such as characters and enumerated types. It consists of several modules for finding illegal values of input variables with respect to specific expressions. We apply the new testing approach to some example studies. A preliminary evaluation on fault detection effectiveness and code coverage illustrates that the approach is highly effective in detecting faults due to small shifts in the input domain boundary, and is more economical in test case generation than the partition testing strategies.",2003,0, 932,An empirical study on testing and fault tolerance for software reliability engineering,"Software testing and software fault tolerance are two major techniques for developing reliable software systems, yet limited empirical data are available in the literature to evaluate their effectiveness. We conducted a major experiment to engage 34 programming teams to independently develop multiple software versions for an industry-scale critical flight application, and collected faults detected in these program versions. To evaluate the effectiveness of software testing and software fault tolerance, mutants were created by injecting real faults that occurred in the development stage. The nature, manifestation, detection, and correlation of these faults were carefully investigated. The results show that coverage testing is generally an effective means of detecting software faults, but the effectiveness of testing coverage is not equivalent to that of mutation coverage, which is a more truthful indicator of testing quality. We also found that exact faults found among versions are very limited. This result supports software fault tolerance by design diversity as a creditable approach for software reliability engineering.
Finally, we applied a domain analysis approach to test case generation and concluded that it is a promising technique for software testing purposes.",2003,0, 933,Shared semantic domains for computational reliability engineering,"Modeling languages and the software tools which support them are essential to engineering. However, as these languages become more sophisticated, it becomes difficult to assure both the validity of their semantic specifications and the dependability of their program implementations. To ameliorate this problem we propose to develop shared semantic domains and corresponding implementations for families of related modeling languages. The idea is to amortize investments at the intermediate level across multiple language definitions and implementations. To assess the practicality of this approach for modeling languages, we applied it to two languages for reliability modeling and analysis. In earlier work, we developed the intermediate semantic domain of failure automata (FA), which we used to formalize the semantics of dynamic fault trees (DFTs). In this paper, we show that a variant of the original FA can serve as a common semantic domain for both DFTs and reliability block diagrams (RBDs). Our experiences suggest that the use of a common semantic domain and a shared analyzer for expressions at this level can ease the task of formalizing and implementing modeling languages, reducing development costs and improving their dependability.",2003,0, 934,Automating the analysis of voting systems,"Voting is a well-known technique to combine the decisions of peer experts. It is used in fault tolerant applications to mask errors from one or more experts using n-modular redundancy (NMR) and n-version programming. Voting strategies include majority, weighted voting, plurality, instant runoff voting, threshold voting, and the more general weighted k-out-of-n systems. Before selecting a voting schema for a particular application, we have to understand the various tradeoffs and parameters and how they impact the correctness, reliability, and confidence in the final decision made by a voting system. In this paper, we propose an enumerated simulation approach to automate the behavior analysis of voting schemas with application to majority and plurality voting. We conduct synthetic studies using a simulator that we develop to analyze results from each expert, apply a voting mechanism, and analyze the voting results. The simulator builds a decision tree and uses a depth-first traversal algorithm to obtain the system reliability among other factors. We define and study the following behaviors: 1) the probability of reaching a consensus, ""Pc""; 2) reliability of the voting system, ""R""; 3) certainty index, ""T""; and 4) the confidence index, ""C"". The parameters controlling the analysis are the number of participating experts, the number of possible output symbols that can be produced by an expert, the probability distribution of each expert's output, and the voting schema.
The paper presents an enumerated simulation approach for analyzing voting systems which can be used when the use of theoretical models is challenged by dependencies between experts or uncommon probability distributions of the expert's output.",2003,0, 935,A Bayesian belief network for assessing the likelihood of fault content,"To predict software quality, we must consider various factors because software development consists of various activities, which the software reliability growth model (SRGM) does not consider. In this paper, we propose a model to predict the final quality of a software product by using the Bayesian belief network (BBN) model. By using the BBN, we can construct a prediction model that focuses on the structure of the software development process explicitly representing complex relationships between metrics, and handling uncertain metrics, such as residual faults in the software products. In order to evaluate the constructed model, we perform an empirical experiment based on the metrics data collected from development projects in a certain company. As a result of the empirical evaluation, we confirm that the proposed model can predict the amount of residual faults that the SRGM cannot handle.",2003,0, 936,Fault correction profiles,"In general, software reliability models have focused on modeling and predicting the failure detection process and have not given equal priority to modeling the fault correction process. However, it is important to address the fault correction process in order to identify the need for process improvements. Process improvements, in turn, will contribute to achieving software reliability goals. We introduce the concept of a fault correction profile - a set of functions that predict fault correction events as a function of failure detection events. The fault correction profile identifies the need for process improvements and provides information for developing fault correction strategies. Related to the fault correction profile is the goal fault correction profile. This profile represents the fault correction goal against which the achieved fault correction profile can be compared. This comparison motivates the concept of fault correction process instability, and the attributes of instability. Applying these concepts to the NASA Goddard Space Flight Center fault correction process and its data, we demonstrate that the need for process improvement can be identified, and that improvements in process would contribute to meeting product reliability goals.",2003,0, 937,Augmenting simulated annealing to build interaction test suites,"Component based software development is prone to unexpected interaction faults. The goal is to test as many potential interactions as is feasible within time and budget constraints. Two combinatorial objects, the orthogonal array and the covering array, can be used to generate test suites that provide a guarantee for coverage of all t-sets of component interactions in the case when the testing of all interactions is not possible. Methods for construction of these types of test suites have focused on two main areas. The first is finding new algebraic constructions that produce smaller test suites. The second is refining computational search algorithms to find smaller test suites more quickly. In this paper we explore one method for constructing covering arrays of strength three that combines algebraic constructions with computational search.
This method leverages the computational efficiency and optimality of size obtained through algebraic constructions while benefiting from the generality of a heuristic search. We present a few examples of specific constructions and provide some new bounds for some strength three covering arrays.",2003,0, 938,New quality estimations in random testing,"By reformulating the issue of random testing into an equivalent problem we are able to introduce a new kind of quality estimation based on Monte Carlo integration and the central limit theorem. This method also provides a limited but working ""success theory"" in the case of no detected failures. In an empirical evaluation using hundreds of billions of simulated tests we furthermore find a very good match between the quality estimations presented in this article and the true failure frequencies. Both simple modulus defects as well as seeded defects in two extensively employed numerical routines were subject to investigation in the empirical work.",2003,0, 939,A data clustering algorithm for mining patterns from event logs,"Today, event logs contain vast amounts of data that can easily overwhelm a human. Therefore, mining patterns from event logs is an important system management task. The paper presents a novel clustering algorithm for log file data sets which helps one to detect frequent patterns from log files, to build log file profiles, and to identify anomalous log file lines.",2003,0, 940,On the Infiniband subnet discovery process,"InfiniBand is becoming an industry standard both for communication between processing nodes and I/O devices, and for interprocessor communication. Instead of using a shared bus, InfiniBand employs an arbitrary (possibly irregular) switched point-to-point network. The InfiniBand specification defines a basic management infrastructure that is responsible for subnet configuration, activation, and fault tolerance. After the detection of a topology change, management entities collect the current subnet topology. The topology discovery algorithm is one of the management issues that are outside the scope of the current specification. Preliminary implementations obtain the entire topological information each time a change is detected. In this work, we present and analyze an optimized implementation, based on exploring only the region that has been affected by the change.",2003,0, 941,Creating value through test,"Test is often seen as a necessary evil; it is a fact of life that ICs have manufacturing defects and those need to be filtered out by testing before the ICs are shipped to the customer. In this paper, we show that techniques and tools used in the testing field can also be (re-)used to create value for (1) designers, (2) manufacturers, and (3) customers alike. First, we show how the test infrastructure can be used to detect, diagnose, and correct design errors in prototype silicon. Secondly, we discuss how test results are used to improve the manufacturing process and hence production yield. Finally, we present test technologies that enable systems of high reliability for safety-critical applications.",2003,0, 942,"Detecting soft errors by a purely software approach: method, tools and experimental results","In this paper, we describe a software technique allowing the detection of soft errors occurring in processor-based digital architectures.
The detection mechanism is based on a set of rules allowing the transformation of the target application into a new one, having the same functionalities but being able to identify bit-flips arising in memory areas as well as those perturbing the processor's internal registers. Experimental results obtained from fault injection sessions and preliminary radiation test campaigns, performed on a complex DSP processor, provide objective figures about the efficiency of the proposed error detection technique.",2003,0, 943,Network management agent allocation scheme in mesh networks,"In this letter, we propose a scheme for constructing a reliable alarm detection structure in the communication networks. We investigate how to allocate a minimal set of management agents and still cover all alarms in mesh networks under the assumption that alarms are delivered along provisioned paths. We also consider the probabilistic nature of alarm loss and propose an efficient scheme for allocating a minimal set of agents while keeping the overall alarm loss probability below a threshold.",2003,0, 944,Testing criteria for data flow software,"We propose the use of accessibility measures in some testing strategies to specify testing objectives based on a functional model. The functional model, which is founded on the information transfer within software, was used with success to analyze testability for data-flow software. The testing strategies based on this model allow specification of testing objectives in relation to fault diagnosis, i.e. they allow not only faults to be detected but also to be located in the software. The approach is applied to a dataflow design provided by THALES Avionics to specify testing objectives.",2003,0, 945,Exploring the relationship between experience and group performance in software review,"The aim is to examine the important relationships between experience, task training and software review performance. One hundred and ninety-two volunteer university students were randomly assigned into 48 four-member groups. Subjects were required to detect defects from a design document. The main findings include (1) role experience has a positive effect on software review performance; (2) working experience in the software industry has a positive effect on software review performance; (3) task training has no significant effect on software review performance; (4) role experience has no significant effect on task training; (5) working experience in the software industry has a significant effect on task training.",2003,0, 946,Automated support for data exchange via XML,"XML has recently emerged as a standard for exchanging data between different software applications. We present an approach for automatic code generation to interpret information in an XML document. The approach is based on a user-defined mapping of the XML document's structure onto the application's API. This mapping is declarative in nature, and thus easy to specify, and is used by a code generator that applies advanced code generation and manipulation techniques to generate the appropriate code. The approach relieves developers from the time-consuming and error-prone task of writing the interpreter themselves, and complements existing XML technologies such as XSLT.",2003,0, 947,Flexible interface matching for Web-service discovery,"The Web-services stack of standards is designed to support the reuse and interoperation of software components on the Web.
A critical step, to that end, is service discovery, i.e., the identification of existing Web services that can potentially be used in the context of a new Web application. UDDI, the standard API for publishing Web-services specifications, provides a simple browsing-by-business-category mechanism for developers to review and select published services. In our work, we have developed a flexible service discovery method, for identifying potentially useful services and assessing their relevance to the task at hand. Given a textual description of the desired service, a traditional information-retrieval method is used to identify the most similar service description files, and to order them according to their similarity. Next, given this set of likely candidates and a (potentially partial) specification of the desired service behavior, a structure-matching step further refines and assesses the quality of the candidate service set. In this paper, we describe and experimentally evaluate our Web-service discovery process.",2003,0, 948,A model for battery lifetime analysis for organizing applications on a pocket computer,"A battery-powered portable electronic system shuts down once the battery is discharged; therefore, it is important to take the battery behavior into account. A system designer needs an adequate high-level battery model to make battery-aware decisions targeting the maximization of the system's online lifetime. We propose such a model that allows a designer to analytically predict the battery time-to-failure for a given load. Our model also allows for a tradeoff between the accuracy and the amount of computation performed. The quality of the proposed model is evaluated using typical pocket computer applications and a detailed low-level simulation of a lithium-ion electrochemical cell. In addition, we verify the proposed model against actual measurements taken on a real lithium-ion battery.",2003,0, 949,"Using Internet-based, distributed collaborative writing tools to improve coordination and group awareness in writing teams","The paper argues for using specialized collaborative writing (CW) tools to improve the results of distributed, Internet-based writing teams. The key features of collaborative tools that support enhanced coordination and group awareness are compared to existing writing tools. The first Internet-based CW tool, Collaboratus, is introduced, and its group features are compared with those of Microsoft Word. Next, theoretical propositions, hypotheses, and constructs are formulated to predict outcomes of distributed groups that use CW tools. A four-week-long synchronous-distributed experiment then compares the outcomes of Collaboratus and Word groups. Innovative measures show that Collaboratus groups generally experience better outcomes than Word groups, in terms of productivity, document quality, relationships, and communication, but not in terms of satisfaction. The results buttress the conclusion that Internet-based CW teams can benefit from specialized collaborative technologies that provide enhanced coordination, group awareness, and CW activity support.",2003,0, 950,Detection of invalid routing announcements in RIP protocol,"Traditional routing protocol designs have focused solely on the functionality of the protocols and implicitly assume that all routing update messages received by a router carry valid information. 
However, operational experience suggests that hardware faults, software implementation bugs, and operator misconfigurations, not to mention malicious attacks, can all lead to invalid routing protocol announcements. Although several recent efforts have developed cryptography-based authentication for routing protocols, such enhancements alone are rendered ineffective in the face of faults caused by misconfigurations or hardware/software errors. In this paper we develop a simple routing update validation algorithm for the RIP protocol, RIP with triangle theorem checking and probing (RIP-TP). In RIP-TP routers utilize a triangle theorem to identify suspicious new routing announcements, and then use probing messages to verify the correctness of the announcements. We have evaluated the effectiveness of RIP-TP through simulation using various faulty node behaviors, link failure dynamics and network sizes. The results show that, with an overhead as low as about one probing message per received update message in the worst case, RIP-TP can effectively detect 95% or more of invalid routing announcements.",2003,0, 951,An adaptive bandwidth reservation scheme in multimedia wireless networks,"Next generation wireless networks aim to provide quality of service (QoS) for multimedia applications. In this paper, the system supports two QoS criteria, i.e., the system should keep the handoff dropping probability always less than a predefined QoS bound, while maintaining the relative priorities of different traffic classes in terms of blocking probability. To achieve this goal, a dynamic multiple-threshold bandwidth reservation scheme is proposed, which is capable of granting differential priorities to different traffic classes and to new and handoff traffic for each class by dynamically adjusting bandwidth reservation thresholds. Moreover, in times of network congestion, a preventive measure of throttling new connection acceptance is taken. Another contribution of this paper is to generalize the concept of relative priority, hence giving the network operator more flexibility to adjust admission control policy by incorporating some dynamic factors such as offered load. Elaborate simulations are conducted to verify the performance of the scheme.",2003,0, 952,Unbounded system model allows robust communication infrastructure for power quality measurement and control,"A robust information infrastructure is required to collect power quality measurements and to execute corrective actions. It is based on a software architecture, designed at middleware level, that makes use of Internet protocols for communication over different media. While the middleware detects the anomalies in the communication and computation system and reacts appropriately, the application functionality is maintained through system reconfiguration or graceful degradation. Such anomalies may come from dynamic changes in the topology of the underlying communication system, or the enabling/disabling of processing nodes on the network. The added value of this approach comes from the flexibility to deal with a dynamic environment based on an unbounded system model. The paper illustrates this approach in a power quality measurement and control system with compensation based on active filters.",2003,0, 953,Visual composition of Web services,"Composing Web services into a coherent application can be a tedious and error-prone task when using traditional textual scripting languages.
As an alternative, complex interaction patterns and data exchanges between different Web services can be effectively modeled using a visual language. In this paper we discuss the requirements of such an application scenario and we present the design of the BioOpera Flow Language. This visual composition language has been fully implemented in a development environment for Web service composition with usability features emphasizing rapid development and visual scalability.",2003,0, 954,Preserving non-programmers' motivation with error-prevention and debugging support tools,"A significant challenge in teaching programming to disadvantaged populations is preserving learners' motivation and confidence. Because programming requires such a diverse set of skills and knowledge, the first steps in learning to program can be highly error-prone, and can quickly exhaust whatever attention learners are willing to give to a programming task. Our approach to preserving learners' motivation is to design highly integrated support tools to prevent the errors they would otherwise make. In this paper, the results of a recent study on programming errors are summarized, and many novel error-preventing tools are proposed.",2003,0, 955,Measurement-based admission control in UMTS multiple cell case,"In this paper, we develop an efficient call admission control (CAC) algorithm for UMTS systems. We first introduce the expressions that we developed for the signal-to-interference ratio (SIR) for both uplink and downlink, to obtain a novel CAC algorithm that takes into account, in addition to SIR constraints, the effects of mobility, coverage as well as the wired capacity in the UMTS terrestrial radio access network (UTRAN), for the uplink, and the maximal transmission power of the base station, for the downlink. As for its implementation, we investigate the measurement-based approach as a means to predict future, both handoff and new, call arrivals and thus manage different priority levels. Compared to classical CAC algorithms, our CAC mechanism achieves better performance in terms of outage probability and QoS management.",2003,0, 956,A software methodology for detecting hardware faults in VLIW data paths,"The proposed methodology aims to achieve processor data paths for VLIW architectures able to autonomously detect transient and permanent hardware faults while executing their applications. The approach, carried out on the compiled application software, provides the introduction of additional instructions for controlling the correctness of the computation with respect to failures in one of the data path functional units. The advantage of a software approach to hardware fault detection is interesting because it allows one to apply it only to the critical applications executed on the VLIW architecture, thus not causing a delay in the execution of noncritical tasks. Furthermore, by exploiting the intrinsic redundancy of this class of architectures no hardware modification is required on the data path so that no processor customization is necessary.",2003,0, 957,A heuristic for refresh policy selection in heterogeneous environments,"We address data warehouse maintenance, i.e. how changes to autonomous sources should be detected and propagated to a warehouse. We have extended our work on source characteristics and timings relevant to single source views by exploring data integration from (multiple) heterogeneous sources. We identify relevant maintenance policies and develop a set of heuristics to guide policy choice.
On the basis of empirical (testbed) experiments, we claim that the resulting selections are good.",2003,0, 958,Addressing workload variability in architectural simulations,"The inherent variability of multithreaded commercial workloads can lead to incorrect results in architectural simulation studies. Although most architectural simulation studies ignore space variability's effects, our results demonstrate that space variability has serious implications for architectural simulation studies using multithreaded workloads. The standard solution - running long enough - does not easily apply to simulation because of its enormous slowdown. To address this problem, we propose a simulation methodology combining multiple simulations with standard statistical techniques, such as confidence intervals and hypothesis testing. This methodology greatly decreases the probability of drawing incorrect conclusions, and permits reasonable simulation times given sufficient simulation hosts.",2003,0, 959,Faults in grids: why are they so bad and what can be done about it?,"Computational grids have the potential to become the main execution platform for high performance and distributed applications. However, such systems are extremely complex and prone to failures. We present a survey of the grid community in which several people shared their actual experience regarding fault treatment. The survey reveals that, nowadays, users have to be highly involved in diagnosing failures, that most failures are due to configuration problems (a hint of the area's immaturity), and that solutions for dealing with failures are mainly application-dependent. Going further, we identify two main reasons for this state of affairs. First, grid components that provide high-level abstractions when working do expose all gory details when broken. Since there are no appropriate mechanisms to deal with the complexity exposed (configuration, middleware, hardware and software issues), users need to be deeply involved in the diagnosis and correction of failures. To address this problem, one needs a way to coordinate different support teams working at the grid's different levels of abstraction. Second, fault tolerance schemes implemented on grids today tolerate only crash failures. Since grids are prone to more complex failures, such as those caused by heisenbugs, one needs to tolerate tougher failures. Our hope is that the very heterogeneity that makes a grid a complex environment can help in the creation of diverse software replicas, a strategy that can tolerate more complex failures.",2003,0, 960,Investigation of interfaces with analytical tools,"This paper focuses on advancements in three areas of analyzing interfaces, namely, acoustic microscopy for detecting damage to closely spaced interfaces, thermal imaging to detect damage and degradation of thermal interface materials and laser spallation, a relatively new concept to understand the strength of interfaces. Acoustic microscopy has been used widely in the semiconductor assembly and package area to detect delamination, cracks and voids in the package, but the resolution in the axial direction has always been a limitation of the technique. Recent advancements in acoustic waveform analysis have now allowed for detection and resolution of closely spaced interfaces such as layers within the die. Thermal imaging using infrared (IR) thermography has long been used for detection of hot spots in the die or package.
With recent advancements in very high-speed IR cameras, improved pixel resolution, and sophisticated software programming, the kinetics of heat flow can now be imaged and analyzed to reveal damage or degradation of interfaces that are critical to heat transfer. The technique has been demonstrated to be useful to understand defects and degradation of thermal interface materials used to conduct heat away from the device. Laser spallation is a method that uses a short duration laser pulse to cause fracture at the weakest interface and has the ability to measure the adhesion strength of the interface. The advantage of this technique is that it can be used for fully processed die or wafers and even on packaged devices. The technique has been used to understand interfaces in devices with copper metallization and low-k dielectrics.",2003,0, 961,Advanced concepts in time-frequency signal processing made simple,"Time-frequency representations (TFRs) such as the spectrogram are important two-dimensional tools for processing time-varying signals. In this paper, we present the Java software module we developed for the spectrogram implementation together with the associated programming environment. Our aim is to introduce to students the advanced concepts of TFRs at an early stage in their education without requiring a rigorous theoretical background. We developed two sets of exercises using the spectrogram based on signal analysis and speech processing together with on-line evaluation forms to assess student learning experiences. In the paper, we also provide the positive statistical and qualitative feedback we obtained when the Java software and corresponding exercises were used in a signal processing course.",2003,0, 962,Link quality assessment in mobile satellite communication systems,"This paper presents a simulation tool aimed to assess the relation between the link quality and the overall capacity on a MSS (mobile satellite system). This tool is particularly appropriated to compare different radio interfaces and to estimate the applicability of the UTRA radio interface in future satellite systems. Moreover, it permits the analysis of innovative radio resource management techniques. All the models included in the simulator, namely spotbeam projection on the Earth, multibeam antennas and radio propagation models are also presented.",2003,0, 963,Growing a software quality culture in an educational environment,"The technical skills students must acquire in a typical computer science program are often mandated through standards or curricular requirements. How are nontechnical skills assessed? computer science educators must teach and encourage the development of other critical skills needed in the workplace such as personal accountability, a strong work ethic and an ability to deliver on-time and correct work. This paper describes the results of a student survey designed to provoke some thoughts about the evolving work ethic and work culture of today's students. Along with the survey results, the importance in asking the questions and a brief analysis of how the behavior or activity fits into the quality cycle are presented. Finally, a section on continuous improvement strategies is proposed.",2003,0, 964,An improvement project for distribution transformer load management in Taiwan,"Summary form only given. 
This paper introduces an application program that is based on an automated mapping/facilities management/geographic information system (AM/FM/GIS) to provide information expectation, load forecasting and power flow calculation capability in distribution systems. First, the database and related data structure used in the Taipower distribution automation pilot system is studied and thoroughly analyzed. Then, our program, developed by the AM/FM FRAMME and Visual Basic software, is integrated into the above pilot system. Moreover, this paper overcomes the weak points of the pilot system, such as difficult use, incomplete function, nonuniform sampling for billing and dispatch of bills and inability to simultaneously transfer customer data. This program can enforce the system and can predict future load growth on distribution feeders, considering the effects of temperature variation, and power needed for air-conditioners. In addition, on the basis of load density and diversity factors of typical customers, the saturation load of a new housing zone can be estimated. As for the power flow analysis, it can provide three-phase quantities of voltage drop at each node, the branch current, and the system loss. The program developed in this study can effectively aid public electric utilities in distribution system planning and operation.",2003,0, 965,Risk responsibility for supply in Latin America - the Argentinean case,"In deregulation of electricity sectors in Latin America two approaches have been used to allocate the responsibility on the electricity supply: (1) The government keeps the final responsibility on the supply. Suppliers (distribution companies or traders) do not have control on the rationing when it becomes necessary to curtail load. In such case they cannot manage the risks associated to the supply. This is the case in the markets of Brazil and Colombia. (2) The responsibility is fully transferred to suppliers. The regulatory entity supervises the quality of the supply and different types of penalties are applied when load is not supplied. This approach is currently used in Argentina, Chile and Peru. In Argentina the bilateral contracts, that are normally financial, become physical when a rationing event happens. This approach permits suppliers to have a great control on risks. Both approaches have defenders and detractors. In some cases, the conclusions on a same event have completely opposite interpretations and diagnoses. For instance, the crisis of supply in Brazil during 2002 was interpreted as a fault of the market by the defenders of the final responsibility of the state, or attributed to excess of regulation and of interference of the government by the advocates of decentralized schemes. This presentation will analyze the performance of both approaches in Latin America, assessing the diverse types of arguments used to criticize or to defend to each one of these approaches, and finally to present some conclusions on the current situation and future of the responsibility on supply and risks associated.",2003,0, 966,Low-cost power quality monitor based on a PC,"This paper presents the development of a low-cost digital system useful for power quality monitoring and power management. Voltage and current measurements are made through Hall-effect sensors connected to a standard data acquisition board, and the applications were programmed in LabVIEWTM , running on Windows in a regular PC. 
The system acquires data continuously, and stores in files the events that result from anomalies detected in the monitored power system. Several parameters related to power quality and power management can be analyzed through 6 different applications, named: ""scope and THD"", ""strip chart"", ""wave shape"", ""sags and swells"", ""classical values"" and ""p-q theory"". The acquired information can be visualized in tables and/or in charts. It is also possible to generate reports in HTML format. These reports can be sent directly to a printer, embedded in other software applications, or accessed through the Internet, using a Web browser. The potential of the developed system is shown, namely the advantages of virtual instrumentation, regarding to flexibility, cost and performance, in the scope of power quality monitoring and power management.",2003,0, 967,Flour quality control using image processing,This document presents the description of an application for the automatic visual inspection of flour quality. The flour quality depends on the number of impurities detected in the flour after a predefined settling time. The software was developed with IMAQ Vision for LabVIEW software-developing tool and it uses a commercial camera as image acquisition device. The paper along its sections describes the main system blocks. An illustrative example is used for better description of the several steps during the digital processing of the acquired images.,2003,0, 968,Beat: Boolean expression fault-based test case generator,"We present a system which generates test cases from Boolean expressions. The system is based on the integration of several fault-based test case selection strategies developed by us. Our system generates test cases that are guaranteed to detect all single operator fault and all single operand faults when the Boolean expression is in irredundant disjunctive normal form. Apart from being an automated test case generation tool developed for software testing practitioners, this system can also be used as a training or self-learning tool for students as well as software testing practitioners.",2003,0, 969,Applying fault correction profiles,"In general, software reliability models have focused on modeling and predicting the failure detection process and have not given equal priority to modeling the fault correction process. However, it is important to address the fault correction process in order to identify the need for process improvements. Process improvements, in turn, will contribute to achieving software reliability goals. We introduce the concept of a fault correction profile - a set of functions that predict fault correction events as a function of failure detection events. The fault correction profile identifies the need for process improvements and provides information for developing fault correction strategies. Related to the fault correction profile is the goal fault correction profile. This profile represents the fault correction goal against which the achieved fault correction profile can be compared. This comparison motivates the concept of fault correction process instability, and the attributes of instability. 
Applying these concepts to the NASA Goddard Space Flight Center fault correction process and its data, we demonstrate that the need for process improvement can be identified, and that improvements in process would contribute to meeting product reliability goals.",2003,0, 970,A stress-point resolution system based on module signatures,"This paper introduces a framework to provide design and testing guidance through a stress-point resolution system based on a module signature for module categorization. A stress-point resolution system includes stress-point identification and the selection of appropriate mitigation activities for those identified stress-points. Progress has been made in identifying stress-point to target the most fault-prone modules in a system by the module signature classification technique. Applying the stress-point prediction method on a large Motorola production system with approximately 1500 modules and comparing the classified modules to change reports, misclassification errors occurred at a rate of less than 2%. After identifying the stress point candidates, localized remedial actions should be undertaken. This algorithmic classification may suggest more insights into defect analysis and correction activities to enhance the software development strategies of software designers and testers.",2003,0, 971,RTOS scheduling in transaction level models,"Raising the level of abstraction in system design promises to enable faster exploration of the design space at early stages. While scheduling decision for embedded software has great impact on system performance, it's much desired that the designer can select the right scheduling algorithm at high abstraction levels so as to save him from the error-prone and time consuming task of tuning code delays or task priority assignments at the final stage of system design. In this paper we tackle this problem by introducing a RTOS model and an approach to refine any unscheduled transaction level model (TLM) to a TLM with RTOS scheduling support. The refinement process provides a useful tool to the system designer to quickly evaluate different dynamic scheduling algorithms and make the optimal choice at an early stage of system design.",2003,0, 972,Synthesizing operating system based device drivers in embedded systems,"This paper presents a correct-by-construction synthesis method for generating operating system based device drivers from a formally specified device behavior model. Existing driver development is largely manual using an ad-hoc design methodology. Consequently, this task is error prone and becomes a bottleneck in embedded system design methodology. Our solution to this problem starts by accurately specifying device access behavior with a formal model, viz. extended event driven finite state machines. We state easy check soundness conditions on the model that subsequently guarantee properties such as bounded execution time and deadlock-free behavior. We design a deadlock-free resource accessing scheme for our device access model. Finally, we synthesize an operating system (OS) based event processing mechanism, which is the core of the device driver, using a disciplined methodology that assures the correctness of the resulting driver. We validate our synthesis method using two case studies: an infrared port and the USB device controller for an SA1100 based handheld. 
Besides assuring a correct-by-construction driver, the size of the specification is 70% smaller than a manually written driver, which is a strong indicator of improved design productivity.",2003,0, 973,Early estimation of the size of VHDL projects,"The analysis of the amount of human resources required to complete a project is felt as a critical issue in any company of the electronics industry. In particular, early estimation of the effort involved in a development process is a key requirement for any cost-driven system-level design decision. In this paper, we present a methodology to predict the final size of a VHDL project on the basis of a high-level description, obtaining a significant indication about the development effort. The methodology is the composition of a number of specialized models, tailored to estimate the size of specific component types. Models were trained and tested on two disjoint and large sets of real VHDL projects. Quality-of-result indicators show that the methodology is both accurate and robust.",2003,0, 974,Towards statistical inferences of successful prostate surgery,"Prostate cancer continues to be the leading cancer in the United States male population. The options for local therapy have proliferated and include various forms of radiation delivery, cryo-destruction, and novel forms of energy delivery as in high-intensity focused ultrasound. Surgical removal, however, remains the standard procedure for cure. Currently there are little objective parameters that are used to compare the efficiency of each form of surgical removal. As surgeons apply these different surgical approaches, a quality assessment would be most useful, not only with regard to overall comparison of one approach vs. another but also surgeon evaluation of personal surgical performance as they relate to a standard. To this end, we discuss the development of a process employing image reconstruction and analysis techniques to assess the volume and extent of extracapsular soft tissue removed with the prostate. Parameters such as the percent of capsule covered by soft tissue and where present the average depth of soft tissue coverage are assessed. A final goal is to develop software for the purpose of a quality assurance assessment for pathologists and surgeons to evaluate the adequacy/appropriateness of each surgical procedure; laparoscopic versus open perineal or retropubic prostatectomy.",2003,0, 975,Multisensor based power diagnosis system for an intelligent robot,"This paper presents a power supply diagnosis system using redundant managed sensors method detection for intelligent security robot. The power system of the security robot we have developed in our lab. Consists of three parts, namely, computer power, drive motor power and circuit system power. We intend to detect the current value and diagnose the fault sensors of the power system. In this paper, we focus on the PC power supply, and we use eight current sensors to detect the current variety of the PC power and diagnose which sensor to be fault. First, we use computer simulation and implement it in the industry standard PC using A/D converter card. Next, we use the diagnosis algorithm in order to design the agent-based power supply system using microprocessor. 
Finally, we implement the proposed method in the PC power system of the intelligent security robot.",2003,0, 976,Solenoidal and planar microcoils for NMR spectroscopy,"The extraction of nuclear magnetic resonance (NMR) spectra of samples having smaller and smaller volumes is a real challenge. Reductions of volume are dictated by the difficulties of production of sufficiently large samples or by necessities of miniaturization of the analyzing system. In both cases a careful design of the radiofrequency (RF) coil, ensuring an optimum reception of the NMR signal, is mandatory. We evaluated the usefulness of an electromagnetic simulation software for the design and optimization of these radio-frequency coils, which are more and more used in biology and health research projects. The contribution of different effects (dc, skin and proximity) at the total resistance of the coils were assessed as well as the expected SNR per sample volume unit and the quality factor. Designed for a biological application, these coils have to be as less invasive as possible and they must allow small quantities analysis of metabolites inside capillary tubs or surrounding the microcoil. In order to evaluate detection efficiency and spectral resolution, preliminary experiments have been performed at 85.13 MHz under a static magnetic field of 2 T.",2003,0, 977,The WindSat calibration/validation plan and early results,"Summary form only given. The WindSat radiometer was launched as part of the Coriolis mission in January 2003. WindSat was designed to provide fully polarimetric passive microwave measurements globally, and in particular over the oceans for ocean surface wind vector retrieval. Due to prohibitive risk and cost associated with an end-to-end pre-launch absolute radiometer calibration (i.e. from the energy incident on the main reflector through the receiver digitized output) it was important to develop an on-orbit calibration plan that verifies instrument conformance to specification and, if necessary, derive suitable calibration coefficients or sensor algorithms that bring the instrument into specification. This is especially true for the WindSat Cal/Val, in view of the fact that it is the first fully polarimetric spaceborne radiometer. This paper will provide an overview of the WindSat Cal/Val Program. The Cal/Val plan, patterned after the very successful SSM/I Cal/Val, is designed to ensure the highest quality data products and maximize the return on the available Cal/Val resources. The Cal/Val is progressive and will take place in well-defined stages: Early Orbit Evaluation, Initial Assessment, Detailed Calibration, and EDR Validation. The approach allows us to focus efforts more quickly on issues as they are identified and ensures a logical progression of activities. At each level of the Cal/Val the examination of the instrument and algorithm errors becomes more detailed. Along with the WindSat Cal/Val structure overview, we will present the independent data sources to be used and the analysis techniques to be employed. During the Early Orbit phase, special instrument operating modes have been developed to monitor the sensor health from the time of instrument turn-on to a short time after it reaches stable operating conditions. 
This mode is uniquely important for the WindSat since it affords the only opportunity to examine all of the data in the full 360 degree scan and to directly assess potential field-of-view intrusion effects from the spacecraft or other sensors on-board the satellite and evaluate the spin control system by observing the statistical distribution of data as the horns scan through the calibration loads. The next phase of the WindSat Cal/Val consists of an initial assessment of all sensor and environmental data products generated by the Ground Processing Software. The primary focus of the assessment is to conduct an end-to-end review of the output files (TDR, SDR and EDR) to verify the following: proper functioning of the on-line GPS modules, instrument calibration (including Antenna Pattern Correction and Stokes Coupling) does not contain large errors, EDR algorithms provide reasonable products, and there are no major geo-location errors. This paper will provide a summary of the results of the Early Orbit and Initial Assessment phases of the WindSat Cal/Val Program.",2003,0, 978,Performance of common and dedicated traffic channels for point-to-multipoint transmissions in W-CDMA,"We present a performance evaluation of two strategies for transmitting packet data to a group of multicast users over the W-CDMA air interface. In the first scheme, a single common or broadcast channel is used to transmit multicast data over the entire cell, whereas in the second scheme multiple dedicated channels are employed for transmitting to the group. We evaluate the performance of both schemes in terms of the number of users that can be supported by either scheme at a given quality of service defined in terms of a target outage probability.",2003,0, 979,A QoS-based routing algorithm in multimedia satellite networks,"Real-time multimedia applications impose strict delay bounds and are sensitive to delay variations. Satellite link handover increases delay jitter and signaling overhead as well as the termination probability of ongoing connections. To satisfy the QoS requirements of multimedia applications, satellite routing protocols should consider link handovers and minimize their effect on the active connections. A new routing algorithm is proposed to reduce both the inter-satellite handover and ISL handover. Once a connection request arrives, the remaining coverage time of satellites is used in the deterministic UDL routing. In the probabilistic ISL routing, the propagation delay and existence probability of ISL links are considered to reduce the delay and ISL handover probability. The rerouting algorithm is called when link handover occurs. Experiments show that this routing algorithm results in small delay jitter, low rerouting frequency, and low rerouting processing overhead.",2003,0, 980,Electronic test solutions for FlowFET fluidic arrays,"The testable design and test of a software-controllable lab-on-a-chip, including a fluidic array of FlowFETs, control and interface electronics is presented. Test hardware is included for detecting faults in the DMOS electro-fluidic interface and the digital parts. Multi-domain fault modelling and simulation shows the effects of faults in the (combined) fluidic and electrical parts. Fault simulations also reveal important parameters of multi-domain test-stimuli for detecting both electrical and fluidic defects.",2003,0, 981,Predicting maintainability with object-oriented metrics -an empirical comparison,"
",2003,0, 982,Analysis of techniques for building intrusion tolerant server systems,"The theme of intrusion detection systems (IDS) is detection because prevention mechanisms alone are not guaranteed to keep intruders out. The research focus of IDS is therefore on how to detect as many attacks as possible, as soon as we can, and at the same time to reduce the false alarm rate. However, a growing recognition is that a variety of mission critical applications need to continue to operate or provide a minimal level of services even when they are under attack or have been partially compromised; hence the need for intrusion tolerance. The goal of this paper is to identify common techniques for building highly available and intrusion tolerant server systems and characterize with examples how various techniques are applied in different application domains. Further, we want to point out the potential pitfalls as well as challenging open research issues which need to be addressed before intrusion tolerant systems (ITS) become prevalent and truly useful beyond a specific range of applications.",2003,0, 983,Particle swarm optimization for worst case tolerance design,"Worst case tolerance analysis is a major subtask in modern industrial electronics. Recently, the demands on industrial products like production costs or probability of failure have become more and more important in order to be competitive in business. The main key to improve the quality of electronic products is the challenge to reduce the effects of parameter variations, which can be done by robust parameter design. This paper addresses the applicability of particle swarm optimization combined with pattern search for worst case circuit design. The main advantages of this approach are the efficiency and robustness of the particle swarm optimization strategy. The method is also well suited for higher order problems, i.e. for problems with a high number of design parameters, because of the linear complexity of the pattern search algorithm.",2003,0, 984,User's perception of quality of service provision,"The world today is driven by the information exchange providing support for the national and global cooperation. The supporting telecommunications infrastructures are becoming more complex providing the platform for the user driven real-time applications over the large geographical distances. The essential decisions made concerning the state welfare, heath systems, education, business, national security and defence, depend on quality of service provision of telecommunications and data networks. Regardless of the technology supporting the information flows, the final verdict on the quality of service is made by the end user. As a result, it is essential to assess the quality of service provision in the light of user's perception. This article presents a cost effective methodology to assess the user's perception of quality of service provision utilizing the existing Staffordshire University Network.",2003,0, 985,An integrated methodology to improve classification accuracy of remote sensing data,"We investigated and improved the accuracy of supervised classification by eliminating electromagnetic radiation scattering effect of aerosol particles from cloud-free Landsat TM data. The scattering effect was eliminated by deriving a mathematical model including the amount of the scattered radiation per pixel area and aerosol size distribution, which was derived using randomly collected training sets. 
An algorithm in C++ has been developed with iterations to derive the aerosol size distribution and to remove the effect of aerosols scattering in addition to the use of IRDAS software (commercial software). To assess the accuracy of the supervised classification, results of remote sensing data were compared with Global Positioning System (GPS) ground truth reference data in error matrices (output results of classification). The results of the corrected images show great improvement of image quality and classification accuracy. The misclassified off-diagonal pixels were minimized in the accuracy assessment error matrices. Therefore it fulfills the criteria of accuracy improvement. The overall accuracy of the supervised classification is improved (between 18% and 27%). The Z-score shows significant difference between the corrected data and the raw data (between 4.0 and 11.91) by employing KHAT statistics evaluation.",2003,0, 986,A convergence model for asynchronous parallel genetic algorithms,"We describe and verify a convergence model that allows the islands in a parallel genetic algorithm to run at different speeds, and to simulate the effects of communication or machine failure. The model extends on present theory of parallel genetic algorithms and furthermore it provides insight into the design of asynchronous parallel genetic algorithms that work efficiently on volatile and heterogeneous networks, such as cycle-stealing applications working over the Internet. The model is adequate for comparing migration parameter settings in terms of convergence and fault tolerance, and a series of experiments show how the convergence is affected by varying the failure rate and the migration topology, migration rate, and migration interval. Experiments conducted show that while very sparse topologies are inefficient and failure-prone, even small increases in topology order result in more robust models with convergence rates that approach the ones found in fully-connected topologies.",2003,0, 987,Empirical case studies of combining software quality classification models,"The increased reliance on computer systems in the modern world has created a need for engineering reliability control of computer systems to the highest possible standards. This is especially crucial in high-assurance and mission critical systems. Software quality classification models are one of the important tools in achieving high reliability. They can be used to calibrate software metrics-based models to detect fault-prone software modules. Timely use of such models can greatly aid in detecting faults early in the life cycle of the software product. Individual classifiers (models) may be improved by using the combined decision from multiple classifiers. Several algorithms implement this concept and have been investigated. These combined learners provide the software quality modeling community with accurate, robust, and goal oriented models. This paper presents a comprehensive comparative evaluation of three combined learners, Bagging, Boosting, and Logit-Boost. We evaluated these methods with a strong and a weak learner, i.e., C4.5 and Decision Stumps, respectively. Two large-scale case studies of industrial software systems are used in our empirical investigations.",2003,0, 988,Walking the talk: building quality into the software quality management tool,"The market for products whose objective is to improve software project management and software quality management is expected to be large and growing. 
In this paper, we present data from a software project whose mission is to build a commercial enterprise software project and quality management tool. The data we present include: planned and actual schedule, earned value, size, productivity, and defect removal by phase. We present process quality index and percent defect free modules as useful measures to predict post development defects in the product. We conclude that vendors of software project and quality management tools must walk the talk by utilizing disciplined techniques for managing the project and building quality into their products with known quality methods. The Team Software Process is a proven framework for both software project management and software quality management.",2003,0, 989,Software cost estimation through conceptual requirement,"Software cost estimation is vital for the effective control and management of the whole software development process. Currently, the constructive cost model (COCOMO II) is the most popular tool for estimating software cost. It uses lines of code and function points to assess software size. However, these are actually implementation details and difficult to estimate during the early stage of software development. The entity relationship (ER) model is well used in conceptual modeling (requirements analysis) for data-intensive systems. In this article, we explore the use of ER model for the estimation of software cost. A new term, path complexity, is proposed. Based on path complexity and other factors, we built a multiple regression model for software cost estimation. The approach has been validated statistically through system data from the real industry projects.",2003,0, 990,Combining behavior and data modeling in automated test case generation,"Software testing plays a critical role in the process of creating and delivering high-quality software products. Manual software testing can be an expensive, tedious and error-prone process, therefore testing is often automated in an attempt to reduce its cost and improve its defect detection capability. Model-based testing, a technique used in automated test case generation, is an important topic because it addresses the need for test suites that are of high-quality and yet, maintainable. Current model-based techniques often use a single model to represent system behavior. Using a single model may restrict the number and type of test cases that may be generated. In this paper, system-level test case generation is accomplished using two models to represent system behavior. The results of case studies used to evaluate this technique indicate that for the systems studied a larger percentage of the required test cases can be generated using the combined modeling approach.",2003,0, 991,Deriving software statistical testing model from UML model,"Software statistical testing is concerned with testing the entire software systems based on their usage models. In the context of UML-based development, it is desired that usage models can be derived from UML analysis artifacts. This paper presents a method that derives the software usage models from reasonably constrained UML artifacts. The method utilizes use case diagrams, sequence diagrams and the execution probability of each sequence diagram in its associated use case. By projecting the messages in sequence diagrams onto the objects under test, the method elicits messages and their occurrence probabilities for generating the usage model of each use case for the objects under test. 
Then the usage models of use cases are integrated into the system usage model. The integration procedure utilizes the execution sequential relations between use cases.",2003,0, 992,Experiences in the inspection process characterization techniques,"Implementation of a disciplined engineering approach to software development requires the existence of an adequate supporting measurement & analysis system. Due to demands for increased efficiency and effectiveness of software processes, measurement models need to be created to characterize and describe the various processes usefully. The data derived from these models should then be analyzed quantitatively to assess the effects of new techniques and methodologies. In recent times, statistical and process thinking principles have led software organizations to appreciate the value of applying statistical process control techniques. As part of the journey towards SW-CMM® Level 5 at the Motorola Malaysia Software Center, which the center achieved in October 2001, considerable effort was spent on exploring SPC techniques to establish process control while focusing on the quantitative process management KPA of the SW-CMM. This paper discusses the evolutionary learning experiences, results and lessons learnt by the center in establishing appropriate analysis techniques using statistical and other derivative techniques. The paper discusses the history of analysis techniques that were explored with specific focus on characterizing the inspection process. Future plans to enhance existing techniques and to broaden the scope to cover analysis of other software processes are also discussed.",2003,0, 993,Design and implementation of a fault diagnosis system for transmission and subtransmission networks,"This paper proposes a new intelligent diagnostic system for on-line fault diagnosis of power systems using information of relays and circuit breakers. This diagnostic system consists of three parts: an interfacing hardware, a navigation software and an intelligent core. The interfacing hardware samples the protective elements of the power system. By means of this data, the intelligent core detects occurrence of fault and determines the fault features, such as type and location of fault. The navigation software manages the diagnostic system. The software controls the interfacing hardware and provides required data of the intelligent core. Moreover, this software is user interface of the fault diagnostic system. The proposed approach has been examined on a practical power system (Semnan Regional Electric Company) with real and simulated events. Obtained results confirm the validity of the developed approach.",2003,0, 994,Automated Control Systems for the Safety Integrity Levels 3 and 4,"Programs employed for purposes of safety related control must be formally safety licensed, which constitutes a very difficult and hitherto not satisfactorily solved problem. Striving for utmost simplicity and easy comprehensibility of verification methods, the programming methods cause/effect tables and function block diagrams based on verified libraries are assigned to the upper two Safety Integrity Levels SIL 4 and SIL 3, resp., as they are the only ones so far allowing to verify highly safety critical automation software in trustworthy, easy and economic ways. For each of the two SILs a dedicated, a low complexity execution platform is presented supporting the corresponding programming method architecturally. 
Their hardware is fault detecting or supervised by a fail safe logic, resp., to initiate emergency shut-downs in case of malfunctions. By design, there is no semantic gap between the programming and machine execution levels, enabling the safety licensing of application software by extremely simple, but rigorous methods, viz., diverse back translation and inspection. Operating in strictly periodic fashion, the controllers exhibit fully predictable real time behaviour.",2003,0, 995,Assessing the Dependability of SOAP RPC-Based Web Services by Fault Injection,"This paper presents our research on devising a dependability assessment method for SOAP-based Web Services using network level fault injection. We compare existing DCE middleware dependability testing research with the requirements of testing SOAP RPC-based applications and derive a new method and fault model for testing web services. From this we have implemented an extendable fault injector framework and undertaken some proof of concept experiments with a system based around Apache SOAP and Apache Tomcat. We also present results from our initial experiments, which uncovered a discrepancy within our system. We finally detail future research, including plans to adapt this fault injector framework from the stateless environment of a standard web service to the stateful environment of an OGSA service.",2003,0, 996,3D reconstruction and model acquisition of objects in real world scenes using stereo imagery,"Recently, the use of three dimensional computer models has greatly increased, in part due to the availability of fast, inexpensive hardware and technologies like VRML-ready Internet browsers. These models often represent objects in real world scenes and are typically built by hand using CAD software, an error-prone and time consuming process. The paper outlines a simple and efficient method based on passive stereo imaging techniques by which these models may be acquired, processed and utilized with little effort. The described method extracts the 3D geometrical data from the stereo images, for the purpose of creating realistic 3D models. The approach uses calibrated cameras",2003,0, 997,Role of requirements engineering in software development process: an empirical study,"Requirements problems are widely acknowledged to reduce the quality of software. This work details an empirical study of requirements problems as identified by eleven Australian software companies. Our analysis aims to provide RE practitioners with some insight into designing appropriate RE processes in order to achieve better results. This research was a two-fold process; firstly, a requirements process maturity was assessed and secondly, the types and number of problems faced by different practitioners during their software project was documented. The results indicate that there is no significant difference in problems faced by companies with mature and immature RE process. These findings suggest that a holistic approach is required in order to achieve quality software and organizations should not solely concentrate on improving requirement process. Through our empirical study we have also analysed problems identified by different groups of practitioners and found that there are more differences than similarities in the problems across practitioner groups.",2003,0, 998,Component based development for transient stability power system simulation software,"A component-based development (CBD) approach has become increasingly important in the software industry. 
Any software that apply the CBD will not only save time and cost through reusability of component, but also have the capability to handle the complex problems. Since CBD design is based on object-oriented programming (OOP), the components with a good quality and reusability can be created, classified and managed for future reuse. The methodology of OOP is based on the real object. The mechanism of OOP such as encapsulation, inheritance, and polymorphism are the advantages that could be used to define real objects associated with the program. This paper focused on the implementation of the CBD to power system transient stability simulation (TSS). There are many methods to solve transient stability problem, but in this paper two methods are applied to solve TSS problems, namely trapezoidal method and modified Euler method. The performance of two approaches, CBD and non CBD applications of power system transient stability simulation is assessed through tests carried out using IEEE data test systems.",2003,0, 999,On the design of Modular Software Defined Radio Systems,"Software Radio and its enabling technologies have been discussed extensively in the past. The focus of most work is either on a specific hardware part of the transceiver's signal processing chain, or on algorithm design for the digital baseband. However, a methodology for managing open, Software Defined Radio systems is still in demand. In this paper, the implementation of a software defined physical layer is perceived as a real-time embedded system design problem on multiprocessor hardware, using pieces of software which are unknown a priori. We propose a new way of modeling this design situation. Granularity G is introduced to describe the degree of modularity in software defined communication functions. The speedup s is used to assess the quality of modular implementations. Computer simulations lead us to first observations under the premises of the new model. A mathematical analysis complements these experiments and reveals how to adjust the simulation parameters for interpretable results. The speedup behavior of a simple graph structure is predicted from this analysis. A more complicated structure is then reassessed by means of simulation, finally leading to enhanced guidelines for the design of Modular Software Defined Radio systems.",2003,0, 1000,Identifying rate mismatch through architecture transformation,"Rate mismatches are often missed when simulation is used as a validation tool for embedded systems because it is hard to determine if the mismatch was introduced by the coupling of the software models, the communication protocol, a flawed system design or a combination of the three factors. By eliminating the ambiguity introduced by the software model couplings and the communication protocol, it is possible to trace rate mismatches to flawed system design. The High Level Architecture details the couplings between the various components present in the system, while the Time Triggered Protocol provides deterministic, reliable and fault-tolerant communication between the components. This paper presents an mapping of the High Level Architecture specification of the system to the Time Triggered Protocol. 
Any rate mismatches detected, can then be attributed to flawed system design, making simulation a powerful validation tool for the design and implementation of hard real-time embedded systems.",2003,0, 1001,Analysis of channel allocation schemes for cellular mobile communication networks,"The coverage area and the number of users of mobile communication networks are in a continuous and fast growing, while the allocated frequency spectrum remains unchanged. New efficient management schemes have to be deployed in order to maintain a good grade of service. This paper deals with a software package meant to evaluate by simulation the blocking probability in a cellular system induced by a channel allocation scheme and, thus, to select its parameters in accordance with the network architecture and the traffic distribution on its coverage area.",2003,0, 1002,An empirical comparison and characterization of high defect and high complexity modules,"We analyzed a large set of complexity metrics and defect data collected from six large-scale software products, two from IBM and four from Nortel Networks, to compare and characterize the similarities and differences between the high defect (HD) and high complexity modules. We observed that the most complex modules often have an acceptable quality and HD modules are not typically the most complex ones. This observation was statistically validated through hypothesis testing. Our analyses also indicated that the clusters of modules with the highest defects are usually those whose complexity rankings are slightly below the most complex ones. These results should help us better understand the complexity behavior of HD modules and guide future software development and research efforts.",2003,1, 1003,Citation recognition for scientific publications in digital libraries,"A method based on part-of-speech tagging (PoS) is used for bibliographic reference structure. This method operates on a roughly structured ASCII file, produced by OCR. Because of the heterogeneity of the reference structure, the method acts in a bottom-up way, without an a priori model, gathering structural elements from basic tags to subfields and fields. Significant tags are first grouped in homogeneous classes according to their categories and then reduced in canonical forms corresponding to record fields: ""authors"", ""title"", ""conference name"", ""date"", etc. Nonlabeled tokens are integrated in one or another field by either applying PoS correction rules or using an inter- or intra-field model generated from well-detected records. The designed prototype operates with a great satisfaction on different record layouts and character recognition qualities. Without manual intervention, 96.6% words are correctly attributed, and about 75.9% references are completely segmented from 2,575 references.",2004,0, 1004,"A multi-interface, multi-profiling system for chronic disease management learning","A key aspect of successful chronic disease management is active partnership between consumer and provider - this is particularly important in diabetes management, where many key activities are in the hands of the patient. We have developed a multi-interface system that promotes high quality diabetes management through profiling and adaptive support of both consumer and provider in the context of a university podiatry clinic. Handheld devices are used for decision support, data capture and notification of patient concerns in consultation. 
Consultation data integrates with Web based learning environments for podiatry students and consumers. The architecture implements our approach to patient provider partnership and exemplifies integration of system goals across platforms, users and devices. Upcoming field trials assess whether we have achieved an acceptable system that improves quality of management activities.",2004,0, 1005,A cost-benefit stopping criterion for statistical testing,"Determining when to stop a statistical test is an important management decision. Several stopping criteria have been proposed, including criteria based on statistical similarity, the probability that the system has a desired reliability, and the expected cost of remaining faults. This paper proposes a new stopping criterion based on a cost-benefit analysis using the expected reliability of the system (as opposed to an estimate of the remaining faults). The expected reliability is used, along with other factors such as units deployed and expected use, to anticipate the number of failures in the field and the resulting anticipated cost of failures. Reductions in this number generated by increasing the reliability are balanced against the cost of further testing to determine when testing should be stopped.",2004,0, 1006,Using machine learning for estimating the defect content after an inspection,"We view the problem of estimating the defect content of a document after an inspection as a machine learning problem: The goal is to learn from empirical data the relationship between certain observable features of an inspection (such as the total number of different defects detected) and the number of defects actually contained in the document. We show that some features can carry significant nonlinear information about the defect content. Therefore, we use a nonlinear regression technique, neural networks, to solve the learning problem. To select the best among all neural networks trained on a given data set, one usually reserves part of the data set for later cross-validation; in contrast, we use a technique which leaves the full data set for training. This is an advantage when the data set is small. We validate our approach on a known empirical inspection data set. For that benchmark, our novel approach clearly outperforms both linear regression and the current standard methods in software engineering for estimating the defect content, such as capture-recapture. The validation also shows that our machine learning approach can be successful even when the empirical inspection data set is small.",2004,0, 1007,Nine-coded compression technique with application to reduced pin-count testing and flexible on-chip decompression,"This paper presents a new test data compression technique based on a compression code that uses exactly nine codewords. In spite of its simplicity, it provides significant reduction in test data volume and test application time. In addition, the decompression logic is very small and independent of the precomputed test data set. Our technique leaves many don't-care bits unchanged in the compressed test set, and these bits can be filled randomly to detect non-modeled faults. The proposed technique can be efficiently adopted for single- or multiple-scan chain designs to reduce test application time and pin requirement. 
Experimental results for ISCAS'89 benchmarks illustrate the flexibility and efficiency of the proposed technique.",2004,0, 1008,A generic RTOS model for real-time systems simulation with systemC,"The main difficulties in designing real-time systems are related to time constraints: if an action is performed too late, it is considered as a fault (with different levels of criticism). Designers need to use a solution that fully supports timing constraints and enables them to simulate early on the design process a real-time system. One of the main difficulties in designing HW/SW systems resides in studying the effect of serializing tasks on processors running a real-time operating system (RTOS). In this paper, we present a generic model of RTOS based on systemC. It allows assessing real-time performances and the influence of scheduling according to RTOS properties such as scheduling policy, context-switch time and scheduling latency.",2004,0, 1009,A little knowledge about software,"Software engineering is still a young discipline. Software development group managers must keep their groups current with this dynamic body of knowledge as it evolves. There are two basic approaches: require staff to have both application expertise and software expertise, or create a software cell. The latter approach runs the risk of two communities not communicating well, although it might make staying abreast of changes in software engineering easier. The first approach should work better than it does today if some new educational patterns are put in place. For example, we could start treating software more like mathematics, introducing more software courses into undergraduate programs in other disciplines. Managers must also focus on the best way to develop software expertise for existing staff. Staff returning to school for a master's in software engineering can acquire a broad understanding of the field, but at a substantial cost in both time and effort. Short courses call help to fill this gap, but most short courses are skill based, whereas a deeper kind of learning is needed. As the first step, however, managers must assess software's impact on their bottom line deliverables. It might surprise them how much they depend on software expertise to deliver their products.",2004,0, 1010,Static analysis of XML transformations in Java,"XML documents generated dynamically by programs are typically represented as text strings or DOM trees. This is a low-level approach for several reasons: 1) traversing and modifying such structures can be tedious and error prone, 2) although schema languages, e.g., DTD, allow classes of XML documents to be defined, there are generally no automatic mechanisms for statically checking that a program transforms from one class to another as intended. We introduce XACT, a high-level approach for Java using XML templates as a first-class data type with operations for manipulating XML values based on XPath. In addition to an efficient runtime representation, the data type permits static type checking using DTD schemas as types. 
By specifying schemes for the input and output of a program, our analysis algorithm will statically verify that valid input data is always transformed into valid output data and that the operations are used consistently.",2004,0, 1011,Efficient monitoring to detect wireless channel failures for MPI programs,"In the last few years the use of wireless technology has increased by leaps and bounds and as a result powerful portable computers with wireless cards are viable nodes in parallel distributed computing. In this scenario it is natural to consider the possibility of frequent failures in the wireless channel. In MPI programs, such wireless network behavior is reflected as communication failure. Although the MPI standard does not handle failures, there are some projects that address this issue. To the best of our knowledge there is no previous work that presents a practical solution for fault-handling in MPI programs that run on wireless environments. We present a mechanism at the application level, that combined with wireless network monitoring software detects these failures and warns MPI applications to enable them to take appropriate action.",2004,0, 1012,Stereo analysis by hybrid recursive matching for real-time immersive video conferencing,"Real-time stereo analysis is an important research area in computer vision. In this context, we propose a stereo algorithm for an immersive video-conferencing system by which conferees at different geographical places can meet under similar conditions as in the real world. For this purpose, virtual views of the remote conferees are generated and adapted to the current viewpoint of the local participant. Dense vector fields of high accuracy are required in order to guarantee an adequate quality of the virtual views. Due to the usage of a wide baseline system with strongly convergent camera configurations, the dynamic disparity range is about 150 pixels. Considering computational costs, a full search or even a local search restricted to a small window of a few pixels, as it is implemented in many real-time algorithms, is not suitable for our application because processing on full-resolution video according to CCIR 601 TV standard with 25 frames per second is addressed-the most desirable as a pure software solution running on available processors without any support from dedicated hardware. Therefore, we propose in this paper a new fast algorithm for stereo analysis, which circumvents the window search by using a hybrid recursive matching strategy based on the effective selection of a small number of candidates. However, stereo analysis requires more than a straightforward application of stereo matching. The crucial problem is to produce accurate stereo correspondences in all parts of the image. Especially, errors in occluded regions and homogenous or less structured regions lead to disturbing artifacts in the synthesized virtual views. To cope with this problem, mismatches have to be detected and substituted by a sophisticated interpolation and extrapolation scheme.",2004,0, 1013,Analyzing software measurement data with clustering techniques,"For software quality estimation, software development practitioners typically construct quality-classification or fault prediction models using software metrics and fault data from a previous system release or a similar software project. Engineers then use these models to predict the fault proneness of software modules in development. 
Software quality estimation using supervised-learning approaches is difficult without software fault measurement data from similar projects or earlier system releases. Cluster analysis with expert input is a viable unsupervised-learning solution for predicting software modules' fault proneness and potential noisy modules. Data analysts and software engineering experts can collaborate more closely to construct and collect more informative software metrics.",2004,0, 1014,Design centering using an approximation to the constraint region,"The paper discusses the applicability of the piecewise-ellipsoidal approximation (PEA) to the acceptability region for solution of various design problems. The PEA technique, originally developed and tested for linear discrete circuits described in the frequency domain, is briefly reviewed. It is shown that PEA is a generic mathematical method and its applicability is extended to linear and nonlinear systems (not necessary electrical) described in time or frequency domains. The architecture of a software implementing the technique is introduced and approximations to the acceptability regions for the given design specifications for integrated circuits (CMOS amplifier, clock driver) and multidomain systems (servomechanism) are constructed and their accuracy checked. Then, some standard optimal-design algorithms (i.e., worst case parametric yield maximization, yield versus cost optimization) are redesigned to exploit the PEA properties (e.g., local convexity/concavity) and make them more effective. The algorithms are confronted with design problems, and quality of the resulting designs is assessed.",2004,0, 1015,Towards dependable Web services,"Web services are the key technology for implementing distributed enterprise level applications such as B2B and grid computing. An important goal is to provide dependable quality guarantees for client-server interactions. Therefore, service level management (SLM) is gaining more and more significance for clients and providers of Web services. The first step to control service level agreements is a proper instrumentation of the application code in order to monitor the service performance. However, manual instrumentation of Web services is very costly and error-prone and thus not very efficient. Our goal was to develop a systematic and automated, tool-supported approach for Web services instrumentation. We present a dual approach for efficiently instrumenting Web services. It consists of instrumenting the frontend Web services platform as well as the backend services. Although the instrumentation of the Web services platform necessarily is platform-specific, we have found a general, reusable approach. On the backend-side aspect-oriented programming techniques are successfully applied to instrument backend services. We present experimental studies of performance instrumentation using the application response measurement (ARM) API and evaluate the efficiency of the monitoring enhancements. Our results point the way to systematically gain better insights into the behaviour of Web services and thus how to build more dependable Web-based applications.",2004,0, 1016,Safety testing of safety critical software based on critical mission duration,"To assess the safety of software based safety critical systems, we firstly analyzed the differences between reliability and safety, then, introduced a safety model based on three-state Markov model and some safety-related metrics. 
For safety critical software it is common to demand that all known faults are removed. Thus an operational test for safety critical software takes the form of a specified number of test cases (or a specified critical mission duration) that must be executed unsafe-failure-free. When the previous test has been terminated early as a result of an unsafe failure, it has been proposed that the further test needs to be more stringent (i.e. the number of tests that must be executed unsafe-failure-free should increase). In order to solve the problem, a safety testing method based on critical mission duration and Bayesian testing stopping rules is proposed.",2004,0, 1017,Protecting wavelet lifting transforms,"Wavelet transforms are central to many applications in image processing and data compression. They have banks of multirate filters that are difficult to protect from computer-induced numerical errors. An efficient algorithm-based fault tolerance approach is proposed for detecting arithmetic errors in the output data. Concurrent weighted parity values are designed to detect the effects of a single numerical error within the transform structure. The parity calculations use weighted sums of data, where the input parity weighting is related to the weighting used on the output data. Each parity computation is properly viewed as an inner product between weighting values and the data, motivating the use of dual space functionals related to the error gain matrices that describe error propagations to the output. The parity weighting values are defined by a combination of dual space functionals. An iterative procedure for evaluating the design of the parity weights has been incorporated in Matlab code.",2004,0, 1018,Dependability analysis of a class of probabilistic Petri nets,"Verification of various properties associated with concurrent/distributed systems is critical in the process of designing and analyzing dependable systems. While techniques for the automatic verification of finite-state systems are relatively well studied, one of the main challenges in the domain of verification is concerned with the development of new techniques capable of coping with problems beyond the finite state framework. We investigate a number of problems closely related to dependability analysis in the context of probabilistic infinite-state systems modelled by probabilistic conflict-free Petri nets. Using a valuation method, we are able to demonstrate effective procedures for solving the termination with probability 1, the self-stabilization with probability 1, and the controllability with probability 1 problems in a unified framework.",2004,0, 1019,Periodic partial validation: cost-effective source code validation process in cross-platform software development environment,"Enterprise software development typically involves cooperation among multiple entities. In a cross-platform software development environment, developers can categorize the source code of products into platform specific and platform generic components, so that common features can be deployed seamlessly across platforms. As the complexity of component and source code inter-dependency increases, build breakages occur more frequently, and the lack of an efficient detection mechanism often results in slow response with higher costs. We present a successful cost-effective method to automatically detect and identify such breakages.
We deployed a centralized code validation and policing tool, and the results prove its effectiveness as an important quality assurance component in the software development process.",2004,0, 1020,SRAT-distribution voltage sags and reliability assessment tool,"Interruptions to supply and sags of distribution system voltage are the main aspects causing customer complaints. There is a need for analysis of supply reliability and voltage sag to relate system performance with network structure and equipment design parameters. This analysis can also give prediction of voltage dips, as well as relating traditional reliability and momentary outage measures to the properties of protection systems and to network impedances. Existing reliability analysis software often requires substantial training, lacks automated facilities, and suffers from data availability. Thus it requires time-consuming manual intervention for the study of large networks. A user-friendly sag and reliability assessment tool (SRAT) has been developed based on existing impedance data, protection characteristics, and a model of failure probability. The new features included in SRAT are a) efficient reliability and sag assessments for a radial network with limited loops, b) reliability evaluation associated with realistic protection and restoration schemes, c) inclusion of momentary outages in the same model as permanent outage evaluation, d) evaluation of the sag transfer through meshed subtransmission network, and e) simplified probability distribution model determined from available fault records. Examples of the application of the tools to an Australian distribution network are used to illustrate the application of this model.",2004,0, 1021,Automated design flaw correction in object-oriented systems,"Software inevitably changes. As a consequence, we observe the phenomenon referred to as ""software entropy"" or ""software decay"": the software design continually degrades, making maintenance and functional extensions overly costly if not impossible. There exist a number of approaches to identify design flaws (problem detection) and to remedy them (refactoring). There is, however, a conceptual gap between these two stages: there is no appropriate support for the automated mapping of design flaws to possible solutions. Here we propose an integrated, quality-driven and tool-supported methodology to support object-oriented software evolution. Our approach is based on the novel concept of ""correction strategies"". Correction strategies serve as reference descriptions that enable a human-assisted tool to plan and perform all necessary steps for the safe removal of detected design flaws, with special concern towards the targeted quality goals of the restructuring process. We briefly sketch our tool chain and illustrate our approach with the help of a medium-sized real-world case-study.",2004,0, 1022,The process of and the lessons learned from performance tuning of a product family software architecture for mobile phones,"Performance is an important nonfunctional quality attribute of a software system but is not always considered when software is designed. Furthermore, software evolves, and changes can negatively affect the performance. New requirements could introduce performance problems and the need for a different architecture design.
Even if the architecture has been designed to be easy to extend and flexible enough to be modified to perform its function, a software component designed to be too general and flexible can slow down the execution of the application. Performance tuning is a way to assess the characteristics of existing software and highlight design flaws or inefficiencies. Periodic performance tuning inspections and architecture assessments can help to discover potential bottlenecks before it is too late, especially when changes and requirements are added to the architecture design. In this paper a performance tuning experience of one Nokia product family architecture will be described. Assessing a product family architecture means also taking into account the performance of the entire line of products, and optimizations must include, or at least not penalize, its members.",2004,0, 1023,Towards the definition of a maintainability model for Web applications,"The growing diffusion of Web-based services in many and different business domains has triggered the need for new Web applications (WAs). The pressing market demand imposes a very short time for the development of new WAs, and frequent modifications for existing ones. Well-defined software processes and methodologies are rarely adopted both in the development and maintenance phases. As a consequence, WAs' quality usually degrades in terms of architecture, documentation, and maintainability. Major concerns regard the difficulties in estimating costs of maintenance interventions. Thus, a strong need for methods and models to assess the maintainability of existing WAs is growing more and more. In this paper we introduce a first proposal for a WA maintainability model; the model considers those peculiarities that make a WA different from a traditional software system, and a set of metrics allowing an estimate of the maintainability is identified. Results from some initial case studies to verify the effectiveness of the proposed model are presented in the paper.",2004,0, 1024,Enhancing real-time CORBA via real-time Java features,"End-to-end middleware predictability is essential to support quality of service (QoS) capabilities needed by distributed real-time and embedded (DRE) applications. Real-time CORBA is a middleware standard that allows DRE applications to allocate, schedule, and control the QoS of CPU, memory, and networking resources. Existing real-time CORBA solutions are implemented in C++, which is generally more complicated and error-prone to program than Java. The real-time specification for Java (RTSJ) provides extensions that enable Java to be used for developing DRE systems. Real-time CORBA does not currently leverage key RTSJ features, such as scoped memory and real-time threads. Thus, integration of real-time CORBA and RTSJ is essential to ensure the predictability required for Java-based DRE applications. We provide the following contributions to the study of middleware for DRE applications. First, we analyze the architecture of ZEN, our implementation of real-time CORBA, identifying sources for the application of RTSJ features. Second, we describe how RTSJ features, such as scoped memory and real-time threads, can be associated with key ORB components to enhance the predictability of DRE applications using real-time CORBA and the RTSJ. Third, we perform preliminary qualitative and quantitative analysis of predictability enhancements arising from our application of RTSJ features.
Our results show that use of RTSJ features can considerably improve the predictability of DRE applications written using Real-time CORBA and real-time Java.",2004,0, 1025,Fault-tolerant data delivery for multicast overlay networks,"Overlay networks represent an emerging technology for rapid deployment of novel network services and applications. However, since public overlay networks are built out of loosely coupled end-hosts, individual nodes are less trustworthy than Internet routers in carrying out the data forwarding function. Here we describe a set of mechanisms designed to detect and repair errors in the data stream. Utilizing the highly redundant connectivity in overlay networks, our design splits each data stream to multiple sub-streams which are delivered over disjoint paths. Each sub-stream carries additional information that enables receivers to detect damaged or lost packets. Furthermore, each node can verify the validity of data by periodically exchanging Bloom filters, the digests of recently received packets, with other nodes in the overlay. We have evaluated our design through both simulations and experiments over a network testbed. The results show that most nodes can effectively detect corrupted data streams even in the presence of multiple tampering nodes.",2004,0, 1026,Knowledge-centric and language independent framework for safety analysis tools,"This paper presents a knowledge-centric and language independent framework and its application to develop safety analysis tools for avionics systems. A knowledge-centric approach is important to address domain-specific needs, with respect to the types of problems the tools detect and the strategies used to analyze and adapt the code. The knowledge is captured by formally specified patterns used to detect a variety of problems, ranging from simple syntactic issues to difficult semantic problems requiring global analysis. Patterns can also be used to describe transformations of the software, used to rectify problems detected through software inspection, and to support interactive inspection and adaptation when full automation is impractical. This paper describes the Knowledge Centric Software (KCS) framework. It focuses on two key aspects: an eXtensible Common Intermediate Language (XCIL) for language independent analysis, and an eXtensible Pattern Specification Language (XPSL) for representing domain-specific knowledge.",2004,0, 1027,Reducing overfitting in genetic programming models for software quality classification,"A high-assurance system is largely dependent on the quality of its underlying software. Software quality models can provide timely estimations of software quality, allowing the detection and correction of faults prior to operations. A software metrics-based quality prediction model may depict overfitting, which occurs when a prediction model has good accuracy on the training data but relatively poor accuracy on the test data. We present an approach to address the overfitting problem in the context of software quality classification models based on genetic programming (GP). The problem has not been addressed in depth for GP-based models. The presence of overfitting in a software quality classification model affects its practical usefulness, because management is interested in good performance of the model when applied to unseen software modules, i.e., generalization performance. 
In the process of building GP-based software quality classification models for a high-assurance telecommunications system, we observed that the GP models were prone to overfitting. We utilize a random sampling technique to reduce overfitting in our GP models. The approach has been found by many researchers to be an effective method for reducing the time of a GP run. However, in our study we utilize random sampling to reduce overfitting with the aim of improving the generalization capability of our GP models.",2004,0, 1028,An approach for designing and assessing detectors for dependable component-based systems,"In this paper, we present an approach that helps in the design and assessment of detectors. A detector is a program component that asserts the validity of a predicate in a given program state. We first develop a theory of error detection, and identify two main properties of detectors, namely completeness and accuracy. Given the complexity of designing efficient detectors, we introduce two metrics, namely completeness (C) and inaccuracy (I), that capture the operational effectiveness of detector operations, and each metric captures one efficiency aspect of the detector. Subsequently, we present an approach for experimentally evaluating these metrics, which is based on fault-injection. The metrics developed in our approach also allow a system designer to perform a cost-benefit analysis for resource allocation when designing efficient detectors for fault-tolerant systems. Our approach is suited to the design of reliable component-based systems.",2004,0, 1029,How good is your blind spot sampling policy,"Assessing software costs money and better assessment costs exponentially more money. Given finite budgets, assessment resources are typically skewed towards areas that are believed to be mission critical. This leaves blind spots: portions of the system that may contain defects which may be missed. Therefore, in addition to rigorously assessing mission critical areas, a parallel activity should sample the blind spots. This paper assesses defect detectors based on static code measures as a blind spot sampling method. In contrast to previous results, we find that such defect detectors yield results that are stable across many applications. Further, these detectors are inexpensive to use and can be tuned to the specifics of the current business situations.",2004,0, 1030,Assessing reliability risk using fault correction profiles,"Building on the concept of the fault correction profile - a set of functions that predict fault correction events as a function of failure detection events - introduced in previous research, we define and apply reliability risk metrics that are derived from the fault correction profile. These metrics assess the threat to reliability of an unstable fault correction process. The fault correction profile identifies the need for process improvements and provides information for developing fault correction strategies. Applying these metrics to the NASA Goddard Space Flight Center fault correction process and its data, we demonstrate that reliability risk can be measured and used to identify the need for process improvement.",2004,0, 1031,Unsupervised learning for expert-based software quality estimation,"Current software quality estimation models often involve using supervised learning methods to train a software quality classifier or a software fault prediction model.
In such models, the dependent variable is a software quality measurement indicating the quality of a software module by either a risk-based class membership (e.g., whether it is fault-prone or not fault-prone) or the number of faults. In reality, such a measurement may be inaccurate, or even unavailable. In such situations, this paper advocates the use of unsupervised learning (i.e., clustering) techniques to build a software quality estimation system, with the help of a software engineering human expert. The system first clusters hundreds of software modules into a small number of coherent groups and presents the representative of each group to a software quality expert, who labels each cluster as either fault-prone or not fault-prone based on his domain knowledge as well as some data statistics (without any knowledge of the dependent variable, i.e., the software quality measurement). Our preliminary empirical results show promising potentials of this methodology in both predicting software quality and detecting potential noise in a software measurement and quality dataset.",2004,0, 1032,Automated detection of injected faults in a differential equation solver,"Analysis of logical relationships between inputs and outputs of a computational system can significantly reduce the test execution effort via minimizing the number of required test cases. Unfortunately, the available specification documents are often insufficient to build a complete and reliable model of the tested system. In this paper, we demonstrate the use of a data mining method, called Info-Fuzzy Network (IFN), which can automatically induce logical dependencies from execution data of a stable software version, construct a set of non-redundant test cases, and identify faulty outcomes in new, potentially faulty releases of the same system. The proposed approach is applied to the Unstructured Mesh Finite Element Solver (UMFES) which is a general finite element program for solving 2D elliptic partial differential equations. Experimental results demonstrate the capability of the IFN-based testing methodology to detect several kinds of faults injected in the code of this sophisticated application.",2004,0, 1033,Adding assurance to automatically generated code,"Code to estimate position and attitude of a spacecraft or aircraft belongs to the most safety-critical parts of flight software. The complex underlying mathematics and abundance of design details make it error-prone and reliable implementations costly. AutoFilter is a program synthesis tool for the automatic generation of state estimation code from compact specifications. It can automatically produce additional safety certificates which formally guarantee that each generated program individually satisfies a set of important safety policies. These safety policies (e.g., array-bounds, variable initialization) form a core of properties which are essential for high-assurance software. Here we describe the AutoFilter system and its certificate generator and compare our approach to the static analysis tool PolySpace.",2004,0, 1034,A framework of software rejuvenation for survivability,"We propose a novel approach of the security issue to survivability. The main objectives are to detect the attacks in real time, to characterize the attacks, and to survive in face of the attacks. 
To counteract the attacks' attempts, we perform software rejuvenation methods (SWRMS) such as killing the intruders' processes in their tracks, halting abuse before it happens, shutting down unauthorized connections, and responding and restarting in real time. These measures will frustrate and deter the attacks, as the attackers cannot make progress. This is a way of achieving survivability by maximizing the deterrence against the attacks in the target environment. We address a framework to model and analyze the critical intrusion tolerance problems ahead of intrusion detection, and we present a set of innovative models to solve the security aging problems.",2004,0, 1035,Open design of networked power quality monitoring systems,Permanent continuous power quality monitoring is beginning to be recognized as an important aid for managing power quality. Preventive maintenance can only be initiated if such monitoring is available to detect the minor disturbances that may precede major disruptions. This paper establishes the need to encourage interoperability between power quality instruments from different vendors. It discusses the frequent problem of incompatibility between equipment that results from the inherent inflexibilities in existing designs. A new approach has been proposed to enhance interoperability through the use of open systems in their design. It is demonstrated that it is possible to achieve such open design using existing software and networking technologies. The benefits and disadvantages to both the end-users and the equipment manufacturers are also discussed.,2004,0, 1036,Reliability and robustness assessment of diagnostic systems from warranty data,"Diagnostic systems are software-intensive built-in-test systems, which detect, isolate and indicate the failures of prime systems. The use of diagnostic systems reduces the losses due to the failures of prime systems and facilitates the subsequent correct repairs. Therefore, they have found extensive applications in industry. Without loss of generality, this paper utilizes the on-board diagnostic systems of automobiles as an illustrative example. A failed diagnostic system generates α or β errors. An α error incurs unnecessary warranty costs to manufacturers, while a β error causes potential losses to customers. Therefore, the reliability and robustness of diagnostic systems are important to both manufacturers and customers. This paper presents a method for assessing the reliability and robustness of the diagnostic systems by using warranty data. We present the definitions of robustness and reliability of the diagnostic systems, and the formulae for estimating α, β, and reliability. To utilize warranty data for assessment, we describe the two-dimensional (time-in-service and mileage) warranty censoring mechanism, model the reliability function of the prime systems, and devise warranty data mining strategies. The impact of α error on warranty cost is evaluated. Fault tree analyses for α and β errors are performed to identify the ways for reliability and robustness improvement. The method is applied to assess the reliability and robustness of an automobile on-board diagnostic system.",2004,0, 1037,A design tool for large scale fault-tolerant software systems,"In order to assist software designers in the application of fault-tolerance techniques to large scale software systems, a computer-aided software design tool has been proposed and implemented that assesses the criticality of the software modules contained in the system.
This information assists designers in identifying weaknesses in large systems that can lead to system failures. Through analysis and modeling techniques based in graph theory, modules are assessed and rated as to the criticality of their position in the software system. Graphical representation at two levels facilitates the use of cut set analysis, which is our main focus. While the task of finding all cut sets in any graph is NP-complete, the tool intelligently applies cut set analysis by limiting the problem to provide only the information needed for meaningful analysis. In this paper, we examine the methodology and algorithms used in the implementation of this tool and consider future refinements. Although further testing is needed to assess performance on increasingly complex systems, preliminary results look promising. Given the growing demand for reliable software and the complexities involved in the design of these systems, further research in this area is indicated.",2004,0, 1038,Extended fault modeling used in the space shuttle PRA,"A probabilistic risk assessment (PRA) has been completed for the space shuttle with NASA sponsorship and involvement. This current space shuttle PRA is an advancement over past PRAs conducted for the space shuttle in the technical approaches utilized and in the direct involvement of the NASA centers and prime contractors. One of the technical advancements is the extended fault modeling techniques used. A significant portion of the data collected by NASA for the space shuttle consists of faults, which are not yet failures but have the potential of becoming failures if not corrected. This fault data consists of leaks, cracks, material anomalies, and debonding faults. Detailed, quantitative fault models were developed for the space shuttle PRA which involved assessing the severity of the fault, detection effectiveness, recurrence control effectiveness, and mission-initiation potential. Each of these attributes was transformed into a quantitative weight to provide a systematic estimate of the probability of the fault becoming a failure in a mission. Using the methodology developed, mission failure probabilities were estimated from collected fault data. The methodology is an application of counter-factual theory and defect modeling which produces consistent estimates of failure rates from fault rates. Software was developed to analyze all the relevant fault data collected for given types of faults in given systems. The software allowed the PRA to be linked to NASA's fault databases. This also allows the PRA to be updated as new fault data is collected. This fault modeling and its implementation with FRAS was an important part of the space shuttle PRA.",2004,0, 1039,The challenge of space nuclear propulsion and power systems reliability,"In October of 2002, The Power and Propulsion Office and The Risk Management Office of NASA Glenn Research Center in Cleveland, Ohio began developing the reliability, availability, and maintainability (RAM) engineering approach for the Space Nuclear Propulsion and Power Systems Project. The objective of the Space Nuclear Power and Propulsion Project is to provide safe and reliable propulsion and power systems for planetary missions. The safety of the crew, ground personnel, and the public has to be the highest priority of the RAM engineering approach for nuclear powered space systems. The project will require a top level reliability goal for substantial mission success in the range from 0.95 to 0.98. 
In addition, the probability of safe operation, without loss of crew or vehicle or danger to the public, cannot be less than 0.9999. The achievement of these operational goals will require the combined application of many RAM engineering techniques. These include: advanced reliability, availability, and maintainability analysis, probabilistic risk assessment that includes hardware, software, and human-induced faults, accelerated life testing, parts stress analysis, and selective end-to-end sub-system testing. Design strategy must involve the selection of parts and materials specifically to withstand the stresses of prolonged operation in the space and planetary environments with a wide design margin. Interplanetary distances and resulting signal time delay drive the need for autonomous control of major system functions including redundancy management.",2004,0, 1040,ISP-operated protection of home networks with FIDRAN,"In order to fight against the increasing number of network security incidents due to mal-protected home networks permanently connected to the Internet via DSL, TV cable or similar technologies, we propose that Internet service providers (ISP) operate and manage intrusion prevention systems (IPS) which are to a large extent executed on the consumer's gateway to the Internet (e.g., DSL router). The paper analyses the requirements of ISP-operated intrusion prevention systems and presents our approach for an IPS that runs on top of an active networking environment and is automatically configured by a vulnerability scanner. We call the system FIDRAN (Flexible Intrusion Detection and Response framework for Active Networks). The system autonomously analyses the home network and correspondingly configures the IPS. Furthermore, our system detects and adjusts itself to changes in the home network (new service, new host, etc.). First performance comparisons show that our approach - while offering more flexibility and being able to support continuous updating by active networking principles - competes well with the performance of conventional intrusion prevention systems like Snort-Inline.",2004,0, 1041,Processing of abdominal ultrasound images using seed based region growing method,"There are many diseases relating to the abdomen. Patients suffering from abdominal diseases may experience chronic or acute abdominal pain or a suspected abdominal mass. The abdomen has two major parts: the liver and the gallbladder. Gallbladder and liver diseases are very common not only in Malaysia but also all over the globe. Hundreds of patients die from such diseases every year. Doctors face difficulty in diagnosing the types of diseases, and sometimes unnecessary measures like surgery have to be performed. An abdominal ultrasound image is a useful way of examining internal organs, including the liver, gallbladder, spleen and kidneys. Ultrasound is safe, radiation free, faster and cheaper. Ultrasound images themselves will not give a clear view of an affected region. In general, raw ultrasound images contain a lot of embedded noise. So digital processing can improve the quality of raw ultrasound images. In this work a software tool called ultrasound processing tool (UPT) has been developed by employing the histogram equalization and region growing approach to give a clearer view of the affected regions in the abdomen. The system was tested on more than 20 cases. Here, the results of two cases are presented, one on a gallbladder mass and another on liver cancer.
The radiologists have reported that original ultrasound images were not at all clear enough to detect the shape and area of the affected regions and the ultrasound processing tool has provided them clear and better view of the internal details of the diseases.",2004,0, 1042,Tolerating late memory traps in dynamically scheduled processors,"In the past few years, exception support for memory functions such as virtual memory, informing memory operations, software assist for shared memory protocols, or interactions with processors in memory has been advocated in various research papers. These memory traps may occur on a miss in the cache hierarchy or on a local or remote memory access. However, contemporary, dynamically scheduled processors only support memory exceptions detected in the TLB associated with the first-level cache. They do not support memory exceptions taken deep in the memory hierarchy. In this case, memory traps may be late, in the sense that the exception condition may still be undecided when a long-latency memory instruction reaches the retirement stage. In this paper we evaluate through simulation the overhead of memory traps in dynamically scheduled processors, focusing on the added overhead incurred when a memory trap is late. We also propose some simple mechanisms to reduce this added overhead while preserving the memory consistency model. With more aggressive memory access mechanisms in the processor we observe that the overhead of all memory traps - either early or late - is increased while the lateness of a trap becomes largely tolerated so that the performance gap between early and late memory traps is greatly reduced. Additionally, because of caching effects in the memory hierarchy, the frequency of memory traps usually decreases as they are taken deeper in the memory hierarchy and their overall impact on execution times becomes negligible. We conclude that support for memory traps taken throughout the memory hierarchy could be added to dynamically scheduled processors at low hardware cost and little performance degradation.",2004,0, 1043,Optimizing testing efficiency with error-prone path identification and genetic algorithms,"We present a method for optimizing software testing efficiency by identifying the most error prone path clusters in a program. We do this by developing variable length genetic algorithms that optimize and select the software path clusters which are weighted with sources of error indexes. Although various methods have been applied to detecting and reducing errors in a whole system, there is little research into partitioning a system into smaller error prone domains for testing. Exhaustive software testing is rarely possible because it becomes intractable for even medium sized software. Typically only parts of a program can be tested, but these parts are not necessarily the most error prone. Therefore, we are developing a more selective approach to testing by focusing on those parts that are most likely to contain faults, so that the most error prone paths can be tested first. By identifying the most error prone paths, the testing efficiency can be increased.",2004,0, 1044,Enforcing system-wide properties,"Policy enforcement is a mechanism for ensuring that system components follow certain programming practices, comply with specified rules, and meet certain assumptions. Unfortunately, the most common mechanisms used today for policy enforcement are documentation, training, and code reviews. 
The fundamental problem is that these mechanisms are expensive, time-consuming, and still error-prone. To cope with this problem, we present IRC (Implementation Restriction Checker), an extensible framework for automatically enforcing system-wide policies or contracts. The framework is built on top of a platform for aspect-oriented programming at the level of Java byte-code instructions and is available as an Eclipse plug-in as well as a standalone application. It includes a set of directly usable checkers and can be easily extended to implement new ones.",2004,0, 1045,Teaching the process of code review,"Behavioural theory predicts that interventions that improve individual reviewers' expertise also improve the performance of the group in Software Development Technical Reviews (SDTR) [C. Sauer et al. (2000)]. This includes improvements both in individuals' expertise in the review process and in their ability to find defects and distinguish true defects from false positives. We present findings from university training in these skills using authentic problems. The first year the course was run, it was designed around actual code review sessions; the second year, this was expanded to enable students to develop and trial their own generic process for document reviews. This report considers the values and shortcomings of the teaching program from an extensive analysis of the defect detection in the first year, when students were involved in a review process that was set up for them, and student feedback from the second year, when students developed and analysed their own process.",2004,0, 1046,A framework for classifying and comparing software architecture evaluation methods,"Software architecture evaluation has been proposed as a means to achieve quality attributes such as maintainability and reliability in a system. The objective of the evaluation is to assess whether or not the architecture leads to the desired quality attributes. Recently, there have been a number of evaluation methods proposed. There is, however, little consensus on the technical and nontechnical issues that a method should comprehensively address and which of the existing methods is most suitable for a particular issue. We present a set of commonly known but informally described features of an evaluation method and organize them within a framework that should offer guidance on the choice of the most appropriate method for an evaluation exercise. We use this framework to characterise eight SA evaluation methods.",2004,0, 1047,The framework of a web-enabled defect tracking system,"This paper presents an evaluation and investigation of issues to implement a defect management system: a tool used to understand and predict software product quality and software process efficiency. The scope is to simplify the process of defect tracking through a web-enabled application. The system will enable project management, development, quality assurance and software engineers to track and manage problems, specifically defects, in the context of a software project. A collaborative function is essential as this will enable users to communicate in real-time mode. This system makes key defect tracking coordination and information available regardless of geographical and time factors.",2004,0, 1048,A model of scalable distributed network performance management,"Quality of service in IP networks necessitates the use of performance management.
As the Internet continues to grow exponentially, a management system should be scalable in terms of network size, speed and number of customers subscribed to value-added services. This article proposes a flexible, scalable, self-adapting model for managing large-scale distributed networks. In this model, a Web services framework is used to build the software architecture and XML is used to build the data exchange interface. The policy-based hierarchical event-processing mechanism presented in this paper can efficiently balance loads and improve the flexibility of the system. The prediction algorithm adopted by this model can predict the network performance more effectively and accurately.",2004,0, 1049,Hardware - software structure for on-line power quality assessment: part I,"The main objective of the proposed work is to introduce a new concept of advanced power quality assessment. The introduced system is implemented using applications of a set of powerful software algorithms and a digital signal processor based hardware data acquisition system. The suggested scheme is mainly to construct a system for real-time detection and identification of different types of power quality disturbances that produce a sudden change in the power quality levels. Moreover, a new mitigation technique through generating feedback correction signals for disturbance compensation is addressed. The performance of the suggested system is tested and verified through real test examples. The obtained results reveal that the introduced system detects most of the power quality disturbance events quickly and accurately and introduces new indicative factors estimating the performance of any supply system subjected to a set number of disturbance events.",2004,0, 1050,Assessing the robustness of self-managing computer systems under highly variable workloads,"Computer systems are becoming extremely complex due to the large number and heterogeneity of their hardware and software components, the multilayered architecture used in their design, and the unpredictable nature of their workloads. Thus, performance management becomes difficult and expensive when carried out by human beings. An approach, called self-managing computer systems, is to build into the systems the mechanisms required to self-adjust configuration parameters so that the quality of service requirements of the system are constantly met. In this paper, we evaluate the robustness of such methods when the workload exhibits high variability in terms of the interarrival time and service times of requests. Another contribution of this paper is the assessment of the use of workload forecasting techniques in the design of QoS controllers.",2004,0, 1051,Autonomic pervasive computing based on planning,"Pervasive computing envisions a world with users interacting naturally with device-rich environments to perform various kinds of tasks. These environments must, thus, be self-managing and autonomic systems, receiving only high-level guidance from users. However, these environments are also highly dynamic: the context and resources available in these environments can change rapidly. They are also prone to failures - one or more entities can fail due to a variety of reasons. The dynamic and fault-prone nature of these environments poses major challenges to their autonomic operation. In this paper we present a paradigm for the operation of pervasive computing environments that is based on goal specification and STRIPS-based planning.
Users as well as application developers can describe tasks to be performed in terms of abstract goals and a planning framework decides how these goals are to be achieved. This paradigm helps improve the fault-tolerance, adaptability, ease of programming and usability of these environments. We have developed and used a prototype planning system within our pervasive computing system, Gaia.",2004,0, 1052,On identifying stable ways to configure systems,We consider the often error-prone process of initially building and/or reconfiguring a computer system. We formulate an optimization framework for capturing certain aspects of this system (re)configuration process. We describe offline and online algorithms that could aid operators in making decisions for how best to take actions on their computers so as to maintain the health of their systems.,2004,0, 1053,Finding satisfying global states: all for one and one for all,"Summary form only given. Given a distributed computation and a global predicate, predicate detection involves determining whether there exists at least one consistent cut (or global state) of the computation that satisfies the predicate. On the other hand, computation slicing is concerned with computing the smallest sub-computation - with the least number of consistent cuts - that contains all consistent cuts of the computation satisfying the predicate. We investigate the relationship between predicate detection and computation slicing and show that the two problems are equivalent. Specifically, given an algorithm to detect a predicate b in a computation C, we derive an algorithm to compute the slice of C with respect to b. The time-complexity of the (derived) slicing algorithm is O(n|E|) times the time-complexity of the detection algorithm, where n is the number of processes and E is the set of events. We discuss how the ""equivalence"" result can be utilized to derive a faster algorithm for solving the general predicate detection problem. Slicing algorithms described in our earlier papers are all off-line in nature. We also give an online algorithm for computing the slice for a predicate that can be detected efficiently. The amortized time-complexity of the algorithm is O(n(c + n)) times the time-complexity of the detection algorithm, where c is the average concurrency in the computation.",2004,0, 1054,On static WCET analysis vs. run-time monitoring of execution time,"Summary form only given. Dynamic, distributed, real-time control systems control a widely varying environment, are made up of application programs that are dispersed among loosely-coupled computers, and must control the environment in a timely manner. The environment determines the number of threats; thus, it is difficult to determine the range of the workload at design time using static worst-case execution time analysis. While a system is lightly loaded, it is wasteful to reserve resources for the heaviest load. Likewise, it is also possible that the load will increase higher than the assumed worst case. A system that has a preset number of resources reserved to it is no longer guaranteed to meet its deadlines under such conditions.
In order to ensure that such applications meet their real-time requirements, a mechanism is required to monitor and maintain the real-time quality of service (QoS): a QoS manager, which monitors the processing timing (latency) and resource usage of a distributed real-time system, forecasts, detects and diagnoses violations of the timing constraints, and requests more or fewer resources to maintain the desired timing characteristics. To enable better control over the system, the goals are as follows: 1) Gather detailed information about antiair warfare and air-traffic control application domains and employ it in the creation of a distributed real-time sensing and visualization testbed for air-traffic control. 2) Identify mathematical relationships among independent and dependent variables, such as performance and fault tolerance vs. resource usage, and security vs. performance. 3) Uncover new techniques for ensuring performance, fault tolerance, and security by optimizing the variables under the constraints of resource availability and user requirements.",2004,0, 1055,Implementing a reconfigurable atomic memory service for dynamic networks,"Summary form only given. Transforming abstract algorithm specifications into executable code is an error-prone process in the absence of sophisticated compilers that can automatically translate such specifications into the target distributed system. We present a framework that was developed for translating algorithms specified as Input/Output Automata (IOA) to distributed programs. The framework consists of a methodology that guides the software development process and a core set of functions needed in target implementations that reduce unnecessary software development. The systems developed using this methodology preserve the modularity of the original specifications, making it easier to track refinements and effect optimizations. As a proof of concept, this work also presents a distributed implementation of a reconfigurable atomic memory service for dynamic networks (RAMBO). This service emulates atomic read/write shared objects in the dynamic setting where processors can arbitrarily crash, or join and leave the computation. The algorithm tolerates processor crashes and message loss and guarantees atomicity for arbitrary patterns of asynchrony and failure. The algorithm implementing the service is given in terms of IOA. An important consideration in formulating RAMBO was that it could be employed as a building block in real systems. Following a formal presentation of RAMBO algorithm, this work describes an optimized implementation that was developed using the methodology presented here. The system is implemented in Java and runs on a network of workstations. Empirical data illustrates the behavior of the system.",2004,0, 1056,An investigation into the application of different performance prediction techniques to e-Commerce applications,Summary form only given. Predictive performance models of e-Commerce applications allows grid workload managers to provide e-Commerce clients with qualities of service (QoS) whilst making efficient use of resources. We demonstrate the use of two 'coarse-grained' modelling approaches (based on layered queuing modelling and historical performance data analysis) for predicting the performance of dynamic e-Commerce systems on heterogeneous servers. 
Results for a popular e-Commerce benchmark show how request response times and server throughputs can be predicted on servers with heterogeneous CPUs at different background loads. The two approaches are compared and their usefulness to grid workload management is considered.,2004,0, 1057,A heuristic for multi-constrained multicast routing,"In contrast to the situation that the constrained minimum Steiner tree (CMST) problem has attracted much attention in the quality of service (QoS) routing area, little work has been done on multicast routing subject to multiple additive constraints, even though the corresponding applications are obvious. We propose a heuristic, HMCMC, to solve this problem. The basic idea of HMCMC is to construct the multicast tree step by step, which is done essentially based on the latest research results on multi-constrained unicast routing. Computer simulations demonstrate that, if there is one, the proposed heuristic can find a feasible multicast tree with a fairly high probability.",2004,0, 1058,Thinking about thinking aloud: a comparison of two verbal protocols for usability testing,"We report on an exploratory experimental comparison of two different thinking aloud approaches in a usability test that focused on navigation problems in a highly nonstandard Web site. One approach is a rigid application of Ericsson and Simon's (for original paper see Protocol Analysis: Verbal Reports as Data, MIT Press (1993)) procedure. The other is derived from Boren and Ramey's (for original paper see ibid., vol. 43, no. 3, p. 261-278 (2000)) proposal based on speech communication. The latter approach differs from the former in that the experimenter has more room for acknowledging (mm-hmm) contributions from subjects and has the possibility of asking for clarifications and offering encouragement. Comparing the verbal reports obtained with these two methods, we find that the process of thinking aloud while carrying out tasks is not affected by the type of approach that was used. The task performance does differ. More tasks were completed in the B and R condition, and subjects were less lost. Nevertheless, subjects' evaluations of the Web site quality did not differ, nor did the number of different navigation problems that were detected.",2004,0, 1059,Useful cycles in probabilistic roadmap graphs,"Over the last decade, the probabilistic road map method (PRM) has become one of the dominant motion planning techniques. Due to its random nature, the resulting paths tend to be much longer than the optimal path despite the development of numerous smoothing techniques. Also, the path length varies a lot every time the algorithm is executed. We present a new technique that results in higher quality (shorter) paths with much less variation between the executions. The technique is based on adding useful cycles to the roadmap graph.",2004,0, 1060,Semidefinite programming for ad hoc wireless sensor network localization,We describe an SDP relaxation based method for the position estimation problem in wireless sensor networks. The optimization problem is set up so as to minimize the error in sensor positions to fit distance measures. Observable gauges are developed to check the quality of the point estimation of sensors or to detect erroneous sensors. The performance of this technique is highly satisfactory compared to other techniques. Very few anchor nodes are required to accurately estimate the position of all the unknown nodes in a network. 
Also the estimation errors are minimal even when the anchor nodes are not suitably placed within the network or the distance measurements are noisy.,2004,0, 1061,Predicting C++ program quality by using Bayesian belief networks,"There have been many attempts to build models for predicting software quality. Such models are used to measure the quality of software systems. The key variables in these models are either size or complexity metrics. There are, however, serious statistical and theoretical difficulties with these approaches. By using a Bayesian belief network, we can overcome some of the more serious problems by taking into account more quality factors, which have a direct or indirect impact on software quality. In this paper, we have suggested a model for predicting computer program quality by using a Bayesian belief network. We found that the implementation of all quality factors was not feasible. Therefore, we have selected 14 quality factors to be implemented on two C++ programs of average size. The selection criteria were based on the reviewer's opinions. Each node on the given Bayesian belief network represents one quality factor. We have drawn the BBN for the two C++ programs considering 14 nodes. The BBN has been constructed. The model has been executed and the results have been discussed.",2004,0, 1062,A real-time monitoring and diagnosis system for manufacturing automation,"Condition monitoring and fault diagnosis in modern engineering practices is of great practical significance for improving quality and productivity and preventing damage to the machinery. In general, this practice consists of two parts: extracting appropriate features from sensor signals and recognizing possible faulty patterns from the features. In order to cope with the complex manufacturing operations and develop a feasible system for real-time application, we proposed three approaches. By defining the marginal energy, a new feature representation emerged, while by real-time learning algorithms with support vector techniques and hidden Markov model representations, a modular software architecture and a new similarity measure were developed for comparison, monitoring, and diagnosis. A novel intelligent computer-based system has been developed and evaluated in over 30 factories and numerous metal stamping processes as an example of manufacturing operations. The real-time operation of this system demonstrated that the proposed system is able to detect abnormal conditions efficiently and effectively, resulting in a low-cost, effective approach to real-time monitoring in manufacturing. The related technologies have been transferred to industry, presenting a tremendous impact in current automation practice in Asia and the world.",2004,0, 1063,Exact analysis of a class of GI/G/1-type performability models,"We present an exact decomposition algorithm for the analysis of Markov chains with a GI/G/1-type repetitive structure. Such processes exhibit both M/G/1-type & GI/M/1-type patterns, and cannot be solved using existing techniques. Markov chains with a GI/G/1 pattern result when modeling open systems which accept jobs from multiple exogenous sources, and are subject to failures & repairs; a single failure can empty the system of jobs, while a single batch arrival can add many jobs to the system.
Our method provides exact computation of the stationary probabilities, which can then be used to obtain performance measures such as the average queue length or any of its higher moments, as well as the probability of the system being in various failure states, and thus performability measures. We formulate the conditions under which our approach is applicable, and illustrate it via the performability analysis of a parallel computer system.",2004,0, 1064,Analyzing information flow control policies in requirements engineering,"Currently security features are implemented and validated during the last phases of the software development life cycle. This practice results in less secure software systems and a higher cost of fixing software vulnerability defects. To achieve more secure systems, security features must be considered during the early phases of the software development process. This work presents a high-level methodology that analyzes the information flow requirements and ensures the proper enforcement of information flow control policies. The methodology uses requirements specified in the Unified Modeling Language (UML) as its input and a stratified logic programming language as the analysis language. The methodology improves security by detecting unsafe information flows before proceeding to later stages of the life cycle.",2004,0, 1065,The use of unified APC/FD in the control of a metal etch area,"An adaptive neural network-based advanced process control software, the Dynamic Neural ControllerTM (DNC), was employed at National Semiconductor's 200 mm fabrication facility, South Portland, Maine, to enhance the performance of metal etch tools. The installation was performed on 5 identical LAM 9600 TCP Metal etchers running production material. The DNC produced a single predictive model on critical outputs and metrology for each tool based on process variables, maintenance, input metrology and output metrology. Although process metrology is usually measured on only one wafer per lot, the process can be closely monitored on a wafer-by-wafer basis with the DNC models. The DNC was able to provide recommendations for maintenance (replacing components in advance of predicted failure) and process variable adjustments (e.g. gas flow) to maximize tool up time and to reduce scrap. This enabled the equipment engineers to both debug problems more quickly on the tool and to make adjustments to tool parameters before out-of-spec wafers were produced. After a comparison of the performance of all 5 tools for a 2-month period prior to DNC installation vs. a 2-month post-DNC period, we concluded that the software was able to predict when maintenance actions were required, when process changes were required, and when maintenance actions were being taken but were not required. We observed a significant improvement in process Cpks for the metal etchers in this study.",2004,0, 1066,Model-driven reverse engineering,"Reverse engineering is the process of comprehending software and producing a model of it at a high abstraction level, suitable for documentation, maintenance, or reengineering. But from a manager's viewpoint, there are two painful problems: 1) It's difficult or impossible to predict how much time reverse engineering will require. 2) There are no standards to evaluate the quality of the reverse engineering that the maintenance staff performs. Model-driven reverse engineering can overcome these difficulties. A model is a high-level representation of some aspect of a software system.
MDRE uses the features of modeling technology but applies them differently to address the maintenance manager's problems. Our approach to MDRE uses formal specification and automatic code generation to reverse the reverse-engineering process. Models written in a formal specification language called SLANG describe both the application domain and the program being reverse engineered, and interpretations annotate the connections between the two. The ability to generate a similar version of a program gives managers a fixed target for reverse engineering. This, in turn, enables better effort prediction and quality evaluation, reducing development risk.",2004,0, 1067,"Requirements triage: what can we learn from a """"medical"""" approach?","New-product development is commonly risky, judging by the number of high-profile failures that continue to occur-especially in software engineering. We can trace many of these failures back to requirements-related issues. Triage is a technique that the medical profession uses to prioritize treatment to patients on the basis of their symptoms' severity. Trauma triage provides some tantalizing insights into how we might measure risk of failure early, quickly, and accurately. For projects at significant risk, we could activate a """"requirements trauma system"""" to include specialists, processes, and tools designed to correct the issues and improve the probability that the project ends successfully. We explain these techniques and suggest how we can adapt them to help identify and quantify requirements-related risks.",2004,0, 1068,Impact of process variation phenomena on performance and quality assessment,"Summary form only given. Logic product density and performance trends have continued to follow the course predicted by Moore's Law. To support the trends in the future and build logic products approaching one billion or more transistors before the end of the decade, several challenges must be met. These challenges include: 1) maintaining transistor/interconnect feature scaling, 2) the increasing power density dilemma, 3) increasing relative difficulty of 2-D feature resolution and general critical dimension control, 4) identifying cost effective solutions to increasing process and design database complexity, and 5), improving general performance and quality predictability in the face of the growing control, complexity and predictability issues. The trend in transistor scaling can be maintained while addressing the power density issue with new transistor structures, design approaches, and product architectures (e.g. high-k, metal gate, etc.). Items 3 to 5 are the focus of this work and are also strongly inter-related. The general 2-D patterning and resolution control problems will require several solution approaches both through design and technology e.g. reduce design degrees of freedom, use of simpler arrayed structures, improved uniformity, improved tools, etc. The data base complexity/cost problem will require solutions likely to involve use of improved data structure, improved use of hierarchy, and improved software and hardware solutions. Performance assessment, predictability and quality assessment will benefit from solutions to the control and complexity issues noted above. 
In addition, new design techniques/tools as well as improved process characterization models and methods can address the general performance/quality assessment challenge.",2004,0, 1069,ASAAM: aspectual software architecture analysis method,"Software architecture analysis methods aim to predict the quality of a system before it has been developed. In general, the quality of the architecture is validated by analyzing the impact of predefined scenarios on architectural components. Hereby, it is implicitly assumed that an appropriate refactoring of the architecture design can help in coping with critical scenarios and mending the architecture. This paper shows that there are also concerns at the architecture design level which inherently crosscut multiple architectural components, which cannot be localized in one architectural component and which, as such, can not be easily managed by using conventional abstraction mechanisms. We propose the aspectual software architecture analysis method (ASAAM) to explicitly identify and specify these architectural aspects and make them transparent early in the software development life cycle. ASAAM introduces a set of heuristic rules that help to derive architectural aspects and the corresponding tangled architectural components from scenarios. The approach is illustrated for architectural aspect identification in the architecture design of a window management system.",2004,0, 1070,An investigation of the approach to specification-based program review through case studies,"Software review is an effective means to enhance the quality of software systems. However, traditional review methods emphasize the importance of the way to organize reviews and rely on the quality of the reviewers' experience and personal skills. In this paper we propose a new approach to rigorously reviewing programs based on their formal specifications. The fundamental idea of the approach is to use a formal specification as a standard to check whether all the required functions and properties in the specification are correctly implemented by its program. To help investigate the effectiveness and the weakness of the approach, we conduct two case studies of reviewing two program systems that implement the same formal specification of """"A Research Management Policy"""" using different strategies, and present the evaluation of the case studies. The results show that the review approach is effective in detecting faults when the reviewer is different from the programmer, but less effective when the reviewer is the same as the programmer.",2004,0, 1071,Requirements driven software evolution,"Software evolution is an integral part of the software life cycle. Furthermore in the recent years the issue of keeping legacy systems operational in new platforms has become critical and one of the top priorities in IT departments worldwide. The research community and the industry have responded to these challenges by investigating and proposing techniques for analyzing, transforming, integrating, and porting software systems to new platforms, languages, and operating environments. However, measuring and ensuring that compliance of the migrant system with specific target requirements have not been formally and thoroughly addressed. We believe that issues such as the identification, measurement, and evaluation of specific re-engineering and transformation strategies and their impact on the quality of the migrant system pose major challenges in the software re-engineering community. 
Other related problems include the verification, validation, and testing of migrant systems, and the design of techniques for keeping various models (architecture, design, source code) synchronized during evolution. In this working session, we plan to assess the state of the art in these areas, discuss on-going work, and identify further research issues.",2004,0, 1072,An intelligent admission control scheme for next generation wireless systems using distributed genetic algorithms,"A different variety of services requiring different levels of quality of service (QoS) need to be addressed for mobile users of the next generation wireless system (NGWS). An efficient handoff technique with intelligent admission control can accomplish this aim. In this paper, a new, intelligent handoff scheme using distributed genetic algorithms (DGA) is proposed for NGWS. This scheme uses DGA to achieve high network utilization, minimum cost and handoff latency. A performance analysis is provided to assess the efficiency of the proposed DGA scheme. Simulation results show a significant improvement in handoff latencies and costs over traditional genetic algorithms and other admission control schemes.",2004,0, 1073,Quantifying the reliability of proven SPIDER group membership service guarantees,"For safety-critical systems, it is essential to quantify the reliability of the assumptions that underlie proven guarantees. We investigate the reliability of the assumptions of the SPIDER group membership service with respect to transient and permanent faults. Modeling 12,600 possible system configurations, the probability that SPIDER's maximum fault assumption does not hold for an hour mission varies from less likely than 10^-11 to more likely than 10^-3. In most cases examined, a transient fault tolerance strategy was superior to the permanent fault tolerance strategy previously in use for the range of transient fault arrival rates expected in aerospace systems. Reliability of the maximum fault assumption (upon which the proofs are based) differs greatly when subjected to asymmetric, symmetric, and benign faults. This case study demonstrates the benefits of quantifying the reliability of assumptions for proven properties.",2004,0, 1074,Dependable initialization of large-scale distributed software,"Most documented efforts in fault-tolerant computing address the problem of recovering from failures that occur during normal system operation. To bring a system to a point where it can begin performing its duties first requires that the system successfully complete initialization. Large-scale distributed systems may take hours to initialize. For such systems, a key challenge is tolerating failures that occur during initialization, while still completing initialization in a timely manner. In this paper, we present a dependable initialization model that captures the architecture of the system to be initialized, as well as interdependencies among system components. We show that overall system initialization may sometimes complete more quickly if recovery actions are deferred as opposed to commencing recovery actions as soon as a failure is detected. This observation leads us to introduce a recovery decision function that dynamically assesses when to take recovery actions. We then describe a dependable initialization algorithm that combines the dependable initialization model and the recovery decision function for achieving fast initialization.
Experimental results show that our algorithm incurs lower initialization overhead than that of a conventional initialization algorithm. This work is the first effort we are aware of that formally studies the challenges of initializing a distributed system in the presence of failures.",2004,0, 1075,A bi-criteria scheduling heuristic for distributed embedded systems under reliability and real-time constraints,"Multi-criteria scheduling problems, involving optimization of more than one criterion, are subject to a growing interest. In this paper, we present a new bi-criteria scheduling heuristic for scheduling data-flow graphs of operations onto parallel heterogeneous architectures according to two criteria: first the minimization of the schedule length, and second the maximization of the system reliability. Reliability is defined as the probability that none of the system components will fail while processing. The proposed algorithm is a list scheduling heuristics, based on a bi-criteria compromise function that introduces priority between the operations to be scheduled, and that chooses on what subset of processors they should be scheduled. It uses the active replication of operations to improve the reliability. If the system reliability or the schedule length requirements are not met, then a parameter of the compromise function can be changed and the algorithm re-executed. This process is iterated until both requirements are met.",2004,0, 1076,Does your result checker really check?,"A result checker is a program that checks the output of the computation of the observed program for correctness. Introduced originally by Blum, the result-checking paradigm has provided a powerful platform assuring the reliability of software. However, constructing result checkers for most problems requires not only significant domain knowledge but also ingenuity and can be error prone. In this paper we present our experience in validating result checkers using formal methods. We have conducted several case studies in validating result checkers from the commercial LEDA system for combinatorial and geometric computing. In one of our case studies, we detected a logical error in a result checker for a program computing max flow of a graph.",2004,0, 1077,Delivering packets during the routing convergence latency interval through highly connected detours,"Routing protocols present a convergence latency for all routers to update their tables after a fault occurs and the network topology changes. During this time interval, which in the Internet has been shown to be of up to minutes, packets may be lost before reaching their destinations. In order to allow nodes to continue communicating during the convergence latency interval, we propose the use of alternative routes called detours. In this work we introduce new criteria for selecting detours based on network connectivity. Detours are chosen without the knowledge of which node or link is faulty. Highly connected components present a larger number of distinct paths, thus increasing the probability that the detour will work correctly. Experimental results were obtained with simulation on random Internet-like graphs generated with the Waxman method. Results show that the fault coverage obtained through the usage of the best detour is up to 90%. 
When the three best detours are considered, the fault coverage is up to 98%.",2004,0, 1078,Why PCs are fragile and what we can do about it: a study of Windows registry problems,"Software configuration problems are a major source of failures in computer systems. In this paper, we present a new framework for categorizing configuration problems. We apply this categorization to Windows registry-related problems obtained from various internal as well as external sources. Although infrequent, registry-related problems are difficult to diagnose and repair. Consequently they frustrate the users. We classify problems based on their manifestation and the scope of impact to gain useful insights into how problems affect users and why PCs are fragile. We then describe techniques to identify and eliminate such registry failures. We propose health predicate monitoring for detecting known problems, fault injection for improving application robustness, and access protection mechanisms for preventing fragility problems.",2004,0, 1079,Diverse firewall design,"Firewalls are safety-critical systems that secure most private networks. An error in a firewall either leaks secret information from its network or disrupts legitimate communication between its network and the rest of the Internet. How to design a correct firewall is therefore an important issue. In this paper, we propose the method of diverse firewall design, which is inspired by the well-known method of design diversity for building fault-tolerant software. Our method consists of two phases: a design phase and a comparison phase. In the design phase, the same requirement specification of a firewall is given to multiple teams who proceed independently to design different versions of the firewall. In the comparison phase, the resulting multiple versions are compared with each other to find out all the discrepancies between them, then each discrepancy is further investigated and a correction is applied if necessary. The technical challenge in the method of diverse firewall design is how to discover all the discrepancies between two given firewalls. We present a series of three efficient algorithms for solving this problem: (1) a construction algorithm for constructing an equivalent ordered firewall decision diagram from a sequence of rules, (2) a shaping algorithm for transforming two ordered firewall decision diagrams to become semi-isomorphic without changing their semantics, and (3) a comparison algorithm for detecting all the discrepancies between two semi-isomorphic firewall decision diagrams.",2004,0, 1080,On benchmarking the dependability of automotive engine control applications,"The pervasive use of ECUs (electronic control units) in automotive systems motivates the interest of the community in methodologies for quantifying their dependability in a reproducible and cost-effective way. Although the core of modern vehicle engines is managed by the control software embedded in engine ECUs, no practical approach has been proposed so far to characterise the impact of faults on the behaviour of this software. This paper proposes a dependability benchmark for engine control applications. The essential features of such type of applications are first captured in a general model, which is then exploited in order to specify a standard procedure to assess dependability measures. These measures are defined taking into account the expectations of industrials purchasing engine ECUs with integration purposes.
The benchmark also considers the current set of technological limitations that the manufacturing of modern engine ECUs imposes to the experimental process. The approach is exemplified on two engine control applications.",2004,0, 1081,A scalable distributed QoS multicast routing protocol,"Many Internet multicast applications such as teleconferencing and remote diagnosis have quality-of-service (QoS) requirements. It is a challenging task to build QoS constrained multicast trees with high performance, high success ratio, low overhead, and low system requirements. This paper presents a new scalable QoS multicast routing protocol (SoMR) that has very small communication overhead and requires no state outside the multicast tree. SoMR achieves the favorable tradeoff between routing performance and overhead by carefully selecting the network sub-graph in which it conducts the search for a path that can support the QoS requirement, and by auto-tuning the selection according to the current network conditions. Its early-warning mechanism helps to detect and route around the real bottlenecks in the network, which increases the chance of finding feasible paths for additive QoS requirements. SoMR minimizes the system requirements; it relies only on the local state stored at each router. The routing operations are completely decentralized.",2004,0, 1082,Scalable network assessment for IP telephony,Multimedia applications such as IP telephony are among the applications that demand strict quality of service (QoS) guarantees from the underlying data network. At the predeployment stage it is critical to assess whether the network can handle the QoS requirements of IP telephony and fix problems that may prevent a successful deployment. In this paper we describe a technique for efficiently assessing network readiness for IP telephony. Our technique relies on understanding link level QoS behavior in a network from an IP telephony perspective. We use network topology and end-to-end measurements collected from the network in locating the sources of performance problems that may prevent a successful IP telephony deployment. We present an empirical study conducted on a real network spanning three geographically separated sites of an enterprise network. The empirical results indicate that our approach efficiently and accurately pinpoints links in the network incurring the most significant delay.,2004,0, 1083,Blocking probability analysis in future wireless networks,"This paper proposes to model each cell of future wireless networks as a G/G/c/c queueing system. As such a model has not been explicitly addressed in the literature, we apply maximum entropy principles to evaluate both traffic distribution and blocking probability within each cell. Analysis of numerical results enables to specify the conditions under which the system offers good quality of service in terms of blocking probability. More specifically, such an analysis reveals that coefficient of variation of call arrivals has more impact over the blocking probability than coefficient of variation of channel holding time.",2004,0, 1084,Closing gaps by clustering unseen directions,"Although in recent years the 3D-scanning field has reached a good level of maturity, it is still far from being perceived by common users as a 3D-photography approach, as simple as standard photography is. The main reason for that is that obtaining good 3D models without human intervention is still very hard. 
In particular, two problems remain open: automatic registration of single shots and planning of the acquisition session. In this paper we address the second issue and propose a solution to improve the coverage of automatically acquired objects. Rather than searching for the next-best-view in order to minimise the number of acquisitions, we propose a simple and easy-to-implement algorithm limiting our scope to closing gaps (i.e. filling unsampled regions) in roughly acquired models. The idea is very simple: detect holes in the current model and cluster their estimated normals in order to determine new views. Some results are shown to support our approach.",2004,0, 1085,On feature interactions among Web services,"Web services promise to allow businesses to adapt rapidly to changes in the business environment, and the needs of different customers. However, the rapid introduction of new services paired with the dynamicity of the business environment also leads to undesirable interactions that negatively impact service quality and user satisfaction. In this paper, we propose an approach for modeling such undesirable interactions as feature interactions. Our approach for detecting interactions is based on goal-oriented analysis and scenario modeling. It allows us to reason about feature interactions in terms of goal conflicts, and feature deployment. Two case studies illustrate the approach. The paper concludes with a discussion, and an outlook on future research.",2004,0, 1086,Code generation for WSLAs using AXpect,"WSLAs can be viewed as describing the service aspect of Web services. By their nature, Web services are distributed. Therefore, integrating support code into a Web service application is potentially costly and error prone. Viewed from this AOP perspective, then, we present a method for integrating WSLAs into code generation using the AXpect weaver, the AOP technology for Infopipes. This helps to localize the code physically and therefore increase the eventual maintainability and enhance the reuse of the WSLA code. We then illustrate the weaver's capability by using a WSLA document to codify constraints and metrics for a streaming image application that requires CPU resource monitoring.",2004,0, 1087,FIESTA-EXTRA: cell-oriented software for the defect/fault analysis in VLSI circuits,"The main concepts which laid the foundation for the special software development are considered. This software tool is named FIESTA-EXTRA (Faults Identification and EStimation of TestAbility by EXTRAction of faults probabilities, kinds of faults and usefulness of test patterns for faults detection) and is developed for defect/fault analysis in the complex gates from industrial cell library. Specific features of the main three extractors of the developed software are considered. The results of the FIESTA-ExTRA approbation are described.",2004,0, 1088,Probability models for high dynamic range imaging,"Methods for expanding the dynamic range of digital photographs by combining images taken at different exposures have recently received a lot of attention. Current techniques assume that the photometric transfer function of a given camera is the same (modulo an overall exposure change) for all the input images. Unfortunately, this is rarely the case with today's camera, which may perform complex nonlinear color and intensity transforms on each picture.
In this paper, we show how the use of probability models for the imaging system and weak prior models for the response functions enable us to estimate a different function for each image using only pixel intensity values. Our approach also allows us to characterize the uncertainty inherent in each pixel measurement. We can therefore produce statistically optimal estimates for the hidden variables in our model representing scene irradiance. We present results using this method to statistically characterize camera imaging functions and construct high-quality high dynamic range (HDR) images using only image pixel information.",2004,0, 1089,Electromagnetic environment analysis of a software park near transmission lines,"The electromagnetic environments (EMEs) of the planned Zhongguancun Software Park near transmission lines, including electrical field, magnetic field, and ground potential rise under three cases of lightning stroke, normal operation, and short-circuit faults, are assessed by numerical analysis. The power frequency EMEs of the software park are below the maximum ecologically allowed exposure values for the general public; nevertheless, the power frequency magnetic field may interfere with the sensitive computer display unit. The influence of short-circuit fault in two different cases of remote short circuit and neighboring short circuit on the software park is discussed. The main problem we must pay attention to is the ground potential rise in the software park due to neighboring short-circuit fault; it would threaten the safe operation of electronic devices in the software park. On the other hand, the lightning stroke is a serious threat to the software park. How to improve the EMEs of the software park is discussed.",2004,0, 1090,Taming lambda's for applications: the OptIPuter system software,"Summary form only given. Dense wavelength-division multiplexing (DWDM), dark fiber, and low-cost optical switches provide the technological capability for private, high bandwidth communication. However, achieving any substantial application benefit from use of these resources is dauntingly complex and error prone. These emerging environments are often called lambda grids. We are developing a simple abstraction called a distributed virtual computer (DVC), which provides convenient application use of dynamic optical resources. DVC descriptions naturally express communication and computation resource requirements, enabling coordinated resource binding. In addition, their shared namespace provides a natural vehicle for incorporating a range of novel capabilities, including novel transport protocols which expose and exploit the capabilities of the DWDM environment, including efficient multi-point to point (GTP), optical multicast, real-time communication, and fast point to point transports. DVC's also provide a convenient model for integrating a wide array of network-attached instruments and storage. We describe initial experience with DVC's and how they provide an integrating architecture for lambda grids. The OptIPuter project is a large multi-institutional project led by Larry Smarr at the University of California, San Diego (UCSD) and Tom DeFanti at the University of Illinois at Chicago (UIC). Other software efforts include optical signaling software, visualization, distributed configuration management, and two driving applications involving petabytes of data (in conjunction with the Biomedical Informatics Research Network and the Scripps Institute of Oceanography).
The project also includes construction of a high-speed OptIPuter testbed spanning UCSD and UIC.",2004,0, 1091,A novel out-of-band signaling mechanism for enhanced real-time support in tactical ad hoc wireless networks,"Ad hoc wireless networks have been an increasingly important area of research in the recent past. One issue of great interest in this area is the provision of real-time support. While existing military applications use reservation-based approaches to provide real-time bandwidth guarantees, these schemes are adversely affected by node mobility. We propose an enhanced real-time support scheme that uses a novel out-of-band signaling mechanism to predict future mobility patterns and take corrective action when needed. We also propose an architecture to support differentiated service classes. This helps mobility affected nodes to take proactive measures so as to offer better real-time services to bandwidth critical applications. Through extensive simulations, we show that the use of the out-of-band signaling leads to better real-time support overall, and better response to the more critical classes as needed. We also provide a theoretical analysis for estimating the probability of disruption of a real-time call.",2004,0, 1092,Automated reference-counted object recycling for real-time Java,"We introduce an aspect-oriented reformulation of reference-counting that is particularly well-suited to Java applications and does not share the error-prone characteristic of manual, user-driven reference counting. We present our method in the context of the real-time specification for Java and demonstrate that it can recycle dead objects in bounded time. We apply partial evaluation to specialize the aspect-generated code, which substantially reduces the reference-counting overhead.",2004,0, 1093,One more step in the direction of modularized integration concerns,"Component integration creates value by automating the costly and error-prone task of imposing desired behavioral relationships on components manually. Requirements for component integration, however, complicate software design and evolution in several ways: first, they lead to coupling among components; second, the code that implements various integration concerns in a system is often scattered over and tangled with the code implementing the component behaviors. Straightforward software design techniques map integration requirements to scattered and tangled code, compromising modularity in ways that dramatically increase development and maintenance costs.",2004,0, 1094,Visual timed event scenarios,"Formal description of real-time requirements is a difficult and error prone task. Conceptual and tool support for this activity plays a central role in the agenda of technology transference from the formal verification engineering community to the real-time systems development practice. In this article we present VTS, a visual language to define complex event-based requirements such as freshness, bounded response, event correlation, etc. The underlying formalism is based on partial orders and supports real-time constraints. The problem of checking whether a timed automaton model of a system satisfies these sort of scenarios is shown to be decidable. Moreover, we have also developed a tool that translates visually specified scenarios into observer timed automata. The resulting automata can be composed with a model under analysis in order to check satisfaction of the stated scenarios.
We show the benefits of applying these ideas to some case studies.",2004,0, 1095,Team-based fault content estimation in the software inspection process,"The main objective of software inspection is to detect faults within a software artifact. This helps to reduce the number of faults and to increase the quality of a software product. However, although inspections have been performed with great success, and although the quality of the product is increased, it is difficult to estimate the quality. During the inspection process, attempts with objective estimations as well as with subjective estimations have been made. These methods estimate the fault content after an inspection and give a hint of the quality of the product. This paper describes an experiment conducted throughout the inspection process, where the purpose is to compare the estimation methods at different points. The experiment evaluates team estimates from subjective and objective fault content estimation methods integrated with the software inspection process. The experiment was conducted at two different universities with 82 reviewers. The result shows that objective estimates outperform subjective when point and confidence intervals are used. This contradicts the previous studies in the area.",2004,0, 1096,An empirical study of software reuse vs. defect-density and stability,"The paper describes results of an empirical study, where some hypotheses about the impact of reuse on defect-density and stability, and about the impact of component size on defects and defect-density in the context of reuse are assessed, using historical data (data mining) on defects, modification rate, and software size of a large-scale telecom system developed by Ericsson. The analysis showed that reused components have lower defect-density than non-reused ones. Reused components have more defects with highest severity than the total distribution, but less defects after delivery, which shows that that these are given higher priority to fix. There are an increasing number of defects with component size for non-reused components, but not for reused components. Reused components were less modified (more stable) than non-reused ones between successive releases, even if reused components must incorporate evolving requirements from several application products. The study furthermore revealed inconsistencies and weaknesses in the existing defect reporting system, by analyzing data that was hardly treated systematically before.",2004,0, 1097,Fault management for networks with link state routing protocols,"For network fault management, we present a new technique that is based on on-line monitoring of networks with link state routing protocols, such as OSPF (open shortest path first) and integrated IS-IS. Our approach employs an agent that monitors the on-line information of the network link state database, analyzes the events generated by network faults for event correlation, and detects and localizes the faults. We apply our method to a real network topology with various types of network faults. Experimental results show that our approach can detect and localize the faults in a timely manner, yet without disrupting normal network operations.",2004,0, 1098,A transparent and centralized performance management service for CORBA based applications,"The quest for service quality in enterprise applications is driving companies to profile their online performance. Application management tools come in handy to deliver the required diagnosis. 
However, distributed applications are hard to manage due to their complexity and geographical dispersion. To cope with this problem, this paper presents a Java based management solution for CORBA distributed applications. The solution combines XML, SNMP and portable interceptors to provide a nonintrusive performance management service. Components can be attached to client and server sides to monitor messages and gather data into a centralized database. A detailed analysis can then be performed to expose behavioral problems in specific parts of the application. Performance reports and charts are supplied through a Web console. A prototypical implementation was tested against two available ORBs to assess functionality and interposed overhead.",2004,0, 1099,On service replication strategy for service overlay networks,"The service overlay network (SON) is an effective means to deliver end-to-end QoS guaranteed applications on the current Internet. Duan et al. (2002) address the bandwidth provisioning problem on a SON, specifically, in determining the appropriate amount of bandwidth capacity to purchase from various autonomous systems so as to satisfy the QoS requirements of the SON's end users and at the same time maximize the total revenue of operating the overlay network. In this paper, we extend the concept of the service overlay network. Since traffic demands are time varying and there may be some unexpected events which can cause a traffic surge, these will significantly increase the probability of QoS violation and will reduce the profit margin of a SON. To overcome these problems, we propose to replicate services on the service gateways so as to dynamically adapt to these traffic surges. We show that the service replication problem, in general, is intractable. We propose an efficient service replication algorithm which replicates services for a subset of traffic flows. Under our replication strategy, one does not need to increase the bandwidth capacity of underlying links and at the same time, be able to increase the average profit for the overlay network. Experiments are carried out to illustrate that the replication algorithm provides higher flexibility during traffic fluctuations and can quickly find a near-optimal solution.",2004,0, 1100,A system for fault detection and reconfiguration of hardware based active networks,"An experimental Active Network based on a PC running the Linux OS and operating as a router has been implemented. The PC carries a PCI-based FPGA board, which is the execution environment of the Active Applications. The users are able to send Active Packets and program dynamically the network by remote configuration of the target FPGA board. The FPGA can be reconfigured multiple times on-the-fly with several Active Applications (IP-cores). A fault detection module is permanently configured in one of the FPGAs of the PCI board. Its function is to monitor the Active Applications at run time and check the PCI bus transactions for violations of predefined rules. The fault detector module works as a """"firewall"""" preventing the communication between the configured application and the host computer, if a violation is detected.",2004,0, 1101,Probabilistic regression suites for functional verification,"Random test generators are often used to create regression suites on-the-fly.
Regression suites are commonly generated by choosing several specifications and generating a number of tests from each one, without reasoning which specification should be used and how many tests should be generated from each specification. This paper describes a technique for building high quality random regression suites. The proposed technique uses information about the probability of each test specification covering each coverage task. This probability is used, in turn, to determine which test specifications should be included in the regression suite and how many tests should be generated from each specification. Experimental results show that this practical technique can be used to improve the quality, and reduce the cost, of regression suites. Moreover, it enables better informed decisions regarding the size and distribution of the regression suites, and the risk involved.",2004,0, 1102,Trends in EM susceptibility of IT equipment,"Information technology equipment and specifically personal computers (PCs) are an essential and integral part of our business and every day lives. Upset or disruption of these systems from intentional or unintentional electromagnetic interference is untenable, especially if the equipment is used in a security or safety critical application. The susceptibility level for several PCs has been assessed using the mode stirred (reverberation) chamber technique. Results are provided which demonstrate the good repeatability of the method used and trends in the susceptibility level with respect to PC specification, build quality, and batch quality.",2004,0, 1103,Concurrent error detection in wavelet lifting transforms,"Wavelet transforms, central to multiresolution signal analysis and important in the JPEG2000 image compression standard, are quite susceptible to computer-induced errors because of their pipelined structure and multirate processing requirements. Such errors emanate from computer hardware, software bugs, or radiation effects from the surrounding environment. Implementations use lifting schemes, which employ update and prediction estimation stages, and can spread a single numerical error caused by failures to many output transform coefficients without any features to warn data users. We propose an efficient method to detect the arithmetic errors using weighted sums of the wavelet coefficients at the output compared with an equivalent parity value derived from the input data. Two parity values may straddle a complete multistage transform or several values may be used, each pair covering a single stage. There is greater error-detecting capability at only a slight increase in complexity when parity pairs are interspersed between stages. With the parity weighting design scheme, a single error introduced at a lifting section can be detected. The parity computation operation is properly viewed as an inner product between weighting values and the data, motivating the use of dual space functionals related to the error gain matrices. The parity weighting values are generated by a combination of dual space functionals. An iterative procedure for evaluating the design of the parity weights has been incorporated in Matlab code and simulation results are presented.",2004,0, 1104,An effective fault-tolerant routing methodology for direct networks,"Current massively parallel computing systems are being built with thousands of nodes, which significantly affect the probability of failure. M. E.
Gomez proposed a methodology to design fault-tolerant routing algorithms for direct interconnection networks. The methodology uses a simple mechanism: for some source-destination pairs, packets are first forwarded to an intermediate node, and later, from this node to the destination node. Minimal adaptive routing is used along both subpaths. For those cases where the methodology cannot find a suitable intermediate node, it combines the use of intermediate nodes with two additional mechanisms: disabling adaptive routing and using misrouting on a per-packet basis. While the combination of these three mechanisms tolerates a large number of faults, each one requires adding some hardware support in the network and also introduces some overhead. In this paper, we perform an in-depth detailed analysis of the impact of these mechanisms on network behaviour. We analyze the impact of the three mechanisms separately and combined. The ultimate goal of this paper is to obtain a suitable combination of mechanisms that is able to meet the trade-off between fault-tolerance degree, routing complexity, and performance.",2004,0, 1105,On-demand location-aided QoS routing in ad hoc networks,"With the development and application of position devices, location-based routing has received growing attention. However, little study has been done on QoS routing with the aid of location information. The existing location-based routing approaches, such as flooding-based routing schemes and localized routing schemes, have their limitations. Motivated by ticket-based routing, we propose an on-demand location-aided, ticket-based QoS routing protocol (LTBR). Two special cases of LTBR, LTBR-1 and LTBR-2, are discussed in detail. LTBR-1 uses a single ticket to find a route satisfying a given QoS constraint. LTBR-2 uses multiple tickets to search valid routes in a limited area. All tickets are guided via both location and QoS information. LTBR has lower overhead compared with the original ticket-based routing, because it does not rely on an underlying routing table. On the other hand, LTBR can find routes with better QoS qualities than traditional location-based protocols. Our simulation results show that LTBR-1 can find high quality routes in relatively dense networks with high probability and very low overhead. In sparse networks, LTBR-2 can be used to enhance the probability of finding high quality routes with acceptable overhead.",2004,0, 1106,Direct digital synthesis: a tool for periodic wave generation (part 2),"Direct digital synthesis (DDS) is a useful tool for generating periodic waveforms. In this two-part article, the basic idea of this synthesis technique is presented and then focused on the quality of the sinewave a DDS can create, introducing the SFDR quality parameter. Next, effective methods to increase the SFDR are presented through sinewave approximations, hardware schemes such as dithering and noise shaping, and an extensive list of references. When the desired output is a digital signal, the signal's characteristics can be accurately predicted using the formulas given in this article. When the desired output is an analog signal, the reader should keep in mind that the performance of the DDS is eventually limited by the performance of the digital-to-analog converter and the follow-on analog filter. We hope that this article will incite engineers to use DDS, either as integrated-circuit DDS or as software-implemented DDS.
From the author's experience, this technique has proven valuable when frequency resolution is the challenge, particularly when using low-cost microcontrollers.",2004,0, 1107,Fair QoS resource management and non-linear prediction of 3D rendering applications,"Resource management in a grid has to able to guarantee commercial or industrial applications a personalized quality of service (QoS). To implement, however, an efficient resource allocation scheme, prediction of task workload is required. In this paper, we present an efficient algorithm for predicting the workload of 3D rendering tasks based on constructive neural network architecture. We also consider the QoS scheduling problem whose target is to determine when and on which resource a given job should be executed. We propose an algorithm for QoS scheduling, which allocates the resources in a fair way, and we compare it to other scheduling schemes such as the earliest deadline first and the first come first serve policies.",2004,0, 1108,End-to-end defect modeling,"In this context, computer models can help us predict outcomes and anticipate with confidence. We can now use cause-effect modeling to drive software quality, moving our organization toward higher maturity levels. Despite missing good software quality models, many software projects successfully deliver software on time and with acceptable quality. Although researchers have devoted much attention to analyzing software projects' failures, we also need to understand why some are successful - within budget, of high quality, and on time-despite numerous challenges. Restricting software quality to defects, decisions made in successful projects must be based on some understanding of cause-effect relationships that drive defects at each stage of the process. To manage software quality by data, we need a model describing which factors drive defect introduction and removal in the life cycle, and how they do it. Once properly built and validated, a defect model enables successful anticipation. This is why it's important that the model include all variables influencing the process response to some degree.",2004,0, 1109,A taxonomy for software voting algorithms used in safety-critical systems,"Voting algorithms are used to provide an error masking capability in a wide range of highly dependable commercial & research applications. These applications include N-Modular Redundant hardware systems and diversely designed software systems based on N-Version Programming. The most sophisticated & complex algorithms can even tolerate malicious (or Byzantine) subsystem errors. The algorithms can be implemented in hardware or software depending on the characteristics of the application, and the type of voter selected. Many voting algorithms have been defined in the literature, each with particular strengths and weaknesses. Having surveyed more than 70 references from the literature, a functional classification is used in this paper to provide taxonomy of those voting algorithms used in safety-critical applications. We classify voters into three categories: generic, hybrid, and purpose-built voters. Selected algorithms of each category are described, for illustrative purposes, and application areas proposed. Approaches to the comparison of algorithm behavior are also surveyed. 
These approaches compare the acceptability of voter behavior based on either statistical considerations (e.g., number of successes, number of benign or catastrophic results), or probabilistic computations (e.g., probability of choosing correct value in each voting cycle or average mean square error) during q voting cycles.",2004,0, 1110,Applying generic timing tests for distributed multimedia software systems,"With recent advances in network technologies and computing power, multimedia systems have become a popular means for information delivery. However, testing of these systems is difficult. Due to incomplete control of their runtime and communication environment, precise temporal properties of multimedia systems are nonreproducible. Traditional software testing, which mainly deals with functional correctness, cannot be directly applied to testing temporal properties. Furthermore, time points are hard to be measured exactly, and in this sense are nondeterministic and nonreproducible. To address this problem, we propose a framework for testing the generic temporal properties of media objects in distributed multimedia software systems (DMSS). The timing properties are based on Allen's basic binary temporal relations between two objects, which can be extended to cover multiple objects. We have developed techniques for test case generation, and test result analysis based on a distributed tester architecture. Test templates are used in test case generation to reduce the possibility of human error, and the entire testing procedure can be automated. A prototype system has been built to test a DEC HPAS multimedia presentation system, which is a multimedia system supporting W3C's SMIL standard. Detailed discussions on practical issues illustrated with a number of actual tests are given. Experimental results have shown that our framework is effective in detecting errors in temporal properties. Furthermore, ways to reduce the test effort have been discussed, and guidelines for coming up with criteria for verdict computation based on the real-time requirements of the applications have been suggested.",2004,0, 1111,Impact of statechart implementation techniques on the effectiveness of fault detection mechanisms,"This work presents the analysis of an experiment series aiming at the discovery of the impact of two inherently different statechart implementation methods on the behavior of the resulting executables in the presence of faults. The discussion identifies the key features of implementation techniques influencing the effectiveness of standard fault detection mechanisms (memory protection, assertions etc.) and an advanced statechart-level watchdog scheme used for detecting the deviations from the abstract implementation-independent behavioral specification.",2004,0, 1112,On the persistence of computer dreams - an application framework for robust adaptive deployment,"The anticipated rewards of adaptive approaches will only be fully realised when autonomic algorithms can take configuration and deployment decisions that match and exceed those of human engineers. Such decisions are typically characterised as being based on a foundation of experience and knowledge. In humans, these underpinnings are themselves founded on the ashes of failure, the exuberance of courage and (sometimes) the outrageousness of fortune. 
We describe an application framework that will allow the incorporation of similarly risky, error prone and downright dangerous software artifacts into live systems - without undermining the certainty of correctness at application level. We achieve this by introducing the notion of application dreaming.",2004,0, 1113,Gesture tracking and recognition for lecture video editing,"This paper presents a gesture based driven approach for video editing. Given a lecture video, we adopt novel approaches to automatically detect and synchronize its content with electronic slides. The gestures in each synchronized topic (or shot) are then tracked and recognized continuously. By registering shots and slides and recovering their transformation, the regions where the gestures take place can be known. Based on the recognized gestures and their registered positions, the information in slides can be seamlessly extracted, not only to assist video editing, but also to enhance the quality of original lecture video.",2004,0, 1114,A case study of reading techniques in a software company,"Software inspection is an efficient method to detect faults early in the software lifecycle. This has been shown in several empirical studies together with experiments on reading techniques. However, experiments in industrial settings are often considered expensive for a software organization. Hence, many evaluations are performed in the academic environment with artificial documents. In this paper, we describe an empirical study in a software organization where a requirements document under development is used to compare two reading techniques. There are several benefits as well as drawbacks of using this kind of approach, which are extensively discussed in the paper. The reading techniques compared is the standard technique used in the organization (checklist-based) with the test perspective of perspective-based reading. The main result is that the test perspective of perspective-based reading seems more effective and efficient than the company standard method. The impact of this study is that the software organization will apply the new reading technique in future requirements inspections.",2004,0, 1115,Helping analysts trace requirements: an objective look,"This work addresses the issues related to improving the overall quality of the requirements tracing process for independent verification and validation analysts. The contribution of the paper is three-fold: we define requirements for a tracing tool based on analyst responsibilities in the tracing process; we introduce several measures for validating that the requirements have been satisfied; and we present a prototype tool that we built, RETRO (REquirements TRacing On-target), to address these requirements. We also present the results of a study used to assess RETRO's support of requirements and requirement elements that can be measured objectively.",2004,0, 1116,Speeding up requirements management in a product software company: linking customer wishes to product requirements through linguistic engineering,"Developing large complex software products aimed for a broad market involves a great flow of wishes and requirements. The former are elicited from customers while the latter are brought forth by the developing organization. These are preferably kept separated to preserve the different perspectives. The interrelationships should however be identified and maintained to enable well-founded decisions. 
Unfortunately, the current manual linkage is cumbersome, time-consuming, and error-prone. This work presents a pragmatic linguistic engineering approach to how statistical natural language processing may be used to support the manual linkage between customer wishes and product requirements by suggesting potential links. An evaluation with real requirements from industry is presented. It shows that in a realistic setting, automatic support could make linkage faster for at least 50% of the links. An estimation based on our evaluation also shows that considerable time savings are possible. The results, together with the identified enhancement, are promising for improving software quality and saving time in industrial requirements engineering.",2004,0, 1117,Browsing and searching behavior in the Renardus Web service: a study based on log analysis,"Renardus is a distributed Web-based service, which provides integrated searching and browsing access to quality-controlled Web resources. With the overall purpose of improving Renardus, the research aims to study: the detailed usage patterns (quantitative/qualitative, paths through the system); the balance between browsing and searching or mixed activities; typical sequences of usage steps and transition probabilities in a session; typical entry points, referring sites, points of failure and exit points; and, the usage degree of the browsing support features.",2004,0, 1118,Building a genetically engineerable evolvable program (GEEP) using breadth-based explicit knowledge for predicting software defects,"There has been extensive research in the area of data mining over the last decade, but relatively little research in algorithmic mining. Some researchers shun the idea of incorporating explicit knowledge with a Genetic Program environment. At best, very domain specific knowledge is hard wired into the GP modeling process. This work proposes a new approach called the Genetically Engineerable Evolvable Program (GEEP). In this approach, explicit knowledge is made available to the GP. It is considered breadth-based, in that all pieces of knowledge are independent of each other. Several experiments are performed on a NASA-based data set using established equations from other researchers in order to predict software defects. All results are statistically validated.",2004,0, 1119,Supporting quality of service in a non-dedicated opportunistic environment,"In This work we investigate the utilization of non-dedicated, opportunistic resources in a desktop environment to provide statistical assurances to a class of QoS sensitive, soft real-time applications. Supporting QoS in such an environment presents unique challenges: (1) soft real-time tasks must have continuous access to resources in order to deliver meaningful services. Therefore the tasks will fail if not enough idle resources are available in the system. (2) Although soft real-time tasks can be migrated from one machine to another, their QoS may be affected if there are frequent migrations. In this paper, we define two new QoS metrics (task failure rate and probability of bad migrations) to characterize these QoS failures/degradations. We also design admission control and resource recruitment algorithms to provide statistical guarantees on these metrics. Our model based simulation results show that the admission control algorithms are effective at providing the desired level of assurances, and are robust to different resource usage patterns. 
Our resource recruitment algorithm may need a long time of observations to provide the desired guarantee. But even with moderate observations, we can reduce the probability of a bad migration from 12% to less than 4%, which is good enough for most real applications.",2004,0, 1120,NAFIPS 2004. 2004 Annual Meeting of the North American Fuzzy Information Processing Society (IEEE Cat. No.04TH8736),The following topics are dealt with: Web intelligence and world knowledge; reverse engineering software architecture using rough clusters; fuzzy logic to assist the planning in adolescent idiopathic scoliosis instrumentation surgery; fuzzy modeling estimation of mercury by wetland components; fuzzy logic aircraft environment controller; granular jointree probability propagation; data granulation and formal concept analysis; robust fuzzy clustering of relational data; genetic fuzzy decision agent based on personal ontology for meeting scheduling support system; soft computing agents for e-health: a prototype glaucoma monitoring; soft semantic Web services agent; interpolated linguistic terms; a fuzzy based method for classifying semantically equivalent spatial data sets in spatial database queries; indexing mechanisms to query FMBRs; A parallel fuzzy C-mean algorithm for image segmentation; fuzzy sliding mode control for a singularly perturbed systems; robust tuning for disturbance rejection of PID Controller using evolutionary algorithm; hierarchical genetic algorithms for fuzzy system optimization in intelligent control; rule extraction from a trained neural network for image keywords extraction; license plate recognition system using hybrid neural networks; pattern recognition using the fuzzy Sugeno integral for response integration in modular neural networks; ANFIS Based fault diagnosis for voltage-fed PWM motor drive systems; urban land development based on possibility theory; a fuzzy approach to segmenting the breast region in mammograms.,2004,0, 1121,Automatic generation of bus functional models from transaction level models,"This paper presents methodology and algorithms for generating bus functional models from transaction level models in system level design. Transaction level models are often used by designers for prototyping the bus functional architecture of the system. Being at a higher level of abstraction gives transaction level models the unique advantage of high simulation speed. This means that the designer can explore several bus functional architectures before choosing the optimal one. However, the process of converting a transaction level model to a bus functional model is not trivial. A manual conversion would not only be time consuming but also error prone. A bus functional model should also accurately represent the corresponding transaction level model. We present algorithms for automating this refinement process. Experimental results presented using a tool based on these algorithms show their usefulness and feasibility.",2004,0, 1122,Nanolab: a tool for evaluating reliability of defect-tolerant nano architectures,"As silicon manufacturing technology reaches the nanoscale, architectural designs need to accommodate the uncertainty inherent at such scales. These uncertainties are germane in the miniscule dimension of the device, quantum physical effects, reduced noise margins, system energy levels reaching computing thermal limits, manufacturing defects, aging and many other factors.
Defect tolerant architectures and their reliability measures gain importance for logic and micro-architecture designs based on nano-scale substrates. Recently, a Markov random field (MRF) has been proposed as a model of computation for nanoscale logic gates. In this paper, we take this approach further by automating this computational scheme and a belief propagation algorithm. We have developed MATLAB based libraries and toolset for fundamental logic gates that can compute output probability distributions and entropies for specified input distributions. Our tool eases evaluation of reliability measures of combinational logic blocks. The effectiveness of this automation is illustrated in this paper by automatically deriving various reliability results for defect-tolerant architectures, such as triple modular redundancy (TMR), cascaded triple modular redundancy (CTMR) and multi-stage iterations of these. These results are used to analyze trade-offs between reliability and redundancy for these architectural configurations.",2004,0, 1123,Intelligent fault diagnosis technique based on causality diagram,"We discuss the knowledge expression, reasoning and probability computing in causality diagram, which is developed from the belief network and overcomes some shortages. The model of causality diagram used for system fault diagnosis is brought forward, and the model constructing method and reasoning algorithm are also presented. At last, an application example in the fault diagnosis of the nuclear power plant is given which shows that the method is effective.",2004,0, 1124,Experimental study on QoS provisioning to heterogeneous VoIP sources in Diffserv environment,"The work presents a research activity focused on the experimental study of three different issues related to the provision of VoIP services with QoS guarantee in DiffServ networks. Firstly the study deals with the analysis of two dissimilar strategies for the setting of the parameters of a traffic control module in a DiffServ network node. Using these results, secondly the effectiveness of static and dynamic SLAs strategies for QoS provisioning in DiffServ environment is experimentally evaluated. This analysis is carried out considering aggregation of voice sources adopting two distinct codecs, i.e. G723.1 and G729. These codecs produce traffic with different statistical features (variable and constant bit rate respectively). Hence, this approach allows assessing the impact on the single sources' performance of the multiplexing of heterogeneous VoIP sources in a single class.",2004,0, 1125,Reliability-aware co-synthesis for embedded systems,"As technology scales, transient faults due to single event upsets have emerged as a key challenge for reliable embedded system design. This work proposes a design methodology that incorporates reliability into hardware-software co-design paradigm for embedded systems. We introduce an allocation and scheduling algorithm that efficiently handles conditional execution in multi-rate embedded systems, and selectively duplicates critical tasks to detect soft errors, such that the reliability of the system is increased. The increased reliability is achieved by utilizing the otherwise idle computation resources and incurs no resource or performance penalty.
The proposed algorithm is fast and efficient, and is suitable for use in the inner loop of our hardware/software co-synthesis framework, where the scheduling routine has to be invoked many times.",2004,0, 1126,A change impact dependency measure for predicting the maintainability of source code,"We first articulate the theoretic difficulties with the existing metrics designed for predicting software maintainability. To overcome the difficulties, we propose to measure a purely internal and objective attribute of code, namely change impact dependency, and show how it can be modeled to predict real change impact. The proposed base measure can be further elaborated for evaluating software maintainability.",2004,0, 1127,Optimizing the planning and executing of software independent verification and validation (IV&V) in mature organizations,"To an organization involved in the construction of mission critical software, the safety and reliability of critical systems including their software is of utmost importance. The use of an independent group to provide verification and validation (IV&V) is intended to improve the quality of the software products. We seek to optimize the planning and execution of IV&V activities upon organizations that are already assessed with a certain level of process maturity, such as proposed by the Capability Maturity Model Integrated (CMMI).",2004,0, 1128,Data flow analysis and testing of Java Server Pages,"Web applications often rely on server-side scripts to handle HTTP requests, to generate dynamic contents, and to interact with other components. The server-side scripts usually mix with HTML statements and are difficult to understand and test. In particular, these scripts do not have any compiling check and could be error-prone. Thus, it becomes critical to test the server-side scripts for ensuring the quality and reliability of Web applications. We adapt traditional dataflow testing techniques into the context of Java Server Pages (JSP), a very popular server-side script for developing Web applications with Java technology. We point out that the JSP implicit objects and action tags can introduce several unique dataflow test artifacts which need to be addressed. A test model is presented to capture the dataflow information of JSP pages with considerations of various implicit objects and action tags. Based on the test model, we describe an approach to compute the intraprocedural and interprocedural data flow test paths for uncovering the data anomalies of JSP pages.",2004,0, 1129,WS-FIT: a tool for dependability analysis of Web services,This work provides an overview of fault injection techniques and their applicability to testing SOAP RPC based Web service systems. We also give a detailed example of the WS-FIT package and use it to detect a problem in a Web service based system.,2004,0, 1130,A computational framework for supporting software inspections,"Software inspections improve software quality by the analysis of software artifacts, detecting their defects for removal before these artifacts are delivered to the following software life cycle activities. Some knowledge regarding software inspections have been acquired by empirical studies. However, we found no indication that computational support for the whole software inspection process using appropriately such knowledge is available. This paper describes a computational framework whose requirements set was derived from knowledge acquired by empirical studies to support software inspections. 
To evaluate the feasibility of such framework, two studies have been accomplished: one case study, which has shown the feasibility of using the framework to support inspections, and an experimental study that evaluated the supported software inspection planning activity. Preliminary results of this experimental study suggested that unexperienced subjects are able to plan inspections with higher defect detection effectiveness, and in less time, when using this computational framework.",2004,0, 1131,Rostra: a framework for detecting redundant object-oriented unit tests,"Object-oriented unit tests consist of sequences of method invocations. Behavior of an invocation depends on the state of the receiver object and method arguments at the beginning of the invocation. Existing tools for automatic generation of object-oriented test suites, such as Jtest and J Crasher for Java, typically ignore this state and thus generate redundant tests that exercise the same method behavior, which increases the testing time without increasing the ability to detect faults. This work proposes Rostra, a framework for detecting redundant unit tests, and presents five fully automatic techniques within this framework. We use Rostra to assess and minimize test suites generated by test-generation tools. We also present how Rostra can be added to these tools to avoid generation of redundant tests. We have implemented the five Rostra techniques and evaluated them on 11 subjects taken from a variety of sources. The experimental results show that Jtest and JCrasher generate a high percentage of redundant tests and that Rostra can remove these redundant tests without decreasing the quality of test suites.",2004,0, 1132,Verifiable concurrent programming using concurrency controllers,"We present a framework for verifiable concurrent programming in Java based on a design pattern for concurrency controllers. Using this pattern, a programmer can write concurrency controller classes defining a synchronization policy by specifying a set of guarded commands and without using any of the error-prone synchronization primitives of Java. We present a modular verification approach that exploits the modularity of the proposed pattern, i.e., decoupling of the controller behavior from the threads that use the controller. To verify the controller behavior (behavior verification) we use symbolic and infinite state model checking techniques, which enable verification of controllers with parameterized constants, unbounded variables and arbitrary number of user threads. To verify that the threads use a controller in the specified manner (interface verification) we use explicit state model checking techniques, which allow verification of arbitrary thread implementations without any restrictions. We show that the correctness of the user threads can be verified using the concurrency controller interfaces as stubs, which improves the efficiency of the interface verification significantly. We also show that the concurrency controllers can be automatically optimized using the specific notification pattern. We demonstrate the effectiveness of our approach on a Concurrent Editor implementation which consists of 2800 lines of Java code with remote procedure calls and complex synchronization constraints.",2004,0, 1133,T-UPPAAL: online model-based testing of real-time systems,"The goal of testing is to gain confidence in a physical computer based system by means of executing it. 
More than one third of typical project resources are spent on testing embedded and real-time systems, but still it remains ad-hoc, based on heuristics, and error-prone. Therefore systematic, theoretically well-founded and effective automated real-time testing techniques are of great practical value. Testing conceptually consists of three activities: test case generation, test case execution and verdict assignment. We present T-UPPAAL-a new tool for model based testing of embedded real-time systems that automatically generates and executes tests """"online"""" from a state machine model of the implementation under test (IUT) and its assumed environment which combined specify the required and allowed observable (realtime) behavior of the IUT. T-UPPAAL implements a sound and complete randomized testing algorithm, and uses a formally defined notion of correctness (relativized timed input/output conformance) to assign verdicts. Using online testing, events are generated and simultaneously executed.",2004,0, 1134,Consistency check in modelling multi-agent systems,"In model-driven software development, inconsistency of a model must be detected and eliminated to ensure the quality of the model. This paper investigates the consistency check in the modelling of multi-agent systems (AMS). Consistency constraints are formally defined for the CAMLE language, which was proposed in our previous work for modelling MAS. Uses of the consistency constraints in the implementation of a modelling environment for automatic consistency check and model transformation are discussed",2004,0, 1135,Application of maximum entropy principle to software failure prediction,"Predicting failures from software input is still a tough issue. Two models, namely the surface model and structure model, are presented in this paper to predict failure by applying the maximum entropy principle. The surface model forecasts a failure from the statistical co-occurrence between input and failure, while the structure model does from the statistical cause-effect between fault and failure. To evaluate the models, precision is applied and 17 testing experiments are conducted on 5 programs. Based on the experiments, the surface model and structure model get an average precision of 0.876 and 0.858, respectively",2004,0, 1136,Software reliability growth models incorporating fault dependency with various debugging time lags,"Software reliability is defined as the probability of failure-free software operation for a specified period of time in a specified environment. Over the past 30 years, many software reliability growth models (SRGMs) have been proposed and most SRGMs assume that detected faults are immediately corrected. Actually, this assumption may not be realistic in practice. In this paper we first give a review of fault detection and correction processes in software reliability modeling. Furthermore, we show how several existing SRGMs based on NHPP models can be derived by applying the time-dependent delay function. On the other hand, it is generally observed that mutually independent software faults are on different program paths. Sometimes mutually dependent faults can be removed if and only if the leading faults were removed. Therefore, here we incorporate the ideas of fault dependency and time-dependent delay function into software reliability growth modeling. Some new SRGMs are proposed and several numerical examples are included to illustrate the results. 
Experimental results show that the proposed framework to incorporate both fault dependency and time-dependent delay function for SRGMs has a fairly accurate prediction capability",2004,0, 1137,Intelligent component selection,"Component-based software engineering (CBSE) provides solutions to the development of complex and evolving systems. As these systems are created and maintained, the task of selecting components is repeated. The context-driven component evaluation (CdCE) project is developing strategies and techniques for automating a repeatable process for assessing software components. This paper describes our work using artificial intelligence (AI) techniques to classify components based on an ideal component specification. Using AI we are able to represent dependencies between attributes, overcoming some of the limitations of existing aggregation-based approaches to component selection",2004,0, 1138,On the testing of particular input conditions,"Generating test cases from a specification can be done at an early stage. However, so many important aspects relevant to testing can be identified from the specification that exhaustively testing their combinations can be very costly. A common approach to reduce testing costs is to identify some particular input conditions and test each of them only once. We argue that such an approach should be used judiciously, or else inadequate tests may result. This paper explores several alternatives to assess the validity of the tester's hypothesis that a particular condition can be tested adequately with only one test case. These alternatives help to test the particular conditions more reliably and, hence, reduce the risk of not revealing the existence of faults",2004,0, 1139,Proof-guided testing: an experimental study,"Proof-guided testing is intended to enhance the test design with information extracted from the argument for correctness. The target application field is the verification of fault-tolerance algorithms where a paper proof is published Ideally, testing should be focused on the weak parts of the demonstration. The identification of weak parts proceeds by restructuring the informal discourse as a proof tree and analyzing it step by step. The approach is experimentally assessed using the example of a flawed group membership protocol (GMP). Results are quite promising: (1) compared to crude random testing, the proof-guided method allowed us to significantly improve the fault revealing power of test data; (2) the overall method also provided useful feedback on the proof and its potential flaw(s).",2004,0, 1140,Services-oriented dynamic reconfiguration framework for dependable distributed computing,"Recently service-oriented architecture (SOA) has received significant attention and one reason is that it is potentially survivable as services are located, bound, and executed at runtime over the Internet. However, this is not enough for dependable computing because the system must also be able to reconfigure once a system failure or overload is detected, and this reconfiguration must be done in real-time at runtime with minimum disruption to the current operation. 
This work presents reconfiguration requirements for building dependable SOA, and proposes a dynamic reconfiguration framework based on distributed monitoring, synchronization, and runtime verification with distributed agents.",2004,0, 1141,A novel ant clustering algorithm based on cellular automata,"Based on the principle of cellular automata in artificial life, an artificial ant sleeping model (ASM) and an ant algorithm for cluster analysis (A4C) are presented. Inspired by the behaviors of gregarious ant colonies, we use the ant agent to represent a data object. In ASM, each ant has two states: a sleeping state and an active state. The ant's state is controlled by a function of the ant's fitness to the environment it locates and a probability for the ants becoming active. The state of an ant is determined only by its local information. By moving dynamically, the ants form different subgroups adaptively, and hence the data objects they represent are clustered. Experimental results show that the A4C algorithm on ASM is significantly better than other clustering methods in terms of both speed and quality. It is adaptive, robust and efficient, achieving high autonomy, simplicity and efficiency.",2004,0, 1142,Geotechnical application of borehole GPR - a case history,"Borehole GPR measurements were performed to complement the site characterization of a planned expansion of a cement plant. This expansion includes a mill and a reclaim facility adjacent to the present buildings, the whole site being located in a karstic environment. Twenty-one geotechnical exploration borings revealed that the depth to bedrock is very irregular (between 1.5 m and 18 m) and that the rocks likely have vertical alteration channels that extend many meters below the rock surface. The purpose of the GPR survey was to reveal the presence of potential cavities and to better determine the required vertical extent of the caissons of the foundations. In general, the subsurface conditions consist of a top fill layer, which is electrically conductive, a residual clay layer and a limestone bedrock. Very poor EM penetration prevented surface measurements. Hence, 100 MHz borehole antennas were used to perform single-hole reflection and cross-hole transmission measurements. Sixteen geotechnical holes were visited during the survey. All holes were surveyed in reflection mode. Nineteen tomographic panels were scanned. Velocity tomogram were obtained for all the data. Attenuation tomography was performed in fewer occasions, due to higher uncertainty in the quality of the amplitude data. Resistivity probability maps were drawn when both velocity and attenuation data were obtained. The velocity tomography calculations were constrained using velocity profiles along the borings. These profiles were obtained by inversion of the single-hole first arrival data. The velocity tomography results show that globally the area can be separated in two zones, one with an average velocity around 0.1 m/ns and one with a slower 0.09 m/ns average velocity. In addition, low velocity anomalies attributable to weathered zones appear in the tomogram. In conclusion, the GPR results revealed no cavities. 
Sensitive zones were located, which helped the planning and the budgeting of the caissons construction.",2004,0, 1143,Case studies on the application of the CORE model for requirements engineering process assessment [software engineering],"Existing requirements engineering (RE) process assessment models lack the components necessary to provide enough information about the quality of an RE process. The concept of """"concern of requirement engineering"""" (CORE) and the assessment models proposed in our previous research provide a method and new perspectives to assess the quality of an RE process. The case studies presented in this paper provide a comprehensive view of the application of the CORE model. The advantages of using the model for RE process assessment are twofold. First, it is more flexible because the major COREs assess the main contents of the activities of an RE process. Second, the categories classified in the model allow RE process assessment based on several categories. This allows for process improvement in an incremental manner. The CORE model is part of our RE process development framework and is used to assess the quality of the RE process under development.",2004,0, 1144,Development of an intelligent system for architecture design and analysis [software architecture],"Software architecture plays a pivotal role in allowing an organization to meet its business goals, in terms of the early insights it provides into the system, the communication it enables among stakeholders, and the value it provides as a re-usable asset. Unfortunately, designing and analyzing architecture for a certain system is recognized as a hard task for most software engineers, because the process of collecting, maintaining, and validating architectural information is complex, knowledge-intensive, iterative, and error prone. The needs of software architectural design and analysis have led to a desire to create tools to support the process. This paper introduces an intelligent system, which serves the following purposes: to obtain meaningful nonfunctional requirements from users; to aid in exploring architectural alternatives; and to facilitate architectural analysis.",2004,0, 1145,A reliability study on switching fabric based system architecture,"Processor and networking technologies have outrun system technologies, such that the system itself is the bottleneck. Many vendors and organisations are trying to address this problem by replacing the popular bus based architecture with a switching fabric. The paper studies the new switching technology and compares it with the old bus based architecture. Both architectures are to be used in a highly-available parallel processing environment. It is assumed that both systems have incorporated the same fault tolerant strategies. A hierarchical approach is used to break down the systems into subsets, such that both software and hardware components are considered. Each component is represented by a hazard function or a failure intensity. The fault tolerant strategies are used to estimate the repair time probability distribution function. The availability of the system is calculated. It is found that the switching fabric has a slightly higher availability.",2004,0, 1146,Specifications overview for counter mode of operation. Security aspects in case of faults,"In 2001, after a selection process, NIST added the counter mode of operation to be used with the advanced encryption standard (AES).
In the NIST recommendation a standard incrementing function is defined for generation of the counter blocks which are encrypted for each plaintext block. IPsec Internet draft (R. Housley et al., May 2003) and ATM security specifications contain implementation specifications for counter mode standard incrementing function. In this paper we present those specifications. We analyze the probability to reveal useful information in case of faults in standard incrementing function described in NIST recommendation. The confidentiality of the mode can be compromised with the fault model presented in this paper. We recommend another solution to be used in generation of the standard incrementing function in the context of the counter mode.",2004,0, 1147,Design and evaluation of a fault-tolerant mobile-agent system,"The mobile agents create a new paradigm for data exchange and resource sharing in rapidly growing and continually changing computer networks. In a distributed system, failures can occur in any software or hardware component. A mobile agent can get lost when its hosting server crashes during execution, or it can get dropped in a congested network. Therefore, survivability and fault tolerance are vital issues for deploying mobile-agent systems. This fault tolerance approach deploys three kinds of cooperating agents to detect server and agent failures and recover services in mobile-agent systems. An actual agent is a common mobile agent that performs specific computations for its owner. Witness agents monitor the actual agent and detect whether it's lost. A probe recovers the failed actual agent and the witness agents. A peer-to-peer message-passing mechanism stands between each actual agent and its witness agents to perform failure detection and recovery through time-bounded information exchange; a log records the actual agent's actions. When failures occur, the system performs rollback recovery to abort uncommitted actions. Moreover, our method uses checkpointed data to recover the lost actual agent.",2004,0, 1148,Decidability results for parametric probabilistic transition systems with an application to security,"We develop a model of parametric probabilistic transition systems. In this model probabilities associated with transitions may be parameters, and we show how to find instances of parameters that satisfy a given property and instances that either maximize or minimize the probability of reaching a given state. We show, as an application, the model of a probabilistic non repudiation protocol. The theory we develop allows us to find instances that maximize the probability that the protocol ends in a fair state (no participant has an advantage over the others).",2004,0, 1149,Exception safety for C#,"Programming-language mechanisms for throwing and handling exceptions can simplify some computer programs. However the use of exceptions can also be error prone, leading to new programming errors and code that is hard to understand. This paper describes ways to tame the exception usage in C#. In particular the paper describes the treatment of exceptions in Spec#, an experimental superset of C# that includes code contracts.",2004,0, 1150,Fault tolerance in a layered architecture: a general specification pattern in B,"Dependable control systems are usually complex and prone to errors of various natures.
To guarantee system dependability, we need to develop software that is not only fault-free but also is able to cope with faults of other system components. In this paper we propose a general formal specification pattern that can be recursively applied to specify fault tolerance mechanisms at each architectural layer. Iterative application of this pattern via stepwise refinement in the B method results in development of a layered fault tolerant system correct by construction. We demonstrate the proposed approach by an excerpt from a realistic case study - development of liquid handling workstation Fillwell.",2004,0, 1151,HyperTree for self-stabilizing peer-to-peer systems,"Peer-to-peer systems are prone to faults, thus it is vitally important to design peer-to-peer systems to automatically regain consistency, namely to be self-stabilizing. Toward this goal, we present a deterministic structure that defines for every n the entire (IP) pointers structure among the n machines. Namely, the next hop for the insert, delete and search procedures of the peer-to-peer system. Thus, the consistency of the system is easily defined, monitored, verified and repaired. We present the HyperTree (distributed) structure which support the peer-to-peer procedures while ensuring that the out-degree and in-degree (the number of outgoing/incoming pointers) are b logb N where N is the maximal number of machines and b is an integer parameter greater than 1. In addition the HyperTree ensures that the maximal number of hops involved in each procedure is bounded by logb N. A self-stabilizing peer-to-peer system based on the HyperTree is presented.",2004,0, 1152,Myrinet networks: a performance study,"As network computing become commonplace, the interconnection networks and the communication system software become critical in achieving high performance. Thus, it is essential to systematically assess the features and performance of the new networks. Recently, Myricom has introduced a two-port """"E-card"""" Myrinet/PCI-X interface. In this paper, we present the basic performance of its GM2.I messaging layer, as well as a set of microbenchmarks designed to assess the quality of MPI implementation on top of GM. These microbenchmarks measure the latency, bandwidth, intra-node performance, computation/communication overlap, parameters of the LogP model, buffer reuse impact, different traffic patterns, and collective communications. We have discovered that the MPI basic performance is close to those offered at the GM. We find that the host overhead is very small in our system. The Myrinet network is shown to be sensitive to the buffer reuse patterns. However, it provides opportunities for overlapping computation with communication. The Myrinet network is able to deliver up to 2000MB/s bandwidth for the permutation patterns.",2004,0, 1153,Compression of VLSI test data by arithmetic coding,This work presents arithmetic coding and its application to data compression for VLSI testing. The use of arithmetic codes for compression results in a codeword whose length is close to the optimal value as predicted by entropy in information theory. Previous techniques (such as those based on Huffman or Golomb coding) result in optimal codes for test data sets in which the probability model of the symbols satisfies specific requirements. We show that Huffman and Golomb codes result in large differences between entropy bound and sustained compression.
We present compression results of arithmetic coding for circuits through a practical integer implementation of arithmetic coding/decoding and analyze its deviation from the entropy bound as well. A software implementation approach is proposed and studied in detail using industrial embedded DSP cores.,2004,0, 1154,Testing and defect tolerance: a Rent's rule based analysis and implications on nanoelectronics,"Defect tolerant architectures will be essential for building economical gigascale nanoelectronic computing systems to permit functionality in the presence of a significant number of defects. The central idea underlying a defect tolerant configurable system is to build the system out of partially perfect components, detect the defects and configure the available good resources using software. In this paper we discuss implications of defect tolerance on power, area, delay and other relevant parameters for computing architectures. We present a Rent's rule based abstraction of testing for VLSI systems and evaluate the redundancy requirements for observability. It is shown that for a very high interconnect defect density, a prohibitively large number of redundant components are necessary for observability and this has an adverse effect on the system performance. Through a unified framework based on a priori wire length estimation and Rent's rule we illustrate the hidden cost of supporting such an architecture.",2004,0, 1155,At-speed functional verification of programmable devices,In this paper we present a novel approach for functional verification of programmable devices. The proposed methodology is suited to refine the results obtained by a functional automatic test pattern generator (ATPG). The hard-to-detect faults are examined by exploiting the controllability ability of a high-level ATPG in conjunction with the observability potentiality of software instructions targeted to the programmable device. Generated test programs can be used for both functional verification and at-speed testing.,2004,0, 1156,An energy-aware framework for coordinated dynamic software management in mobile computers,"Energy efficiency is a very important and challenging issue for resource-constrained mobile computers. We propose a dynamic software management (DSM) framework to improve battery utilization, and avoid competition for limited energy resources from multiple applications. We have designed and implemented a DSM module in user space, independent of the operating system (OS), which explores quality-of-service (QoS) adaptation to reduce system energy and employs a priority-based preemption policy for multiple applications. It also employs energy macromodels for mobile applications to aid in this endeavor. By monitoring the energy supply and predicting energy demand at each QoS level, the DSM module is able to select the best possible trade-off between energy conservation and application QoS. To the best of our knowledge, this is the first energy-aware coordinated framework utilizing adaptation of mobile applications. It honors the priority desired by the user and is portable to POSIX-compliant OSs. Our experimental results for some mobile applications (video player, speech recognizer, voice-over-IP) show that this approach can meet user-specified task-oriented goals and improve battery utilization significantly.
They also show that prediction of application energy demand based on energy macro-models is a key component of this framework.",2004,0, 1157,Checkpointing for peta-scale systems: a look into the future of practical rollback-recovery,"Over the past two decades, rollback-recovery via checkpoint-restart has been used with reasonable success for long-running applications, such as scientific workloads that take from few hours to few months to complete. Currently, several commercial systems and publicly available libraries exist to support various flavors of checkpointing. Programmers typically use these systems if they are satisfactory or otherwise embed checkpointing support themselves within the application. In this paper, we project the performance and functionality of checkpointing algorithms and systems as we know them today into the future. We start by surveying the current technology roadmap and particularly how Peta-Flop capable systems may be plausibly constructed in the next few years. We consider how rollback-recovery as practiced today will fare when systems may have to be constructed out of thousands of nodes. Our projections predict that, unlike current practice, the effect of rollback-recovery may play a more prominent role in how systems may be configured to reach the desired performance level. System planners may have to devote additional resources to enable rollback-recovery and the current practice of using """"cheap commodity"""" systems to form large-scale clusters may face serious obstacles. We suggest new avenues for research to react to these trends.",2004,0, 1158,Progress in real-time fault tolerance,"This paper discusses progress in the field of real-time fault tolerance. In particular, it considers synchronous vs. asynchronous fault tolerance designs, maintaining replica consistency, alternative fault tolerance strategies, including checkpoint restoration, transactions, and consistent replay, and custom vs. generic fault tolerance.",2004,0, 1159,"Software's secret sauce: the """"-ilities"""" [software quality]","If beauty is in the eye of the beholder, then quality must be as well. We live in a world where beauty to one is a complete turnoff to another. Software quality is no different. We have the developer's perspective, the end users perspective, the testers perspective, and so forth. As you can see, meeting the requirements might be different from being fit for a purpose, which can also be different from complying with rules and regulations on how to develop and deploy the software. Yet we can think of all three perspectives as ways to determine how to judge and assess software quality. These three perspectives tie directly to the persistent software attributes focus section in this issue and, consequently, to the concept of software """"-ilities"""". The -ilities (or software attributes) are a collection of closely related behaviors that by themselves have little or no value to the end users but that can greatly increase a software application or system's value when added.",2004,0, 1160,The future of software infrastructure protection,"In this paper the author describes how a Gatekeeper prototype had detected 83 percent of all unknown real viruses thrown at it. Even more intriguing was that the 17 percent of viruses missed were all due to the prototype code's immaturity, rather than any failing of the method used to detect them. 
Stated another way: An enterprise-ready version of the prototype would have captured every virus the Internet could have thrown at it during its testing period. Of course, many signature-based virus detection tools can detect 100 percent of known viruses. But very few of them can recognize new viruses.",2004,0, 1161,Measuring application error rates for network processors,"Faults in computer systems can occur due to a variety of reasons. In many systems, an error has a binary effect, i.e. the output is either correct or it is incorrect. However, networking applications exhibit different properties. For example, although a portion of the code behaves incorrectly due to a fault, the application can still work correctly. Integrity of a network system is often unchanged during faults. Therefore, measuring the effects of faults on the network processor applications require new measurement metrics to be developed. In this paper, we highlight essential application properties and data structures that can be used to measure the error behavior of network processors. Using these metrics, we study the error behavior of seven representative networking applications under different cache access fault probabilities.",2004,0, 1162,Discovery of policy anomalies in distributed firewalls,"Firewalls are core elements in network security. However, managing firewall rules, particularly in multi-firewall enterprise networks, has become a complex and error-prone task. Firewall filtering rules have to be written, ordered and distributed carefully in order to avoid firewall policy anomalies that might cause network vulnerability. Therefore, inserting or modifying filtering rules in any firewall requires thorough intra- and inter-firewall analysis to determine the proper rule placement and ordering in the firewalls. We identify all anomalies that could exist in a single- or multi-firewall environment. We also present a set of techniques and algorithms to automatically discover policy anomalies in centralized and distributed legacy firewalls. These techniques are implemented in a software tool called the """"Firewall Policy Advisor"""" that simplifies the management of filtering rules and maintains the security of next-generation firewalls.",2004,0, 1163,Power quality factor and line-disturbances measurements in three-phase systems,"A power quality meter (PQM) is presented for measuring, as a first objective, a single indicator, designated power quality factor (PQF), in the range between zero to one, which integrally reflect the power transfer quality of a general three phase network feeding unbalanced nonlinear loads. PQF definition is based on the analysis of functions in the frequency domain, separating the fundamental terms from the harmonic terms of the Fourier series. Then, quality aspects considered in the PQF definition can be calculated: a) the voltage and current harmonic levels b) the degree of unbalance and c) the phase displacement factor in the different phases at the fundamental frequency. As a second objective, the PQM has been designed for detecting, classifying and organizes power line disturbances. For monitoring power line disturbances, the PQM is configured as virtual instrument, which automatically classifies and organizes them in a database while they are being recorded. The type of disturbances includes: impulse, oscillation, sag, swell, interruption, undervoltage, overvoltage, harmonics and frequency variation. 
For amplitude disturbances (impulse, sag, swell, interruption, undervoltage and overvoltage), the PQM permits the measurement of parameters such as amplitude, start time and final time. Measurement of harmonic distortion allows recording and visual presentation of the spectrum of amplitudes and phases corresponding to the first 40 harmonics. Software tools use the database structure to present summaries of power disturbances and locate an event by severity or time of occurrence. Simulated measurements are included to demonstrate the versatility of the instrument.",2004,0, 1164,Weighted least square estimation algorithm with software phase-locked loop for voltage sag compensation by SMES,"A superconducting magnetic energy storage (SMES) system is developed to protect a critical load from momentary voltage sags. This system is composed of a 0.3 MJ SMES coil, a 150 KVA IGBT-based current source inverter and a phase-shift inductor. In order to compensate the load voltage effectively whenever a source voltage sag happens, it is crucial for the signal processing algorithm to extract the fundamental components from the sample signals quickly, precisely and stably. In this paper, an estimation algorithm based on the weighted least square principle is developed to detect the positive- and negative-sequence fundamental components from the measured AC voltages and currents. A software phase-locked loop (SPLL) is applied to track the positive-sequence component of the source voltage. Simulations and experiments are carried out to demonstrate the algorithms. The results are presented.",2004,0, 1165,Empirical evaluation of the fault-detection effectiveness of smoke regression test cases for GUI-based software,"Daily builds and smoke regression tests have become popular quality assurance mechanisms to detect defects early during software development and maintenance. In previous work, we addressed a major weakness of current smoke regression testing techniques, i.e., their lack of ability to automatically (re)test graphical user interface (GUI) event interactions - we presented a GUI smoke regression testing process called daily automated regression tester (DART). We have deployed DART and have found several interesting characteristics of GUI smoke tests that we empirically demonstrate in this paper. We also combine smoke tests with different types of test oracles and present guidelines for practitioners to help them generate and execute the most effective combinations of test-case length and test oracle complexity. Our experimental subjects consist of four GUI-based applications. We generate 5000-8000 smoke tests (enough to be run in one night) for each application. Our results show that: (1) short GUI smoke tests with certain test oracles are effective at detecting a large number of faults; (2) there are classes of faults that our smoke test cannot detect; (3) short smoke tests execute a large percentage of code; and (4) the entire smoke testing process is feasible to do in terms of execution time and storage space.",2004,0, 1166,Industrial real-time regression testing and analysis using firewalls,"Industrial real-time systems are complex and need to be thoroughly tested before being released to the customer. We have found that last minute changes are often responsible for the introduction of defects, causing serious problems for the customer. 
We demonstrate that these defects can be introduced into real-time software in diverse ways, and there is no simple regression testing method that can deal with all of these defect sources. This paper describes the application of a testing firewall for regression testing whose form differs depending upon the defect. The idea of the testing firewall is to limit the regression testing to those potentially affected system elements directly dependent upon changed system elements, and then to thoroughly test these elements. This has resulted in substantial savings in regression testing costs, and yet has been effective in detecting critical defects with significant implication in terms of customer acceptance at ABB. Empirical studies are reported for these experiences in an industrial setting.",2004,0, 1167,Checking inside the black box: regression testing based on value spectra differences,"Comparing behaviors of program versions has become an important task in software maintenance and regression testing. Traditional regression testing strongly focuses on black-box comparison of program outputs. Program spectra have recently been proposed to characterize a program's behavior inside the black box. Comparing program spectra of program versions offers insights into the internal behavior differences between versions. We present a new class of program spectra, value spectra, which enriches the existing program spectra family. We compare the value spectra of an old version and a new version to detect internal behavior deviations in the new version. We use a deviation-propagation call tree to present the deviation details. Based on the deviation-propagation call tree, we propose two heuristics to locate deviation roots, which are program locations that trigger the behavior deviations. We have conducted an experiment on seven C programs to evaluate our approach. The results show that our approach can effectively expose program behavior differences between versions even when their program outputs are the same, and our approach reports deviation roots with high accuracy for most programs.",2004,0, 1168,Extracting facts from open source software,"Open source software systems are becoming increasingly important these days. Many companies are investing in open source projects and lots of them are also using such software in their own work. But because open source software is often developed without proper management, the quality and reliability of the code may be uncertain. The quality of the code needs to be measured and this can be done only with the help of proper tools. We describe a framework called Columbus with which we calculate the object oriented metrics validated by Basili et al. for illustrating how fault-proneness detection from the open source Web and e-mail suite called Mozilla can be done. We also compare the metrics of several versions of Mozilla to see how the predicted fault-proneness of the software system changed during its development. The Columbus framework has been further developed recently with a compiler wrapping technology that now gives us the possibility of automatically analyzing and extracting information from software systems without modifying any of the source code or makefiles. 
We also introduce our fact extraction process here to show what logic drives the various tools of the Columbus framework and what steps need to be taken to obtain the desired facts.",2004,0, 1169,Detection strategies: metrics-based rules for detecting design flaws,"In order to support the maintenance of an object-oriented software system, the quality of its design must be evaluated using adequate quantification means. In spite of the current extensive use of metrics, if used in isolation metrics are oftentimes too fine grained to quantify comprehensively an investigated design aspect (e.g., distribution of system's intelligence among classes). To help developers and maintainers detect and localize design problems in a system, we propose a novel mechanism - called detection strategy - for formulating metrics-based rules that capture deviations from good design principles and heuristics. Using detection strategies an engineer can directly localize classes or methods affected by a particular design flaw (e.g., God Class), rather than having to infer the real design problem from a large set of abnormal metric values. We have defined such detection strategies for capturing around ten important flaws of object-oriented design found in the literature and validated the approach experimentally on multiple large-scale case-studies.",2004,0, 1170,"Analysis, testing and re-structuring of Web applications","The current situation in the development of Web applications is reminiscent of the early days of software systems, when quality was totally dependent on individual skills and lucky choices. In fact, Web applications are typically developed without following a formalized process model: requirements are not captured and design is not considered; developers quickly move to the implementation phase and deliver the application without testing it. Not differently from more traditional software system, however, the quality of Web applications is a complex, multidimensional attribute that involves several aspects, including correctness, reliability, maintainability, usability, accessibility, performance and conformance to standards. In this context, aim of this PhD thesis was to investigate, define and apply a variety of conceptual tools, analysis, testing and restructuring techniques able to support the quality of Web applications. The goal of analysis and testing is to assess the quality of Web applications during their development and evolution; restructuring aims at improving the quality by suitably changing their structure.",2004,0, 1171,A neuro-fuzzy tool for software estimation,"Accurate software estimation such as cost estimation, quality estimation and risk analysis is a major issue in software project management. We present a soft computing framework to tackle this challenging problem. We first use a preprocessing neuro-fuzzy inference system to handle the dependencies among contributing factors and decouple the effects of the contributing factors into individuals. Then we use a neuro-fuzzy bank to calibrate the parameters of contributing factors. In order to extend our framework into fields that lack of an appropriate algorithmic model of their own, we propose a default algorithmic model that can be replaced when a better model is available. 
Validation using industry project data shows that the framework produces good results when used to predict software cost.",2004,0, 1172,Probabilistic evaluation of object-oriented systems,"The goal of this study is the development of a probabilistic model for the evaluation of flexibility of an object-oriented design. In particular, the model estimates the probability that a certain class of the system gets affected when new functionality is added or when existing functionality is modified. It is obvious that when a system exhibits a large sensitivity to changes, the corresponding design quality is questionable. Useful conclusions can be drawn from this model regarding the comparative evaluation of two or more object-oriented systems or even the assessment of several generations of the same system, in order to determine whether or not good design principles have been applied. The proposed model has been implemented in a Java program that can automatically analyze the class diagram of a given system.",2004,0, 1173,Module-order modeling using an evolutionary multi-objective optimization approach,"The problem of quality assurance is important for software systems. The extent to which software reliability improvements can be achieved is often dictated by the amount of resources available for the same. A prediction for risk-based rankings of software modules can assist in the cost-effective delegation of the limited resources. A module-order model (MOM) is used to gauge the performance of the predicted rankings. Depending on the software system under consideration, multiple software quality objectives may be desired for a MOM; e.g., the desired rankings may be such that if 20% of modules were targeted for reliability enhancements then 80% of the faults would be detected. In addition, it may also be desired that if 50% of modules were targeted then 100% of the faults would be detected. Existing works related to MOM(s) have used an underlying prediction model to obtain the rankings, implying that only the average, relative, or mean square errors are minimized. Such an approach does not provide an insight into the behavior of a MOM, the performance of which focusses on how many faults are accounted for by the given percentage of modules enhanced. We propose a methodology for building MOM (s) by implementing a multiobjective optimization with genetic programming. It facilitates the simultaneous optimization of multiple performance objectives for a MOM. Other prediction techniques, e.g., multiple linear regression and neural networks, cannot achieve multiobjective optimization for MOM(s). A case study of a high-assurance telecommunications software system is presented. The observed results show a new promise in the modeling of goal-oriented software quality estimation models.",2004,0, 1174,A replicated experiment of usage-based and checklist-based reading,"Software inspection is an effective method to detect faults in software artefacts. Several empirical studies have been performed on reading techniques, which are used in the individual preparation phase of software inspections. Besides new experiments, replications are needed to increase the body of knowledge in software inspections. We present a replication of an experiment, which compares usage-based and checklist-based reading. 
The results of the original experiment show that reviewers applying usage-based reading are more efficient and effective in detecting the most critical faults from a user's point of view than reviewers using checklist-based reading. We present the data of the replication together with the original experiment and compare the experiments. The main result of the replication is that it confirms the result of the original experiment. This replication strengthens the evidence that usage-based reading is an efficient reading technique.",2004,0, 1175,A controlled experiment for evaluating a metric-based reading technique for requirements inspection,"Natural language requirements documents are often verified by means of some reading technique. Some recommendations for defining a good reading technique point out that a concrete technique must not only be suitable for specific classes of defects, but also for a concrete notation in which requirements are written. Following this suggestion, we have proposed a metric-based reading (MBR) technique used for requirements inspections, whose main goal is to identify specific types of defects in use cases. The systematic approach of MBR is basically based on a set of rules as """"if the metric value is too low (or high) the presence of defects of type defType1, ..., defTypen must be checked"""". We hypothesised that if the reviewers know these rules, the inspection process is more effective and efficient, which means that the defect detection rate is higher and the number of defects identified per unit of time increases. But this hypothesis lacks validity if it is not empirically validated. For that reason the main goal is to describe a controlled experiment we carried out to ascertain if the usage of MBR really helps in the detection of defects in comparison with a simple checklist technique. The experiment result revealed that MBR reviewers were more effective at detecting defects than checklist reviewers, but they were not more efficient, because MBR reviewers took longer than checklist reviewers on average.",2004,0, 1176,Assessing the impact of active guidance for defect detection: a replicated experiment,"Scenario-based reading (SBR) techniques have been proposed as an alternative to checklists to support the inspectors throughout the reading process in the form of operational scenarios. Many studies have been performed to compare these techniques regarding their impact on the inspector performance. However, most of the existing studies have compared generic checklists to a set of specific reading scenarios, thus confounding the effects of two SBR key factors: separation of concerns and active guidance. In a previous work we have preliminarily conducted a repeated case study at the University of Kaiserslautern to evaluate the impact of active guidance on inspection performance. Specifically, we compared reading scenarios and focused checklists, which were both characterized as being perspective-based. The only difference between the reading techniques was the active guidance provided by the reading scenarios. We now have replicated the initial study with a controlled experiment using as subjects 43 graduate students in computer science at the University of Bari. We did not find evidence that active guidance in reading techniques affects the effectiveness or the efficiency of defect detection.
However, inspectors showed a better acceptance of focused checklists than reading scenarios.",2004,0, 1177,Assessment of software measurement: an information quality study,"This paper reports on the first phase of an empirical research project concerning methods to assess the quality of the information in software measurement products. Two measurement assessment instruments are developed and deployed in order to generate two sets of analyses and conclusions. These sets will be subjected to an evaluation of their information quality in phase two of the project. One assessment instrument was based on AIMQ, a generic model of information quality. The other instrument was developed by targeting specific practices relating to software project management and identifying requirements for information support. Both assessment instruments delivered data that could be used to identify opportunities to improve measurement. The generic instrument is cheap to acquire and deploy, while the targeted instrument requires more effort to build. Conclusions about the relative merits of the methods, in terms of their suitability for improvement purposes, await the results from the second phase of the project.",2004,0, 1178,Assessing quantitatively a programming course,"The focus on assessment and measurement represents the main distinction between programming courses and software engineering courses in computer curricula. We introduced testing as an essential asset of a programming course. It allows precise measurement of the achievements of the students and allows an objective assessment of the teaching itself. We measured the size and evolution of the programs developed by the students and correlated these metrics with the grades. We plan to collect progressively a large baseline. We compared the productivity and defect density of the program developed by the students during the exam to industrial data and similar academic experiences. We found that the productivity of our students is very high even compared to industrial settings. Our defect density (before rework) is higher than the industrial one, which includes rework.",2004,0, 1179,Assessing usability through perceptions of information scent,Information scent is an established concept for assessing how users interact with information retrieval systems. This paper proposes two ways of measuring user perceptions of information scent in order to assess the product quality of Web or Internet information retrieval systems. An empirical study is presented which validates these measures through an evaluation based on a live e-commerce application. This study shows a strong correlation between the measures of perceived scent and system usability. Finally the wider applicability of these methods is discussed.,2004,0, 1180,Software failure rate and reliability incorporating repair policies,"Reliability of a software application, its failure rate and the residual number of faults in an application are the three most important metrics that provide a quantitative assessment of the failure characteristics of an application. Typically, one of many stochastic models known as software reliability growth models (SRGMs) is used to describe the failure behavior of an application in its testing phase, and obtain an estimate of the above metrics. In order to ensure analytical tractability, SRGMs are based on an assumption of instantaneous repair and thus the estimates of the metrics obtained using SRGMs tend to be optimistic.
In practice, fault repair activity consumes a nonnegligible amount of time and resources. Also, repair may be conducted according to many policies which are reflective of the schedule and budget constraints of a project. A few research efforts that have sought to incorporate repair into SRGMs are restrictive, since they consider only one of the several SRGMs, model the repair process using a constant rate, and provide an estimate of only the residual number of faults. These techniques do not address the issue of estimating application failure rate and reliability in the presence of repair. In this paper we present a generic framework which relies on the rate-based simulation technique in order to provide the capability to incorporate various repair policies into the finite failure nonhomogeneous Poisson process (NHPP) class of software reliability growth models. We also present a technique to compute the failure rate and the reliability of an application in the presence of repair. The potential of the framework to obtain quantitative estimates of the above three metrics taking into consideration different repair policies is illustrated using several scenarios.",2004,0, 1181,Comparing several coverage criteria for detecting faults in logical decisions,"Many testing coverage criteria, including decision coverage and condition coverage, are well-known to be inadequate for software characterised by complex logical decisions, such as those in safety-critical software. In the past decade, more sophisticated testing criteria have been advocated. In particular, compliance of MC/DC has been mandated in the aviation industry for the approval of airborne software. On the other hand, the MUMCUT criterion has been proved to guarantee the detection of certain faults in logical decisions in irredundant disjunctive normal form. We analyse and empirically evaluate the ability of test sets satisfying these testing criteria in detecting faults in logical decisions. Our results show that MC/DC test sets are effective, but they may still miss some faults that can almost always be detected by test sets satisfying the MUMCUT criterion.",2004,0, 1182,Automatic generation of Markov chain usage models from real-time software UML models,"The paper concerns automatic generation of usage models from real-time software UML models. Firstly, we define the reasonably constrained real-time software UML artifacts, which include use case diagrams, timed sequence diagrams and the execution probability of each sequence diagram in its associated use case. Secondly, the paper presents a method that derives the software usage model from the constrained UML artifacts. The method elicits the messages associated with the objects under testing and their occurrence probabilities to generate the usage model of each use case. Timing constraints in sequence diagrams are considered during usage model generation. Then the usage models of use cases are integrated into the software usage model by utilizing the execution sequence relations between use cases. The usage models can be used to generate real-time software statistical test cases and facilitate real-time software statistical testing.",2004,0, 1183,On the statistical properties of the F-measure,"The F-measure - the number of distinct test cases to detect the first program failure - is an effectiveness measure for debug testing strategies. We show that for random testing with replacement, the F-measure is distributed according to the geometric distribution. 
A simulation study examines the distribution of two adaptive random testing methods, to study how closely their sampling distributions approximate the geometric distribution, revealing that in the worst case scenario, the sampling distribution for adaptive random testing is very similar to random testing. Our results have provided an answer to a conjecture that adaptive random testing is always a more effective alternative to random testing, with reference to the F-measure. We consider the implications of our findings for previous studies conducted in the area, and make recommendations to future studies.",2004,0, 1184,Web site complexity metrics for measuring navigability,"In recent years, navigability has become the pivot of Web site designs. Existing works fall into two categories. The first is to evaluate and assess a Web site's navigability against a set of criteria or a checklist. The second is to analyse usage data of the Web site, such as the server log files. This work investigates a metric approach to Web site navigability measurement. In comparison with existing assessment and analysis methods, navigability metrics have the advantages of objectiveness and the possibility of using automated tools to evaluate large-scale Web sites. This work proposes a number of metrics for Web site navigability measurement based on measuring Web site structural complexity. We validate these metrics against Weyuker's software complexity axioms, and report the results of empirical studies of the metrics.",2004,0, 1185,A methodology for constructing maintainability model of object-oriented design,"It is obvious that the qualities of a software design heavily affect the qualities of the software ultimately developed. One of the claimed advantages of the object-oriented paradigm is the ease of maintenance. The main goal of this work is to propose a methodology for constructing a maintainability model of an object-oriented software design model using three techniques. Two subcharacteristics of maintainability, understandability and modifiability, are focused on in this work. A controlled experiment is performed in order to construct maintainability models of object-oriented designs using the experimental data. The first maintainability model is constructed using a metrics-discriminant technique. This technique analyzes the pattern of correlation between maintainability levels and structural complexity design metrics applying discriminant analysis. The second one is built using a weighted-score-level technique. The technique uses a weighted sum method by combining understandability and modifiability levels which are converted from understandability and modifiability scores. The third one is created using a weighted-predicted-level technique. Weighted-predicted-level uses a weighted sum method by combining predicted understandability and modifiability level, obtained from applying understandability and modifiability models. This work presents a comparison of the maintainability models obtained from the three techniques.",2004,0, 1186,An integrated design of multipath routing with failure survivability in MPLS networks,"Multipath routing employs multiple parallel paths between the source and destination for a connection request to improve resource utilization of a network. In this paper, we provide an integrated design of multipath routing in MPLS networks. In addition, we take into account the quality of service (QoS) in carrying delay-sensitive traffic and failure survivability in the design.
Path protection or restoration policies enable the network to accommodate link failures and avoid traffic loss. We evaluate the performance of the proposed schemes in terms of call blocking probability, network resource utilization and load balancing factor. The results demonstrate that the proposed integrated design framework can provide effective network failure survivability, and also achieve better load balancing and/or higher network resource utilization.",2004,0, 1187,Assessing and improving state-based class testing: a series of experiments,"This work describes an empirical investigation of the cost effectiveness of well-known state-based testing techniques for classes or clusters of classes that exhibit a state-dependent behavior. This is practically relevant as many object-oriented methodologies recommend modeling such components with statecharts which can then be used as a basis for testing. Our results, based on a series of three experiments, show that in most cases state-based techniques are not likely to be sufficient by themselves to catch most of the faults present in the code. Though useful, they need to be complemented with black-box, functional testing. We focus here on a particular technique, Category Partition, as this is the most commonly used and referenced black-box, functional testing technique. Two different oracle strategies have been applied for checking the success of test cases. One is a very precise oracle checking the concrete state of objects whereas the other one is based on the notion of state invariant (abstract states). Results show that there is a significant difference between them, both in terms of fault detection and cost. This is therefore an important choice to make that should be driven by the characteristics of the component to be tested, such as its criticality, complexity, and test budget.",2004,0, 1188,Adaptive selection combining for soft handover in OVSF W-CDMA systems,"In W-CDMA, soft handover is supported at cell boundaries to maintain communication quality. The maximal ratio combining (MRC) and generalized selection combining (GSC) are two possible approaches. However, soft handover is resource-intensive. In this letter, we propose an adaptive selection combining (ASC) scheme that can switch flexibly between MRC and GSC so as to take care of both channel loading and communication quality. The signal-to-interference-and-noise ratio (SINR) is kept as high as that of MRC while the blocking probability can remain at about the same level as that of GSC.",2004,0, 1189,On-chip testing of embedded silicon transducers,"System-on-chip (SoC) technologies are evolving towards the integration of highly heterogeneous devices, including hardware of a different nature, such as digital, analog and mixed-signal, together with software components. Embedding transducers, as predicted by technology roadmaps, is yet another step in this continuous search for higher levels of integration and miniaturisation. Embedded transducers fabricated with silicon/CMOS compatible technologies may have more limitations than transducers fabricated with fully dedicated technologies. However, they offer industry the possibility of providing low cost applications for very large market niches, while still keeping acceptable transducer sensitivity. This is the case, for example, for accelerometers, micromirror display devices or CMOS imagers. Embedded transducers are analog components.
But given the fact that they work with signals other than electrical, the test of these embedded parts poses new challenges. Test technology for SoC devices is rapidly maturing but many difficulties still remain, in particular for addressing the test of analog and mixed-signal parts. In this paper, we present our work in the field of MEMS (microelectromechanical systems) on-chip testing with a brief overview of the state-of-the-art.",2004,0, 1190,A practical consistent-quality two-pass VBR video coding algorithm for digital storage application,"This paper presents a practical two-pass VBR coding algorithm for digital storage applications, which provides consistent visual quality perceptually and satisfies a fixed total bit-budget constraint. In the first-pass coding, we detect scene changes precisely and obtain scene complexity as well as other statistical parameters of the video sequence. The total target bits are allocated to frames optimally according to scene spatial and temporal complexity with decoder buffer overflow and underflow consideration. In the second-pass coding, we employ an improved iterative Qp selection algorithm to search for the optimal picture-level reference Qp that results in minimum difference between the number of coded bits and that of target bits. Adaptive quantization and the non-integer picture-level reference Qp selection guarantee uniform quantization artifacts perceptually and the precision of iterative search. Experimental results show that the proposed algorithm can offer consistent visual quality with smaller PSNR variation and higher average PSNR improvement as compared to TM5 algorithm and a typical two-pass coding algorithm. The application of the proposed algorithm includes DVD, digital library, VOD, and other digital media applications.",2004,0, 1191,SeSFJava harness: service and assertion checking for protocol implementations,"Many formal specification languages and associated tools have been developed for network protocols. Ultimately, formal language specifications have to be compiled into a conventional programming language and this involves manual intervention (even with automated tools). This manual work is often error prone because the programmer is not familiar with the formal language. So our goal is to verify and test the ultimate implementation of a network protocol, rather than an abstract representation of it. We present a framework, called services and systems framework (SeSF), in which implementations and services are defined by programs in conventional languages, and mechanically tested against each other. SeSF is a markup language that can be integrated with any conventional language. We integrate SeSF into Java, resulting in what we call SeSFJava. We present a service-and-assertion checking harness for SeSFJava, called SeSFJava harness, in which distributed SeSFJava programs can be executed, and the execution checked against services and any other correctness assertions. The harness can test the final implementation of a concurrent system. We present an application to a data transfer service and sliding window protocol implementation. 
SeSFJava and the harness have been used in networking courses to specify and test transmission control protocol-like transport protocols and services.",2004,0, 1192,Configurable fault-tolerant processor (CFTP) for spacecraft onboard processing,"The harsh radiation environment of space, the propensity for SEUs to perturb the operations of silicon-based electronics, the rapid development of microprocessor capabilities and hence software applications, and the high cost (dollars and time) to develop and prove a system, require flexible, reliable, low cost, rapidly developed system solutions. A reconfigurable triple-modular-redundant (TMR) system-on-a-chip (SOC) utilizing field-programmable gate arrays (FPGAs) provides a practical solution for space-based systems. The configurable fault-tolerant processor (CFTP) is such a system, designed specifically for the purpose of testing and evaluating, on orbit, the reliability of instantiated TMR soft-core microprocessors, the ability to reconfigure the system to support any onboard processor function, and the means for detecting and correcting SEU-induced configuration faults. The CFTP utilizes commercial off-the-shelf (COTS) technology to investigate a low-cost, flexible alternative to processor hardware architecture, with a total-ionizing-dose (TID) tolerant FPGA as the basis for a SOC. The flexibility of a configurable processor, based on FPGA technology, enables on-orbit upgrades, reconfigurations, and modifications to the soft-core architecture in order to support dynamic mission requirements. Single event upsets (SEU) to the data stored in the FPGA-based soft-core processors are detected and corrected by the TMR architecture. SEUs affecting the FPGA configuration itself are corrected by background """"scrubbing"""" of the configuration. The CFTP payload consists of a printed circuit board (PCB) of 5.3 inches × 7.3 inches utilizing a slightly modified PC/104 bus interface. The initial FPGA configuration is an instantiation of a TMR processor, with included error detection and correction (EDAC) and memory controller circuitry. The PCB is designed with requisite supporting circuitry including a configuration controller FPGA, SDRAM, and flash memory in order to allow the greatest variety of possible configurations. The CFTP is currently manifested as a space test program (STP) experimental payload on the Naval Postgraduate School's NPSAT1 and the United States Naval Academy's MidSTAR-1 satellites, which was launched into low earth orbit in March 2003.",2004,0, 1193,Real-time diagnosis and prognosis with sensors of uncertain quality,"This work presents a real-time approach to the detection, isolation, and prediction of component failures in large-scale systems through the combination of two modules. The modules themselves are then used in conjunction with an inference engine, TEAMS-RT, which is part of Qualtech Systems integrated diagnostic toolset, to provide the end user with accurate diagnostic and prognostic information about the state of the system. The first module is a filter used to """"clean"""" observed test results from multiple sensors from system noise. The sensors have false alarm and missed detection probabilities that are not known a-priori, and must be estimated - ideally along with the accuracies of these estimates - online, within the inference engine. Further, recognizing a practical concern in most real systems, a sparsely instantiated observation vector must not be problematic.
Multiple hypothesis tracking (MHT) is at the heart of the filtering algorithm and beta prior distributions are applied to the sensor errors. The second module is a prognostic engine that uses an interacting multiple model (IMM) approach to track the """"trajectory"""" of degrading sensors. Kalman filters estimate the movement in each dimension of the sensors. The current state and trajectory of each sensor are then used to predict the time to failure value, i.e., when the component corresponding to the sensor is no longer usable. The modules are integrated together as part of the TEAMS-RT suite; logic is presented for the cases in which they disagree.",2004,0, 1194,A route for qualifying/certifying an affordable structural prognostic health management (SPHM) system,"There is a growing interest in developing affordable SPHM systems that use artificial intelligence (AI) techniques and existing flight parameters to track how each individual aircraft is used and to quantify the damaging effects of usage. Over the past four years, Smiths and BAE Systems have launched collaboration work to evolve a practical SPHM system. The collaborative work has built on BAE Systems' vast experience of operational load monitoring (OLM) that has spanned more than 30 years. The collaborative work has also built on the unique experience of Smiths over the past 20 years that has produced automatic data correction algorithms, mathematical networks (MNs), dynamic models and flight and usage management software (FUMS™). Smiths and BAE Systems have also been carrying out extensive investigations that lead to establishing and agreeing a route for qualifying/certifying AI-based systems, an essential work element to avoid a delay in introducing SPHM into service use. The investigations have covered assessing the adequacy and quality of the truth data required to train/configure an AI method. They have also addressed the regulatory authority (RA) concern that any volume of data gathered to train an AI method would not capture the truth and some operations/configurations may produce novel data outside the training data. Guidelines for management approaches based on individual aircraft tracking (IAT) have also been investigated. The paper presents the results of the Smiths and BAE Systems collaborative work and presents preliminary guidelines for qualifying/certifying AI methods.",2004,0, 1195,Predictive modeling and control of DMAS,"Predicting the behavior of distributed multi-agent systems (DMAS) is a known, extremely challenging problem. In general, we are not able to make reliable quantitative predictions of the behavior that a given DMAS exhibits, even in a known environment, due to the complex emergent effects in those systems, which often reflect chaotic interactions. Such predictability is nonetheless crucial for reliable, controlled development and deployment of such systems. We need to be able to control the behaviors of such systems, and want to optimize configurations to achieve acceptable and reliable returns of quality-of-service for an investment of resources. We describe here an approach to developing reliable predictive models for a particular class of DMAS. We have succeeded in developing such models for this class of applications and in achieving controlled behaviors and optimized configurations based on these predictive models.
We discuss our approach, and results and plans for applying this approach to broader classes of applications.",2004,0, 1196,A multiobjective module-order model for software quality enhancement,"The knowledge, prior to system operations, of which program modules are problematic is valuable to a software quality assurance team, especially when there is a constraint on software quality enhancement resources. A cost-effective approach for allocating such resources is to obtain a prediction in the form of a quality-based ranking of program modules. Subsequently, a module-order model (MOM) is used to gauge the performance of the predicted rankings. From a practical software engineering point of view, multiple software quality objectives may be desired by a MOM for the system under consideration: e.g., the desired rankings may be such that 100% of the faults should be detected if the top 50% of modules with highest number of faults are subjected to quality improvements. Moreover, the management team for the same system may also desire that 80% of the faults should be accounted if the top 20% of the modules are targeted for improvement. Existing work related to MOM(s) use a quantitative prediction model to obtain the predicted rankings of program modules, implying that only the fault prediction error measures such as the average, relative, or mean square errors are minimized. Such an approach does not provide a direct insight into the performance behavior of a MOM. For a given percentage of modules enhanced, the performance of a MOM is gauged by how many faults are accounted for by the predicted ranking as compared with the perfect ranking. We propose an approach for calibrating a multiobjective MOM using genetic programming. Other estimation techniques, e.g., multiple linear regression and neural networks cannot achieve multiobjective optimization for MOM(s). The proposed methodology facilitates the simultaneous optimization of multiple performance objectives for a MOM. Case studies of two industrial software systems are presented, the empirical results of which demonstrate a new promise for goal-oriented software quality modeling.",2004,0, 1197,Software detection mechanisms providing full coverage against single bit-flip faults,"Increasing design complexity for current and future generations of microelectronic technologies leads to an increased sensitivity to transient bit-flip errors. These errors can cause unpredictable behaviors and corrupt data integrity and system availability. This work proposes new solutions to detect all classes of faults, including those that escape conventional software detection mechanisms, allowing full protection against transient bit-flip errors. The proposed solutions, particularly well suited for low-cost safety-critical microprocessor-based applications, have been validated through exhaustive fault injection experiments performed on a set of real and synthetic benchmark programs. The fault model taken into consideration was single bit-flip errors corrupting memory cells accessible to the user by means of the processor instruction set. 
The obtained results demonstrate the effectiveness of the proposed solutions.",2004,0, 1198,A screened Coulomb scattering module for displacement damage computations in Geant4,"A new software module adding screened Coulomb scattering to the Monte Carlo radiation simulation code Geant4 has been applied to compute the nonionizing component of energy deposited in semiconductor materials by energetic protons and other forms of radiation. This method makes it possible to create three-dimensional maps of nonionizing energy deposition from all radiation sources in structures with complex compositions and geometries. Essential aspects of previous NIEL computations are confirmed, and issues are addressed both about the generality of NIEL and the ability of beam experiments to simulate the space environment with high fidelity, particularly for light ion irradiation at very high energy. A comparison of the displacement energy deposited by electromagnetic and hadronic interactions of a proton beam with published data on GaAs LED degradation supports the conclusion of previous authors that swift light ions and slower heavy ions produce electrically active defects with differing efficiencies. These results emphasize that, for devices with extremely small dimensions, it is increasingly difficult to predict the response of components in space without the assistance of computational modeling.",2004,0, 1199,"Detection of forestland degradation using Landsat TM data in panda's habitat, Sichuan, China","In the 1990s forestland in the panda's habitat, Southwest China Mountains, underwent rapid degradation since the natural forest was converted into agricultural land. Remote sensing technology has not only provided a vivid representation of the forestland's surface but also become an efficient source of thematic maps such as the deforestation in this area. Landsat-5 TM data in 1994 and Landsat-7 TM data in 2002 are available for detecting the forestland degradation in the study area. The foggy, cloudy and snowy weather and mountainous landscape make it difficult to acquire remotely sensed data with high quality in the panda's habitat. Supervised classification is performed in the image process and a maximum-likelihood classification (MLC) is applied using the spectral signatures from the training sites. According to the topographical and meteorological conditions, different training sites are created such as forest-forest, river valley, forest, crop, town, water, snow, cloud, shadow and non-forest. As the result, forestland degradation map provides much information for forest degradation. Classification accuracy assessment is carried out by ERDAS software and the overall classification accuracy is up to 82.81%.",2004,0, 1200,Assessing biogenic emission impact on the ground-level ozone concentration by remote sensing and numerical model,"Emission inventory data is one of the major inputs for all air quality simulation models. Emission inventory data for the prediction of ground-level ozone concentration, grouped as point, area, mobile and biogenic sources, are a composite of all reported and estimated pollutant emission information from many organizations. Before applying air quality simulation model, the emission inventory data generally require additional processing for meeting spatial, temporal, and speciation requirements using advanced information technologies. In this study, SMOKE was setup to update the essential emission processing. 
The emission processing work was performed to prepare emission input for U.S. EPA's Models-3/CMAQ. The fundamental anthropogenic emission inventory commonly used in Taiwan is the TEDS 4.2 software package. However, without the proper inclusion of accurate estimation of biogenic emission, the estimation of ground-level ozone concentration may not be meaningful. With the aid of SPOT satellite images, biogenic gas emission modeling analysis can be achieved to fit in BEIS-2 in SMOKE. Improved utilization of land use identification data, based on SPOT outputs and emission factors, may be influential in support of the modeling work. During this practice, land use was identified via an integrated assessment based on both geographical information system and remote sensing technologies, and emission factors were adapted from a series of existing databases in the literature. The research findings clearly indicate that the majority of biogenic VOC emissions, which occurred in the mountains and farmland, actually exhibit fewer impacts on ground-level ozone concentration in populated areas than the anthropogenic emissions in South Taiwan. This implies that fast economic growth ends up with a sustainability issue due to overwhelming anthropogenic emissions.",2004,0, 1201,"Quality assessment, verification, and validation of modeling and simulation applications","Many different types of modeling and simulation (M&S) applications are used in dozens of disciplines under diverse objectives including acquisition, analysis, education, entertainment, research, and training. M&S application verification and validation (V&V) are conducted to assess mainly the accuracy, which is one of many indicators affecting the M&S application quality. Much higher confidence can be achieved in accuracy if a quality-centered approach is used. This paper presents a quality model for assessing the quality of large-scale complex M&S applications as integrated with V&V. The guidelines provided herein should be useful for assessing the overall quality of an M&S application.",2004,0, 1202,Crystal Ball and Design for Six Sigma,"In today's competitive market, businesses are adopting new practices like Design For Six Sigma (DFSS), a customer driven, structured methodology for faster-to-market, higher quality, and less costly new products and services. Monte Carlo simulation and stochastic optimization can help DFSS practitioners understand the variation inherent in a new technology, process, or product, and can be used to create and optimize potential designs. The benefits of understanding and controlling the sources of variability include reduced development costs, minimal defects, and sales driven through improved customer satisfaction. This tutorial uses Crystal Ball Professional Edition, a suite of easy-to-use Microsoft Excel-based software, to demonstrate how stochastic simulation and optimization can be used in all five phases of DFSS to develop the design for a new compressor.",2004,0, 1203,An improved repository system for effective and efficient reuse of formal verification efforts,"This paper presents several enhancements to ARIFS, a reuse environment that sets the foundations for reusing formal verification efforts in an iterative and incremental software process for the design of distributed reactive systems. A criterion based on generic components is added, together with a self-learning mechanism, to reduce the search space and maximize the probability of retrieving useful information.
Besides, a formalization is given on how to apply verification tasks on a reduced number of states when the retrieved information is not enough for the user's intents. These enhancements are shown to improve both the effectiveness and the efficiency of ARIFS.",2004,0, 1204,An infinite server queueing approach for describing software reliability growth: unified modeling and estimation framework,"In general, the software reliability models based on the nonhomogeneous Poisson processes (NHPPs) are quite popular to assess quantitatively the software reliability and its related dependability measures. Nevertheless, it is not so easy to select the best model from a huge number of candidates in the software testing phase, because the predictive performance of software reliability models strongly depends on the fault-detection data. The asymptotic trend of software fault-detection data can be explained by two kinds of NHPP models; finite fault model and infinite fault model. In other words, one needs to make a hypothesis whether the software contains a finite or infinite number of faults, in selecting the software reliability model in advance. In this article, we present an approach to treat both finite and infinite fault models in a unified modeling framework. By introducing an infinite server queueing model to describe the software debugging behavior, we show that it can involve representative NHPP models with a finite and an infinite number of faults. Further, we provide two parameter estimation methods for the unified NHPP based software reliability models from both standpoints of Bayesian and nonBayesian statistics. Numerical examples with real fault-detection data are devoted to compare the infinite server queueing model with the existing one under the same probability circumstance.",2004,0, 1205,An exploratory study of groupware support for distributed software architecture evaluation process,"Software architecture evaluation is an effective means of addressing quality related issues quite early in the software development lifecycle. Scenario-based approaches to evaluate architecture usually involve a large number of stakeholders, who need to be collocated for evaluation sessions. Collocating a large number of stakeholders is an expensive and time-consuming exercise, which may prove to be a hurdle in the wide-spread adoption of architectural evaluation practices. Drawing upon the successful introduction of groupware applications to support geographically distributed teams in software inspection, and requirements engineering disciplines, we propose the concept of distributed architectural evaluation using Internet-based collaborative technologies. This paper illustrates the methodology of a pilot study to assess the viability of a larger experiment intended to investigate the feasibility of groupware support for distributed software architecture evaluation. In addition, the results of the pilot study provide some interesting findings on the viability of groupware-supported software architectural evaluation process.",2004,0, 1206,Adaptive random testing by localization,"Based on the intuition that widely spread test cases should have greater chance of hitting the nonpoint failure-causing regions, several adaptive random testing (ART) methods have recently been proposed to improve traditional random testing (RT). However, most of the ART methods require additional distance computations to ensure an even spread of test cases. 
In this paper, we introduce the concept of localization that can be integrated with some ART methods to reduce the distance computation overheads. By localization, test cases would be selected from part of the input domain instead of the whole input domain, and distance computation would be done for some instead of all previous test cases. Our empirical results show that the fault detecting capability of our method is comparable to that of other ART methods.",2004,0, 1207,Performing high efficiency source code static analysis with intelligent extensions,"This paper presents an industry practice for highly efficient source code analysis to promote software quality. As a continuation of a previously reported source code analysis system, we researched and developed a few engineering-oriented intelligent extensions to implement more cost-effective extended code static analysis and engineering processes. These include an integrated empirical scan and filtering tool for highly accurate noise reduction, and a new code checking test tool to detect function call mismatch problems, which may lead to many severe software defects. We also extended the system with an automated defect filing and verification procedure. The results show that, for a huge code base of millions of lines, our intelligent extensions not only contribute to the completeness and effectiveness of static analysis, but also establish significant engineering productivity.",2004,0, 1208,Empirical evaluation of orthogonality of class mutation operators,"Mutation testing is a fault-based testing technique which provides strong quality assurance. Mutation testing has a very long history for procedural programs at unit-level testing, but the research on mutation testing of object-oriented programs is still immature. Recently, class mutation operators have been proposed to detect object-oriented specific faults. However, no analysis has been conducted on the class mutation operators. In this paper, we evaluate the orthogonality of the class mutation operators by experiment. The experimental results show the high possibility that each class mutation operator has fault-revealing power that is not achieved by other mutation operators, i.e., orthogonal. Also, the results show that the number of mutants from the class mutation operators is small so that the cost is not as high as for procedural programs.",2004,0, 1209,Systematic operational profile development for software components,"An operational profile is a quantification of the expected use of a system. Determining an operational profile for software is a crucial and difficult part of software reliability assessment in general and it can be even more difficult for software components. This paper presents a systematic method for deriving an operational profile for software components. The method uses both actual usage data and intended usage assumptions to derive a usage structure, usage distribution and characteristics of parameters (including relationships between parameters). A usage structure represents the flow and interaction of operation calls. Statecharts are used to model the usage structures. A usage distribution represents probabilities of the operations.
The method is illustrated on two Java classes but can be applied to any software component that is accessed through an application program interface (API).",2004,0, 1210,Possible implications of design decisions based on predictions,"Software systems and applications are increasingly constructed as assemblies of preexisting components. This makes software development cheaper and faster, and results in more favorable preconditions for achieving higher quality. This approach, however, introduces several problems, most of them originating from the fact that preexisting software components behave as black boxes. One problem is that it is difficult to analyze the properties of systems in which they are incorporated. To simplify the evaluation of system properties, different techniques have been developed to predict the behavior of systems on the basis of the properties of the constituent components. Because many cannot be formally specified, these techniques make use of statistical terms such as probability or mean value to express system properties. This paper discusses ethical aspects of the interpretation of such predictions. This problem is characteristic of many domains (data mining, safety-critical systems, etc.) but it is inherent in component-based software development.",2004,0, 1211,Strategic power infrastructure defense (SPID),"Summary form only given. An advanced system called """"strategic power infrastructure defense (SPID) system,"""" was developed by the Advanced Power Technologies (APT) Consortium consisting of the University of Washington, Arizona State University, Iowa State University and Virginia Tech. By incorporating multi-agent system technologies, the SPID system is able to assess power system vulnerability, monitor hidden failures of protective devices, and provide adaptive control actions to prevent catastrophic failures and cascading sequences of events. The SPID program was sponsored by EPRI and the U.S. Department of Defense. In this session, the panelist will summarize the SPID methodology and the multi-agent system technologies that are critical for the implementation of the SPID system. The software agents in the SPID system are organized in a multi-layer structure to facilitate collaboration among the agents. The agents communicate through a protocol called FIPA. SPID has the ability to adapt to changes in the power infrastructure environment through the embedded machine learning capability. Simulation examples of the multi-agent system are provided.",2004,0, 1212,Rotor cage fault diagnosis in induction motors based on spectral analysis of current Hilbert modulus,"Hilbert transformation is an ideal phase shifting tool in data signal processing. Being Hilbert transformed, the conjugate of a signal is obtained. The Hilbert modulus is defined as the square of a signal and its conjugation. This work presents a method by which rotor faults of squirrel cage induction motors, such as broken rotor bars and eccentricity, can be diagnosed. The method is based on the spectral analysis of the stator current Hilbert Modulus of the induction motors. Theoretical analysis and experimental results demonstrate that it has the same rotor fault detecting ability as the extended Park's vector approach.
The vital advantage of the former is the smaller hardware and software spending compared with the existing ones.",2004,0, 1213,Extract rules from software quality prediction model based on neural network,"To get a highly reliable software product to the market on schedule, software engineers must allocate resources to the fault-prone software modules across the development effort. Software quality models based upon data mining from past projects can identify fault-prone modules in current similar development efforts, so that resources can be focused on fault-prone modules to improve quality prior to release. Many researchers have applied the neural networks approach to predict software quality. Although neural networks have shown their strengths in solving complex problems, their shortcoming of being 'black box' models has prevented them from being accepted as a common practice for fault-prone software module prediction. That is a significant weakness, for without the ability to produce comprehensible decisions, it is hard to trust the reliability of neural networks that address real-world problems. We introduce an interpretable neural network model for software quality prediction. First, a three-layer feed-forward neural network with the sigmoid function in hidden units and the identity function in the output unit was trained. The data used to train the neural network is collected from an earlier release of a telecommunications software system. Then a clustering genetic algorithm (CCA) is used to extract comprehensible rules from the trained neural network. We use the rule set extracted from the trained neural network to detect the fault-prone software modules of the later release and compare the predicting results with the neural network predicting results. The comparison shows that although the rule set's predicting accuracy is a little less than that of the trained neural network, it is more comprehensible.",2004,1, 1214,Noise identification with the k-means algorithm,"The presence of noise in a measurement dataset can have a negative effect on the classification model built. More specifically, the noisy instances in the dataset can adversely affect the learnt hypothesis. Removal of noisy instances will improve the learnt hypothesis; thus, improving the classification accuracy of the model. A clustering-based noise detection approach using the k-means algorithm is presented. We present a new metric for measuring the potentiality (noise factor) of an instance being noisy. Based on the computed noise factor values of the instances, the clustering-based algorithm is then used to identify and eliminate p% of the instances in the dataset. These p% of instances are considered the most likely to be noisy among the instances in the dataset - the p% value is varied from 1% to 40%. The noise detection approach is investigated with respect to two case studies of software measurement data obtained from NASA software projects. The two datasets are characterized by the same thirteen software metrics and a class label that classifies the program modules as fault-prone and not fault-prone. It is shown that as more noisy instances are removed, classification accuracy of the C4.5 learner improves.
This indicates that the removed instances are most likely noisy instances that contributed to the poor classification accuracy.",2004,0, 1215,Quantifying the quality of object-oriented design: the factor-strategy model,"The quality of a design has a decisive impact on the quality of a software product; but due to the diversity and complexity of design properties (e.g., coupling, encapsulation), their assessment and correlation with external quality attributes (e.g., maintenance, portability) is hard. In contrast to traditional quality models that express the """"goodness"""" of design in terms of a set of metrics, the novel Factor-Strategy model proposed in this work relates explicitly the quality of a design to its conformance with a set of essential principles, rules and heuristics. This model is based on a novel mechanism, called detection strategy, that raises the abstraction level in dealing with metrics, by allowing good-design rules and heuristics to be formulated in a quantifiable manner, and deviations from these rules to be detected automatically. This quality model provides a twofold advantage: (i) an easier construction and understanding of the model as quality is put in connection with design principles rather than """"raw numbers""""; and (ii) a direct identification of the real causes of quality flaws. We have validated the approach through a comparative analysis involving two versions of an industrial software system.",2004,0, 1216,"An initial approach to assessing program comprehensibility using spatial complexity, number of concepts and typographical style","Software evolution can result in making a program harder to maintain, as it becomes more difficult to comprehend. This difficulty is related to the way the source code is formatted, the complexity of the code, and the amount of information contained within it. This work presents an initial approach that uses measures of typographical style, spatial complexity and concept assignment to measure these factors, and to model the comprehensibility of an evolving program, the ultimate aim of which is to identify when a program becomes more difficult to comprehend, triggering a corrective action to be taken to prevent this. We present initial findings from applying this approach. These findings show that this approach, through measuring these three factors, can model the change in comprehensibility of an evolving program. Our findings support the well-known claim that programs become more complex as they evolve, explaining this increase in complexity in terms of layout changes, conceptual coherence, spatial relationships between source code elements, and the relationship between these factors. This in turn can then be used to understand how maintenance affects program comprehensibility and to ultimately reduce its burden on software maintenance.",2004,0, 1217,A static reference flow analysis to understand design pattern behavior,"Design patterns are actively used by developers expecting that they provide the design with good quality such as flexibility and reusability. However, according to industrial reports on the use of design patterns, the expectation is not always realized. In particular, one such report points out two causes of inappropriately applied patterns from a case study on a large commercial project: developers inexperienced in design patterns and no connection with project requirements.
Wrong decisions on the use of design patterns make the program difficult to understand, and refactoring the program to improve the underlying structure, especially without documentation, can be very tricky. To eliminate wrongly applied patterns or document important decisions automatically, design pattern recovery is important for not only the development phase but also the maintenance phase. Many design pattern recovery approaches focus on structural characteristics and do not touch set-up behavior that configures links between participants and precedes pattern behavior. To detect design patterns implemented in program code more precisely and to show their behavior, we analyze the program at the expression level. Our approach is based on statically approximating run-time behavior among pattern participants. For this, a static program analysis technique is used. Many static analysis techniques for object-oriented languages exist in the program analysis area, mainly for optimizing compilers.",2004,0, 1218,An initial experiment in reverse engineering aspects,"We evaluate the benefits of applying aspect-oriented software development techniques in the context of a large-scale industrial embedded software system implementing a number of crosscutting concerns. Additionally, we assess the feasibility of automatically extracting these crosscutting concerns from the source code. In order to achieve this, we present an approach for reverse engineering aspects from an ordinary application automatically. This approach incorporates both a concern verification and an aspect construction phase. Our results show that such automated support is feasible, and can lead to significant improvements in source code quality.",2004,0, 1219,Testing for missing-gate faults in reversible circuits,"Logical reversibility occurs in low-power applications and is an essential feature of quantum circuits. Of special interest are reversible circuits constructed from a class of reversible elements called k-CNOT (controllable NOT) gates. We review the characteristics of k-CNOT circuits and observe that traditional fault models like the stuck-at model may not accurately represent their faulty behavior or test requirements. A new fault model, the missing gate fault (MGF) model, is proposed to better represent the physical failure modes of quantum technologies. It is shown that MGFs are highly testable, and that all MGFs in an N-gate k-CNOT circuit can be detected with one to [N/2] test vectors. A design-for-test (DFT) method to make an arbitrary circuit fully testable for MGFs using a single test vector is described. Finally, we present simulation results to determine (near) optimal test sets and DFT configurations for some benchmark circuits.",2004,0, 1220,Failure analysis of open faults by using detecting/un-detecting information on tests,"Recently, manufacturing defects including opens in the interconnect layers have been increasing. Therefore, failure analysis for open faults has become important in manufacturing. Moreover, failure analysis for open faults under a BIST environment is in demand. Since the quality of the failure analysis depends on the resolution of fault location, we propose a method for locating a single open fault at a stem, based only on detecting/un-detecting information on tests. Our method deduces candidate faulty stems based on the number of detections for single stuck-at faults at each fan-out branch, by performing single stuck-at fault simulation with both detecting and un-detecting tests. 
To improve the ability to locate the fault, the method reduces the candidate faulty stems based on the number of detections for multiple stuck-at faults at fan-out branches of the candidate faulty stem, by performing multiple stuck-at fault simulation with detecting tests.",2004,0, 1221,Considering fault dependency and debugging time lag in reliability growth modeling during software testing,"Since the early 1970s, tremendous growth has been seen in the research of software reliability growth modeling. In general, software reliability growth models (SRGMs) are applicable to the late stages of testing in software development and they can provide useful information about how to improve the reliability of software products. For most existing SRGMs, researchers assume that faults are immediately detected and corrected. However, in practice, this assumption may not be realistic or satisfied. In this paper we first give a review of fault detection and correction processes in SRGMs. We show how several existing SRGMs based on NHPP models can be comprehensively derived by applying the time-dependent delay function. Furthermore, we show how to incorporate both failure dependency and the time-dependent delay function into software reliability growth modeling. We present stochastic reliability models for the software failure phenomenon based on NHPPs. Some numerical examples based on real software failure data sets are presented. The results show that the proposed framework to incorporate both failure dependency and the time-dependent delay function into software reliability modeling has a useful interpretation in testing and correcting the software.",2004,0, 1222,A Bayesian framework-based end-to-end packet loss prediction in IP networks,"Channel modelling in a network path is of major importance in designing delay-sensitive applications. It is often not possible for these applications to retransmit packets due to delay constraints and they must therefore be resilient to packet losses. In this paper, we first establish an association between traffic delays and the queue size at a network gateway. A novel method for predicting packet losses is then proposed that is based on the correlation between the packet losses and the variations in the end-to-end time delay observed during transmission. We show that this makes it possible to predict packet losses before they occur. The transmission of multimedia streams can then be dynamically adjusted to account for the predicted losses. As a result, better error-resilience can be provided for multimedia streams transmitting through a dynamic network channel. This means that they can provide an improved quality of transmission under the same network budget constraint. Experiments have been performed and preliminary results have shown that the method can provide a much smoother and more reliable transmission of data.",2004,0, 1223,Content-aware streaming of lecture videos over wireless networks,"Video streaming over wireless networks is becoming increasingly important in a variety of applications. To accommodate the dynamic change of wireless networks, quality of service (QoS) scalable video streams need to be provided. This paper presents a system of content-aware wireless streaming of lecture (instructional) videos for e-learning applications. A method for real-time analysis of instructional videos is first provided to detect video content regions and classify video frames, then a 'leaking video buffer' model is applied to dynamically compress video streams. 
In our content-aware video streaming, instructional video content is detected and different QoS levels are selected for different types of video content. Our adaptive feedback control scheme is able to transmit properly compressed video streams to video clients not only based on wireless network bandwidth, but also based on video content and the feedback of video clients. Finally, we demonstrate the scalability and content awareness of our system and show experimental results of two lecture videos.",2004,0, 1224,Estimating Dependability of Parallel FFT Application using Fault Injection,This paper discusses estimation of the dependability of a parallel FFT application. The application uses the FFTW library. Fault susceptibility is assessed using software-implemented fault injection. The fault injection campaign and the experiment results are presented. The response classes to injected faults are analyzed. The accuracy of evaluated data is verified experimentally.,2004,0, 1225,FarMAS: a MAS for extended quality workflow,"To date, supply chain management systems offer no solutions for quality control of electrical domestic appliances through the traceability of component (hw or sw) information. Failures in assembled products may be detected at many points of the product life; therefore, an early diagnosis could depend on the retrieval of all significant information recorded along the extended supply chain. The basic idea proposed in this work is to define a society of autonomous agents created to support the traceability of component information in a federated enterprise environment. We discuss, as a case study, a simple supply chain for the production of electrical appliances such as washing machines, refrigerators or dishwashers whose components' traceability is defined in terms of a kind of workflow extended for quality control (EQuW: extended quality workflow).",2004,0, 1226,A taxonomy and catalog of runtime software-fault monitoring tools,"A goal of runtime software-fault monitoring is to observe software behavior to determine whether it complies with its intended behavior. Monitoring allows one to analyze and recover from detected faults, providing additional defense against catastrophic failure. Although runtime monitoring has been in use for over 30 years, there is renewed interest in its application to fault detection and recovery, largely because of the increasing complexity and ubiquitous nature of software systems. We present a taxonomy that developers and researchers can use to analyze and differentiate recent developments in runtime software fault-monitoring approaches. The taxonomy categorizes the various runtime monitoring research by classifying the elements that are considered essential for building a monitoring system, i.e., the specification language used to define properties; the monitoring mechanism that oversees the program's execution; and the event handler that captures and communicates monitoring results. After describing the taxonomy, the paper presents the classification of the software-fault monitoring systems described in the literature.",2004,0, 1227,Static analyzer of vicious executables (SAVE),"Software security assurance and malware (Trojans, worms, and viruses, etc.) detection are important topics of information security. Software obfuscation, a general technique that is useful for protecting software from reverse engineering, can also be used by hackers to circumvent malware detection tools. 
Current static malware detection techniques have serious limitations, and sandbox testing also fails to provide a complete solution due to time constraints. In this paper, we present a robust signature-based malware detection technique, with emphasis on detecting obfuscated (or polymorphic) malware and mutated (or metamorphic) malware. The hypothesis is that all versions of the same malware share a common core signature that is a combination of several features of the code. After a particular malware has been first identified, it can be analyzed to extract the signature, which provides a basis for detecting variants and mutants of the same malware in the future. Encouraging experimental results on a large set of recent malware are presented.",2004,0, 1228,Modeling of cable fault system,"Modeling is the essential part of implementing the prediction and location of three-phase cable faults. To predict and locate cable faults, a model of the three-phase cable fault system is constructed based on a great deal of measured validation data by choosing a BP neural network, which has nonlinear characteristics, and using an improved BP algorithm, the Levenberg-Marquardt optimization method. It is shown by simulation using MATLAB software that the parameters of the model converge rapidly, that the simulated output of the neural network model and the measured output of the cable fault system are approximately equal, and that the mean value of the relative prediction error of the fault distance is smaller than 0.3%, so that the model quality is reliable.",2004,0, 1229,Reliability of intelligent power routers,"In this work we seek to determine the reliability of intelligent power routers (IPR). The IPR is our building block to provide scalable coordination in a distributed model for the next generation power network. Our goal in the IPR project is to show that by distributing network intelligence and control functions using the IPR, we will be capable of achieving improved survivability, security, reliability, and re-configurability. In order to calculate the change in reliability of a system operated with and without IPR, the IPR failure mechanisms and failure probabilities must be determined. Since no IPR has been built yet, there is no actual data on IPR reliability; its failure mechanisms and failure probability will therefore be established by analogy to data routers. In this paper we consider a basic IPR structure consisting of: power hardware (breakers or other power switching elements), computer hardware (communication between IPR and CPU functions), and software. We establish the failure modes for each element of the selected IPR structure to estimate the IPR reliability. This estimate of failure probability will be used in our future work to measure the change in reliability of a power system operated with and without IPR.",2004,0, 1230,Wireless download agent for high-capacity wireless LAN based ubiquitous services,"We propose a wireless download agent which effectively controls the point at which a download starts in order to provide a comfortable mobile computing environment. A user can get the desired data with the wireless download agent while walking through a service area without stopping. We conducted simulations to evaluate its performance in terms of throughput, download period, and probability of successful download. Our results show that the proposed scheme suits the wireless download agent very well in high-speed wireless access systems with many users. 
Furthermore, we describe the use of the proposed scheme considering the randomness of the walking directions of users.",2004,0, 1231,Case study of condition based health maintenance of large power transformer at Rihand substation using on-line FDD-EPT,"The constant monitoring of large, medium and small power transformers for purposes of assessing their health and operating condition while maximizing personnel resources and preserving capital is often a topic of spirited discussions at both national and international conferences. Further, the consideration of transformers being out of service for extended periods of time due to conditions that, with the proper diagnostic monitoring equipment, could have been preemptively detected and diagnosed, preventing catastrophic equipment failure, is of prime importance in today's economic conditions. These operating conditions are becoming more serious in several locations around the world. A recent case study of PD monitoring done at the Rihand substation in India is discussed here in the sequel. Additional transformers tested in India for PGCIL (Ballabgarh) and NTPC (Noida, Delhi) in 1999 will be presented using the FDD-EPT system. Demonstrations of the FDD-EPT (fault diagnostic device for electrical power transformers) system on transformers for BHEL, MP and Tata Power in Mumbai also provided encouraging results. This will further illustrate the efficacy of this system.",2004,0, 1232,Computational promise of simultaneous recurrent network with a stochastic search mechanism,"This work explores the computational promise of enhancing simultaneous recurrent neural networks with a stochastic search mechanism as static optimizers. Successful application of simultaneous recurrent neural networks to static optimization problems, where the training had been achieved through one of a number of deterministic gradient descent algorithms including recurrent backpropagation, backpropagation and resilient propagation, was recently reported in the literature. Accordingly, it has become highly desirable to assess whether enhancing the neural optimization algorithm with a stochastic search mechanism would be of substantial utility and value, which is the focus of the study reported in this paper. Two techniques are employed to assess the added value of a potential enhancement through a stochastic search mechanism: one method entails comparison of SRN performance with a stochastic search algorithm, the genetic algorithm, and the second method leverages Held-Karp bounds to estimate the quality of optimal solutions. The traveling salesman problem is employed as the benchmark for the simulation study reported herein. Simulation results suggest that there is likely to be significant improvement possible in the quality of solutions for the traveling salesman problem, and potentially other static optimization problems, if the simultaneous recurrent neural network is augmented with a stochastic search mechanism.",2004,0, 1233,A hybrid genetic algorithm for tasks scheduling in heterogeneous computing systems,"Efficient application scheduling is critical for achieving high performance in heterogeneous computing systems (HCS). Because an application can be partitioned into a group of tasks and represented as a directed acyclic graph (DAG), the problem can be stated as finding a schedule for a DAG to be executed in an HCS so that the schedule length can be minimized. The task scheduling problem is NP-hard in general, except in a few simplified situations. 
In order to obtain optimal or suboptimal solutions, a large number of scheduling heuristics have been presented in the literature. The genetic algorithm (GA), as a powerful tool for achieving global optima, has been successfully used in this field. This work presents a new hybrid genetic algorithm to solve the scheduling problem in HCS. It uses a direct method to encode a solution into a chromosome. A topological sort of the DAG is used to repair the offspring in order to avoid yielding illegal or infeasible solutions, and it also guarantees that all feasible solutions can be reached with some probability. In order to remedy the GA's weakness in fine-tuning, this paper uses a greedy strategy to improve the fitness of the individuals in the crossover operator, based on Lamarckian evolution theory. The simulation results, compared with a typical genetic algorithm and a typical list heuristic, both from the literature, show that this algorithm produces better results in terms of both quality of solution and convergence speed.",2004,0, 1234,Applying SPC to autonomic computing,"Statistical process control (SPC) is proposed as a method to frame autonomic computing systems. SPC follows a data-driven approach to characterize, evaluate, predict, and improve the system services. Perspectives that are central to process measurement, including central tendency, variation, stability, and capability, are outlined. The principles of SPC hold that by establishing and sustaining stable levels of variability, processes will yield predictable results. SPC is explored to meet and support individual autonomic computing elements' requirements. One timetabling example illustrates how SPC discovers and incorporates domain-specific knowledge, thus stabilizing and optimizing application service quality. The example represents a reasonable application of process control that has been demonstrated to be successful from an engineering point of view.",2004,0, 1235,Development of on-line vibration condition monitoring system of hydro generators,"Mechanical vibration information is critical to diagnosing the health of a generator. Existing vibration monitoring systems have poor resistance to the strong electromagnetic interference in the field and are difficult to extend. In order to solve these problems, an on-line monitoring system based on the LonWorks control network has been developed. In this paper, the structure of the hardware and software and the functions and characteristics of the system are described in detail. The analysis result of a vibration signal has proved that this monitoring system can detect generator faults efficiently. The system has been employed to monitor the vibration of hydro generators operating in a water power plant in Hubei province, China.",2004,0, 1236,Deriving test sets from partial proofs,"Proof-guided testing is intended to enhance the test design with information extracted from the argument for correctness. The target application field is the verification of fault-tolerance algorithms where a complete formal proof is not available. Ideally, testing should be focused on the pending parts of the proof. The approach is experimentally assessed using the example of a group membership protocol (GMP), a complete proof of which has been developed by others in the PVS environment. In order to obtain a partial proof example, we insert flaws into the PVS specification. Test selection criteria are then derived from the analysis of the reconstructed (now partial) proof. 
Their efficiency for revealing the flaw is experimentally assessed, yielding encouraging results.",2004,0, 1237,A generic method for statistical testing,"This paper addresses the problem of selecting finite test sets and automating this selection. Among the methods for doing so, some are deterministic and some are statistical. The kind of statistical testing we consider has been inspired by the work of Thevenod-Fosse and Waeselynck. There, the choice of the distribution on the input domain is guided by the structure of the program or the form of its specification. In the present paper, we describe a new generic method for performing statistical testing according to any given graphical description of the behavior of the system under test. This method can be fully automated. Its main originality is that it exploits recent results and tools in combinatorics, precisely in the area of random generation of combinatorial structures. Uniform random generation routines are used for drawing paths from the set of execution paths or traces of the system under test. Then a constraint resolution step is performed, aiming to design a set of test data that activate the generated paths. This approach applies to a number of classical coverage criteria. Moreover, we show how linear programming techniques may help to improve the quality of the test, i.e., the probabilities that the elements are covered by the test process. The paper presents the method in its generality. Then, in the last section, experimental results on applying it to structural statistical software testing are reported.",2004,0, 1238,Reliability estimation for statistical usage testing using Markov chains,"Software validation is an important activity in order to test whether or not the correct software has been developed. Several testing techniques have been developed, and one of these is statistical usage testing (SUT). The main purpose of SUT is to test a software product from a user's point of view. Hence, usage models are designed and then test cases are developed from the models. Another advantage of SUT is that the reliability of the software can be estimated. In this paper, Markov chains are used to represent the usage models. Several approaches using Markov chains have been applied. This paper extends these approaches and presents a new approach to estimate the reliability from Markov chains. The reliability estimation is implemented in a new tool for statistical usage testing called MaTeLo. The tool is developed in a joint European project involving six industrial partners and two university partners. The purpose of the tool is to provide an estimate of the reliability and to automatically produce test cases based on usage models described as Markov models.",2004,0, 1239,Are found defects an indicator of software correctness? An investigation in a controlled case study,"In quality assurance programs, we want indicators of software quality, especially software correctness. The number of defects found during inspection and testing is often used as the basis for indicators of software correctness. However, there is a paradox in this approach, since it is the remaining defects that impact negatively on software correctness, not the found ones. In order to investigate the validity of using found defects or other product or process metrics as indicators of software correctness, a controlled case study is launched. 57 sets of 10 different programs from the PSP course are assessed using acceptance test suites for each program. 
In the analysis, the number of defects found during the acceptance test is compared to the number of defects found during development, code size, share of development time spent on testing, etc. It is concluded from a correlation analysis that 1) fewer defects remain in larger programs, 2) more defects remain when a larger share of development effort is spent on testing, and 3) no correlation exists between found defects and correctness. We interpret these observations as follows: 1) the smaller programs do not fulfill the expected requirements, 2) a large share of effort spent on testing indicates a ""hacker"" approach to software development, and 3) more research is needed to elaborate this issue.",2004,0, 1240,An exploration of software faults and failure behaviour in a large population of programs,"A large part of software engineering research suffers from a major problem: there are insufficient data to test software hypotheses, or to estimate parameters in models. To obtain statistically significant results, large sets of programs are needed, each set comprising many programs built to the same specification. We have gained access to such a large body of programs (written in C, C++, Java or Pascal) and in this paper we present the results of an exploratory analysis of around 29,000 C programs written to a common specification. The objectives of this study were to characterise the types of fault that are present in these programs; to characterise how programs are debugged during development; and to assess the effectiveness of diverse programming. The findings are discussed, together with the potential limitations on the realism of the findings.",2004,0, 1241,An empirical study on reliability modeling for diverse software systems,"Reliability and fault correlation are two main concerns for design diversity, yet empirical data for investigating them are limited. In previous work, we conducted a software project with a real-world application to investigate software testing and fault tolerance for design diversity. Mutants were generated by injecting one single real fault recorded in the software development phase into the final versions. In this paper, we perform more analysis and experiments on these mutants to evaluate and investigate the reliability features in diverse software systems. We apply our project data on two different reliability models and estimate the reliability bounds for evaluation purposes. We also parameterize fault correlations to predict the reliability of various combinations of versions, and compare three different fault-tolerant software architectures.",2004,0, 1242,Test-adequacy and statistical testing: combining different properties of a test-set,"Dependability assessment of safety-critical or safety-related software components is an important issue, for example, within the nuclear industry, the avionics sector or the military. Statistical testing is one way of quantifying the dependability of a given software product. The use of sector-specific standards with their suggested test-criteria is another (nonquantitative) way of aiming at employing only components that are ""dependable enough"". Ideally, both the acknowledged test criteria and statistical test methods should come into play when assessing software dependability. We want to move towards this aim in the long term. Thus we investigate in this paper a model to combine the fault-detection power of a given test-set (a test-adequacy criterion) with the statistical power of the test-set, i.e. 
the number of statistical tests within the test-set. With this model we aim at drawing out of any given test-set, whether devised by a plant engineer or a statistician, the overall contribution it can make to dependability assessment.",2004,0, 1243,Coverage metrics for Continuous Function Charts,"Continuous Function Charts are a diagrammatical language for the specification of mixed discrete-continuous embedded systems, similar to the languages of Matlab/Simulink, and often used in the domain of transportation systems. Both control and data flows are explicitly specified when atomic units of computation are composed. The obvious way to assess the quality of integration test suites is to compute known coverage metrics for the generated code. However, this production code does not exhibit those structures that would make it amenable to ""relevant"" coverage measurements. We define a translation scheme that results in structures relevant for such measurements, apply coverage criteria for both control and data flows at the level of composition of atomic computational units, and argue for their usefulness on the grounds of detected errors.",2004,0, 1244,"Towards a unified approach to the representation of, and reasoning with, probabilistic risk information about software and its system interface","Early risk assessment is key in planning the development of systems, including systems that involve software. Such risk assessment needs a combination of the following elements: 1) Severity estimates for the potential effects of failures, and likelihood estimates for their causes; 2) Fault trees that link causes to failures; 3) Efficacy estimates of design and process steps towards reducing risk; 4) Distinctions between preventing, alleviating and detecting (and thereafter removing) risks; 5) Risk preventions that have potential side effects of themselves introducing risks. The paper shows a unified approach that accommodates all these elements. The approach combines fault trees (from probabilistic risk assessment methods) with explicit treatment of risk mitigations (a generalization of the notion of a ""detection"" seen in FMECA analyses). Fault trees capture the causal relationships by which failure mechanisms may combine to lead to failure modes. Risk mitigations encompass (and distinguish among) options to prevent risks, detect risks, and alleviate risks (i.e., decrease their impact should they occur). This approach has been embodied in extensions to a JPL-developed risk assessment tool, and is illustrated here on software risk assessment information drawn from an actual project's software system FMECA (failure modes, effects and criticality analysis). Since its elements are typical of risk assessment of software and its system interface, the findings should be relevant to a wide range of software systems.",2004,0, 1245,Robust prediction of fault-proneness by random forests,"Accurate prediction of fault-prone modules (a module is equivalent to a C function or a C++ method) in the software development process enables effective detection and identification of defects. Such prediction models are especially beneficial for large-scale systems, where verification experts need to focus their attention and resources on problem areas in the system under development. This paper presents a novel methodology for predicting fault-prone modules, based on random forests. Random forests are an extension of decision tree learning. 
Instead of generating one decision tree, this methodology generates hundreds or even thousands of trees using subsets of the training data. The classification decision is obtained by voting. We applied random forests in five case studies based on NASA data sets. The prediction accuracy of the proposed methodology is generally higher than that achieved by logistic regression, discriminant analysis and the algorithms in two machine learning software packages, WEKA [I. H. Witten et al. (1999)] and See5. The difference in the performance of the proposed methodology over other methods is statistically significant. Further, the advantage of random forests in classification accuracy over other methods is more pronounced in larger data sets.",2004,0, 1246,Preliminary results on using static analysis tools for software inspection,"Software inspection has been shown to be an effective defect removal practice, leading to higher quality software with fewer field failures. Automated software inspection tools are emerging for identifying a subset of defects in a less labor-intensive manner than manual inspection. This paper investigates the use of automated inspection for a large-scale industrial software system at Nortel Networks. We propose and utilize a defect classification scheme for enumerating the types of defects that can be identified by automated inspections. Additionally, we demonstrate that automated code inspection faults can be used as efficient predictors of field failures and are effective for identifying fault-prone modules.",2004,0, 1247,Predicting class testability using object-oriented metrics,"We investigate factors of the testability of object-oriented software systems. The starting point is given by a study of the literature to obtain both an initial model of testability and existing OO metrics related to testability. Subsequently, these metrics are evaluated by means of two case studies of large Java systems for which JUnit test cases exist. The goal of this work is to define and evaluate a set of metrics that can be used to assess the testability of the classes of a Java system.",2004,0, 1248,New test paradigms for yield and manufacturability - Invited address,"Summary form only given, as follows. Test holds the key to success in the rapidly changing world of process technology and design complexity as well as the fashionable area of Design-for-Manufacturability (DFM). As test chips become prohibitively expensive and less statistically valid, it is only through statistical analysis of volume product test data that we can assess the improvements in parametric variation and in defectivity necessary in both the design and the manufacturing processes to meet the yield and supply chain targets for complex ICs. Volume test data is also key to the implementation of adaptive testing. Probabilistic decision-making can be applied to the test flows to reduce test costs and improve test quality by specifically targeting the parameters and defects that are likely to cause failures and by reducing unnecessary testing and burn-in of defect-free die. Some key paradigms in the test world have to change, however, if test is to keep up with these challenges, or test will continue to be relegated to the ""non-value-added"" category that has been a long-standing barrier to investment in test equipment and software.",2004,0, 1249,Low overhead delay testing of ASICs,"Delay testing has become increasingly essential as chip geometries shrink. 
A low-overhead or cost-effective delay test methodology is successful when it results in a minimal number of effective tests and eases the demands on an already burdened IC design and test staff. This work describes one successful method in use by IBM ASICs that resulted in a slight total test pattern increase, generally ranging between 10 and 90%. Example ICs showed a pattern increase of as little as 14% from the stuck-at fault baseline with a transition fault coverage of 89%. In an ASIC business, a large number of ICs are processed, which does not allow the personnel to understand how to test each individual IC design in detail. Instead, design automation software that is timing and testability aware ensures effective and efficient tests. The resultant tests detect random spot timing delay defects. These types of defects are time-zero failures and not reliability wear-out mechanisms.",2004,0, 1250,Simulation based system level fault insertion using co-verification tools,"This work presents a simulation-based fault insertion environment, which allows faults to be ""injected"" into a Verilog model of the hardware. A co-verification platform is used to allow real, system level software to be executed in the simulation environment. A fault manager is used to keep track of the faults that are inserted onto the hardware and to monitor diagnostic messages to determine whether the software is able to detect, diagnose and/or cope with the injected fault. Examples are provided to demonstrate the capabilities of this approach as well as the resource requirements (time, system, human). Other benefits and issues of this approach are also discussed.",2004,0, 1251,Auto-coding/auto-proving flight control software,"This work describes the results of an experiment to compare conventional software development with software development using automatic code generation from Simulink and mathematically based code verification (proof). A real, industrial-scale, safety-critical system was used as the basis for the experiment in order to validate results, although this imposed some constraints. The principal aims for the experiment were to answer the following three questions. 1. Could automatic code generation be integrated with the verification tools to give a software development process to produce software that would pass the existing functional unit tests? 2. Would the code be of sufficient quality to be flown, i.e., was it certifiable? 3. What were the cost implications of adopting the process as part of a development lifecycle? The experiment showed how to integrate the techniques into existing development processes and indicated where processes could be streamlined. The code and the technique were independently assessed as being certifiable for safety critical applications. The results of the experiment were generally positive, indicating the potential for reductions of 60%-70% of the software development costs alone, which would translate into a 30%-40% reduction in software life cycle costs.",2004,0, 1252,Language and Compiler Support for Adaptive Applications,"There exist many application classes for which the users have significant flexibility in the quality of output they desire. At the same time, there are other constraints, such as the need for real-time response or limits on the consumption of certain resources, which are more crucial. 
This paper provides a combined language/compiler and runtime solution for supporting adaptive execution of these applications, i.e., to allow them to achieve the best precision while still meeting the specified constraint at runtime. The key idea in our language extensions is to have the programmers specify adaptation parameters, i.e., the parameters whose values can be varied within a certain range. A program analysis algorithm expresses the execution time of an application component as a function of the values of the adaptation parameters and other runtime constants. These constants are determined by initial runs of the application in the target environment. We integrate this work with our previous work on supporting coarse-grained pipelined parallelism, and thus support adaptive execution for data-intensive applications in a distributed environment. Our experimental results on three applications have shown that our combined compile-time/runtime model can predict the execution times quite well, and therefore, support adaptation to meet a variety of constraints.",2004,0, 1253,Assessing Fault Sensitivity in MPI Applications,"Today, clusters built from commodity PCs dominate high-performance computing, with systems containing thousands of processors now being deployed. As node counts for multi-teraflop systems grow to thousands and with proposed petaflop systems likely to contain tens of thousands of nodes, the standard assumption that system hardware and software are fully reliable becomes much less credible. Concomitantly, understanding application sensitivity to system failures is critical to establishing confidence in the outputs of large-scale applications. Using software fault injection, we simulated single-bit memory errors, register file upsets and MPI message payload corruption and measured the behavioral responses for a suite of MPI applications. These experiments showed that most applications are very sensitive to even single errors. Perhaps most worrisome, the errors were often undetected, yielding erroneous output with no user indicators. Encouragingly, even minimal internal application error checking and program assertions can detect some of the faults we injected.",2004,0, 1254,FPGA implementation of spiking neural networks - an initial step towards building tangible collaborative autonomous agents,"This work contains the results of an initial study into the FPGA implementation of a spiking neural network. This work was undertaken as a task in a project that aims to design and develop a new kind of tangible collaborative autonomous agent. The project intends to exploit/investigate methods for engineering emergent collective behaviour in large societies of actual miniature agents that can learn and evolve. Such multi-agent systems could be used to detect and collectively repair faults in a variety of applications where it is difficult for humans to gain access, such as fluidic environments found in critical components of material/industrial systems. The initial achievement of implementing a spiking neural network on an FPGA hardware platform and the results of a robotic wall-following task are discussed by comparison with software-driven robots and simulations.",2004,0, 1255,Automatic red-eye detection and removal,"Red-eye is a very common problem in flash photography, which can ruin a good photo by introducing color aberration into the subject's eyes. 
Previous methods to deal with this problem include special speedlight apparatus or flash modes that can reduce the red-eye effect, as well as post-capture red-eye correction software. The paper presents a new approach to detecting and correcting red-eye defects automatically by combining flash and non-flash digital images. It is suitable to be incorporated into compact digital cameras that support continuous shooting. Such a camera would eliminate red-eye immediately after image capture. Unlike existing approaches, our method is simple, fast and can recover the true color of the eyes.",2004,0, 1256,Regression benchmarking with simple middleware benchmarks,"The paper introduces the concept of regression benchmarking as a variant of regression testing focused on detecting performance regressions. Applying regression benchmarking in the area of middleware development, the paper explains how regression benchmarking differs from middleware benchmarking in general. On a real-world example of TAO, the paper shows why the existing benchmarks do not give results sufficient for regression benchmarking, and proposes techniques for detecting performance regressions using simple benchmarks.",2004,0, 1257,Applications of fuzzy-logic-wavelet-based techniques for transformers inrush currents identification and power systems faults classification,"The advent of wavelet transforms (WTs) and fuzzy-inference mechanisms (FIMs), with the ability of the first to focus on system transients using short data windows and of the second to map complex and nonlinear power system configurations, provides an excellent tool for high-speed digital relaying. This work presents a new approach to real-time fault classification in power transmission systems and identification of power transformer magnetising inrush currents, using a fuzzy-logic-based multicriteria approach (Omar A.S. Youssef [2004, 2003]) with a wavelet-based preprocessor stage (Omar A.S. Youssef [2003, 2001]). Three inputs, which are functions of the three line currents, are utilised to detect fault types such as LG, LL, LLG as well as magnetising inrush currents. The technique is based on utilising the low-frequency components generated during fault conditions on the power system and/or magnetising inrush currents. These components are extracted using an online wavelet-based preprocessor stage with a data window of 16 samples (based on a 1.0 kHz sampling rate and 50 Hz power frequency). Generated data from the simulation of a 330/33Y kV step-down transformer connected to a 330 kV model power system using EMTP software were used by the MATLAB program to test the performance of the technique as to its speed of response, computational burden and reliability. Results are shown and they indicate that this approach can be used as an effective tool for high-speed digital relaying, and that its computational burden is much smaller than that of recently postulated fault classification approaches.",2004,0, 1258,Medical software control quality using the 3D Mojette projector,"The goal of this paper is to provide a tool that allows assessing a set of 2D projection data from a 3D object, which can be either real or synthetic data. However, the generated 2D projection set is not sampled onto a classic orthogonal grid but uses a regular grid depending on the discrete angle of projection and the 3D orthogonal grid. This allows a representation of the set of projections that can easily be described in spline spaces. 
The subsequent projection set is used, after an interpolation scheme, to compare (in the projection space) the agreement between the original dataset and the obtained reconstruction. These measures are performed from a 3D multiresolution stack in the case of the 3D PET projector. Finally, its direct use for quality control assessment of digital radiography reprojection is presented.",2004,0, 1259,A real-time network simulation application for multimedia over IP,"This paper details a secure voice over IP (SVoIP) development tool, the network simulation application (Netsim), which implements real-time quality of service (QoS) statistics for live real-time data transmission over packet networks. This application is used to implement QoS statistics including packet loss and inter-arrival delay jitter. Netsim is written in Visual C++ using MFC for MS Windows environments. The program acts as a transparent gateway for an SVoIP server/client pair connected via IP networks. The user specifies QoS parameters such as mean delay, standard deviation of delay, unconditional loss probability, conditional loss probability, etc. Netsim initiates and accepts connections, controls packet flow, records all packet sending/arrival times, generates a data log, and performs statistical calculations.",2004,0, 1260,The use of impedance measurement as an effective method of validating the integrity of VRLA battery production,The response of batteries to an injection of AC current to give an indication of battery state-of-health has been well established and extensively reported over a number of years but it has been used largely as a means of assessing the condition of batteries in service. In this paper the use of impedance measurement as a quality assurance procedure during manufacture will be described. There are a number of commercially available meters that are used for monitoring in field operations but they have not been developed for use in a manufacturing environment. After extensive laboratory testing a method specifically designed for an impedance measurement system at the end of manufacturing lines was devised and validated in order to assure higher product integrity. A special testing station was designed for this purpose and includes battery conditioning prior to the test and sophisticated software-driven data analysis in order to increase the effectiveness and reliability of the measurement. The paper reports the analysis of the data collected over two years of monitoring of the entire production with the impedance testing station at the end of the production line. Data comparison with earlier production shows an increase in the number of batteries scrapped internally and correspondingly a reduction in the number of defective products reaching distribution centres and also the field. The accumulated experience has also been very helpful in getting better information about the effect of the various parameters that affect the measured impedance value and this will assist in improving the reliability of impedance measurements in field service.,2004,0, 1261,Consolidating software tools for DNA microarray design and manufacturing,"As the human genome project progresses and some microbial and eukaryotic genomes are recognized, a novel technology, DNA microarray (also called gene chip, biochip, gene microarray, and DNA chip) technology, has attracted an increasing number of biologists, bioengineers and computer scientists recently. 
This technology promises to monitor the whole genome at once, so that researchers can study the whole genome on the global level and have a better picture of the expressions among millions of genes simultaneously. Today, it is widely used in many fields: disease diagnosis, gene classification, gene regulatory networks, and drug discovery. We present a concatenated software solution for the entire DNA array flow, exploring all steps of a consolidated software tool. The proposed software tool has been tested on Herpes B virus as well as simulated data. Our experiments show that the genomic data follow the pattern predicted by simulated data although the number of border conflicts (quality of the DNA array design) is several times smaller than for simulated data. We also report a trade-off between the number of border conflicts and the running time for several proposed algorithmic techniques employed in the physical design of DNA arrays.",2004,0, 1262,Dynamic load balancing performance in cellular networks with multiple traffic types,"Several multimedia applications are being introduced to cellular networks. Since the quality of service (QoS) requirements such as bandwidth for different services might be different, the analysis of conventional multimedia cellular networks has been done using multi-dimensional Markov chains in previous works. In these analyses, it is assumed that a call request will be blocked if the number of available channels is not sufficient to support the service. However, it has been shown in previous works that the call blocking rate can be reduced significantly if a dynamic load balancing scheme is employed. In this paper, we develop an analytical framework for the analysis of dynamic load balancing schemes with multiple traffic types. To illustrate the impact of dynamic load balancing on the performance, we study the integrated cellular and ad hoc relay (iCAR) system. Our results show that with a proper amount of load balancing capability (i.e., load balancing channels), the call blocking probability for all traffic types can be reduced significantly.",2004,0, 1263,"Extend the meaning of ""R"" to ""R4"" in ART (automated software regression technology) to improve quality and reduce R&D and production costs","Regression testing has been conventionally employed to check the effectiveness of a solution, track existing issues and any new issues created as a result of fixing the old issues. Positioned at the tail end of the software cycle, regression testing technology can hardly influence or contribute to earlier phases such as architecture, design, implementation or device testing. Extending the ""R"" in ART to R4 (regression, research, retain & grow expertise and early exposure) has proven valuable. R4 is not only providing ART with more powerful tools to detect issues as early as the architecture phase, but also arming R&D software teams with more proactive practices to prevent costly catastrophic problems from propagating to customer sites. This paper attempts to share some best practices and contributions from Cisco-ARF (a Cisco automated regression/research facility) whose charter is to ensure the quality of product lines running on tens of millions of lines of code. 
These award-winning practices have proven to save millions of dollars in repair costs and thousands of engineering hours, and continue to set higher standards for testing technology under proactive leadership and management, yielding higher quality and customer satisfaction.",2004,0, 1264,Teaming assessment: is there a connection between process and product?,"It is reasonable to suspect that team process influences the way students work, the quality of their learning and the excellence of their product. This study addresses the relations between team process variables on the one hand, and behaviors and outcomes, on the other. We measured teaming skill, project behavior and performance, and project product grades. We found that knowledge of team process predicts team behavior, but that knowledge alone does not predict performance on the project. Second, both effort and team skills, as assessed by peers, were related to performance. Third, team skills did not correlate with the students' effort. This pattern of results suggests that instructors should address issues of teaming and of effort separately. It also suggests that peer ratings of teammates tap aspects of team behavior relevant to project performance, whereas declarative knowledge of team process does not.",2004,0, 1265,Introducing prob-a-sag - a probabilistic method for voltage sag management,"Comprehensive voltage sag management in a power distribution system includes the technical and economic impact of sags, the annual frequency of sags, and the effect of possible mitigative means. A probabilistic approach is required for performing the task in complicated industrial processes. Prob-a-sag is a novel method combining all these features. Two-dimensional arrays are used for expressing the quantities and carrying out the analysis or optimization. Equipment sag sensitivity is expressed as tripping probability, which enables the probabilistic sensitivity assessment of a large process. The method is very flexible; increasing the array resolution improves the result precision. If sag quantities other than depth and duration are preferred, we may increase the number of array dimensions accordingly. Prob-a-sag is compatible with any spreadsheet application, or may be implemented in sophisticated network analysis software.",2004,0, 1266,GXP: An Interactive Shell for the Grid Environment,"We describe GXP, a shell for distributed multi-cluster environments. With GXP, users can quickly submit a command to many nodes simultaneously (approximately 600 milliseconds on over 300 nodes spread across five local-area networks). It therefore brings an interactive and instantaneous response to many cluster/network operations, such as trouble diagnosis, parallel program invocation, installation and deployment, testing and debugging, monitoring, and dead process cleanup. It features (1) very fast parallel (simultaneous) command submission, (2) parallel pipes (pipes between the local command and all parallel commands), and (3) a flexible and efficient method to interactively select a subset of nodes to execute subsequent commands on. It is very easy to start using GXP, because it is designed not to require cumbersome per-node setup and installation and to depend only on a very small number of pre-installed tools and nothing else. 
We describe how GXP achieves these features and demonstrate through examples how they make many otherwise boring and error-prone tasks simple, efficient, and fun.",2004,0, 1267,Observations on the implementation and testing of scripted Web applications,"Scripting languages have become a very popular choice for implementing server-side programs in Web applications. Scripting languages are thought to provide quick start-up and enhance programmer productivity. We present two case studies in which scripting languages were used. In both studies, the projects struggled with implementation; however, project factors such as the strength of management and the training of the development team are thought to outweigh the choice of programming language in terms of impact on project success. The choice to implement a Web application with a scripting language can lead to undisciplined behavior on the part of management and the development team, so caution must be exercised when implementing complex applications. Testers of scripted implementations should adjust their risk profile to match the error-prone aspects of the language. Dynamically type-checked scripting languages are likely to be susceptible to type errors. Scripting languages are powerful enough to successfully implement complex e-commerce applications as long as management and software engineering practice are strong.",2004,0, 1268,Effect of fault dependency and debugging time lag on software error models,"In this paper, we first show how several existing SRGMs based on NHPP models can be comprehensively derived by applying the time-dependent delay function. Moreover, most conventional SRGMs assume that detected errors are immediately corrected. But this assumption may not be realistic in practice. Therefore, we incorporate the ideas of failure dependency and the time-dependent delay function into software reliability growth modeling. New SRGMs are proposed and numerical illustrations based on a real data set are presented. Evaluation results show that the proposed framework to incorporate both failure dependency and the time-dependent delay function for SRGMs has a fairly accurate prediction capability.",2004,0, 1269,Incorporating imperfect debugging into software fault processes,"For the traditional SRGMs, it is assumed that a detected fault is immediately removed and is perfectly repaired with no new faults being introduced. In reality, it is impossible to remove all faults in the fault correction process and have a fault-free effect on the software development environment. In order to relax this perfect debugging assumption, we introduce the possibility of the imperfect debugging phenomenon. Furthermore, most of the traditional SRGMs have focused on the failure detection process. Consideration of the fault correction process in the existing models is limited. However, to achieve the desired level of software quality, it is very important to apply powerful technologies for removing the errors in the fault correction process. Therefore, we divide these processes into two different nonhomogeneous Poisson processes (NHPPs). Moreover, these models are considered to be more practical for depicting the fault-removal phenomenon in software development.",2004,0, 1270,Analysis of ultrasonic wave propagation in metallic pipe structures using finite element modelling techniques,"This paper describes the development of an FEM representing ultrasonic inspection in a metallic pipe. 
The model comprises two wedge transducer components, water coupled onto the inner wall of a steel pipe and configured to generate/receive ultrasonic shear waves. One device is used in pulse-echo mode to analyse any reflected components within the system, with the second transducer operating in a passive mode. A number of simple defect representations have been incorporated into the model and both the reflected and transmitted wave components acquired at each wedge. Both regular crack and lamination defects have been investigated at 3 different locations to evaluate the relationship between propagation path length and defect response. These responses are analysed in both the time and frequency domains. Moreover, the FEM has produced visual interpretation, in the form of a movie simulation, of the interaction between the propagating pressure wave and the defect. A combination of these visual aids and the predicted temporal/spectral waveforms has demonstrated fundamental differences in the response from either a crack or lamination defect.",2004,0, 1271,A highly tunable radio frequency filter using bulk ferroelectric materials,"The desirable attribute of software defined radios (SDR) implies that RF Front-ends must be multi-band and frequency agile. Wideband SDRs need a lower size, weight, and power (SWAP) tunable filter technology to meet the military's current and future communications needs. This is essential for the SDR operating in a battery powered environment such as man-portable Joint Tactical Radios (JTRS). The current method for building a tunable filter is to use discrete varactors at each section of the filter. These devices are nonlinear and their low third order intercept points make them highly susceptible to intermodulation which severely limits their dynamic range. Dielectric ceramics are inherently lossy and prone to breakdown due to the presence of material defects. In this paper we will present a new approach to the construction of tunable RF filters using low loss bulk ceramic high dielectric constant Barium Strontium Titanate (BST) ceramics. These ceramic compositions exhibit a large change in dielectric constant with applied electric field. A prototype tunable filter has been built using these materials exhibiting 3:1 permittivity tunability. We will present a lumped circuit filter design for 30-450 MHz using the tunability of these paraelectric capacitors. The circuit design and experimental results along with the achievement of 1.7:1 frequency tunability of these filters will be shown.",2004,0, 1272,Choosing best basis in wavelet packets for fingerprint matching,"Fingerprint matching has been deployed in a variety of security related applications. Traditional minutiae detection based identification algorithms do not utilize the rich discriminatory texture structure of fingerprint images. Furthermore, minutiae detection requires substantial improvement of image quality and is thus error-prone. In this paper, we propose a new algorithm for fingerprint identification using wavelet packet analysis and best basis selection. Each fingerprint is decomposed using a two-dimensional wavelet packet family corresponding to different scales. The energy distribution of the fingerprint in each subband is extracted as a feature for identification. Wavelet packet decomposition yields a redundant representation of the image. For this reason, several algorithms for selecting the best basis from this redundant representation have been investigated.
In this paper, we propose a new method for choosing the best basis in wavelet packets for fingerprint matching. Experiments show that our new algorithm improves the accuracy of fingerprint matching.",2004,0, 1273,Fault detection in model predictive controller,"Real-time monitoring and maintenance of model predictive controllers (MPC) is becoming an important issue with their wide implementation in industry. In this paper, a measure is proposed to detect faults in MPCs by comparing the performance of the actual controller with the performance of the ideal controller. The ideal controller is derived from the dynamic matrix control (DMC) in an ideal work situation and treated as a measure benchmark. A detection index based on the comparison is proposed to detect the state change of the target controller. This measure is illustrated through the implementation for a water tank process.",2004,0, 1274,Rate control for random access networks: The finite node case,"We consider rate control for random access networks. In earlier work, we proposed a rate control mechanism which we analyzed using the well-known slotted Aloha model with an infinite set of nodes. In this paper, we extend this work to the finite node case and analyze two different packet-scheduling schemes: a backlog-dependent scheme where the retransmission probabilities depend on the total number of backlogged packets at a given node, and a backlog-independent scheme. Using a Markov chain model, we derive conditions under which the rate control stabilizes the network. We also discuss how this mechanism can be used to provide differentiated quality-of-service both in terms of throughput and delay. We use numerical case studies to illustrate our results.",2004,0, 1275,Instruction level test methodology for CPU core software-based self-testing,"TIS (S. Shamshiri et al., 2004) is an instruction level methodology for CPU core self-testing that enhances the instruction set of a CPU with test instructions. Since the functionality of test instructions is the same as the NOP instruction, NOP instructions can be replaced with test instructions so that online testing can be done with no performance penalty. TIS tests different parts of the CPU and detects stuck-at faults. This method can be employed in offline and online testing of all kinds of processors. A hardware-oriented implementation of TIS that tests just the combinational units of the processor was proposed previously (S. Shamshiri et al., 2004). The contributions of this paper are, first, a software-based approach that reduces the hardware overhead to a reasonable size and, second, testing the sequential parts of the processor besides the combinational parts. Both hardware and software oriented approaches are implemented on a pipelined CPU core and their area overheads are compared. To demonstrate the appropriateness of the TIS test technique, several programs are executed and fault coverage results are presented.",2004,0, 1276,Rule-based noise detection for software measurement data,"The quality of training data is an important issue for classification problems, such as classifying program modules into the fault-prone and not fault-prone groups. The removal of noisy instances will improve data quality, and, consequently, the performance of the classification model. We present an attractive rule-based noise detection approach, which detects noisy instances based on Boolean rules generated from the measurement data.
The proposed approach is evaluated by injecting artificial noise into a clean or noise-free software measurement dataset. The clean dataset is extracted from software measurement data of a NASA software project developed for real-time predictions. The simulated noise is injected into the attributes of the dataset at different noise levels. The number of attributes subjected to noise is also varied for the given dataset. We compare our approach to a classification filter, which considers and eliminates misclassified instances as noisy data. It is shown that for the different noise levels, the proposed approach has better efficiency in detecting noisy instances than the C4.5-based classification filter. In addition, the noise detection performance of our approach increases very rapidly with an increase in the number of attributes corrupted.",2004,0, 1277,A mathematical morphological method to thin edge detection in dark region,"The performance of image segmentation depends on the output quality of the edge detection process. A typical edge detection method is based on detecting pixels in an image with high gradient values and then applying a global threshold value to extract the edge points of the image. By these methods, some detected edge points may not belong to the edge and some thin edge points in dark regions of the image are being eliminated. These eliminated edges may carry important features of the image. This paper proposes a new mathematical morphological edge-detecting algorithm based on the morphological residue transformation derived from the dilation operation to detect and preserve the thin edges. Moreover, this work adopts five bipolar oriented edge masks to prune the misdetected edge points. The experimental results show that the proposed algorithm successfully preserves the thin edges in the dark regions.",2004,0, 1278,Smooth ergodic hidden Markov model and its applications in text to speech systems,"In text-to-speech systems, the accuracy of information extraction from text is crucial in producing high quality synthesized speech. In this paper, a new scheme for converting text into its equivalent phonetic spelling is proposed and developed. This method has many advantages over its predecessors and it can complement many other text to speech converting systems in order to get improved performance.",2004,0, 1279,On-chip testing of embedded silicon transducers,"System-on-chip (SoC) technologies are evolving towards the integration of highly heterogeneous devices, including hardware of a different nature, such as digital, analog and mixed-signal, together with software components. Embedding transducers, as predicted by technology roadmaps, is yet another step in this continuous search for higher levels of integration and miniaturisation. Embedded transducers fabricated with silicon/CMOS compatible technologies may have more limitations than transducers fabricated with fully dedicated technologies. However, they offer industry the possibility of providing low cost applications for very large market niches, while still keeping acceptable transducer sensitivity. This is the case, for example, for accelerometers, micro-mirror display devices or CMOS imagers. Embedded transducers are analog components. However, given the fact that they work with signals other than electrical, the test of these embedded parts poses new challenges.
Test technology for SoC devices is rapidly maturing but many difficulties still remain, in particular for addressing the test of analog and mixed-signal parts. In this paper, we present our work in the field of MEMS (micro-electro-mechanical systems) on-chip testing with a brief overview of the state-of-the-art.",2004,0,1189 1280,A generic interface modeling approach for SOC design,"One of the key problems in IP-centric SOC design is integration between different IP cores. Since most IPs have different interface schemes and operation rules, they cannot communicate smoothly with each other without auxiliary glue logic. Furthermore, integration of IPs with different protocols is still a tedious, error-prone task. To make the most of IP reuse and allow IP cores to communicate smoothly, this paper presents a generic language-independent interface modeling approach to assist interface synthesis. Given two different communication protocols, an algorithm is developed to generate synthesizable RTL code of the interface subsystem on the basis of the proposed interface model for IP integration. The novel algorithm can produce multi-language code such as Verilog, VHDL and SpecC, which can be used as input for synthesis tools. The proposed approach has been successfully integrated into our interface synthesis tool.",2004,0, 1281,Using software tools and metrics to produce better quality test software,"Automatic test equipment (ATE) software is often written by test equipment engineers without professional software training. This may lead to poor designs and an excessive number of defects. The Naval Surface Warfare Center (NSWC), Corona Division, as the US Navy's recognized authority on test equipment assessment, has reviewed a large number of test software programs. As an aid in the review process, various software tools have been used such as PC-lint™ or Understand for C++™. This paper focuses on software tools for C compilers since C is the most error prone language in use today. The McCabe cyclomatic complexity metric and the Halstead complexity measures are just two of the ways to measure """"software quality"""". Applying the best practices of industry, including coding standards, software tools, configuration management and other practices, produces better quality code in less time. Good quality code would also be easier to write, understand, maintain and upgrade.",2004,0, 1282,Investigation of pushback based detection and prevention of network bandwidth attacks,"The pushback approach has been applied for the detection and prevention of DDoS attacks by identifying the destination IP addresses in the dropped packets when congestion happens. The identified destination IP addresses are used to guide the subsequent packet dropping at both the local router and upstream routers so that the total bandwidth can be controlled within a desired range. This paper investigates an application of the pushback approach for the detection and prevention of more general network bandwidth attacks based on the profiles of destination port distribution instead of destination IP addresses. The new approach can be used to detect and prevent attacks like Internet worms. The investigation applies the long trace dataset of NLANR - CESCA-I and an Internet Worm Propagation simulator to simulate the generation of profiles and the detection of the Internet CodeRed worm.
The dataset statistics and simulation results demonstrate the effectiveness of the new approach in the detection and prevention of Internet worms.",2004,0, 1283,Sonar image quality assessment for an autonomous underwater vehicle,"Within the EU-funded project """"ADVOCATE II"""" the participating partners are developing an advanced machine diagnosis system for autonomous systems, which is based on an integrated approach. The solution combines different """"intelligent"""" modules to create the open software architecture for diagnosis and decision tasks. ATLAS ELEKTRONIK is going to integrate the ADVOCATE modules into an autonomous underwater vehicle (AUV), which must rely on an automatic obstacle avoidance system, based on sonar image processing. Besides typical electronic failures, there is the possibility that the image quality is not sufficient for reliable obstacle recognition. The AUV needs to know this fact to react in an appropriate manner. To solve this sonar image assessment problem, a Bayesian belief module (BBN) has been developed. The BBN module is based on the AI technique known as probabilistic graphical models (PGMs). In particular, a time-sliced, object-oriented limited-memory influence diagram is used as the underlying PGM of the BBN module. The BBN module provides a diagnosis and suggests appropriate recovery actions on the sonar image assessment task",2004,0, 1284,A ratio sensitive image quality metric,"Many applications require a fast quality measure for digitally coded images, such as mobile multimedia communication. Although the widely used peak signal-to-noise ratio (PSNR) has low computation cost, it fails to predict structured errors that dominate in digital images. On the other hand, human visual system (HVS) based metrics can improve the prediction accuracy. However, they are computationally complex and time consuming. This paper proposes a very simple image quality measure defined by a ratio based mathematical formulation that attempts to simulate the scaling of the human visual sensation to brightness. The current results demonstrate that the novel image quality metric predicts the subjective ratings better and has lower computation cost than the PSNR. As an alternative to the PSNR, the proposed metric can be used for on-line or real-time picture quality assessment applications.",2004,0, 1285,PD diagnosis on medium voltage cables with oscillating voltage (OWTS),"Detecting, locating and evaluating partial discharges (PD) in the insulating material, terminations and joints provides the opportunity for quality control after installation and for preventive detection of arising service interruptions. A sophisticated evaluation is necessary to distinguish between PD in several insulating materials and also in different types of terminations and joints. For a most precise evaluation of the degree and risk caused by PD, it is suggested to use a test voltage shape that is preferably similar to that under service conditions. Only under these conditions do the typical PD parameters, like inception and extinction voltage, PD level and PD pattern, correspond to significant operational values. On the other hand, the stress on the insulation should be limited during the diagnosis so as not to create irreversible damage and thereby worsen the condition of the test object. The paper introduces an oscillating wave test system (OWTS), which meets these mentioned demands well. The design of the system, its functionality and especially the operating software are made for convenient field application.
Field data and experience reports were presented and discussed. These field data also serve as a good guide to the level of danger to the different insulating systems due to partial discharges.",2004,0, 1286,A data collection scheme for reliability evaluation and assessment-a practical case in Iran,"Data collection is an essential element of reliability assessment and many utilities throughout the world have established comprehensive procedures for assessing the performance of their electric power systems. Data collection is also a constituent part of quantitative power system reliability assessment in which system past performance and prediction of future performance are evaluated. This paper presents an overview of the Iran electric power system data collection scheme and the procedure for its reliability analysis. The scheme contains both the equipment reliability data collection procedure and the structure of reliability assessment. The former constitutes generation, transmission and distribution equipment data. The latter contains past performance and predictive future performance of the Iran power system. The benefits of this powerful data base within an environment of change and uncertainty will help utilities to keep down cost, while meeting the multiple challenges of providing high degrees of reliability and power quality of electrical energy.",2004,0, 1287,Development and performances of a dental digital radiographic system using a high resolution CCD image sensor,"A dental digital radiographic (DDR) system using a high resolution charge-coupled device (CCD) imaging sensor was developed, and the performance of this system for dental clinic imaging was evaluated. In order to determine the performances of the system, the modulation transfer function (MTF), the signal to noise ratio according to X-ray exposure, the dose reduction effects and imaging quality of the system were investigated. This system consists of a CCD imaging sensor (pixel size: 22 μm) to detect X-rays, an electrical signal processing circuit and a graphical user interface software to display the images and diagnosis. The MTF was obtained from a Fourier transform of the line spread function (LSF), which was itself derived from the edge spread function (ESF) of a sharp edge image acquired. The spatial resolution of the system was measured at a 10% contrast in terms of the corresponding MTF value and the distance between the X-ray source and the CCD image sensor was fixed at 20 cm. The best image quality was obtained at the exposure conditions of 60 kVp, 7 mA and 0.05 sec. At this time, the signal to noise ratio and X-ray dose were 23 and 41% (194 μGy) of a film-based method (468 μGy). The spatial resolution of this system, the result of MTF, was approximately 12 line pairs per mm at the 0.05 sec exposure time. Based on the results, the developed DDR system using a CCD imaging sensor could be suitably applied for intra-oral radiographic imaging because of its low dose, real time acquisition, no chemical processing, image storage and retrieval etc.",2004,0, 1288,The real-time computing model for a network based control system,"This paper studies a network based real-time control system, and proposes to model this system as a periodic real-time computing system. With efficient scheduling algorithms and a software fault-tolerance deadline mechanism, this model proves that the system can meet its task timing constraints while tolerating system faults.
The simulation study shows that in cases with high failure probability, the lower-priority tasks suffer a significant drop in completion rate. In cases with low failure probability, this algorithm works well with both high-priority and lower-priority tasks. This conclusion suggests that an Internet based control system should keep the failure rate to a minimum to achieve good system performance.",2004,0, 1289,Transient stability assessment of an electric power system using trajectory sensitivity analysis,"In this paper, a methodology is presented to assess the transient stability of an electric power system using trajectory sensitivity analysis. This approach studies the variations of the system variables with respect to the small variations in initial conditions and parameters. Trajectory sensitivity functions of the post fault system with respect to relevant parameters are evaluated. The developed technique is combined with an accurate, flexible and efficient hybrid method that allows detailed modelling of the different devices of the power network, the simulation of distinct contingency scenarios, as well as the suitable identification of the different modes of instability. Moreover, it indicates unequivocally the set of the critical machines and allows the transient stability margin and the critical clearing time to be evaluated easily.",2004,0, 1290,An auditor-based survey for assessing adherence to configuration management in software organizations,"The fundamental problem that confronts the Pakistani software industry is that it has very limited experience in software process improvement strategies. We hardly find any Capability Maturity Model (CMM) level 3, 4 or 5 certified company in our software industry. It is, therefore, necessary to find out the problems that are faced by companies in adopting CMM or its components. As a first step, software configuration management (SCM), which is one component or key process area of CMM level 2, is the focus of this research for evaluation. This paper is based on an auditor-based survey of 38 Islamabad based software companies. The goal was to find out the adherence of these companies to SCM strategies. Some issues are identified that need to be resolved for successful implementation of SCM. This paper also highlights where Islamabad based software companies rank themselves with respect to CMM.",2004,0, 1291,Supporting architecture evaluation process with collaborative applications,"The software architecture community has proposed several approaches to assess the capability of a system's architecture with respect to desired quality attributes (such as maintainability and performance). Scenario-based approaches are considered effective and mature. However, these methods heavily rely on face-to-face meetings, which are expensive and time consuming. Encouraged by the successful adoption of Internet-based technologies for several meeting based activities, we have been developing an approach to support architecture evaluation using Web-based collaborative applications. In this paper, we present a preliminary framework for conducting architecture evaluation in a distributed arrangement. We identify some supportive technologies and their expected benefits. We also present some of the initial findings of a research program designed to assess the effectiveness of the proposed idea.
Findings of this study provide some support for distributed architecture evaluation.",2004,0, 1292,A survey of software testing practices in Alberta,"Software organizations have typically de-emphasized the importance of software testing. In this paper, the results of a regional survey of software testing and software quality assurance techniques are described. Researchers conducted the study during the summer and fall of 2002 by surveying software organizations in the Province of Alberta. Results indicate that Alberta-based organizations tend to test less than their counterparts in the United States. The results also indicate that Alberta software organizations tend to train fewer personnel on testing-related topics. This practice has the potential for a two-fold impact: first, the ability to detect trends that lead to reduced quality and to identify the root causes of reductions in product quality may suffer from the lack of testing. This consequence is serious enough to warrant consideration, since overall quality may suffer from the reduced ability to detect and eliminate process or product defects. Second, the organization may have a more difficult time adopting methodologies such as extreme programming. This is significant because other industry studies have concluded that many software organizations have tried or will in the next few years try some form of agile method. Newer approaches to software development like extreme programming increase the extent to which teams rely on testing skills. Organizations should consider their testing skill level as a key indication of their readiness for adopting software development techniques such as test-driven development, extreme programming, agile modelling, or other agile methods.",2004,0, 1293,Evolving legacy systems through a multi-objective decision process,"Our previous work on improving the quality of object-oriented legacy systems includes: i) devising a quality-driven re-engineering framework (L. Tahvildari et al., 2003); ii) proposing a software transformation framework based on soft-goal interdependency graphs to enhance quality (L. Tahvildari and K. Kontogiannis, 2002); and iii) investigating the usage of metrics for detecting potential design flaws (L. Tahvildari and K. Kontogiannis, 2004). This paper defines a decision-making process that determines a list of source-code improving transformations among several applicable transformations. The decision-making process is based on a multi-objective decision analysis technique. This type of technique is necessary as there are a number of different, and sometimes conflicting, criteria among non-functional requirements. For the migrant system, the proposed approach uses heuristic estimates to guide the discovery process.",2004,0, 1294,Why Build Software,"We really build software to solve real problems. If we successfully solve the intended problem and thus satisfy the customers' needs, we have done a good job, regardless of the measured quality of our product or process. If the software contributes to humanity, generates few complaints, and makes a profit, then the software is good. All other measures are secondary to the real goal. All other measures have but a second-order relationship to quality. For example, we need measurement to help assess the health of a development project, or the status of a product under development.
For these goals, there are obviously many other measures that make sense and are critical.",2004,0, 1295,Object oriented software quality prediction using general regression neural networks,"This paper discusses the application of the General Regression Neural Network (GRNN) for predicting the software quality attribute -- fault ratio. This study is carried out using static Object-Oriented (OO) measures (64 in total) as the independent variables and fault ratio as the dependent variable. Software metrics used include those concerning inheritance, size, cohesion and coupling. Prediction models are designed using 15 possible combinations of the four categories of the measures. We also tested the goodness of fit of the neural network model with the standard parameters. Our study is conducted in an academic institution with the software developed by students of Undergraduate/Graduate courses.",2004,1, 1296,Effective software-based self-test strategies for on-line periodic testing of embedded processors,"Software-based self-test (SBST) strategies are particularly useful for periodic testing of deeply embedded processors in low-cost embedded systems with respect to permanent and intermittent operational faults. Such strategies are well suited to embedded systems that do not require immediate detection of errors and cannot afford the well-known hardware, information, software, or time-redundancy mechanisms. We first identify the stringent characteristics of an SBST program to be suitable for on-line periodic testing. Also, we study the probability for an SBST program to detect permanent and intermittent faults during on-line periodic testing. Then, we introduce a new SBST methodology with a new classification and test-priority scheme for processor components. After that, we analyze the self-test routine code styles for the three more effective test pattern generation (TPG) strategies in order to select the most effective self-test routine for on-line periodic testing of a component under test. Finally, we demonstrate the effectiveness of the proposed SBST methodology for on-line periodic testing by presenting experimental results for two pipelined reduced instruction set computer (RISC) processors of different architectures.",2005,0, 1297,High-impedance fault detection using discrete wavelet transform and frequency range and RMS conversion,"High-impedance faults (HIFs) are faults which are difficult to detect by overcurrent protection relays. Various pattern recognition techniques have been suggested, including the use of the wavelet transform. However, this method cannot indicate the physical properties of output coefficients using the wavelet transform. We propose to use the Discrete Wavelet Transform (DWT) as well as frequency range and rms conversion to apply a pattern recognition based detection algorithm for electric distribution high impedance fault detection. The aim is to recognize the converted rms voltage and current values caused by arcs usually associated with HIF. The analysis using discrete wavelet transform (DWT) with the conversion yields measurement voltages and currents which are fed to a classifier for pattern recognition. The classifier is based on an algorithm using the nearest neighbor rule approach.
It is proposed that this method can function as a decision support software package for HIF identification which could be installed in an alarm system.",2005,0, 1298,An Architecture for Dynamic Data Source Integration,"Integrating multiple heterogeneous data sources into applications is a time-consuming, costly and error-prone engineering task. Relatively mature technologies exist that make integration tractable from an engineering perspective. These technologies, however, have many limitations, and hence present opportunities for breakthrough research. This paper briefly describes some of these limitations. It then provides an overview of the Data Concierge research project and prototype that is attempting to provide solutions to some of these limitations. The paper focuses on the core architecture and mechanisms in the Data Concierge for dynamically attaching to a previously unidentified source of information. The generic API supported by the Data Concierge is described, along with the architecture and prototype tools for describing the meta-data necessary to facilitate dynamic integration. In addition, we describe the outstanding challenges that remain to be solved before the Data Concierge concept can be realized.",2005,0, 1299,An Alternative to Technology Readiness Levels for Non-Developmental Item (NDI) Software,"Within the Department of Defense, Technology Readiness Levels (TRLs) are increasingly used as a tool in assessing program risk. While there is considerable evidence to support the utility of using TRLs as part of an overall risk assessment, some characteristics of TRLs limit their applicability to software products, especially Non-Developmental Item (NDI) software including Commercial-Off-The-Shelf, Government-Off-The-Shelf, and Open Source Software. These limitations take four principal forms: 1) """"blurring-together"""" various aspects of NDI technology/product readiness; 2) the absence of some important readiness attributes; 3) NDI product """"decay;"""" and 4) no recognition of the temporal nature of system development and acquisition context. This paper briefly explores these issues, and describes an alternate methodology which combines the desirable aspects of TRLs with additional readiness attributes, and defines an evaluation framework which is easily understandable, extensible, and applicable across the full spectrum of NDI software.",2005,0, 1300,Perspectives on Redundancy: Applications to Software Certification,"Redundancy is a feature of systems that arises by design or as an accidental byproduct of design, and can be used to detect, diagnose or correct errors that occur in systems operations. While it is usually investigated in the context of fault tolerance, one can argue that it is in fact an intrinsic feature of a system that can be analyzed on its own without reference to any fault tolerance capability. In this paper, we submit three alternative views of redundancy, which we propose to analyze to gain a better understanding of redundancy; we also explore means to use this understanding to enhance the design of fault tolerant systems.",2005,0, 1301,Reusability metrics for software components,"Summary form only given. Assessing the reusability, adaptability, compose-ability and flexibility of software components is more and more of a necessity due to the growing popularity of component based software development (CBSD).
Even if there are some metrics defined for the reusability of object-oriented software (OOS), they cannot be used for CBSD because these metrics require analysis of source code. The aim of this paper is to study the adaptability and compose-ability of software components, both qualitatively and quantitatively. We propose metrics and a mathematical model for the above-mentioned characteristics of software components. The interface characterization is the starting point of our evaluation. The adaptability of a component is discussed in conjunction with the complexity of its interface. The compose-ability metric defined for database components is extended for general software components. We also propose a metric for the complexity and adaptability of the problem solved by a component, based on its use cases. The number of alternate flows from the use case narrative is considered as a measurement for the complexity of the problem solved by a component. This was our starting point in developing a set of metrics for evaluating components functionality-wise. The main advantage of defining these metrics is the possibility of measuring the adaptability, reusability and quality of software components, and therefore of identifying the most effective reuse strategy.",2005,0, 1302,Automatic construction of multidimensional schema from OLAP requirements,"Summary form only given. The manual design of data warehouse and data mart schemes can be a tedious, error-prone, and time-consuming task. In addition, it is a highly complex engineering task that calls for methodological support. This paper lays the grounds for an approach for the automatic generation of multidimensional schemes. It first defines a tabular format for OLAP requirements. Secondly, it presents a set of algebraic operators used to automatically transform the OLAP requirements, specified in the tabular format, into data marts modelled either as star or constellation schemes. Our approach is illustrated with an example.",2005,0, 1303,Software requirement risk assessment using UML,"Summary form only given. Risk assessment is an integral part of software risk management. There are several methods for risk assessment during various phases of software development and at different levels of abstraction. However, there are very few techniques available for assessing risk at the requirements level and those that are available are highly subjective and are not based on any formal design models. Such techniques are human-intensive and highly error prone. This paper presents a methodology that assesses software risk at the requirements level using Unified Modeling Language (UML) specifications of the software at the early development stages. Each requirement is mapped to a specific operational scenario in UML. We determine the possible failure modes of the scenario and find out the complexity of the scenario in each failure mode. The risk factor of a scenario in a failure mode is obtained by combining the complexity of the failure mode in that scenario and the severity of the failure. The result of applying the methodology on a cardiac pacemaker case study is presented.",2005,0, 1304,Quantifying software architectures: an analysis of change propagation probabilities,"Summary form only given. Software architectures are an emerging discipline in software engineering as they play a central role in many modern software development paradigms.
Quantifying software architectures is an important research agenda, as it allows software architects to subjectively assess quality attributes and rationalize architecture-related decisions. In this paper, we discuss the attribute of change propagation probability, which reflects the likelihood that a change that arises in one component of the architecture propagates (i.e. mandates changes) to other components.",2005,0, 1305,Model-based performance risk analysis,"Performance is a nonfunctional software attribute that plays a crucial role in wide application domains, spreading from safety-critical systems to e-commerce applications. Software risk can be quantified as a combination of the probability that a software system may fail and the severity of the damages caused by the failure. In this paper, we devise a methodology for the estimation of a performance-based risk factor, which originates from violations of performance requirements (namely, performance failures). The methodology elaborates annotated UML diagrams to estimate the performance failure probability and combines it with the failure severity estimate which is obtained using the functional failure analysis. We are thus able to determine risky scenarios as well as risky software components, and the analysis feedback can be used to improve the software design. We illustrate the methodology on an e-commerce case study using a step-by-step approach, and then provide a brief description of a case study based on a large real system.",2005,0, 1306,Class point: an approach for the size estimation of object-oriented systems,"In this paper, we present an FP-like approach, named class point, which was conceived to estimate the size of object-oriented products. In particular, two measures are proposed, which are theoretically validated showing that they satisfy well-known properties necessary for size measures. An initial, empirical validation is also performed, meant to assess the usefulness and effectiveness of the proposed measures to predict the development effort of object-oriented systems. Moreover, a comparative analysis is carried out, taking into account several other size measures.",2005,0, 1307,Legion: Lessons Learned Building a Grid Operating System,"Legion was the first integrated grid middleware architected from first principles to address the complexity of grid environments. Just as a traditional operating system provides an abstract interface to the underlying physical resources of a machine, Legion was designed to provide a powerful virtual machine interface layered over the distributed, heterogeneous, autonomous, and fault-prone physical and logical resources that constitute a grid. We believe that without a solid, integrated, operating system-like grid middleware, grids will fail to cross the chasm from bleeding-edge supercomputing users to more mainstream computing. This work provides an overview of the architectural principles that drove Legion, a high-level description of the system with complete references to more detailed explanations, and the history of Legion from first inception in August 1993 through commercialization. We present a number of important lessons, both technical and sociological, learned during the course of developing and deploying Legion.",2005,0, 1308,Evaluation of effects of pair work on quality of designs,"Quality is a key issue in the development of software products.
Although the literature acknowledges the importance of the design phase of the software lifecycle and the effects of the design process and intermediate products on the final product, little progress has been achieved in addressing the quality of designs. This is partly due to difficulties associated with defining quality attributes with precision and measuring the many different types and styles of design products, as well as problems with assessing the methodologies utilized in the design process. In this research we report on an empirical investigation that we conducted to examine and evaluate quality attributes of design products created through a process of pair-design and solo-design. The process of pair-design methodology involves pair programming principles where two people work together and periodically switch between the roles of driver and navigator. The evaluation of the quality of design products was based on ISO/IEC 9126 standards. Our results show some mixed findings about the effects of pair work on the quality of design products.",2005,0, 1309,Obtaining probabilistic dynamic state graphs for TPPAL processes,"Software engineers work gladly with process algebras, as they are very similar to programming languages. However, graphical models are better for understanding how a system behaves, and they even allow us to analyze some properties of the systems. Then, in this paper we present two formalisms for the specification of concurrent systems. On the one hand we present the timed-probabilistic process algebra TPPAL, which is a suitable model for the description of systems in which time and probabilities are two important factors to be considered in the description, as it occurs in real-time systems and fault-tolerant systems. Then, the specification written in TPPAL can be automatically translated into a graphical model (the so-called probabilistic dynamic state graphs), which allows us to simulate and evaluate the system. Thus, in this paper we present this translation, which is currently supported by a tool (TPAL).",2005,0, 1310,Detecting indirect coupling,"Coupling is considered by many to be an important concept in measuring design quality. There is still much to be learned about which aspects of coupling affect design quality or other external attributes of software. Much of the existing work concentrates on direct coupling, that is, forms of coupling that exist between entities that are directly related to each other. A form of coupling that has so far received little attention is indirect coupling, that is, coupling between entities that are not directly related. What little discussion there is in the literature suggests that any form of indirect coupling is simply the transitive closure of a form of direct coupling. We demonstrate that this is not the case, that there are forms of indirect coupling that cannot be represented in this way and suggest ways to measure it. We present a tool that identifies a particular form of indirect coupling that is integrated in the Eclipse IDE.",2005,0, 1311,ADAMS Re-Trace: A Traceability Recovery Tool,"We present the traceability recovery tool developed in the ADAMS artefact management system. The tool is based on an Information Retrieval technique, namely Latent Semantic Indexing, and aims at supporting the software engineer in the identification of the traceability links between artefacts of different types.
We also present a case study involving seven student projects which represented an ideal workbench for the tool. The results emphasise the benefits provided by the tool in terms of new traceability links discovered, in addition to the links manually traced by the software engineer. Moreover, the tool was also helpful in identifying cases of lack of similarity between artefacts manually traced by the software engineer, thus revealing inconsistencies in the usage of domain terms in these artefacts. This information is valuable to assess the quality of the produced artefacts.",2005,0, 1312,Towards the Optimization of Automatic Detection of Design Flaws in Object-Oriented Software Systems,"In order to increase the maintainability and the flexibility of a software system, its design and implementation quality must be properly assessed. For this purpose a large number of metrics and several higher-level mechanisms based on metrics are defined in the literature. But the accuracy of these quantification means is heavily dependent on the proper selection of threshold values, which is oftentimes totally empirical and unreliable. In this paper we present a novel method for establishing proper threshold values for metrics-based rules used to detect design flaws in object-oriented systems. The method, metaphorically called """"tuning machine"""", is based on inferring the threshold values from a set of reference examples, manually classified into """"flawed"""" and """"healthy"""" design entities (e.g., classes, methods). More precisely, the """"tuning machine"""" searches, based on a genetic algorithm, for those thresholds which maximize the number of correctly classified entities. The paper also defines a repeatable process for collecting examples, and discusses the encouraging and intriguing results obtained while applying the approach on two concrete metrics-based rules that quantify two well-known design flaws, i.e., """"God Class"""" and """"Data Class"""".",2005,0, 1313,A Tool for Static and Dynamic Model Extraction and Impact Analysis,"Planning changes is often an imprecise task and implementing changes is often time consuming and error prone. One reason for these problems is inadequate support for efficient analysis of the impacts of the performed changes. The paper presents a technique, and associated tool, that uses a mixed static and dynamic model extraction for supporting the analysis of the impacts of changes.",2005,0, 1314,Anomaly detection in Web applications: a review of already conducted case studies,"The quality of Web applications is becoming more important every day. Web applications are crucial vehicles for commerce, information exchange, and a host of social and educational activities. Since a bug in a Web application could interrupt an entire business and cost millions of dollars, there is a strong demand for methodologies, tools and models that can improve the Web quality and reliability. The aim of our ongoing work has been to investigate, define and apply a variety of analysis and testing techniques able to support the quality of Web applications. The validity of our solutions was assessed by extensive empirical work. A critical review of this five-year-long work shows that only 40% of the randomly selected real-world Web applications exhibit no anomalies/failures. Some tables reported in this paper summarize the relations between the types of anomalies found and the analyses applied.
We are in need of better methodologies, techniques and tools for developing, maintaining and testing Web applications.",2005,0, 1315,Information stream oriented content adaptation for pervasive computing,"Content adaptation is a challenging task due to the dynamism and heterogeneity of pervasive computing environments. Some researchers address this issue by organizing services into customized applications dynamically. However, due to the maintenance of the dependencies between services, these systems become more complicated with the growth of the system scale. Programming for these systems is also error-prone. This paper introduces our work in this field, UbiCon, an information streams oriented content adaptation system. By abstracting information streams into generic CONTENT entities, the system provides a simple and powerful means for services to operate on information streams. The CONTENT is created by the system dynamically, and essentially has local association with related services. As a result, the CONTENT is also used as a loose coupling mechanism for cooperating associated services. By abstracting services with a T model, services effectively cooperate with other services. As a result, a collection of sophisticated applications can be built with this service model. As a proof of concept, we have developed a prototype implementation. The preliminary experiments show the effectiveness of this system.",2005,0, 1316,Recovering Internet service sessions from operating system failures,"Current Internet service architectures lack support for salvaging stateful client sessions when the underlying operating system fails due to hangs, crashes, deadlocks, or panics. The backdoors (BD) system is designed to detect such failures and recover service sessions in clusters of Internet servers by extracting lightweight state associated with client service sessions from server memory. The BD architecture combines hardware and software mechanisms to enable accurate monitoring and remote healing actions, even in the presence of failures that render a system unavailable.",2005,0, 1317,Application fault tolerance with Armor middleware,"Many current approaches to software-implemented fault tolerance (SIFT) rely on process replication, which is often prohibitively expensive for practical use due to its high performance overhead and cost. The adaptive reconfigurable mobile objects of reliability (Armor) middleware architecture offers a scalable low-overhead way to provide high-dependability services to applications. It uses coordinated multithreaded processes to manage redundant resources across interconnected nodes, detect errors in user applications and infrastructural components, and provide failure recovery. The authors describe the experiences and lessons learned in deploying Armor in several diverse fields.",2005,0, 1318,The Potemkin village and the art of deception,"Potemkin village is """"something that appears elaborate and impressive but in actual fact lacks substance"""". The software analogies for the Potemkin village are ripe for exploration. Software antipatterns address problem-solution pairs in which the typical solution is more harmful than the problem it solves. In essence, an antipattern presents an identifiable problem, the accompanying harmful effects of the typical solution, and a more appropriate solution called a refactoring. In this sense, the software version of a Potemkin village is an antipattern: Problem: deliver software with an impressive interface quickly.
Solution: employ a ready-made architecture that provides an impressive interface quickly; spend as little time as possible on the back-end processing. Refactoring: do it right the first time. The author doesn't want to make the case too strongly that the building of Potemkin villages is a deliberate strategy of fraud that companies perpetrate. Is a weak piece of software covered by an elaborate GUI a deliberate fraud or simply poor design? You must assume the latter in the absence of proof. When faced with a Potemkin village or an emperor's new clothes situation, you must expose it immediately. Doing so is not easy when a high-quality GUI masks the shortcomings. You can, however, detect the situation through design reviews and code inspections, reviews, or walkthroughs. Therefore, managers who oversee software projects (and customers who buy software) should require these reviews. Testing can sometimes uncover the situation, but it might be too late at this point. Test-driven design, on the other hand, can help avoid a Potemkin village.",2005,0, 1319,A parallel-line detection algorithm based on HMM decoding,"The detection of groups of parallel lines is important in applications such as form processing and text (handwriting) extraction from rule-lined paper. These tasks can be very challenging in degraded documents where the lines are severely broken. In this paper, we propose a novel model-based method which incorporates high-level context to detect these lines. After preprocessing (such as skew correction and text filtering), we use trained hidden Markov models (HMM) to locate the optimal positions of all lines simultaneously on the horizontal or vertical projection profiles, based on the Viterbi decoding. The algorithm is trainable so it can be easily adapted to different application scenarios. The experiments conducted on known form processing and rule line detection show our method is robust, and achieves better results than other widely used line detection methods.",2004,0, 1320,Quality metric for approximating subjective evaluation of 3-D objects,"Many factors, such as the number of vertices and the resolution of texture, can affect the display quality of three-dimensional (3-D) objects. When the resources of a graphics system are not sufficient to render the ideal image, degradation is inevitable. It is, therefore, important to study how individual factors will affect the overall quality, and how the degradation can be controlled given limited resources. In this paper, the essential factors determining the display quality are reviewed. We then integrate two important ones, resolution of texture and resolution of wireframe, and use them in our model as a perceptual metric. We assess this metric using statistical data collected from a 3-D quality evaluation experiment. The statistical model and the methodology to assess the display quality metric are discussed. A preliminary study of the reliability of the estimates is also described. The contribution of this paper lies in: 1) determining the relative importance of wireframe versus texture resolution in perceptual quality evaluation and 2) proposing an experimental strategy for verifying and fitting a quantitative model that estimates 3-D perceptual quality.
The proposed quantitative method is found to fit closely to subjective ratings by human observers based on preliminary experimental results.",2005,0, 1321,Embedded system engineering using C/C++ based design methodologies,This paper analyzes and compares the effectiveness of various system level design methodologies in assessing performance of embedded computing systems from the earliest stages of the design flow. The different methodologies are illustrated and evaluated by applying them to the design of an aircraft pressurization system (APS). The APS is mapped on a heterogeneous hardware/software platform consisting of two ASICs and a microcontroller. The results demonstrate the high impact of computer aided design (CAD) tools on design time and quality.,2005,0, 1322,IT asset management of industrial automation systems,"The installation and administration of large heterogeneous IT infrastructures, for enterprises as well as industrial automation systems, are becoming more and more complex and time consuming. Industrial automation systems, such as those delivered by ABB Inc., present an additional challenge, in that these control and supervise mission critical production sites. Nevertheless, it is common practice to manually install and maintain industrial networks and the process control software running on them, which can be both expensive and error prone. In order to address these challenges, we believe that in the long term such systems must behave autonomously. As preliminary steps to the realization of this vision, automated IT asset management tools and practices will be highlighted in this contribution. We will point out the advantages of combining process control and network management in the domain of industrial automation technology. Furthermore we will propose a new component model for autonomic network management applied to industrial automation systems.",2005,0, 1323,Fault tolerant data flow modeling using the generic modeling environment,"Designing embedded software for safety-critical, real-time feedback control applications is a complex and error prone task. Fault tolerance is an important aspect of safety. In general, fault tolerance is achieved by duplicating hardware components, a solution that is often more expensive than needed. In applications such as automotive electronics, a subset of the functionalities has to be guaranteed while others are not crucial to the safety of the operation of the vehicle. In this case, we must make sure that this subset is operational under the potential faults of the architecture. A model of computation called fault-tolerant data flow (FTDF) was recently introduced to describe at the highest level of abstraction of the design the fault tolerance requirements on the functionality of the system. Then, the problem of implementing the system efficiently on a platform consists of finding a mapping of the FTDF model on the components of the platform. A complete design flow for this kind of application requires a user-friendly graphical interface to capture the functionality of the systems with the FTDF model, algorithms for choosing an architecture optimally, (possibly automatic) code generation for the parts of the system to be implemented in software and verification tools. In this paper, we use the generic modeling environment (GME) developed at Vanderbilt University to design a graphical design capture system and to provide the infrastructure for automatic code generation. 
The design flow is embedded into the Metropolis environment developed at the University of California at Berkeley to provide the necessary verification and analysis framework.",2005,0, 1324,Development life cycle management: a multiproject experiment,"A variety of life cycle models for software systems development are generally available. However, it is generally difficult to compare and contrast the methods and very little literature is available to guide developers and managers in making choices. Moreover, in order to make informed decisions, developers require access to real data that compares the different models and the results associated with the adoption of each model. This paper describes an experiment in which fifteen software teams developed comparable software products using four different development approaches (V-model, incremental, evolutionary and extreme programming). Extensive measurements were taken to assess the time, quality, size, and development efficiency of each product. The paper presents the experimental data collected and the conclusions related to the choice of method, its impact on the project and the quality of the results as well as the general implications to the practice of systems engineering project management.",2005,0, 1325,Prototype of fault adaptive embedded software for large-scale real-time systems,"This paper describes a comprehensive prototype of large-scale fault adaptive embedded software developed for the proposed Fermilab BTeV high energy physics experiment. Lightweight self-optimizing agents embedded within Level 1 of the prototype are responsible for proactive and reactive monitoring and mitigation based on specified layers of competence. The agents are self-protecting, detecting cascading failures using a distributed approach. Adaptive, reconfigurable, and mobile objects for reliability are designed to be self-configuring to adapt automatically to dynamically changing environments. These objects provide a self-healing layer with the ability to discover, diagnose, and react to discontinuities in real-time processing. A generic modeling environment was developed to facilitate design and implementation of hardware resource specifications, application data flow, and failure mitigation strategies. Level 1 of the planned BTeV trigger system alone will consist of 2500 DSPs, so the number of components and intractable fault scenarios involved make it impossible to design an 'expert system' that applies traditional centralized mitigative strategies based on rules capturing every possible system state. Instead, a distributed reactive approach is implemented using the tools and methodologies developed by the Real-Time Embedded Systems group.",2005,0, 1326,A unified framework for monitoring data streams in real time,"Online monitoring of data streams poses a challenge in many data-centric applications, such as telecommunications networks, traffic management, trend-related analysis, Web-click streams, intrusion detection, and sensor networks. Mining techniques employed in these applications have to be efficient in terms of space usage and per-item processing time while providing a high quality of answers to (1) aggregate monitoring queries, such as finding surprising levels of a data stream, detecting bursts, and to (2) similarity queries, such as detecting correlations and finding interesting patterns. The most important aspect of these tasks is their need for flexible query lengths, i.e., it is difficult to set the appropriate lengths a priori.
For example, bursts of events can occur at variable temporal modalities from hours to days to weeks. Correlated trends can occur at various temporal scales. The system has to discover ""interesting"" behavior online and monitor over flexible window sizes. In this paper, we propose a multi-resolution indexing scheme, which handles variable length queries efficiently. We demonstrate the effectiveness of our framework over existing techniques through an extensive set of experiments.",2005,0, 1327,A power-aware GALS architecture for real-time algorithm-specific tasks,"We propose an adaptive scalable architecture suitable for performing real-time algorithm-specific tasks. The architecture is based on the globally asynchronous and locally synchronous (GALS) design paradigm. We demonstrate that for different real-time commercial applications with algorithm-specific jobs like online transaction processing, Fourier transform etc., the proposed architecture allows dynamic load-balancing and adaptive inter-task voltage scaling. The architecture can also detect process-shifts for the individual processing units and determine their appropriate operating conditions. Simulation results for two representative applications show that for a random job distribution, we obtain up to 67% improvement in MOPS/W (millions of operations per second per watt) over a fully synchronous implementation.",2005,0, 1328,Mitigating the effects of explosions in underground electrical vaults,"An explosion software package is used to assess the effectiveness of several simple and inexpensive safety devices that can minimize the dangers of explosions in underground electrical vaults. The software is capable of determining the forces on a manhole cover as high-pressure air and gases are expelled from the vault. This information is used to determine the feasibility of several devices designed to mitigate the effect of the explosion. The potential designs focus on modifications to the vault and manhole cover that limit the motion of the cover and reduce the severity of the explosive forces. The devices that are examined include bolts that fasten the cover to the vault, vented and lightweight manhole covers and covers that are attached to the vault by tethers. No single safety device will completely eliminate all of the dangers associated with an explosion. Rigidly attaching the manhole cover to the vault with bolts is not recommended, because a reasonable number of bolts are often not sufficient to withstand the high pressures that result when the cover is held down and the vault is unable to vent. If the bolts finally fail, the vault will vent at a high pressure and subject the manhole cover to potentially dangerous forces. A lightweight, vented cover restrained by elastic webbing, on the other hand, will permit gases to vent at a lower pressure and greatly reduce the hazards posed by the explosion. The use of a rigid tether such as a steel cable or chain is not recommended due to the excessive forces that the manhole cover will exert on the tether and the attachment points.",2005,0, 1329,Anger management [emotion recognition],"Aside from monitoring calls for quality assurance purposes, many corporate call centers require call monitoring for anger management. By identifying angry emotions in calls, managers can take appropriate action against call agents who may have behaved improperly.
NICE Systems Inc., a supplier of call monitoring systems, has developed emotion-sensitive software that is able to detect angry emotions during phone conversations using the changes in a voice's pitch. The software engine will go over the data signal, and, second by second, run the algorithm. If emotion is detected, a report is generated that includes the level of certainty that the call included angry emotions. As the software improves and the hardware gets faster, ever more calls will be scanned in ever more sophisticated ways.",2005,0, 1330,Fault tolerance design in JPEG 2000 image compression system,"The JPEG 2000 image compression standard is designed for a broad range of data compression applications. The new standard is based on wavelet technology and layered coding in order to provide a rich feature compressed image stream. The implementations of the JPEG 2000 codec are susceptible to computer-induced soft errors. One situation requiring fault tolerance is remote-sensing satellites, where high energy particles and radiation produce single event upsets corrupting the highly susceptible data compression operations. This paper develops fault tolerance error-detecting capabilities for the major subsystems that constitute a JPEG 2000 standard. The nature of the subsystem dictates the realistic fault model where some parts have numerical error impacts whereas others are properly modeled using bit-level variables. The critical operations of subunits such as discrete wavelet transform (DWT) and quantization are protected against numerical errors. Concurrent error detection techniques are applied to accommodate the data type and numerical operations in each processing unit. On the other hand, the embedded block coding with optimal truncation (EBCOT) system and the bitstream formation unit are protected against soft-error effects using binary decision variables and cyclic redundancy check (CRC) parity values, respectively. The techniques achieve excellent error-detecting capability at only a slight increase in complexity. The design strategies have been tested using Matlab programs and simulation results are presented.",2005,0, 1331,Dynamic routing in translucent WDM optical networks: the intradomain case,"Translucent wavelength-division multiplexing optical networks use sparse placement of regenerators to overcome physical impairments and wavelength contention introduced by fully transparent networks, and achieve a performance close to fully opaque networks at a much lower cost. In previous studies, we addressed the placement of regenerators based on static schemes, allowing for only a limited number of regenerators at fixed locations. This paper furthers those studies by proposing a dynamic resource allocation and dynamic routing scheme to operate translucent networks. This scheme is realized through dynamically sharing regeneration resources, including transmitters, receivers, and electronic interfaces, between regeneration and access functions under a multidomain hierarchical translucent network model. An intradomain routing algorithm, which takes into consideration optical-layer constraints as well as dynamic allocation of regeneration resources, is developed to address the problem of translucent dynamic routing in a single routing domain.
Network performance in terms of blocking probability, resource utilization, and running times under different resource allocation and routing schemes is measured through simulation experiments.",2005,0, 1332,Optimizing checkpoint sizes in the C3 system,"The running times of many computational science applications are much longer than the mean-time-between-failures (MTBF) of current high-performance computing platforms. To run to completion, such applications must tolerate hardware failures. Checkpoint-and-restart (CPR) is the most commonly used scheme for accomplishing this - the state of the computation is saved periodically on stable storage, and when a hardware failure is detected, the computation is restarted from the most recently saved state. Most automatic CPR schemes in the literature can be classified as system-level checkpointing schemes because they take core-dump style snapshots of the computational state when all the processes are blocked at global barriers in the program. Unfortunately, a system that implements this style of checkpointing is tied to a particular platform and cannot optimize the checkpointing process using application-specific knowledge. We are exploring an alternative called automatic application-level checkpointing. In our approach, programs are transformed by a pre-processor so that they become self-checkpointing and self-restartable on any platform. In this paper, we evaluate a mechanism that utilizes application knowledge to minimize the amount of information saved in a checkpoint.",2005,0, 1333,Destructive transaction: human-oriented cluster system management mechanism,"Traditional cluster system management tools seldom consider the relevance between managed objects. Such relevance is the cause of related faults and may also lead to human operation errors. Because of this defect, traditional tools do not have the capability of handling consistency, atomicity and recovery. This article proposes a transaction-based facility, destructive transaction, to solve the problems to some degree. Destructive transaction is a construct to wire down management rules stored in a system administrator's mind. It provides a method to describe managed-object relationships, an atomicity facility, and recovery for failed operations. It also significantly reduces the possibility of human-caused errors.",2005,0, 1334,Understanding perceptual distortion in MPEG scalable audio coding,"In this paper, we study coding artifacts in MPEG-compressed scalable audio. Specifically, we consider the MPEG advanced audio coder (AAC) using bit slice scalable arithmetic coding (BSAC) as implemented in the MPEG-4 reference software. First we perform human subjective testing using the comparison category rating (CCR) approach, quantitatively comparing the performance of scalable BSAC with the nonscaled TwinVQ and AAC algorithms. This testing indicates that scalable BSAC performs very poorly relative to TwinVQ at the lowest bitrate considered (16 kb/s) largely because of an annoying and seemingly random mid-range tonal signal that is superimposed onto the desired output. In order to better understand and quantify the distortion introduced into compressed audio at low bit rates, we apply two analysis techniques: Reng bifrequency probing and time-frequency decomposition. Using Reng probing, we conclude that aliasing is most likely not the cause of the annoying tonal signal; instead, time-frequency or spectrogram analysis indicates that its cause is most likely suboptimal bit allocation.
Finally, we describe the energy equalization quality metric (EEQM) for predicting the relative perceptual performance of the different coding algorithms and compare its predictive ability with that of ITU Recommendation ITU-R BS.1387-1.",2005,0, 1335,Acquaintance-based protocol for detecting multimedia objects in peer-to-peer overlay networks,"Multimedia objects are distributed on peer computers (peers) in peer-to-peer (P2P) overlay networks. An application has to find target peers which can support enough quality of service (QoS) of multimedia objects. We discuss types of acquaintance relations of peers with respect to what objects each peer holds, can manipulate, and can grant access rights. We discuss a new type of flooding algorithm to find target peers based on charge and acquaintance concepts so that areas in networks where target peers are expected to exist are more deeply searched. We evaluate the charge-based flooding algorithm compared with a TTL-based flooding algorithm in terms of the number of messages transmitted in networks.",2005,0, 1336,Tool-based configuration of real-time CORBA middleware for embedded systems,"Real-time CORBA is a middleware standard that has demonstrated successes in developing distributed, realtime, and embedded (DRE) systems. Customizing real-time CORBA for an application can considerably reduce the size of the middleware and improve its performance. However, customizing middleware is an error-prone task and requires deep knowledge of the CORBA standard as well as the middleware design. This paper presents ZEN-kit, a graphical tool for customizing RTZen (an RTSJ-based implementation of real-time CORBA). This customization is achieved through modularizing the middleware so that features may be inserted or removed based on the DRE application requirements. This paper presents three main contributions: 1) it describes how real-time CORBA features can be modularized and configured in RTZen using components and aspects, 2) it provides a configuration strategy to customize real-time middleware to achieve low-footprint ORBs, and 3) it presents ZEN-kit, a graphical tool for composing customized real-time middleware.",2005,0, 1337,Customizing event ordering middleware for component-based systems,"The stringent performance requirements of distributed realtime embedded systems often require highly optimized implementations of middleware services. Performing such optimizations manually can be tedious and error-prone. This paper proposes a model-driven approach to generate customized implementations of event ordering services in the context of component based systems. Our approach is accompanied by a number of tools to automate the customization. Given an application App, an event ordering service Order and a middleware platform P, we provide tools to analyze high-level specifications of App to extract information relevant to event ordering and to use the extracted application information to obtain a customized service, Order(App), with respect to the application usage.",2005,0, 1338,Clustering software artifacts based on frequent common changes,"Changes of software systems are less expensive and less error-prone if they affect only one subsystem. Thus, clusters of artifacts that are frequently changed together are subsystem candidates. We introduce a two-step method for identifying such clusters. First, a model of common changes of software artifacts, called co-change graph, is extracted from the version control repository of the software system. 
Second, a layout of the co-change graph is computed that reveals clusters of frequently co-changed artifacts. We derive requirements for such layouts, and introduce an energy model for producing layouts that fulfill these requirements. We evaluate the method by applying it to three example systems, and comparing the resulting layouts to authoritative decompositions.",2005,0, 1339,Estimating the number of faults remaining in software code documents inspected with iterative code reviews,"Code review is considered an efficient method for detecting faults in a software code document. The number of faults not detected by the review should be small. Current methods for estimating this number assume reviews with several inspectors, but there are many cases where it is practical to employ only two inspectors. Sufficiently accurate estimates may be obtained by two inspectors employing an iterative code review (ICR) process. This paper introduces a new estimator for the number of undetected faults in an ICR process, so the process may be stopped when a satisfactory result is estimated. This technique employs the Kantorowitz estimator for N-fold inspections, where the N teams are replaced by N reviews. The estimator was tested for three years in an industrial project, where it produced satisfactory results. More experiments are needed in order to fully evaluate the approach.",2005,0, 1340,Charge-based flooding algorithm for detecting multimedia objects in peer-to-peer overlay networks,"Multimedia objects are distributed in peer-to-peer (P2P) overlay networks since objects are cached, downloaded, and personalized in peer computers (peers). An application has to find target peers which can support enough quality of service of objects. We discuss a new type of flooding algorithm to find target peers based on charge and acquaintance concepts so that areas in networks where target peers are expected to exist are more deeply searched. In addition, we discuss how peers can be granted access rights to manipulate the objects with help and cooperation of acquaintances. We evaluate the charge-based flooding algorithm compared with a TTL-based flooding algorithm in terms of the number of messages transmitted in networks.",2005,0, 1341,Leveraging user-session data to support Web application testing,"Web applications are vital components of the global information infrastructure, and it is important to ensure their dependability. Many techniques and tools for validating Web applications have been created, but few of these have addressed the need to test Web application functionality and none have attempted to leverage data gathered in the operation of Web applications to assist with testing. In this paper, we present several techniques for using user session data gathered as users operate Web applications to help test those applications from a functional standpoint. We report results of an experiment comparing these new techniques to existing white-box techniques for creating test cases for Web applications, assessing both the adequacy of the generated test cases and their ability to detect faults on a point-of-sale Web application. 
Our results show that user session data can be used to produce test suites more effective overall than those produced by the white-box techniques considered; however, the faults detected by the two classes of techniques differ, suggesting that the techniques are complementary.",2005,0, 1342,A statistical estimation of average IP packet delay in cellular data networks,"A novel technique for estimating the average delay experienced by an IP packet in cellular data networks with an SR-ARQ loop is presented. This technique uses the following input data: a statistical description of the radio channel, ARQ loop design parameters and the size of a transported IP packet. An analytical model is derived to enable a closed form mathematical estimation of this delay. To validate this model, a computer based simulator was built and tests showed good agreement between the simulation results and the model. This new model is of particular interest in predicting the packet delay for conversational traffic such as that used for VoIP applications.",2005,0, 1343,High-abstraction level complexity analysis and memory architecture simulations of multimedia algorithms,"An appropriate complexity analysis stage is the first and fundamental step for any methodology aiming at the implementation of today's (complex) multimedia algorithms. Such a stage may have different final implementation goals such as defining a new architecture dedicated to the specific multimedia standard under study, or defining an optimal instruction set for a selected processor architecture, or to guide the software optimization process in terms of control-flow and data-flow optimization targeting a specific architecture. The complexity of nowadays multimedia standards, in terms of number of lines of codes and cross-relations among processing algorithms that are activated by specific input signals, goes far beyond what the designer can reasonably grasp from the """"pencil and paper"""" analysis of the (software) specifications. Moreover, depending on the implementation goal different measures and metrics are required at different steps of the implementation methodology or design flow. The process of extracting the desired measures needs to be supported by appropriate automatic tools, since code rewriting, at each design stage, may result resource consuming and error prone. This paper reviews the state of the art of complexity analysis methodologies oriented to the design of multimedia systems and presents an integrated tool for automatic analysis capable of producing complexity results based on rich and customizable metrics. The tool is based on a C virtual machine that allows extracting from any C program execution the operations and data-flow information, according to the defined metrics. The tool capabilities include the simulation of virtual memory architectures. This paper shows some examples of complexity analysis results that can be yielded with the tool and presents how the tools can be used at different stages of implementation methodologies.",2005,0, 1344,Opportunistic file transfer over a fading channel under energy and delay constraints,"We consider transmission control (rate and power) strategies for transferring a fixed-size file (finite number of bits) over fading channels under constraints on both transmit energy and transmission delay. The goal is to maximize the probability of successfully transferring the entire file over a time-varying wireless channel modeled as a finite-state Markov process. 
We study two implementations regarding the delay constraints: an average delay constraint and a strict delay constraint. We also investigate the performance degradation caused by the imperfect (delayed or erroneous) channel knowledge. The resulting optimal policies are shown to be a function of the channel-state information (CSI), the residual battery energy, and the number of residual information bits in the transmit buffer. It is observed that the probability of successful file transfer increases significantly when the CSI is exploited opportunistically. When the perfect instantaneous CSI is available at the transmitter, the faster channel variations increase the success probability under delay constraints. In addition, when considering the power expenditure in the pilot for channel estimation, the optimal policy shows that the transmitter should use the pilot only if there is sufficient energy left for packet transfer; otherwise, a channel-independent policy should be used.",2005,0, 1345,Design Aspects for Wide-Area Monitoring and Control Systems,"This paper discusses the basic design and special applications of wide-area monitoring and control systems, which complement classical protection systems and Supervisory Control and Data Acquisition/Energy Management System applications. Systemwide installed phasor measurement units send their measured data to a central computer, where snapshots of the dynamic system behavior are made available online. This new quality of system information opens up a wide range of new applications to assess and actively maintain system's stability in case of voltage, angle or frequency instability, thermal overload, and oscillations. Recent developed algorithms and their design for these application areas are introduced. With practical examples, the benefits in terms of system security are shown.",2005,0, 1346,Managing a relational database with intelligent agents,"A prototype relational database system was developed that has indexing capability, which threads into data acquisition and analysis programs used by a wide range of researchers. To streamline the user interface and table design, free-formatted table entries were used as descriptors for experiments. This approach potentially could increase data entry errors, compromising system index and retrieval capabilities. A methodology of integrating intelligent agents with the relational database was developed to cleanse and improve the data quality for search and retrieval. An intelligent agent was designed using JACKTM (Agent Oriented Software Group) and integrated with an Oracle-based relational database. The system was tested by triggering agent corrective measures and was found to improve the quality of the data entries. Wider testing protocols and metrics for assessing its performance are subjects for future studies. This methodology for designing intelligent-based database systems should be useful in developing robust large-scale database systems.",2005,0, 1347,Load balancing of services with server initiated connections,"The growth of on-line Internet services using wireless mobile handsets has increased the demand for scalable and dependable wireless services. The systems hosting such services face high quality-of-service (QoS) requirements in terms of quick response time and high availability. With growing traffic and loads using wireless applications, the servers and networks hosting these applications need to handle larger loads. 
Hence, there is a need to host such services on multiple servers to distribute the processing and communications tasks and balance the load across various similar entities. Load balancing is especially important for servers facilitating hosting of various wireless applications where it is difficult to predict the load delivered to a server. A load-balancer (LB) is a node that accepts all requests from external entities and directs them to internal nodes for processing based on their processing capabilities and current load patterns. We present the problem of load balancing among multiple servers, each having server initiated connections with other network entities. Challenges involved in balancing loads arising from such connections are presented and some practical solutions are proposed. As a case study, the architectures of load balancing schemes on a set of SMS (short message service) gateway servers is presented, along with deployment strategies. Performance and scalability issues are also highlighted for different possible solutions.",2005,0, 1348,Design and evaluation of hybrid fault-detection systems,"As chip densities and clock rates increase, processors are becoming more susceptible to transient faults that can affect program correctness. Up to now, system designers have primarily considered hardware-only and software-only fault-detection mechanisms to identify and mitigate the deleterious effects of transient faults. These two fault-detection systems, however, are extremes in the design space, representing sharp trade-offs between hardware cost, reliability, and performance. In this paper, we identify hybrid hardware/software fault-detection mechanisms as promising alternatives to hardware-only and software-only systems. These hybrid systems offer designers more options to fit their reliability needs within their hardware and performance budgets. We propose and evaluate CRAFT, a suite of three such hybrid techniques, to illustrate the potential of the hybrid approach. For fair, quantitative comparisons among hardware, software, and hybrid systems, we introduce a new metric, mean work to failure, which is able to compare systems for which machine instructions do not represent a constant unit of work. Additionally, we present a new simulation framework which rapidly assesses reliability and does not depend on manual identification of failure modes. Our evaluation illustrates that CRAFT, and hybrid techniques in general, offer attractive options in the fault-detection design space.",2005,0, 1349,Modeling and analysis of non-functional requirements as aspects in a UML based architecture design,"The problem of effectively designing and analyzing software system to meet its nonfunctional requirements such as performance, security, and adaptability is critical to the system's success. The significant benefits of such work include detecting and removing defects earlier, reducing development time and cost while improving the quality. The formal design analysis framework (FDAF) is an aspect-oriented approach that supports the design and analysis of non-functional requirements for distributed, real-time systems. In the FDAF, nonfunctional requirements are defined as reusable aspects in the repository and the conventional UML has been extended to support the design of these aspects. FDAF supports the automated translation of extended, aspect-oriented UML designs into existing formal notations, leveraging an extensive body of formal methods work. 
In this paper, the design and analysis of response time performance aspect is described. An example system, the ATM/banking system has been used to illustrate this process.",2005,0, 1350,A software implementation of a genetic algorithm based approach to network intrusion detection,"With the rapid expansion of Internet in recent years, computer systems are facing increased number of security threats. Despite numerous technological innovations for information assurance, it is still very difficult to protect computer systems. Therefore, unwanted intrusions take place when the actual software systems are running. Different soft computing based approaches have been proposed to detect computer network attacks. This paper presents a genetic algorithm (GA) based approach to network intrusion detection, and the software implementation of the approach. The genetic algorithm is employed to derive a set of classification rules from network audit data, and the support-confidence framework is utilized as fitness function to judge the quality of each rule. The generated rules are then used to detect or classify network intrusions in a real-time environment. Unlike most existing GA-based approaches, because of the simple representation of rules and the effective fitness function, the proposed method is easier to implement while providing the flexibility to either generally detect network intrusions or precisely classify the types of attacks. Experimental results show the achievement of acceptable detection rates based on benchmark DARPA data sets on intrusions, while no other complementary techniques or relevant heuristics are applied.",2005,0, 1351,Parallel processors and an approach to the development of inference engine,"The reliable and fault tolerant computers are key to the success to aerospace, and communication industries where failures of the system can cause a significant economic impact and loss of life. Designing a reliable digital system, and detecting and repairing the faults are challenging tasks in order for the digital system to operate without failures for a given period of time. The paper presents a new and systematic software engineering approach of performing fault diagnosis of digital systems, which have employed multiple processors. The fault diagnosis model is based on the classic PMC model to generate data obtained on the basis of test results performed by the processors. The PMC model poses a tremendous challenge to the user in doing fault analysis on the basis of test results performed by the processors. This paper will perform one fault model for developing software. The effort has been made to preserve the necessary and sufficient.",2005,0, 1352,Profiling deployed software: assessing strategies and testing opportunities,"An understanding of how software is employed in the field can yield many opportunities for quality improvements. Profiling released software can provide such an understanding. However, profiling released software is difficult due to the potentially large number of deployed sites that must be profiled, the transparency requirements at a user's site, and the remote data collection and deployment management process. Researchers have recently proposed various approaches to tap into the opportunities offered by profiling deployed systems and overcome those challenges. Initial studies have illustrated the application of these approaches and have shown their feasibility. 
Still, the proposed approaches, and the tradeoffs between overhead, accuracy, and potential benefits for the testing activity have been barely quantified. This paper aims to overcome those limitations. Our analysis of 1,200 user sessions on a 155 KLOC deployed system substantiates the ability of field data to support test suite improvements, assesses the efficiency of profiling techniques for released software, and the effectiveness of testing efforts that leverage profiled field data.",2005,0, 1353,Predicting the location and number of faults in large software systems,"Advance knowledge of which files in the next release of a large software system are most likely to contain the largest numbers of faults can be a very valuable asset. To accomplish this, a negative binomial regression model has been developed and used to predict the expected number of faults in each file of the next release of a system. The predictions are based on the code of the file in the current release, and fault and modification history of the file from previous releases. The model has been applied to two large industrial systems, one with a history of 17 consecutive quarterly releases over 4 years, and the other with nine releases over 2 years. The predictions were quite accurate: for each release of the two systems, the 20 percent of the files with the highest predicted number of faults contained between 71 percent and 92 percent of the faults that were actually detected, with the overall average being 83 percent. The same model was also used to predict which files of the first system were likely to have the highest fault densities (faults per KLOC). In this case, the 20 percent of the files with the highest predicted fault densities contained an average of 62 percent of the system's detected faults. However, the identified files contained a much smaller percentage of the code mass than the files selected to maximize the numbers of faults. The model was also used to make predictions from a much smaller input set that only contained fault data from integration testing and later. The prediction was again very accurate, identifying files that contained from 71 percent to 93 percent of the faults, with the average being 84 percent. Finally, a highly simplified version of the predictor selected files containing, on average, 73 percent and 74 percent of the faults for the two systems.",2005,1, 1354,Evaluation of product reusability based on a technical and economic model: a case study of televisions,"In the field of sustainable manufacturing, reusing of old products or components is considered as the most environmentally friendly strategy among all other strategies. However, the decision of reusing old components of a used product confronts many uncertainties such as the quality level of the used components and the economic aspect of reusing them compared to producing a new component. This paper presents an integrated technical and economic model to evaluate the reusability of products or components. The model introduces some new parameters, such as product value and product gain, to assist the decision between reuse, remanufacture or disposal. In order to handle uncertainties, a Monte Carlo simulation using @RiskTM is utilized. The results show that the model is capable to assess the potential reusability of used products, while the use of simulation significantly increases the function of the model in addressing uncertainties. 
A case study of televisions is used to demonstrate the applicability of the model using extensive time-to-failure data for the major parts of a television set. Furthermore, a direction of future work is outlined and briefly discussed.",2005,0, 1355,Flexible Consistency for Wide Area Peer Replication,"The lack of a flexible consistency management solution hinders P2P implementation of applications involving updates, such as read-write file sharing, directory services, online auctions and wide area collaboration. Managing mutable shared data in a P2P setting requires a consistency solution that can operate efficiently over variable-quality failure-prone networks, support pervasive replication for scaling, and give peers autonomy to tune consistency to their sharing needs and resource constraints. Existing solutions lack one or more of these features. In this paper, we describe a new consistency model for P2P sharing of mutable data called composable consistency, and outline its implementation in a wide area middleware file service called Swarm. Composable consistency lets applications compose consistency semantics appropriate for their sharing needs by combining a small set of primitive options. Swarm implements these options efficiently to support scalable, pervasive, failure-resilient, wide-area replication behind a simple yet flexible interface. Two applications are presented to demonstrate the expressive power and effectiveness of composable consistency: a wide area file system that outperforms Coda in providing close-to-open consistency over WANs, and a replicated BerkeleyDB database that reaps order-of-magnitude performance gains by relaxing consistency for queries and updates.",2005,0, 1356,The virtues of assessing software reliability early,"Software reliability is one of the few software quality attributes with a sound mathematical definition: the probability of a software failure's occurrence within a given period and under specific use conditions. By this definition, reliability is a strictly operational quality attribute. A reliability prediction method that integrates quality information from such sources as architectural system descriptions, use scenarios, system deployment diagrams, and module testing lets managers identify problem areas early and make any necessary organizational adjustments.",2005,0, 1357,A framework for applying inventory control to capacity management for utility computing,"A key concern in utility computing is managing capacity so that application service providers (ASPs) and computing utilities (CUs) operate in a cost effective way. To this end, we propose a framework for applying inventory control to capacity management for utility computing. The framework consists of: conceptual foundations (e.g., establishing connections between concepts in utility computing and those in inventory control); problem formulations (e.g., what factors should be considered and how they affect computational complexity); and quality of service (QoS) forecasting, which is predicting the future effect on QoS of ASP and CU actions taken in the current period (a critical consideration in searching the space of possible solutions).",2005,0, 1358,A tariff model to charge IP services with guaranteed quality: effect of users' demand in a case study,"In this paper, we consider a per-call, usage-based tariff model to charge for IP services with guaranteed quality.
This model is based on the virtual delay, which is a quality of service (QoS) index that describes an improved IP service provided by a network domain. We show how to compute the virtual delay, and how to make it dependent on the service demand. Then, we demonstrate the effectiveness of our tariff model to tune revenues, blocking probability, and resource utilization in a meaningful application scenario. Our goal is to give some directions for network resource dimensioning and pricing purposes, which depend on the service demand.",2005,0, 1359,QoS-aware service composition in large scale multi-domain networks,"Next generation networks are envisioned to support dynamic and customizable service compositions at Internet scale. To facilitate the communication between distributed software components, on-demand and QoS-aware network service composition across large scale networks emerges as a key research challenge. This paper presents a fast QoS-aware service composition algorithm for selecting a set of interconnected domains with specific service classes. We further show how such algorithm can be used to support network adaptation and service mobility. In simulation studies performed on large scale networks, the algorithm exhibits very high probability of finding the optimal solution within short execution time. In addition, we present a distributed service composition framework utilizing this algorithm.",2005,0, 1360,On a software-based self-test methodology and its application,"Software-based self-test (SBST) was originally proposed for cost reduction in SOC test environment. Previous studies have focused on using SBST for screening logic defects. SBST is functional-based and hence, achieving a high full-chip logic defect coverage can be a challenge. This raises the question of SBST's applicability in practice. In this paper, we investigate a particular SBST methodology and study its potential applications. We conclude that the SBST methodology can be very useful for producing speed binning tests. To demonstrate the advantage of using SBST in at-speed functional testing, we develop a SBST framework and apply it to an open source microprocessor core, named OpenRISC 1200. A delay path extraction methodology is proposed in conjunction with the SBST framework. The experimental results demonstrate that our SBST can produce tests for a high percentage of extracted delay paths of which less than half of them would likely be detected through traditional functional test patterns. Moreover, the SBST tests can exercise the functional worst-case delays which could not be reached by even 1M of traditional verification test patterns. The effectiveness of our SBST and its current limitations are explained through these experimental findings.",2005,0, 1361,Developing and assuring trustworthy Web services,"Web services are emerging technologies that are changing the way we develop and use computer systems and software. Current Web services testing techniques are unable to assure the desired level of trustworthiness, which presents a barrier to WS applications in mission and business critical environments. This paper presents a framework that assures the trustworthiness of Web services. New assurance techniques are developed within the framework, including specification verification via completeness and consistency checking, specification refinement, distributed Web services development, test case generation, and automated Web services testing. 
Traditional test case generation methods only generate positive test cases that verify the functionality of software. The Swiss cheese test case generation method proposed in this paper is designed to perform both positive and negative testing that also reveal the vulnerability of Web services. This integrated development process is implemented in a case study. The experimental evaluation demonstrates the effectiveness of this approach. It also reveals that the Swiss cheese negative testing detects even more faults than positive testing and thus significantly reduces the vulnerability of Web services.",2005,0, 1362,Effect of preventive rejuvenation in communication network system with burst arrival,"Long running software systems are known to experience an aging phenomenon called software aging, one in which the accumulation of errors during the execution of software leads to performance degradation and eventually results in failure. To counteract this phenomenon a proactive fault management approach, called software rejuvenation, is particularly useful. It essentially involves gracefully terminating an application or a system and restarting it in a clean internal state. In this paper, we perform the dependability analysis of a client/server software system with rejuvenation under the assumption that the requests arrive according to the Markov modulated Poisson process. Three dependability measures, steady-state availability, loss probability of requests and mean response time on tasks, are derived through the hidden Markovian analysis based on the time-based software rejuvenation scheme. In numerical examples, we investigate the sensitivity of some model parameters to the dependability measures.",2005,0, 1363,A QoS-enable failure detection framework for J2EE application server,"As a basic reliability guarantee technology in distributed systems, failure detection provides the ability of timely detecting the liveliness of runtime systems. Effective failure detection is very important to J2EE application server (JAS), the leading middleware in Web computing environment, and it also needs to meet the requirements of reconfiguration, flexibility and adaptability. Based on the QoS (quality of service) specification of failure detector, this paper presents a QoS-enable failure detection framework for JAS, which satisfies the requirements of dynamically adjusting qualities and flexible integration of failure detectors. The work has been implemented in OnceAS application server that is developed by Institute of Software, Chinese Academy of Sciences. The experiments show that the framework can provide good QoS of failure detection in JAS.",2005,0, 1364,Digital gas fields produce real time scheduling based on intelligent autonomous decentralized system,"In the paper, we focus on exploring to construct a novel digital gas fields produce real-time scheduling system, which is based on the theory of intelligent autonomous decentralized system (IADS). The system combines the practical demand and the characteristics of the gas fields, and it aims at dealing with the real-time property, dynamic and complexity during gas fields produce scheduling. Besides embodying on-line intelligent expansion, intelligent fault tolerance and on-line intelligent maintenance of IADS particular properties, the scheme adequately attaches importance to the flexibility. 
The model & method based on intelligent information pull push (IIPP) and intelligent management (IM) is applied to the system, which helps to improve the performance of the scheduling system. In accordance with the current requirements of gas field production scheduling, a concrete solution is put forward. The related system architecture and software structure is presented. An effective method of addressing the real-time property was developed for use in scheduling of gas field production. We ran some simulation experiments, and the results demonstrate the validity of the system. At last, we predict its promising research trend in future gas field applications.",2005,0, 1365,A comprehensive model for software rejuvenation,"Recently, the phenomenon of software aging, one in which the state of the software system degrades with time, has been reported. This phenomenon, which may eventually lead to system performance degradation and/or crash/hang failure, is the result of exhaustion of operating system resources, data corruption, and numerical error accumulation. To counteract software aging, a technique called software rejuvenation has been proposed, which essentially involves occasionally terminating an application or a system, cleaning its internal state and/or its environment, and restarting it. Since rejuvenation incurs an overhead, an important research issue is to determine optimal times to initiate this action. In this paper, we first describe how to include faults attributed to software aging in the framework of Gray's software fault classification (deterministic and transient), and study the treatment and recovery strategies for each of the fault classes. We then construct a semi-Markov reward model based on workload and resource usage data collected from the UNIX operating system. We identify different workload states using statistical cluster analysis, and estimate transition probabilities and sojourn time distributions from the data. Corresponding to each resource, a reward function is then defined for the model based on the rate of resource depletion in each state. The model is then solved to obtain estimated times to exhaustion for each resource. The results from the semi-Markov reward model are then fed into a higher-level availability model that accounts for failure followed by reactive recovery, as well as proactive recovery. This comprehensive model is then used to derive optimal rejuvenation schedules that maximize availability or minimize downtime cost.",2005,0, 1366,Embedding policy rules for software-based systems in a requirements context,"Policy rules define what behavior is desired in a software-based system, but they do not describe the corresponding action and event sequences that actually ""produce"" desired (""legal"") or undesired (""illegal"") behavior. Therefore, policy rules alone are not sufficient to model every (behavioral) aspect of an information system. In other words, like requirements, policies only exist in context, and a policy rule set can only be assessed and sensibly interpreted with adequate knowledge of its embedding context. Scenarios and goals are artifacts used in requirements engineering and system design to model different facets of software systems. With respect to policy rules, scenarios are well suited to define how these rules are embedded into a specific environment. A goal is an objective that the system under consideration should or must achieve.
Thus, the control objectives of a system must be reflected in the policy rules that actually govern a system's behavior.",2005,0, 1367,Project overlapping and its influence on the product quality,"Time to market, quality and cost are the three most important factors when developing software. In order to achieve and retain a leading position in the market, developers are forced to produce more complex functionalities, much faster and more frequently. In such conditions, it is hard to keep a high quality level and low cost. The article focuses on the reliability aspect of quality as one of the most important quality factors to the customer. Special attention is devoted to the reliability of the software product being developed in project overlapping conditions. The Weibull reliability growth model for predicting the reliability of a product during the development process is adapted and applied to historical data of a sequence of overlapping projects. A few useful tips on reliability modeling for project management and planning in project overlapping conditions are presented.",2005,0, 1368,"""De-Randomizing"" congestion losses to improve TCP performance over wired-wireless networks","Currently, a TCP sender considers all losses as congestion signals and reacts to them by throttling its sending rate. With the Internet becoming more heterogeneous with more and more wireless error-prone links, a TCP connection may unduly throttle its sending rate and experience poor performance over paths experiencing random losses unrelated to congestion. The problem of distinguishing congestion losses from random losses is particularly hard when congestion is light: congestion losses themselves appear to be random. The key idea is to ""de-randomize"" congestion losses. This paper proposes a simple biased queue management scheme that ""de-randomizes"" congestion losses and enables a TCP receiver to diagnose accurately the cause of a loss and inform the TCP sender to react appropriately. Bounds on the accuracy of distinguishing wireless losses and congestion losses are analytically established and validated through simulations. Congestion losses are identified with an accuracy higher than 95% while wireless losses are identified with an accuracy higher than 75%. A closed form is derived for the achievable improvement by TCP endowed with a discriminator with a given accuracy. Simulations confirm this closed form. TCP-Casablanca, a TCP-Newreno endowed with the proposed discriminator at the receiver, yields through simulations an improvement of more than 100% on paths with low levels of congestion and about 1% random wireless packet loss rates. TCP-Ifrane, a sender-based TCP-Casablanca, yields encouraging performance improvement.",2005,0, 1369,JUnit: unit testing and coding in tandem,"Detecting and correcting defects either during or close to the phase where they originate is key for any fast, cost-effective software development. Unit testing, performed by the code writer, is still the most popular technique. The author has summarized the experiences around JUnit, probably the most popular OSS tool for Java unit testing. JUnit is an open source Java library. Kent Beck and Erich Gamma created JUnit, deriving it from Beck's Smalltalk testing framework. Easy integration of coding and unit testing lies at the heart of Extreme Programming and the agile software development movement.
In agile programming, software is constructed by implementing functionality incrementally, in short spurts of activity.",2005,0, 1370,Toward understanding the rhetoric of small source code changes,"Understanding the impact of software changes has been a challenge since software systems were first developed. With the increasing size and complexity of systems, this problem has become more difficult. There are many ways to identify the impact of changes on the system from the plethora of software artifacts produced during development, maintenance, and evolution. We present the analysis of the software development process using change and defect history data. Specifically, we address the problem of small changes by focusing on the properties of the changes rather than the properties of the code itself. Our study reveals that 1) there is less than 4 percent probability that a one-line change introduces a fault in the code, 2) nearly 10 percent of all changes made during the maintenance of the software under consideration were one-line changes, 3) nearly 50 percent of the changes were small changes, 4) nearly 40 percent of changes to fix faults resulted in further faults, 5) the phenomena of change differs for additions, deletions, and modifications as well as for the number of lines affected, and 6) deletions of up to 10 lines did not cause faults.",2005,0, 1371,Digital system for detection and classification of electrical events,"The paper describes an algorithm to detect and classify electrical events related to power quality. The events detection is based on monitoring the statistical characteristics of the energy of the error signal, which is defined as the difference between the monitored waveform and a sinusoidal wave generated with the same magnitude, frequency, and phase as the fundamental sinusoidal component. The novel feature is event recognition based on a neural network that uses the error signal as input. Multi-rate techniques are also employed to improve the system operation for on-line applications. Software tests were performed showing the good performance of the system.",2005,0, 1372,On-Line Process Monitoring and Fault Isolation Using PCA,"This paper describes a real-time on-line process monitoring and fault isolation approach using PCA (principal component analysis). It also presents the software implementation architecture using an OPC (OLE for process control) compliant framework, which enables a seamless integration with the real plant and DCS. The proposed approach and architecture are implemented to monitor a refinery process simulation that produces cyclohexane using benzene and hydrogen. The result shows that both sensor faults and process faults can be detected on-line and the dominating process variables may be isolated",2005,0, 1373,Bi-layer segmentation of binocular stereo video,"This paper describes two algorithms capable of real-time segmentation of foreground from background layers in stereo video sequences. Automatic separation of layers from colour/contrast or from stereo alone is known to be error-prone. Here, colour, contrast and stereo matching information are fused to infer layers accurately and efficiently. The first algorithm, layered dynamic programming (LDP), solves stereo in an extended 6-state space that represents both foreground/background layers and occluded regions. The stereo-match likelihood is then fused with a contrast-sensitive colour model that is learned on the fly, and stereo disparities are obtained by dynamic programming. 
The second algorithm, layered graph cut (LGC), does not directly solve stereo. Instead the stereo match likelihood is marginalised over foreground and background hypotheses, and fused with a contrast-sensitive colour model like the one used in LDP. Segmentation is solved efficiently by ternary graph cut. Both algorithms are evaluated with respect to ground truth data and found to have similar performance, substantially better than stereo or colour/contrast alone. However, their characteristics with respect to computational efficiency are rather different. The algorithms are demonstrated in the application of background substitution and shown to give good quality composite video output.",2005,0, 1374,Assured reconfiguration of fail-stop systems,"Hardware dependability improvements have led to a situation in which it is sometimes unnecessary to employ extensive hardware replication to mask hardware faults. Expanding upon our previous work on assured reconfiguration for single processes and building upon the fail-stop model of processor behavior, we define a framework that provides assured reconfiguration for concurrent software. This framework can provide high dependability with lower space, power, and weight requirements than systems that replicate hardware to mask all anticipated faults. We base our assurance argument on a proof structure that extends the proofs for the single-application case and includes the fail-stop model of processor behavior. To assess the feasibility of instantiating our framework, we have implemented a hypothetical avionics system that is representative of what might be found on an unmanned aerial vehicle.",2005,0, 1375,On-line detection of control-flow errors in SoCs by means of an infrastructure IP core,"In sub-micron technology circuits high integration levels coupled with the increased sensitivity to soft errors even at ground level make the task of guaranteeing systems' dependability more difficult than ever. In this paper we present a new approach to detect control-flow errors by exploiting a low-cost infrastructure intellectual property (I-IP) core that works in cooperation with software-based techniques. The proposed approach is particularly suited when the system to be hardened is implemented as a system-on-chip (SoC), since the I-IP can be added easily and it is independent on the application. Experimental results are reported showing the effectiveness of the proposed approach.",2005,0, 1376,Assessing the performance of erasure codes in the wide-area,"The problem of efficiently retrieving a file that has been broken into blocks and distributed across the wide-area pervades applications that utilize grid, peer-to-peer, and distributed file systems. While the use of erasure codes to improve the fault-tolerance and performance of wide-area file systems has been explored, there has been little work that assesses the performance and quantifies the impact of modifying various parameters. This paper performs such an assessment. We modify our previously defined framework for studying replication in the wide-area to include both Reed-Solomon and low-density parity-check (LDPC) erasure codes. We then use this framework to compare Reed-Solomon and LDPC erasure codes in three wide-area, distributed settings.
We conclude that although LDPC codes have an advantage over Reed-Solomon codes in terms of decoding cost, this advantage does not always translate to the best overall performance in wide-area storage situations.",2005,0, 1377,Design time reliability analysis of distributed fault tolerance algorithms,"Designing a distributed fault tolerance algorithm requires careful analysis of both fault models and diagnosis strategies. A system will fail if there are too many active faults, especially active Byzantine faults. But, a system will also fail if overly aggressive convictions leave inadequate redundancy. For high reliability, an algorithm's hybrid fault model and diagnosis strategy must be tuned to the types and rates of faults expected in the real world. We examine this balancing problem for two common types of distributed algorithms: clock synchronization and group membership. We show the importance of choosing a hybrid fault model appropriate for the physical faults expected by considering two clock synchronization algorithms. Three group membership service diagnosis strategies are used to demonstrate the benefit of discriminating between permanent and transient faults. In most cases, the probability of failure is dominated by one fault type. By identifying the dominant cause of failure, one can tailor an algorithm appropriately at design time, yielding significant reliability gain.",2005,0, 1378,On a method for mending time to failure distributions,"Many software reliability growth models assume that the time to next failure may be infinite; i.e., there is a chance that no failure will occur at all. For most software products this is too good to be true even after the testing phase. Moreover, if a non-zero probability is assigned to an infinite time to failure, metrics like the mean time to failure do not exist. In this paper, we try to answer several questions: Under what condition does a model permit an infinite time to next failure? Why do all non-homogeneous Poisson process (NHPP) models of the finite failures category share this property? And is there any transformation mending the time to failure distributions? Indeed, such a transformation exists; it leads to a new family of NHPP models. We also show how the distribution function of the time to first failure can be used for unifying finite failures and infinite failures NHPP models.",2005,0, 1379,Experimental dependability evaluation of a fail-bounded jet engine control system for unmanned aerial vehicles,"This paper presents an experimental evaluation of a prototype jet engine controller intended for unmanned aerial vehicles (UAVs). The controller is implemented with commercial off-the-shelf (COTS) hardware based on the Motorola MPC565 microcontroller. We investigate the impact of single event upsets (SEUs) by injecting single bit-flip faults into main memory and CPU registers via the Nexus on-chip debug interface of the MPC565. To avoid the injection of non-effective faults, automated pre-injection analysis of the assembly code was utilized. Due to the inherent robustness of the software, most injected faults were still non-effective (69.4%) or caused bounded failures having only minor effect on the jet engine (7.0%), while 20.1% of the errors were detected by hardware exceptions and 1.9% were detected by executable assertions in the software. The remaining 1.6% is classified as critical failures. 
A majority of the critical failures were caused by erroneous Booleans or type conversions involving Booleans.",2005,0, 1380,A framework for SOFL-based program review,"Program review is a practical and cost-effective method for detecting errors in program code. This paper describes our recent work aiming to provide support for revealing errors which usually arise from inappropriate implementations of desired specifications. In our approach, the SOFL specification language is employed for specifying software systems. We provide a framework that guides reviewers to compare a code with its specification for effective detection of potential defects.",2005,0, 1381,Comparing fault-proneness estimation models,"Over the last years, software quality has become one of the most important requirements in the development of systems. Fault-proneness estimation could play a key role in quality control of software products. In this area, much effort has been spent in defining metrics and identifying models for system assessment. Using these metrics to assess which parts of the system are more fault-prone is of primary importance. This paper reports a research study begun with the analysis of more than 100 metrics and aimed at producing suitable models for fault-proneness estimation and prediction of software modules/files. The objective has been to find a compromise between the fault-proneness estimation rate and the size of the estimation model in terms of number of metrics used in the model itself. To this end, two different methodologies have been used, compared, and some synergies exploited. The methodologies were the logistic regression and the discriminant analyses. The corresponding models produced for fault-proneness estimation and prediction have been based on metrics addressing different aspects of computer programming. The comparison has produced satisfactory results in terms of fault-proneness prediction. The produced models have been cross validated by using data sets derived from source codes provided by two application scenarios.",2005,0, 1382,Inconsistency measurement of software requirements specifications: an ontology-based approach,"Management of requirements inconsistency is key to the development of complex trustworthy software systems, and precise measurement is a precondition for the management of requirements inconsistency properly. But at present, although there is a lot of work on the detection of requirements inconsistency, most of them are limited in treating requirements inconsistency according to heuristic rules, we still lack a promising method for handling requirements inconsistency properly. Based on an abstract requirements refinement process model, this paper takes domain ontology as infrastructure for the refinement of software requirements, the aim of which is to get requirements descriptions that are comparable. Thus we can measure requirements inconsistency based on tangent plane of requirements refinement tree, after we have detected inconsistent relations of leaf nodes at semantic level.",2005,0, 1383,"Extended abstract: requirements modeling within iterative, incremental processes","Requirements modeling is an established method for detecting defects in requirement specifications. Although many companies have successfully incorporated requirements modeling in their software processes, other companies face technology adoption problems: methods appear incompatible with processes and technologies in use, and with the attitudes of engineers.
We approach this problem by adaptation of standard modeling know-how to existing practices and attitudes in certain companies. By a requirements modeling process that respects the constraints of iterative, incremental, use case driven development, and that uses a modeling language with a low learning barrier (for engineers), we hope to make companies that use the unified process amenable to requirements modeling.",2005,0, 1384,Modeling the effects of 1 MeV electron radiation in gallium-arsenide solar cells using SILVACO virtual wafer fabrication software,The ATLAS device simulator from Silvaco International has the potential for predicting the effects of electron radiation in solar cells by modeling material defects. A GaAs solar cell was simulated in ATLAS and compared to an actual cell with radiation defects identified using deep level transient spectroscopy techniques (DLTS). The solar cells were compared for various fluence levels of 1 MeV electron radiation and showed an average of less than three percent difference between experimental and simulated cell output characteristics. These results demonstrate that ATLAS software can be a viable tool for predicting solar cell degradation due to electron radiation.,2005,0, 1385,Hardware and software implementation of a travelling wave based protection relay,"Fault generated transient signals can undoubtedly be employed to achieve ultra-high-speeds in transmission line protection. Even though such transient based schemes can quickly detect a faulty state on the power system, they face several challenges. Firstly, they have inherent reliability problems and secondly, it is difficult to implement them as a real-time product due to their excessive demand for high speed signal acquisition and processing. This paper examines the theoretical aspects and design procedure of a reliable, high speed protection scheme based on fault generated travelling wave information.",2005,0, 1386,Discerning user-perceived media stream quality through application-layer measurements,"The design of access networks for proper support of multimedia applications requires an understanding of how the conditions of the underlying network (packet loss and delays, for instance) affect the performance of a media stream. In particular, network congestion can affect the user-perceived quality of a media stream. By choosing metrics that indicate and/or predict the quality ranking that a user would assign to a media stream, we can deduce the performance of a media stream without polling users directly. We describe a measurement mechanism utilizing objective measurements taken from a media player application that strongly correlate with user rankings of stream quality. Experimental results demonstrate the viability of the chosen metrics as predictors or indicators of user quality rankings, and suggest a new mechanism for evaluating the present and future quality of a media stream.",2005,0, 1387,Predicting the probability of change in object-oriented systems,"Of all merits of the object-oriented paradigm, flexibility is probably the most important in a world of constantly changing requirements and the most striking difference compared to previous approaches. However, it is rather difficult to quantify this aspect of quality: this paper describes a probabilistic approach to estimate the change proneness of an object-oriented design by evaluating the probability that each class of the system will be affected when new functionality is added or when existing functionality is modified.
It is obvious that when a system exhibits a large sensitivity to changes, the corresponding design quality is questionable. The extracted probabilities of change can be used to assist maintenance and to observe the evolution of stability through successive generations and identify a possible """"saturation"""" level beyond which any attempt to improve the design without major refactoring is impossible. The proposed model has been evaluated on two multiversion open source projects. The process has been fully automated by a Java program, while statistical analysis has proved improved correlation between the extracted probabilities and actual changes in each of the classes in comparison to a prediction model that relies simply on past data.",2005,0, 1388,Real-time detection and containment of network attacks using QoS regulation,"In this paper, we present a network measurement mechanism that can detect and mitigate attacks and anomalous traffic in real-time using QoS regulation. The detection method rapidly pursues the dynamics of the network on the basis of correlation properties of the network protocols. By observing the proportion occupied by each traffic protocol and correlating it to that of previous states of traffic, it can be possible to determine whether the current traffic is behaving normally. When abnormalities are detected, our mechanism allows aggregated resource regulation of each protocol's traffic. The trace-driven results show that the rate-based regulation of traffic characterized by protocol classes is a feasible vehicle for mitigating the impact of network attacks on end servers.",2005,0, 1389,Fault prediction of boilers with fuzzy mathematics and RBF neural network,"How to predict potential faults of a boiler in an efficient and scientific way is very important. A lot of comprehensive research has been done, and promising results have been obtained, especially regarding the application of intelligent software. Still there are a lot of problems to be studied. It combines fuzzy mathematics with RBF neural network in an intuitive and natural way. Thus a new method is proposed for the prediction of the potential faults of a coal-fired boiler. The new method traces the development trend of related operation and state variables. The new method has been tested on a simulation machine. And its predicted results were compared with those of traditional statistical results. It is found that the new method has a good performance.",2005,0, 1390,Redundancy concepts to increase transmission reliability in wireless industrial LANs,"Wireless LANs are an attractive networking technology for industrial applications. A major obstacle toward the fulfillment of hard real-time requirements is the error-prone behavior of wireless channels. A common approach to increase the probability of a message being transmitted successfully before a prescribed deadline is to use feedback from the receiver and subsequent retransmissions (automatic repeat request-ARQ-protocols). In this paper, three modifications to an ARQ protocol are investigated. As one of these modifications a specific transmit diversity scheme, called antenna redundancy, is introduced. The other modifications are error-correcting codes and the transmission of multiple copies of the same packet. In antenna redundancy the base station/access point has several antennas. The base station transmits on one antenna at a time, but whenever a retransmission is needed, the base station switches to another antenna.
The relative benefits of using FEC versus adding antennas versus sending multiple copies are investigated under different error conditions. One important result is that for independent Gilbert-Elliot channels between the base station antennas and the wireless station the antenna redundancy scheme effectively decreases the probability of missing a deadline, in a numerical example approximately an order of magnitude per additional antenna can be observed. As a second benefit, antenna redundancy decreases the number of transmission trials needed to transmit a message successfully, thus saving bandwidth.",2005,0, 1391,A method for studying partial discharges location and propagation within power transformer winding based on the structural data,"Power transformer inner insulation system is a very critical component. Its degradation may pose apparatus to fail while in service. On the other hand, experimental experiences prove that partial discharges are a major source of insulation failure in power transformers. If the deterioration of the insulation system caused by PD activity can be detected at an early stage, preventive maintenance measures may be taken. Because of the complex structure of the transformer, accurate PD location is difficult and is one of the challenges power utilities are faced with. This problem comes to be vital in open access systems. In this paper a theory for locating partial discharge and its propagation along the winding is proposed, which is based on structural data of a transformer. The lumped element winding model is constructed. Quasi-static condition is applied and each turn of the winding is considered as a segment. Then an algorithm is developed to use the constructed matrices for PD location. A software package in Visual Basic environment has been developed. This paper introduces the background theory and utilized techniques.",2005,0, 1392,Towards Autonomic Virtual Applications in the In-VIGO System,"Grid environments enable users to share nondedicated resources that lack performance guarantees. This paper describes the design of application-centric middleware components to automatically recover from failures and dynamically adapt to grid environments with changing resource availabilities, improving fault-tolerance and performance. The key components of the application-centric approach are a global per-application execution history and an autonomic component that tracks the performance of a job on a grid resource against predictions based on the application execution history, to guide rescheduling decisions. Performance models of unmodified applications built using their execution history are used to predict failure as well as poor performance. A prototype of the proposed approach, an autonomic virtual application manager (AVAM), has been implemented in the context of the In-VIGO grid environment and its effectiveness has been evaluated for applications that generate CPU-intensive jobs with relatively short execution times (ranging from tens of seconds to less than an hour) on resources with highly variable loads - a workload generated by typical educational usage scenarios of In-VIGO-like grid environments. A memory-based learning algorithm is used to build the performance models for CPU-intensive applications that are used to predict the need for rescheduling. 
Results show that In-VIGO jobs managed by the AVAM consistently meet their execution deadlines under varying load conditions and gracefully recover from unexpected failures",2005,0, 1393,Combining Visualization and Statistical Analysis to Improve Operator Confidence and Efficiency for Failure Detection and Localization,"Web applications suffer from software and configuration faults that lower their availability. Recovering from failure is dominated by the time interval between when these faults appear and when they are detected by site operators. We introduce a set of tools that augment the ability of operators to perceive the presence of failure: an automatic anomaly detector scours HTTP access logs to find changes in user behavior that are indicative of site failures, and a visualizer helps operators rapidly detect and diagnose problems. Visualization addresses a key question of autonomic computing of how to win operators' confidence so that new tools will be embraced. Evaluation performed using HTTP logs from Ebates.com demonstrates that these tools can enhance the detection of failure as well as shorten detection time. Our approach is application-generic and can be applied to any Web application without the need for instrumentation",2005,0, 1394,Mining Logs Files for Computing System Management,"With advancement in science and technology, computing systems become increasingly more difficult to monitor, manage and maintain. Traditional approaches to system management have been largely based on domain experts through a knowledge acquisition process to translate domain knowledge into operating rules and policies. This has been experienced as a cumbersome, labor intensive, and error prone process. There is thus a pressing need for automatic and efficient approaches to monitor and manage complex computing systems. A popular approach to system management is based on analyzing system log files. However, several new aspects of the system log data have been less emphasized in existing analysis methods and posed several challenges. The aspects include disparate formats and relatively short text messages in data reporting, asynchronous data collection, and temporal characteristics in data representation. First, a typical computing system contains different devices with different software components, possibly from different providers. These various components have multiple ways to report events, conditions, errors and alerts. The heterogeneity and inconsistency of log formats make it difficult to automate problem determination. To perform automated analysis, we need to categorize the text messages with disparate formats into common situations. Second, text messages in the log files are relatively short with a large vocabulary size. Third, each text message usually contains a timestamp. The temporal characteristics provide additional context information of the messages and can be used to facilitate data analysis. In this paper, we apply text mining to automatically categorize the messages into a set of common categories, and propose two approaches of incorporating temporal information to improve the categorization performance",2005,0, 1395,Pattern recognition based tools enabling autonomic computing.,"Fault detection is one of the important constituents of fault tolerance, which in turn defines the dependability of autonomic computing. In presented work several pattern recognition tools were investigated in application to early fault detection. 
The optimal margin classifier technique was utilized to detect the abnormal behavior of software processes. The comparison with the performance of the quadratic classifiers is reported. The optimal margin classifiers were also implemented to the fault detection in hardware components. The impulse parameter probing technique was introduced to mitigate intermittent and transient fault problems. The pattern recognition framework of analysis of responses to a controlled component perturbation yielded promising results",2005,0, 1396,An Active Method to Building Dynamic Dependency Model for Distributed Components,"Currently many research show that, in a sophisticated application system, the faults which are impossible to occur theoretically, may take place in practice. J2EE (Java 2 Platform, Enterprise Edition) distributed environment has been popularly applied to EAI (enterprise application integration). With the growth of the numbers of Jsp, Servlet and EJB components, for a specific J2EE application, it is difficult for administrators to locate the fundamental position of the faults, and delay recovering the faults. Dependency models provide the effective method to trace all possible sources of the faults from the problem vertices against the relationship edges. Bayesian network was presented in 1981 by R.Howard and J.Matheson. It has been successfully applied to fault diagnosis field. Bayesian networks provide a method to describe consequence information naturally. In this paper, we construct the dependency models of software components with the construction algorithm of Bayesian networks. Dependency models can be represented with Bayesian networks. The vertices in Bayesian networks corresponds to the vertices in dependency models, and the conditional probability expressed with the edges in Bayesian networks corresponds to the relative strength expressed with the edges in dependency models. Consequently, it is possible to develop a tool to analyze and recover the faults automatically, and be helpful to find fundamental reasons for various faults, based on dependent relations between the components in dependency models",2005,0, 1397,Introduction to the special session on secure implementations,This paper briefly introduces online testing and its evolution towards very sub micron technologies. How secure circuit designers and online testing experts collaboration can help detect online the occurrence of natural faults that may be used as a basis to counter fault-based attacks taking into account the particular needs of secure application.,2005,0, 1398,A software based online memory test for highly available systems,"In this paper we describe a software based in-system memory test, capable of testing system memory in both offline and online environments. A technique to transparently """"steal"""" a chunk of memory from the system for running tests and then inserting it back for normal application use is proposed. Implementation of the proposed methodology can significantly improve the system's ability to proactively detect and manage functional faults in memory. The solution does not impose any hardware requirements and therefore lends itself for easy deployment on all kinds of systems. 
An extension of the methodology described is expected to be applicable for in-system testing of other system components as well.",2005,0, 1399,How to characterize the problem of SEU in processors & representative errors observed on flight,"In this paper are first summarized representative examples of anomalies observed in systems operating on-board satellites as the consequence of the effects of radiation on integrated circuit, showing that single event upsets (SEU) are a major concern. An approach to predict the sensitivity to SEUs of a software application running on a processor-based architecture is then proposed. It is based on fault injection experiments allowing estimating the average rate of program dysfunctions per upset. This error rate, if combined with static cross-section figures obtained from radiation ground testing, provides an estimation of the target program error rate. The efficiency of this two-step approach was demonstrated by results obtained when applying it to various processors.",2005,0, 1400,Software based in-system memory test for highly available systems,"In this paper we describe a software based in-system memory test that is capable of testing system memory in both offline and online environments. A technique to transparently """"steal"""" a chunk of memory from the system for running tests and then inserting it back for normal application's use is proposed. Factors like system memory architecture that needs to be considered while adapting any conventional memory testing algorithm for in-system testing are also discussed. Implementation of the proposed techniques can significantly improve the system's ability to proactively detect and manage functional faults in memory. An extension of the methodology described is expected to be applicable for in-system testing of other system components (like processor) as well.",2005,0, 1401,Comparing high-change modules and modules with the highest measurement values in two large-scale open-source products,"Identifying change-prone modules can enable software developers to take focused preventive actions that can reduce maintenance costs and improve quality. Some researchers observed a correlation between change proneness and structural measures, such as size, coupling, cohesion, and inheritance measures. However, the modules with the highest measurement values were not found to be the most troublesome modules by some of our colleagues in industry, which was confirmed by our previous study of six large-scale industrial products. To obtain additional evidence, we identified and compared high-change modules and modules with the highest measurement values in two large-scale open-source products, Mozilla and OpenOffice, and we characterized the relationship between them. Contrary to common intuition, we found through formal hypothesis testing that the top modules in change-count rankings and the modules with the highest measurement values were different. In addition, we observed that high-change modules had fairly high places in measurement rankings, but not the highest places. The accumulated findings from these two open-source products, together with our previous similar findings for six closed-source products, should provide practitioners with additional guidance in identifying the change-prone modules.",2005,0, 1402,A simulation approach to structure-based software reliability analysis,"Structure-based techniques enable an analysis of the influence of individual components on the application reliability. 
In an effort to ensure analytical tractability, prevalent structure-based analysis techniques are based on assumptions which preclude the use of these techniques for reliability analysis during the testing and operational phases. In this paper, we develop simulation procedures to assess the impact of individual components on the reliability of an application in the presence of fault detection and repair strategies that may be employed during testing. We also develop simulation procedures to analyze the application reliability for various operational configurations. We illustrate the potential of simulation procedures using several examples. Based on the results of these examples, we provide novel insights into how testing and repair strategies can be tailored depending on the application structure to achieve the desired reliability in a cost-effective manner. We also discuss how the results could be used to explore alternative operational configurations of a software application taking into consideration the application structure so as to cause minimal interruption in the field.",2005,0, 1403,The impact of institutional forces on software metrics programs,"Software metrics programs are an important part of a software organization's productivity and quality initiatives as precursors to process-based improvement programs. Like other innovative practices, the implementation of metrics programs is prone to influences from the greater institutional environment the organization exists in. In this paper, we study the influence of both external and internal institutional forces on the assimilation of metrics programs in software organizations. We use previous case-based research in software metrics programs as well as prior work in institutional theory in proposing a model of metrics implementation. The theoretical model is tested on data collected through a survey from 214 metrics managers in defense-related and commercial software organizations. Our results show that external institutions, such as customers and competitors, and internal institutions, such as managers, directly influence the extent to which organizations change their internal work-processes around metrics programs. Additionally, the adaptation of work-processes leads to increased use of metrics programs in decision-making within the organization. Our research informs managers about the importance of management support and institutions in metrics programs adaptation. In addition, managers may note that the continued use of metrics information in decision-making is contingent on adapting the organization's work-processes around the metrics program. Without these investments in metrics program adaptation, the true business value in implementing metrics and software process improvement is not realized.",2005,0, 1404,A game model for selection of purchasing bids in consideration of fuzzy values,"A number of efficiency-based vendor selection and negotiation models have been developed to deal with multiple attributes including price, quality and delivery performance which are treated as important bid attributes. But some alternative vendor's preferences of attributes are difficult of quantitative analysis, such as trust, reliability, and courtesy of the vendor, which are considered to be crucial issues of recent research in vendor evaluation.
This paper proposes a buyer-seller game model that has distinct advantages over existing methods for bid selection and negotiation, the fuzzy indexes are used to evaluate those attributes which are difficult of quantitative analysis, we propose a new method that expands Talluri (2002) and Joe Zhu (2004)'s method and allows which the elements composing problems are given by fuzzy numerical values, An important outcome of assessing relative efficiencies within a group of decision making units (DMUs) in fuzzy data envelopment analysis is a set of virtual multipliers or weights accorded to each (input or output) factor taken into account. In this paper, by assessing upper bounds on factor weights and compacting the resulted intervals, a CSW is determined. Since resulted efficiencies by the proposed CSW are fuzzy numbers rather than crisp values, it is more informative for decision maker.",2005,0, 1405,Service quality of travel agents: the case of travel agents in China,"Travel agents in China have faced difficult times in recent years because of increasing customer demands and internal competition in the industry. A China Consumer Council report (2003) stated that complaints against travel agencies had increased by 10.6% for the year 2002/2003 as compared with the previous year. The purpose of the study was to assess customers' expectations and perceptions of service provided by travel agents, and to explore how the service factors derived from the factor analysis were related to overall customer satisfaction. The results showed that customers' perceptions of service quality fell short of their expectations, with the reliability dimension having the largest gap. Five factors were derived from the factor analysis of 25 service attributes, and the result of factor analysis showed that overall customer satisfaction was related to these five factors.",2005,0, 1406,A system for predicting the run-time behavior of Web services,"In a service oriented architecture requestors should be able to use the services that best fit their needs. In particular, for Web services it should be possible to fully exploit the advantages of dynamic binding. Up to now, no proposed solution allows the requesting agent to dynamically select the most """"convenient"""" service at invoke time. The reason is that currently the requesting agents do not compare the runtime behavior of different services. In this paper, we propose a system that provides and exploits predictions about the behavior of Web services, expressed in terms of availability, reliability and completion time. We also describe a first prototype (eUDDIr) of the specification. EUDDIr relies on a scalable agents-based monitoring architecture that collects data on Web services runtime activity. The computed predictions are used by requesting agents to implement an effective dynamic service selection. Our proposal is well suited whenever requestors do not wish to explicitly deal with QoS aspects, or in the case that provider agents have no convenience in building up the infrastructure for guaranteed QoS, at the same time aiming to provide services of good quality to their customers. 
Furthermore, the adoption of eUDDIr effectively improves the service requestors """"satisfaction"""" when they are involved in a Web services composition process.",2005,0, 1407,Automatic generation of software/hardware co-emulation interface for transaction-level communication,"This paper presents a methodology for generating interface of a co-emulation system where processor and emulator execute testbench and design unit, respectively while interacting with each other. To reduce the communication time between the processor and emulator, data transfers are performed in transaction level instead of signal level. To do this, transactor should be located near the DUT mapped on the hardware emulator. Consequently transactor is described in a synthesizable way. Moreover, the transactor design depends on both emulator system protocol and DUT protocol. Therefore, transactor description would not only be time-consuming but also error-prone task. Based on the layered architecture, we propose an automated procedure for generating co-emulation interface from platform-independent transactor. We have also discussed about the practical issues on multiple channel and clock skew problem.",2005,0, 1408,Optimized distributed delivery of continuous-media documents over unreliable communication links,"Video-on-demand (VoD) applications place very high requirements on the delivery medium. High-quality services should provide for a timely delivery of the data-stream to the clients plus a minimum of playback disturbances. The major contributions of this paper are that it proposes a multiserver, multi-installment (MSMI) solution approach (sending the document in several installments from each server) to the delivery problem and achieves a minimization of the client waiting time, also referred to as the access time (AT) or start-up latency in the literature. By using multiple spatially distributed servers, we are able to exploit slow connections that would otherwise prevent the deployment of video-on-demand-like services, to offer such services in an optimal manner. Additionally, the delivery and playback schedule that is computed by our approach is loss-aware in the sense that it is flexible enough to accommodate packet losses without interrupts. The mathematical framework presented covers both computation and optimization problems associated with the delivery schedule, offering a complete set of guidelines for designing MSMI VoD services. The optimizations presented include the ordering of the servers and determining the number of installments based on the packet-loss probabilities of the communication links. Our analysis guarantees the validity of a delivery schedule recommended by the system by providing a percentage of confidence for an uninterrupted playback at the client site. This, in a way, quantifies the degree of quality of service rendered by the system and the MSMI strategy proposed. The paper is concluded by a rigorous simulation study that showcases the substantial advantages of the proposed approach and explores how optimization of the schedule parameters affects performance.",2005,0, 1409,P-RnaPredict-a parallel evolutionary algorithm for RNA folding: effects of pseudorandom number quality,"This paper presents a fully parallel version of RnaPredict, a genetic algorithm (GA) for RNA secondary structure prediction. The research presented here builds on previous work and examines the impact of three different pseudorandom number generators (PRNGs) on the GA's performance. 
The three generators tested are the C standard library PRNG RAND, a parallelized multiplicative congruential generator (MCG), and a parallelized Mersenne Twister (MT). A fully parallel version of RnaPredict using the Message Passing Interface (MPI) was implemented on a 128-node Beowulf cluster. The PRNG comparison tests were performed with known structures whose sequences are 118, 122, 468, 543, and 556 nucleotides in length. The effects of the PRNGs are investigated and the predicted structures are compared to known structures. Results indicate that P-RnaPredict demonstrated good prediction accuracy, particularly so for shorter sequences.",2005,0, 1410,A study on the application of digital cameras to derive audio information from the TVs on trains,"Digital camera can help people take videos anytime with ease. For this reason, application of digital video camera's technology is the hottest topic right now. Due to the fact that the TV on the train is in a public place it can't give audio information. We were trying to find a way to get the audio information by second time (ST) detection video only. """"Second time"""" (ST) means the user uses their digital camera to take video from the original video. We propose a new way to use digital watermarking by embedding some pointer symbols into the source video, so passengers can use this kind of video to detect high quality audio in ST situations. In order to show other possible application for this process in the future, in this study we also use VB programming to design simulation software """"DDAS"""" (directly detection audio system). The software was designed to handle uncompressed AVI files. We tested this software using ST video. In the experiments, this software successfully detected audio information and audio file types covering AVI, MPEG, MIDI and WAV file type. According to the questionnaires from the users, DDAS system's output audio file was given a 3.8 MOS level which shows the system has enough audio embedded ability. From the questionnaires, we also found ST video's screen size affects the detected audio's MOS quality. We found the best MOS quality was located in 1:11:4/3 screen size. We also use this kind of technology for study and we found that using multimedia improves the student's grades and the grades about 6 average grades.",2005,0, 1411,How to produce better quality test software,"LabWindows/CVI is a popular C compiler for writing automated test equipment (ATE) test software. Since C was designed as a portable assembly language, it uses many low-level machine operations that tend to be error prone, even for the professional programmer. Test equipment engineers also tend to underestimate the effort required to write high-quality software. Quality software has very few defects and is easy to write and maintain. The examples used in this article are for the C programming language, but the principles also apply to most other programming languages. Most of the tools mentioned work with both C and C++ software.",2005,0, 1412,Development of ANN-based virtual fault detector for Wheatstone bridge-oriented transducers,This paper reports on the development of a new artificial neural network-based virtual fault detector (VFD) for detection and identification of faults in DAS-connected Wheatstone bridge-oriented transducers of a computer-based measurement system. Experimental results show that the implemented VFD is convenient for fusing intelligence into such systems in a user-interactive manner.
The performance of the proposed VFD is examined experimentally to detect seven frequently occurring faults automatically in such transducers. The presented technique used an artificial neural network-based two-class pattern classification network with hard-limit perceptrons to fulfill the function of an efficient residual generator component of the proposed VFD. The proposed soft residual generator detects and identifies various transducer faults in collaboration with a virtual instrument software-based inbuilt algorithm. An example application is also presented to demonstrate the use of implemented VFD practically for detecting and diagnosing faults in a pressure transducer having semiconductor strain gauges connected in a Wheatstone bridge configuration. The results obtained in the example application with this strategy are promising.,2005,0, 1413,"Hydratools, a MATLAB based data processing package for Sontek Hydra data","The U.S. Geological Survey (USGS) has developed a set of MATLAB tools to process and convert data collected by Sontek Hydra instruments to netCDF, which is a format used by the USGS to process and archive oceanographic time-series data. The USGS makes high-resolution current measurements within 1.5 meters of the bottom. These data are used in combination with other instrument data from sediment transport studies to develop sediment transport models. Instrument manufacturers provide software which outputs unique binary data formats. Multiple data formats are cumbersome. The USGS solution is to translate data streams into a common data format: netCDF. The Hydratools toolbox is written to create netCDF format files following EPIC conventions, complete with embedded metadata. Data are accepted from both the ADV and the PCADP. The toolbox will detect and remove bad data, substitute other sources of heading and tilt measurements if necessary, apply ambiguity corrections, calculate statistics, return information about data quality, and organize metadata. Standardized processing and archiving makes these data more easily and routinely accessible locally and over the Internet. In addition, documentation of the techniques used in the toolbox provides a baseline reference for others utilizing the data.",2005,0, 1414,Empirical case studies in attribute noise detection,"The problem of determining the noisiest attribute(s) from a set of domain-specific attributes is of practical importance to domain experts and the data mining community. Data noise is generally of two types: attribute noise and mislabeling errors (class noise). For a given domain-specific dataset, attributes that contain a significant amount of noise can have a detrimental impact on the success of a data mining initiative, e.g., reducing the predictive ability of a classifier in a supervised learning task. Techniques that provide information about the noise quality of an attribute are useful tools for a data mining practitioner when performing analysis on a dataset or scrutinizing the data collection processes. Our technique for detecting noisy attributes uses an algorithm that we recently proposed for the detection of instances with attribute noise. This paper presents case studies that confirm our recent work done on detecting noisy attributes and further validates that our technique is indeed able to detect attributes that contain noise.",2005,0,3972 1415,Using POMDP-based state estimation to enhance agent system survivability,"A survivable agent system depends on the incorporation of many recovery features. 
However, the optimal use of these recovery features requires the ability to assess the actual state of the agent system accurately at a given time. This paper describes an approach for the estimation of the state of an agent system using partially-observable Markov decision processes (POMDPs). POMDPs are dependent on a model of the agent system - components, environment, sensors, and the actuators that can correct problems. Based on this model, we define a state estimation for each component (asset) in the agent system. We model a survivable agent system as a POMDP that takes into account both environmental threats and observations from sensors. We describe the process of updating the state estimation as time passes, as sensor inputs are received, and as actuators affect changes. This state estimation process has been deployed within the Ultralog application and successfully tested using Ultralog's survivability tests on a full-scale (1000+) agent system.",2005,0, 1416,A different view of fault prediction,"We investigated a different mode of using the prediction model to identify the files associated with a fixed percentage of the faults. The tester could ask the tool to identify which files are likely to contain the bulks of faults, with the tester selecting any desired percentage of faults. Again the tool would return a list ordered in decreasing order of the predicted numbers of faults in the files the model expects to be most problematic. If the number of files identified is too large, the tester could reselect a smaller percentage of faults. This would make the number of files requiring particular scrutiny manageable. We expect both modes to be valuable to professional software testers and developers.",2005,0, 1417,On monitoring concurrent systems with TLA: an example,"We present an approach for producing oracles from TLA (temporal logic of action) specification of a system. Such oracles are useful, for monitoring purposes, to detect temporal faults by checking a running implementation of a system against a verified behavioral model. We use the Ben-Ari classical incremental garbage collection algorithm for illustration.",2005,0, 1418,Multiple views to support engineering change management for complex products,"Unforeseen change propagation can have a major impact on products and design processes and cause project delays and excessive costs. However, current change management depends heavily on individual designers' typically limited product overview. For complex products, this approach is error-prone because the amount of data that is necessary to properly assess the risk of changes is too large. The information has to be broken down into smaller chunks so that it is easier to cope with. On the other hand, an overview over the entire product must be provided in order to be able to predict changes resulting from changes in other components. In this paper, we discuss the CPM (change prediction method) tool that incorporates a multiple view strategy to visualise complex change data and allows designers to run what-if scenarios in order to assess the implications of changing components in a complex product during the design process.",2005,0, 1419,A WCET-oriented static branch prediction scheme for real time systems,"Branch prediction mechanisms are becoming commonplace within current generation processors. Dynamic branch predictors, albeit able to predict branches quite accurately in average, are becoming increasingly complex. 
Thus, determining their worst-case behavior, which is highly recommended for real-time applications, is getting increasingly difficult and error-prone, and may even be soon impossible for the most complex branch predictors. In contrast, static branch predictors are inherently predictable, to the detriment of a lower prediction accuracy. In this paper, we propose a WCET-oriented static branch prediction scheme. Unlike related work on compiler-directed static branch prediction, our scheme does not address program average-case performance (i.e. average-case branch misprediction rate) but addresses worst-case program performance instead (i.e. branch mispredictions which impact programs WCET estimates). Experimental results on a PowerPC 7451 architecture show that the estimated WCET can be decreased by up to 21 % (with an average improvement of 15%) as compared with the method where all branches are conservatively considered mispredicted. Our scheme, although applicable to any processor with support for static branch prediction, is specially suited to processors with complex dynamic predictors, for which safe and tight WCET estimate methods do not exist.",2005,0, 1420,Scheduling tasks with Markov-chain based constraints,"Markov-chain (MC) based constraints have been shown to be an effective QoS measure for a class of real-time systems, particularly those arising from control applications. Scheduling tasks with MC constraints introduces new challenges because these constraints require not only specific task finishing patterns but also certain task completion probability. Multiple tasks with different MC constraints competing for the same resource further complicates the problem. In this paper, we study the problem of scheduling multiple tasks with different MC constraints. We present two scheduling approaches which (i) lead to improvements in """"overall"""" system performance, and (ii) allow the system to achieve graceful degradation as system load increases. The two scheduling approaches differ in their complexities and performances. We have implemented our scheduling algorithms in the QNX real-time operating system environment and used the setup for several realistic control tasks. Data collected from the experiments as well as simulation all show that our new scheduling algorithms outperform algorithms designed for window-based constraints as well as previous algorithms designed for handling MC constraints.",2005,0, 1421,A BIST approach for testing FPGAs using JBITS,"This paper explores the built-in self test (BIST) concepts to test the configurable logic blocks (CLBs) of static RAM (SRAM) based FPGAs using Java Bits (JBits). The proposed technique detects and diagnoses single and multiple stuck-at faults in the CLBs while significantly reducing the time taken to perform the testing. Previous BIST approaches for testing FPGAs use traditional CAD tools which lack control over configurable resources, resulting in the design being placed on the hardware in a different way than intended by the designer. In this paper, the design of the logic BIST architecture is done using JBits 2.8 software for Xilinx Virtex family of devices. The test requires seven configurations and two test sessions to test the CLBs. 
The time taken to generate the entire BIST logic in both the sessions is approximately 77 seconds as compared with several minutes to hours in traditional design flow.",2005,0, 1422,Visualization of self-stabilizing distributed algorithms,"In this paper, we present a method to build an homogeneous and interactive visualization of self-stabilizing distributed algorithms using Visidia platform. The approach developed in this work allows to simulate the transient failures and their correction mechanism. We use local computations to encode self-stabilizing algorithms like the distributed algorithms implemented in Visidia. The resulting interface is able to select some processes and incorrectly change their states to show the transient failures. The system detects and corrects these transient failures by applying correction rules. Many examples of self-stabilizing distributed algorithms are implemented.",2005,0, 1423,"Software, performance and resource utilisation metrics for context-aware mobile applications","As mobile applications become more pervasive, the need for assessing their quality, particularly in terms of efficiency (i.e., performance and resource utilisation), increases. Although there is a rich body of research and practice in developing metrics for traditional software, there has been little study on how these relate to mobile context-aware applications. Therefore, this paper defines and empirically evaluates metrics to capture software, resource utilisation and performance attributes, for the purpose of modelling their impact in context-aware mobile applications. To begin, a critical analysis of the problem domain identifies a number of specific software, resource utilisation and performance attributes. For each attribute, a concrete metric and technique of measurement is defined. A series of hypotheses are then proposed, and tested empirically using linear correlation analysis. The results support the hypotheses thus demonstrating the impact of software code attributes on the efficiency of mobile applications. As such, a more formal model in the form of mathematical equations is proposed in order to facilitate runtime decisions regarding the efficient placement of mobile objects in a context-aware mobile application framework. Finally, a preliminary empirical evaluation of the model is carried out using a typical application and an existing mobile application framework",2005,0, 1424,Assessing the impact of coupling on the understandability and modifiability of OCL expressions within UML/OCL combined models,"Diagram-based UML notation is limited in its expressiveness thus producing a model that would be severely underspecified. The flaws in the limitation of the UML diagrams are solved by specifying UML/OCL combined models, OCL being an essential add-on to the UML diagrams. Aware of the importance of building precise models, the main goal of this paper is to carefully describe a family of experiments we have undertaken to ascertain whether any relationship exists between object coupling (defined through metrics related to navigations and collection operations) and two maintainability sub-characteristics: understandability and modifiability of OCL expressions. If such a relationship exists, we will have found early indicators of the understandability and modifiability of OCL expressions. Even though the results obtained show empirical evidence that such a relationship exists, they must be considered as preliminaries. 
Further validation is needed to be performed to strengthen the conclusions and external validity",2005,0, 1425,An industrial case study of implementing and validating defect classification for process improvement and quality management,"Defect measurement plays a crucial role when assessing quality assurance processes such as inspections and testing. To systematically combine these processes in the context of an integrated quality assurance strategy, measurement must provide empirical evidence on how effective these processes are and which types of defects are detected by which quality assurance process. Typically, defect classification schemes, such as ODC or the Hewlett-Packard scheme, are used to measure defects for this purpose. However, we found it difficult to transfer existing schemes to an embedded software context, where specific document- and defect types have to be considered. This paper presents an approach to define, introduce, and validate a customized defect classification scheme that considers the specifics of an industrial environment. The core of the approach is to combine the software engineering know-how of measurement experts and the domain know-how of developers. In addition to the approach, we present the results and experiences of using the approach in an industrial setting. The results indicate that our approach results in a defect classification scheme that allows classifying defects with good reliability, that allows identifying process improvement actions, and that can serve as a baseline for evaluating the impact of process improvements",2005,0, 1426,Measurement-driven dashboards enable leading indicators for requirements and design of large-scale systems,"Measurement-driven dashboards provide a unifying mechanism for understanding, evaluating, and predicting the development, management, and economics of large-scale systems and processes. Dashboards enable interactive graphical displays of complex information and support flexible analytic capabilities for user customizability and extensibility. Dashboards commonly include software requirements and design metrics because they provide leading indicators for project size, growth, and stability. This paper focuses on dashboards that have been used on actual large-scale projects as well as example empirical relationships revealed by the dashboards. The empirical results focus on leading indicators for requirements and design of large-scale systems. In the first set of 14 projects focusing on requirements metrics, the ratio of software requirements to source-lines-of code averaged 1:46. Projects that far exceeded the 1:46 requirements-to-code ratio tended to be more effort-intensive and fault-prone during verification. In the second set of 16 projects focusing on design metrics, the components in the top quartile of the number of component internal states had 6.2 times more faults on average than did the components in the bottom quartile, after normalization by size. The components in the top quartile of the number of component interactions had 4.3 times more faults on average than did the components in the bottom quartile, after normalization by size. When the number of component internal states was in the bottom quartile, the component fault-proneness was low even when the number of component interactions was in the upper quartiles, regardless of size normalization. 
Measurement-driven dashboards reveal insights that increase visibility into large-scale systems and provide feedback to organizations and projects",2005,0, 1427,Finding predictors of field defects for open source software systems in commonly available data sources: a case study of OpenBSD,"Open source software systems are important components of many business software applications. Field defect predictions for open source software systems may allow organizations to make informed decisions regarding open source software components. In this paper, we remotely measure and analyze predictors (metrics available before release) mined from established data sources (the code repository and the request tracking system) as well as a novel source of data (mailing list archives) for nine releases of OpenBSD. First, we attempt to predict field defects by extending a software reliability model fitted to development defects. We find this approach to be infeasible, which motivates examining metrics-based field defect prediction. Then, we evaluate 139 predictors using established statistical methods: Kendall's rank correlation, Pearson's rank correlation, and forward AIC model selection. The metrics we collect include product metrics, development metrics, deployment and usage metrics, and software and hardware configurations metrics. We find the number of messages to the technical discussion mailing list during the development period (a deployment and usage metric captured from mailing list archives) to be the best predictor of field defects. Our work identifies predictors of field defects in commonly available data sources for open source software systems and is a step towards metrics-based field defect prediction for quantitatively-based decision making regarding open source software components",2005,0, 1428,ZenFlow: a visual Web service composition tool for BPEL4WS,"Web services have become a very powerful technology to build service oriented architectures and standardize the access to legacy services. Through Web service composition new added value Web services can be created out of existing ones. Examples of these compositions are virtual organizations, outsourcing, enterprise application integration, business process definitions and business to business inter/intra-enterprise relationships. In order to enable the construction of business processes as composite Web services, a number of composition languages has been proposed by the software industry. However, the handiwork of specifying a business process with these languages through simple text or XML editors is tough, complex and error prone. Visual support can ease the definition of business processes. In this paper, we describe ZenFlow, a visual composition tool for Web services written in BPEL4WS. ZenFlow provides several visual facilities to ease the definition of a business process such as multiple views of a process, syntactic and semantic awareness, filtering, logical zooming capabilities and hierarchical representations.",2005,0, 1429,A seamless handoff approach of mobile IP based on dual-link,"Mobile IP protocol solves the problem of mobility support for hosts connected to Internet anytime and anywhere, and makes the mobility transparent to the higher layer applications. But the handoff latency in Mobile IP affects the quality of communication. This paper proposes a seamless handoff approach based on dual-link and link layer trigger, using information from link layer to predict and trigger the dual-link handoff. 
During the handoff, MN keeps one link connected with the current network while it hands off another link to the new network. In this paper we develop a model system based on this approach and provide experiments to evaluate the performance of this approach. The experimental results show that this approach can ensure the seamless handoff of Mobile IP.",2005,0, 1430,A defect estimation approach for sequential inspection using a modified capture-recapture model,"Defect prediction is an important process in the evaluation of software quality. To accurately predict the rate of software defects can not only facilitate software review decisions, but can also improve software quality. In this paper, we have provided a defect estimation approach, which uses defective data from sequential inspections to increase the accuracy of estimating defects. To demonstrate potential improvements, the results of our approach were compared to those of two other popular estimation approaches, the capture-recapture model and the re-inspection model. By using the proposed approach, software organizations may increase the accuracy of their defect predictions and reduce the effort of subsequent inspections.",2005,0, 1431,Power transmission control using distributed max-flow,Existing maximum flow algorithms use one processor for all calculations or one processor per vertex in a graph to calculate the maximum possible flow through a graph's vertices. This is not suitable for practical implementation. We extend the max-flow work of Goldberg and Tarjan to a distributed algorithm to calculate maximum flow where the number of processors is less than the number of vertices in a graph. Our algorithm is applied to maximizing electrical flow within a power network where the power grid is modeled as a graph. Error detection measures are included to detect problems in a simulated power network. We show that our algorithm is successful in executing quickly enough to prevent catastrophic power outages.,2005,0, 1432,Gompertz software reliability model and its application,"In this article, we propose a stochastic model called the Gompertz software reliability model based on the familiar non-homogeneous Poisson process. It is shown that the proposed model can be derived from the well-known statistical theory of extreme-value and has the quite similar asymptotic property to the classical Gompertz curve. In a numerical example with the software failure data observed in a real software development project, we apply the Gompertz software reliability model to assess the software reliability and to predict the number of initial fault contents. We empirically conclude that our new model may function better than the existing models and is attractive in terms of goodness-of-fit test based on information criteria and mean squared error.",2005,0, 1433,Testing the semantics of W3C XML schema,"The XML schema language is becoming the preferred means of defining and validating highly structured XML instance documents. We have extended the conventional mutation method to be applicable for W3C XML schemas. In this paper a technique for using mutation analysis to test the semantic correctness of W3C XML schemas is presented. We introduce a mutation analysis model and a set of W3C XML schema (XSD) mutation operators that can be used to detect faults involving name-spaces, user-defined types, and inheritance.
Preliminary evaluation of our technique shows that it is effective for testing the semantics of W3C XML schema documents.",2005,0, 1434,A low-latency checkpointing scheme for mobile computing systems,"Fault-tolerant mobile computing systems have different requirements and restrictions, not taken into account by conventional distributed systems. This paper presents a coordinated checkpointing scheme which reduces the delay involved in a global checkpointing process for mobile systems. A piggyback technique is used to track and record the checkpoint dependency information among processes during normal message transmission. During checkpointing, a concurrent checkpointing technique is designed to use the pre-recorded process dependency information to minimize process blocking time by sending checkpoint requests to dependent processes at once, hence saving the time to trace the dependency tree. Our checkpoint algorithm forces a minimum number of processes to take checkpoints. Via probability-based analysis, we show that our scheme can significantly reduce the latency associated with checkpoint request propagation, compared to traditional coordinated checkpointing approach.",2005,0, 1435,A controlled experiment assessing test case prioritization techniques via mutation faults,"Regression testing is an important part of software maintenance, but it can also be very expensive. To reduce this expense, software testers may prioritize their test cases so that those that are more important are run earlier in the regression testing process. Previous work has shown that prioritization can improve a test suite's rate of fault detection, but the assessment of prioritization techniques has been limited to hand-seeded faults, primarily due to the belief that such faults are more realistic than automatically generated (mutation) faults. A recent empirical study, however, suggests that mutation faults can be representative of real faults. We have therefore designed and performed a controlled experiment to assess the ability of prioritization techniques to improve the rate of fault detection techniques, measured relative to mutation faults. Our results show that prioritization can be effective relative to the faults considered, and they expose ways in which that effectiveness can vary with characteristics of faults and test suites. We also compare our results to those collected earlier with respect to the relationship between hand-seeded faults and mutation faults, and the implications this has for researchers performing empirical studies of prioritization.",2005,0, 1436,Optimizing test to reduce maintenance,"A software package evolves in time through various maintenance release steps whose effectiveness depends mainly on the number of faults left in the modules. Software testing is one of the most demanding and crucial phases to discover and reduce faults. In real environment, time available to test a software release is a given finite quantity. The purpose of this paper is to identify a criterion to estimate an efficient time repartition among software modules to enhance fault location in testing phase and to reduce corrective maintenance. The fundamental idea is to relate testing time to predicted risk level of the modules in the release under test. In our previous work we analyzed several kinds of risk prediction factors and their relationship with faults; moreover, we thoroughly investigated the behavior of faults on each module through releases to find significant fault proneness tendencies.
Starting from these two lines of analysis, in this paper we propose a new approach to optimize the use of available testing time in a software release. We tuned and tested our hypotheses on a large industrial environment.",2005,0, 1437,Test prioritization using system models,"During regression testing, a modified system is retested using the existing test suite. Because the size of the test suite may be very large, testers are interested in detecting faults in the system as early as possible during the retesting process. Test prioritization tries to order test cases for execution so the chances of early detection of faults during retesting are increased. The existing prioritization methods are based on the code of the system. System modeling is a widely used technique to model state-based systems. In this paper, we present methods of test prioritization based on state-based models after changes to the model and the system. The model is executed for the test suite and information about model execution is used to prioritize tests. Execution of the model is inexpensive as compared to execution of the system; therefore the overhead associated with test prioritization is relatively small. In addition, we present an analytical framework for evaluation of test prioritization methods. This framework may reduce the cost of evaluation as compared to the existing evaluation framework that is based on experimentation (observation). We have performed an experimental study in which we compared different test prioritization methods. The results of the experimental study suggest that system models may improve the effectiveness of test prioritization with respect to early fault detection.",2005,0, 1438,An empirical study of software maintenance of a Web-based Java application,"This paper presents empirical study detailing software maintenance for Web-based Java applications to aid in understanding and predicting the software maintenance category and effort. The specific application described is a Java Web-based administrative application in the e-Government arena. The application is based on the design of an open source model-view-controller framework. The domain factors, which also provide contextual references, are described based on Kitchenham et al.'s software maintenance ontology. This paper characterizes the number of fault reports and maintenance effort for each maintenance category. This study finds that the distribution of software maintenance effort in this observed Web-based Java application is similar to the distribution in previous software maintenance studies, which analyzed non object-oriented and non Web-based applications.",2005,0, 1439,Facilitating the implementation and evolution of business rules,"Many software systems implement, amongst other things, a collection of business rules. However, the process of evolving the business rules associated with a system is both time consuming and error prone. In this paper, we propose a novel approach to facilitating business rule evolution through capturing information to assist with the evolution of rules at the point of implementation. We analyse the process of rule evolution, in order to determine the information that must be captured. 
Our approach allows programmers to implement rules by embedding them into application programs (giving the required performance and genericity), while still easing the problems of evolution.",2005,0, 1440,Utilization of extended firewall for object-oriented regression testing,"A testing firewall involves identifying various components that are dependent upon changed elements, but are just one level away from the modified elements. This paper investigates situations when data flow paths are longer, and the mechanism of thorough testing of components one level away from the changed elements may not detect certain regression faults caused by the change; this research has led to the notion of an extended firewall that takes these longer data paths into account. Empirical studies are reported that show the extent to which an extended firewall can detect more faults and how much more testing is required to achieve this increased detection.",2005,0, 1441,Measurement and quality in object-oriented design,"In order to support the maintenance of object-oriented software systems, the quality of their design must be evaluated using adequate quantification means. In spite of the current extensive use of metrics, if used in isolation, metrics are oftentimes too fine grained to quantify comprehensively an investigated aspect of the design. To help the software engineer detect and localize design problems, the novel detection strategy mechanism is defined so that deviations from good-design principles and heuristics are quantised in the form of metrics-based rules. Using detection strategies an engineer can directly localize classes or methods affected by a particular design flaw (e.g. God Class), rather than having to infer the real design problem from a large set of abnormal metric values. In order to reach the ultimate goal of bridging the gap between qualitative and quantitative statements about design, the dissertation proposes a novel type of quality model, called factor-strategy. In contrast to traditional quality models that express the goodness of design in terms of a set of metrics, this novel model relates explicitly the quality of a design to its conformance with a set of essential principles, rules and heuristics, which are quantified using detection strategies.",2005,0, 1442,Detecting application-level failures in component-based Internet services,"Most Internet services (e-commerce, search engines, etc.) suffer faults. Quickly detecting these faults can be the largest bottleneck in improving availability of the system. We present Pinpoint, a methodology for automating fault detection in Internet services by: 1) observing low-level internal structural behaviors of the service; 2) modeling the majority behavior of the system as correct; and 3) detecting anomalies in these behaviors as possible symptoms of failures. Without requiring any a priori application-specific information, Pinpoint correctly detected 89%-96% of major failures in our experiments, as compared with 20%-70% detected by current application-generic techniques.",2005,0, 1443,Applications of Global Positioning System (GPS) in geodynamics: with three examples from Turkey,"Global Positioning System (GPS) has been a very useful tool for the last two decades in the area of geodynamics because of the validation of the GPS results by the Very Long Baseline Interferometry (VLBI) and Satellite Laser Ranging (SLR) observations.
The modest budget requirement and the high accuracy relative positioning availability of GPS increased the use of it in determination of crustal and/or regional deformations. Since the civilian use of GPS began in 1980, the development on the receiver and antenna technology with the ease of use software packages reached a well known state, which may be named as a revolution in the Earth Sciences among other application fields. Analysis of a GPS network can also give unknown information about the fault lines that can not be seen from the ground surface. Having information about the strain accumulation along the fault line may allow us to evaluate future probabilities of regional earthquake hazards and develop earthquake scenarios for specific faults. In this study, the use of GPS in geodynamical studies will be outlined throughout the instrumentation, the measurements, and the methods utilized. The preliminary results of three projects, sponsored by the Scientific & Technical Research Council of Turkey (TUBITAK) and Istanbul Technical University (ITU) which have been carried out in Turkey using GPS will be summarized. The projects are mainly aimed to determine the movements along the fault zones. Two of the projects have been implemented along the North Anatolian Fault Zone (NAFZ): one is in the Mid-Anatolia region, and the other is in the Western Marmara region. The third project has been carried out in the Fethiye-Burdur region. The collected GPS data were processed by the GAMIT/GLOBK software. The results are represented as velocity vectors obtained using the yearly combinations of the daily measured GPS data.",2005,0, 1444,Analysis of a method for improving video quality in the Internet with inclusion of redundant essential video data,"This paper presents a variant of a method for combating the burst-loss problem of multicast video packets. This variant uses redundant essential video data to offer biased protection towards all critical video information from loss, especially from contiguous loss when a burst loss of multiple consecutive packets occurs. Simulation results indicate that inclusion of redundant critical video data improves performance of our previously proposed method for solving the burst-loss problem. Furthermore, probability models, created for evaluating the effectiveness of the proposed method and its variant, give results which are consistent with those obtained from simulations.",2005,0, 1445,Conflict classification and analysis of distributed firewall policies,"Firewalls are core elements in network security. However, managing firewall rules, particularly, in multifirewall enterprise networks, has become a complex and error-prone task. Firewall filtering rules have to be written, ordered, and distributed carefully in order to avoid firewall policy anomalies that might cause network vulnerability. Therefore, inserting or modifying filtering rules in any firewall requires thorough intrafirewall and interfirewall analysis to determine the proper rule placement and ordering in the firewalls. In this paper, we identify all anomalies that could exist in a single- or multifirewall environment. We also present a set of techniques and algorithms to automatically discover policy anomalies in centralized and distributed firewalls.
These techniques are implemented in a software tool called the ""Firewall Policy Advisor"" that simplifies the management of filtering rules and maintains the security of next-generation firewalls.",2005,0, 1446,A reflective practice of automated and manual code reviews for a studio project,"In this paper, the target of code review is project management system (PMS), developed by a studio project in a software engineering master's program, and the focus is on finding defects not only in view of development standards, i.e., design rule and naming rule, but also in view of quality attributes of PMS, i.e., performance and security. From the review results, a few lessons are learned. First, defects which had not been found in the test stage of PMS development could be detected in this code review. These are hidden defects that affect system quality and that are difficult to find in the test. If the defects found in this code review had been fixed before the test stage of PMS development, productivity and quality enhancement of the project would have been improved. Second, manual review takes much longer than an automated one. In this code review, general check items were checked by automation tool, while project-specific ones were checked by manual method. If project-specific check items could also be checked by automation tool, code review and verification work after fixing the defects would be conducted very efficiently. Reflecting on this idea, an evolution model of code review is studied, which eventually seeks fully automated review as an optimized code review.",2005,0, 1447,Two approaches for the improvement in testability of communication protocols,"Protocols have grown larger and more complex with the advent of computer and communication technologies. As a result, the task of conformance testing of protocol implementation has also become more complex. The study of design for testability (DFT) is a research area in which researchers investigate design principles that will help to overcome the ever increasing complexity of testing distributed systems. Testability metrics are essential for evaluating and comparing designs. In a previous paper, we introduce a new metric for testability of communication protocols, based on the detection probability of a default. We demonstrate the usefulness of the metric for identifying faults that are more difficult to detect. In this paper, we present two approaches for improved testing of a protocol implementation once those faults that are difficult to detect are identified.",2005,0, 1448,Enhancing transparency and adaptability of dynamically changing execution environments for mobile agents,"Mobile agent-based distributed systems are obtaining significant popularity as a potential vehicle to allow software components to be executed on heterogeneous environments despite mobility of users and computations. However as these systems generally force mobile agents to use only common functionalities provided in every execution environment, the agents may not access environment-specific resources. In this paper, we propose a new framework using aspect oriented programming technique to accommodate a variety of static resources as well as dynamic ones whose amount is continually changed at runtime even in the same execution environment.
Unlike previous works, this framework divides roles of software developers into three groups to relieve application programmers from the complex and error-prone parts of implementing dynamic adaptation and allowing each developer to only concentrate on his own part. Also, the framework enables policy decision makers to apply various adaptation policies to dynamically changing environments for adjusting mobile agents to the change of their resources.",2005,0, 1449,IDEM: an Internet delay emulator approach for assessing VoIP quality,"Network emulations are imitations of real-time network behavior that help in testing and assessing protocols, and other network related applications in a controlled hardware and software environment. Most of the emulators existing today are hardware implemented emulators. There is a rising demand to emulate the network behavior using a software tool. Our Internet delay emulator (IDEM) is a software tool that captures the network details and reproduces an environment useful for research oriented projects. IDEM is based on bouncers that are distributed over the Internet. The concepts of firewall routing are used in designing IDEM. IDEM supports both TCP and UDP applications. Rigorous testing shows that actual delay in data sent is accurately modeled by IDEM. Advantages of IDEM especially for delay sensitive applications like VoIP are discussed.",2005,0, 1450,Performance analysis of random access channel in OFDMA systems,"The random access channel (RACH) in OFDMA systems is an uplink contention-based transport channel that is mainly used for subscriber stations to make a resource request to base stations. In this paper we focus on analyzing the performance of RACH in OFDMA systems such that the successful transmission probability, correctly detectable probability and throughput of RACH are analyzed. We also choose an access mechanism with binary exponential backoff delay procedure similar to that in IEEE 802.11. Based on the mechanism, we derive the delay and the blocking probability of RACH in OFDMA systems.",2005,0, 1451,Fault tolerant XGFT network on chip for multi processor system on chip circuits,"This paper presents a fault-tolerant eXtended Generalized Fat Tree (XGFT) Network-On-Chip (NOC) implemented with a new fault-diagnosis-and-repair (FDAR) system. The FDAR system is able to locate faults and reconfigure switch nodes in such a way that the network can route packets correctly despite the faults. This paper presents how the FDAR finds the faults and reconfigures the switches. Simulation results are used for showing that faulty XGFTs could also achieve good performance, if the FDAR is used. This is possible if deterministic routing is used in faulty parts of the XGFTs and adaptive Turn-Back (TB) routing is used in faultless parts of the network for ensuring good performance and Quality-of-Service (QoS). The XGFT is also equipped with parity bit checks for detecting bit errors from the packets.",2005,0, 1452,An iterative hardware Gaussian noise generator,"The quality of generated Gaussian noise samples plays a crucial role when evaluating the bit error rate performance of communication systems. This paper presents a new approach for the field-programmable gate array (FPGA) realization of a high-quality Gaussian noise generator (GNG). The datapath of the GNG can be configured differently based on the required accuracy of the Gaussian probability density function (PDF). 
Since the GNG is often most conveniently implemented on the same FPGA as the design under evaluation, the area efficiency of the proposed GNG is important. For a particular configuration, the proposed design utilizes only 3% of the configurable slices and two on-chip block memories of a Virtex XC2V4000-6 FPGA to generate Gaussian samples within up to 6.55σ, where σ is the standard deviation, and can operate at up to 132 MHz.",2005,0, 1453,Composition assessment metrics for CBSE,"The objective of this paper is the formal definition of composition assessment metrics for CBSE, using an extension of the CORBA component model metamodel as the ontology for describing component assemblies. The method used is the representation of a component assembly as an instantiation of the extended CORBA component model metamodel. The resulting meta-objects diagram can then be traversed using object constraint language clauses. These clauses are a formal and executable definition of the metrics that can be used to assess quality attributes from the assembly and its constituent components. The result is the formal definition of context-dependent metrics that cover the different composition mechanisms provided by the CORBA component model and can be used to compare alternative component assemblies; a metamodel extension to capture the topology of component assemblies. The conclusion is that providing a formal and executable definition of metrics for CORBA component assemblies is an enabling precondition to allow for independent scrutiny of such metrics which is, in turn, essential to increase practitioners' confidence on predictable quality attributes.",2005,0, 1454,A tool for reliability and availability prediction,"As the number of embedded systems in everyday usage has been increasing at an enormous rate recently, the demand for reliable and readily available systems is continuously growing. Problems in reliability and availability are typically detected after system implementation, when fault correction and modifications are difficult and expensive to implement. This is why a new method has been developed for predicting reliability and availability already from architectural models. However, to be beneficial in system development, the prediction method should enable quick, easy and repeatable quality analysis. Therefore, the different procedures and features of the method require tool support. Our contribution is the tool that supports reliability and availability prediction at the architectural level. The tool enables a representation of these two quality attributes in the architecture, and assists in analyzing system reliability and availability using architectural models. The tool has been tested and validated by using it for reliability and availability prediction in a case example.",2005,0, 1455,Improvement of frequency resolution for three-phase induction machine fault diagnosis,"This paper deals with the use of the zoom FFT algorithm (ZFFTA) for the electrical fault diagnosis of squirrel-cage three-phase induction machines with a special interest in broken rotor bar situation. The machine stator current can be analysed to observe the side-band harmonics around the fundamental frequency. In this case, it is necessary to take a very long data sequence to get high frequency resolution. This is not always possible due to the hardware and software limitations. The proposed algorithm can be considered for solving high frequency resolution problem without increasing the initial data acquisition size.
The ZFFTA is applied to detect incipient rotor fault in a three-phase squirrel-cage induction machine by using both stator current and stray flux sensors.",2005,0, 1456,"Is My Software ""Good Enough"" to Release? - A Probabilistic Assessment","We present the basics of a probabilistic methodology to assess the overall quality of software preparatory to its release through the evaluation of process and product evidence, the ""'good enough' to release"" (GETR) methodology, in this paper. GETR methodology has three main elements: a model whose elements represent activities and artifacts identified in the literature as being effective assessors of software quality, a process for populating certain parts of the model, and methods for analyzing the importance of contributions made by individual evidence to the determination of overall system quality. First, the methodology's components are briefly introduced. A demonstration of how the methodology can be applied is then given through two case studies reviewing release assessments for in-house developed analytical tools. The robustness of the model is also illustrated by the results of the case studies",2005,0, 1457,System Availability Analysis Considering Hardware/Software Failure Severities,"Model-based analysis is a well-established approach to assess the influence of several factors on system availability within the context of system structure. Prevalent availability models in the literature consider all failures to be equivalent in terms of their consequences on system services. In other words, all the failures are assumed to be of the same level of severity. In practice, failures are typically classified into multiple severity levels, where failures belonging to the highest severity level cause a complete loss of service, while failures belonging to levels below the highest level enable the system to operate in a degraded mode. This makes it necessary to consider the influence of failure severities on system availability. In this paper we present a Markov model which considers failure severities of the components of the system in conjunction with its structure. The model also incorporates the repair of the components. Based on the model, we derive a closed form expression which relates system availability to the failure and repair parameters of the components. The failure parameters in the model are estimated based on the data collected during acceptance testing of a satellite system. However, since adequate data are not available to estimate the repair parameters, the closed form expressions are used to assess the sensitivity of the system availability to the repair parameters",2005,0, 1458,Code Normal Forms,"Because of their strong economic impact, complexity and maintainability are among the most widely used terms in software engineering. But, they are also among the most weakly understood. A multitude of software metrics attempts to analyze complexity and a proliferation of different definitions of maintainability can be found in text books and corporate quality guide lines. The trouble is that none of these approaches provides a reliable basis for objectively assessing the ability of a software system to absorb future changes. In contrast to this, relational database theory has successfully solved very similar difficulties through normal forms. In this paper, we transfer the idea of normal forms to code.
The approach taken is to introduce semantic dependencies as a foundation for the definition of code normal form criteria",2005,0, 1459,Predicting Risk as a Function of Risk Factors,"In previous research, we showed that risk factors have a significant negative effect on reliability (e.g., failure occurrence). In this research, we show that it is feasible to predict risk (i.e., the probability that risk factors are related to discrepancy reports occurring on a software release). This is an important advance over the previous research because discrepancy reports are available in the requirements phase - when the cost and labor required to correct faults is low, whereas failure data only becomes available in the test phase - when the cost and labor required to correct faults is high. Although using historical failure data to drive traditional software reliability models would produce greater prediction accuracy, the opportunity to provide early prediction of reliability, using risk factors, outweighs this advantage",2005,0, 1460,On the Use of Specification-Based Assertions as Test Oracles,"The ""oracle problem"" is a well-known challenge for software testing. Without some means of automatically computing the correct answer for test cases, testers must instead compute the results by hand, or use a previous version of the software. In this paper, we investigate the feasibility of revealing software faults by augmenting the code with complete, specification-based assertions. Our evaluation method is to (1) develop a formal specification, (2) translate this specification into assertions, (3) inject or identify existing faults, and (4) for each version of the assertion-enhanced system containing a fault, execute it using a set of test inputs and check for assertion violations. Our goal is to determine whether specification-based assertions are a viable method of revealing faults, and to begin to assess the extent to which their cost-effectiveness can be improved. Our evaluation is based on two case studies involving real-world software systems. Our results indicate that specification-based assertions can effectively reveal faults, as long as they adversely affect the program state. We describe techniques that we used for translating high-level specifications into code-level assertions. We also discuss the costs associated with the approach, and potential techniques for reducing these costs",2005,0, 1461,Motion analysis of the international and national rank squash players,"In this paper, we present a study on squash player work-rate during the squash matches of two different quality levels. To assess work-rate, the measurement of certain parameters of player motion is needed. The computer vision based software application was used to automatically obtain player motion data from the digitized video recordings of 22 squash matches. The matches were played on two quality levels - international and Slovene national players. We present the results of work-rate comparison between these two groups of players based on game duration and distance covered by the players. We found that the players on the international quality level on average cover significantly larger distances, which is partially caused by longer average game durations.",2005,0, 1462,Fast Motion Estimation by Motion Vector Merging Procedure for H. 264,"In this paper, a fast motion estimation algorithm for variable block-size by using a motion vector merging procedure is proposed for H.264.
The motion vectors of adjacent small blocks are merged to predict the motion vectors of larger blocks for reducing the computation. Experimental results show that our proposed method has lower computational complexity than full search, fast full search and fast motion estimation of the H.264 reference software JM93 with slight quality decrease and little bit-rate increase",2005,0, 1463,Pre-layout physical connectivity prediction with application in clustering-based placement,"In this paper, we introduce a structural metric, logic contraction, for pre-layout physical connectivity prediction. For a given set of nodes forming a cluster in a netlist, we can predict their proximity in the final layout based on the logic contraction value of the cluster. We demonstrate a very good correlation of our pre-layout measure with the post-layout physical distances between those nodes. We show an application of the logic contraction to circuit clustering. We compare our seed-growth clustering algorithm with the existing efficient clustering techniques. Experimental results demonstrate the effectiveness of our new clustering method.",2005,0, 1464,An Efficient and Practical Defense Method Against DDoS Attack at the Source-End,"Distributed Denial-of-Service (DDoS) attack is one of the most serious threats to the Internet. Detecting DDoS at the source-end has many advantages over defense at the victim-end and intermediate-network. One of the main problems for source-end methods is the performance degradation brought by these methods, which discourages Internet service providers (ISPs) to deploy the defense system. We propose an efficient detection approach, which only requires limited fixed-length memory and low computation overhead but provides satisfying detection results. The low cost of defense is expected to attract more ISPs to join the defense. The experiments results show our approach is efficient and feasible for defense at the source-end",2005,0, 1465,Operational experience with intelligent software agents for shipboard diesel and gas turbine engine health monitoring,"The power distribution network aboard future navy warships are vital to reliable operations and survivability. Power distribution involves delivering electric power from multiple generation sources to a dynamic set of load devices. Advanced power electronics, intelligent controllers, and a communications infrastructure form a shipboard power distribution network, much like the domestic electric utility power grid. Multiple electric generation and storage devices distributed throughout the ship will eliminate dependence on any single power source through dynamic load management and power grid connectivity. Although new technologies are under development, gas turbine and diesel generators remain as the likely near-term power sources for the future all-electric ship integrated power system (IPS). Health monitoring of these critical IPS power sources are essential to achieving reliability and survivability goals. System complexity, timing constraints, and manning constraints will shift both control and equipment health monitoring functions from human operators to intelligent machines. Drastic manning reductions coupled with a large increase in the number of sensor monitoring points makes automated condition-based maintenance (CBM) a stringent requirement. 
CBM has traditionally been labor intensive and expensive to implement, relying on human experiential knowledge, interactive data processing, information management, and cognitive processing. The diagnostic robustness and accuracy of these embedded software agents are essential, as false or missed diagnostic calls have severe ramifications within the intelligent, automated control environment. This paper presents some of the first reported results of intelligent diagnostic software agents operating in real-time onboard naval ships with gas turbine and diesel machinery plants. The agents are shown to perform a substantial amount of CBM-related data processing and analysis that would not otherwise be performed by the crew, including real-time, neural network diagnostic inferencing. The agents are designed to diagnose existing system faults and to predict machinery problems at their earliest stage of development. The results reported herein should be of particular interest to those involved with future all-electric ship designs that include both gas turbine and diesel engines as primary electrical power sources.",2005,0, 1466,Guest Editor's Introduction: The Promise of Public Software Engineering Data Repositories,"Scientific discovery related to software is based on a centuries-old paradigm common to all fields of science: setting up hypotheses and testing them through experiments. Repeatedly confirmed hypotheses become models that can describe and predict real-world phenomena. The best-known models in software engineering describe relationships between development processes, cost and schedule, defects, and numerous software ""-ilities"" such as reliability, maintainability, and availability. But, compared to other disciplines, the science of software is relatively new. It's not surprising that most software models have proponents and opponents among software engineers. This introduction to the special issue discusses the power of modeling, the promise of data repositories, and the workshop devoted to this topic.",2005,0, 1467,Building effective defect-prediction models in practice,"Defective software modules cause software failures, increase development and maintenance costs, and decrease customer satisfaction. Effective defect prediction models can help developers focus quality assurance activities on defect-prone modules and thus improve software quality by using resources more efficiently. These models often use static measures obtained from source code, mainly size, coupling, cohesion, inheritance, and complexity measures, which have been associated with risk factors, such as defects and changes.",2005,0, 1468,The art and science of software release planning,"Incremental development provides customers with parts of a system early, so they receive both a sense of value and an opportunity to provide feedback early in the process. Each system release is thus a collection of features that the customer values. Furthermore, each release serves to fix defects detected in former product variants. Release planning (RP) addresses decisions related to selecting and assigning features to create a sequence of consecutive product releases that satisfies important technical, resource, budget, and risk constraints.",2005,0, 1469,A collaborative filtering algorithm embedded BP network to ameliorate sparsity issue,"Collaborative filtering technologies are facing two major challenges: scalability and recommendation quality.
Sparsity of source data sets is one major reason causing the poor recommendation quality. To reduce sparsity, we design a collaborative filtering algorithm that first selects users whose non-null ratings intersect the most as candidates of nearest neighbors, and then builds up backpropagation neural networks to predict values of the null ratings in the candidates. Experimental results show that this algorithm can increase the accuracy of nearest neighbors, resulting in improving recommendation quality of the recommendation system.",2005,0, 1470,Predictive compression of dynamic 3D meshes,"An efficient algorithm for compression of dynamic time-consistent 3D meshes is presented. Such a sequence of meshes contains a large degree of temporal statistical dependencies that can be exploited for compression using DPCM. The vertex positions are predicted at the encoder from a previously decoded mesh. The difference vectors are further clustered in an octree approach. Only a representative for a cluster of difference vectors is further processed providing a significant reduction of data rate. The representatives are scaled and quantized and finally entropy coded using CABAC, the arithmetic coding technique used in H.264/MPEG4-AVC. The mesh is then reconstructed at the encoder for prediction of the next mesh. In our experiments we compare the efficiency of the proposed algorithm in terms of bit-rate and quality compared to static mesh coding and interpolator compression indicating a significant improvement in compression efficiency.",2005,0, 1471,Error concealment for slice group based multiple description video coding,"This paper develops error concealment methods for multiple description video coding (MDC) in order to adapt to error prone packet networks. The three-loop slice group MDC approach of D. Wang et al. (2005) is used. MDC is very suitable for multiple channel environments, and especially able to maintain acceptable quality when some of these channels fail completely, i.e. in an on-off MDC environment, without experiencing any drifting problem. Our MDC scheme coupled with the proposed concealment approaches proved to be suitable not only for the on-off MDC environment case (data from one channel fully lost), but also for the case where only some packets are lost from one or both channels. Copying video and using motion vectors from correct descriptions are combined together for concealment prior to applying traditional methods. Results are compared to the traditional error concealment method proposed in the H.264 reference software, showing significant improvements for both the balanced and unbalanced channel cases.",2005,0, 1472,CredEx: user-centric credential management for grid and Web services,"User authentication is a crucial security component for most computing systems. But since the security needs of different systems vary widely, authentication mechanisms are similarly diverse. In particular, independently-managed Web and grid services vary with regard to the type of security token (credential) used to prove user identity (username/password, X.509 signing, Kerberos, etc.). Forcing users to manage and present credentials manually for each service is tedious, error-prone and potentially insecure. In contrast, we present CredEx, an open-source, standards-based Web service that facilitates the secure storage of credentials and enables the dynamic exchange of different credential types using the WS-Trust token exchange protocol.
With CredEx, a user can achieve single sign-on by acquiring a single (default) credential then dynamically exchanging that credential as needed for services that authenticate a different way. We describe the design and implementation of CredEx by focusing on its use in bridging password-based Web services and PKI-based grid services, illustrating how interoperability between these realms can be based upon the WS-Security and WS-Trust specifications.",2005,0, 1473,Modelling inter-organizational workflow security in a peer-to-peer environment,"The many conflicting technical, organizational, legal and domain-level constraints make the implementation of secure, inter-organizational workflows a very complex task, which is bound to low-level technical knowledge and error prone. The SECTINO project provides a framework for the realization and the high-level management of security-critical workflows based on the paradigm of model driven security. In our case the models are translated into runtime artefacts that configure a target reference architecture based on Web services technologies. In this paper we focus on the global workflow model, which captures the message exchange protocol between partners cooperating in a distributed environment as well as basic security patterns. We show how the model maps to workflow and security components of the hosting environments at the partner nodes.",2005,0, 1474,High level extraction of SoC architectural information from generic C algorithmic descriptions,"The complexity of nowadays algorithms in terms of number of lines of code and cross-relations among processing algorithms that are activated by specific input signals, goes far beyond what the designer can reasonably grasp from the ""pencil and paper"" analysis of the (software) specifications. Moreover, depending on the implementation goal different measures and metrics are required at different steps of the implementation methodology or design flow of SoC. The process of extracting the desired measures needs to be supported by appropriate automatic tools, since code rewriting, at each design stage, may result resource consuming and error prone. This paper presents an integrated tool for automatic analysis capable of producing complexity results based on rich and customizable metrics. The tool is based on a C virtual machine that allows extracting from any C program execution the operations and data-flow information, according to the defined metrics. The tool capabilities include the simulation of virtual memory architectures.",2005,0, 1475,Distributed integrity checking for systems with replicated data,"This work presents a new comparison-based diagnosis model and a new algorithm, called Hi-Dif, based on this model. The algorithm is used for checking the integrity of systems with replicated data, for instance, detecting unauthorized Web page modifications. Fault-free nodes running Hi-Dif send a task to two other nodes and the task results are compared. If the comparison produces a match, the two nodes are classified in the same set. On the other hand, if the comparison results in a mismatch, the two nodes are classified in different sets, according to their task results. One of the sets always contains all fault-free nodes. One fundamental difference of the proposed model to previously published models is that the new model allows the task outputs of two faulty nodes to be equal to each other.
Considering a system of N nodes, we prove that the algorithm has latency equal to log2(N) testing rounds in the worst case; that the maximum number of tests required is O(N^2); and, that the algorithm is (N-1)-diagnosable. Experimental results obtained by simulation and by the execution of a tool implemented and applied to the Web are presented.",2005,0, 1476,ssahaSNP - a polymorphism detection tool on a whole genome scale,"We present a software package which can detect homozygous SNPs and indels on a eukaryotic genome scale from millions of shotgun reads. Matching seeds of a few kmer words are found to locate the position of the read on the genome. Full sequence alignment is performed to detect base variations. Quality values of both variation bases and neighbouring bases are checked to exclude possible sequence base errors. To analyze polymorphism level in the genome, we used the package to detect indels from 20 million WGS reads against the draft WGS assembly. From the dataset, we detected a total number of 663,660 indels, giving an estimated average indel density at about one indel every 2.48 kilobases. Distribution of indels length and variation of indel mapped times are also analyzed.",2005,0, 1477,"Multispectral multidimensional multiplexed data: the more, the merrier","The ability to detect multiple molecular species at once is becoming increasingly important. Multispectral imaging systems can be used to capture multiplexed molecular signals, and can be applied to the analysis of chromogenically stained slides in brightfield mode and of samples stained with a variety of light-emitting dyes (from the visible to the NIR range) in fluorescence mode. Quantum dots make a particularly good match with this imaging technology, which is also extremely helpful for the identification and elimination of interfering autofluorescence. The ability to accurately determine the spectral qualities of dyes in-situ is also valuable. Multispectral imaging has proven to be useful for multicolor FISH, for resolving multiple species of GFP with overlapping emission spectra and for resolving red/brown double-labeled histopathology stains. The uses of spectral imaging in clinical pathology are still being explored and need to be matched to appropriate software tools. Appropriately constrained linear unmixing algorithms and novel automated tools have recently been developed to provide simple, accurate analysis procedures. Conventional hematoxylin-and-eosin- or Papanicolaou-stained pathology sections can have sufficient spectral content to allow the classification of cells of different lineage or to separate normal from neoplastic cells. Analysis of such specimens may succeed using spectral ""signatures"" and simple segmentation algorithms. The rich data sets also reward the use of more advanced analysis techniques. These can include a number of approaches pioneered for remote sensing purposes, such as spectral similarity mapping, automated clustering algorithms in n dimensions, principal component analysis, as well as other more sophisticated techniques.",2005,0, 1478,System test case prioritization of new and regression test cases,"Test case prioritization techniques have been shown to be beneficial for improving regression-testing activities. With prioritization, the rate of fault detection is improved, thus allowing testers to detect faults earlier in the system-testing phase. Most of the prioritization techniques to date have been code coverage-based. These techniques may treat all faults equally.
We build upon prior test case prioritization research with two main goals: (1) to improve user-perceived software quality in a cost-effective way by considering potential defect severity and (2) to improve the rate of detection of severe faults during system-level testing of new code and regression testing of existing code. We present a value-driven approach to system-level test case prioritization called the prioritization of requirements for test (PORT). PORT prioritizes system test cases based upon four factors: requirements volatility, customer priority, implementation complexity, and fault proneness of the requirements. We conducted a PORT case study on four projects developed by students in an advanced graduate software testing class. Our results show that PORT prioritization at the system level improves the rate of detection of severe faults. Additionally, customer priority was shown to be one of the most important prioritization factors contributing to the improved rate of fault detection.",2005,0, 1479,Quality vs. quantity: comparing evaluation methods in a usability-focused software architecture modification task,"A controlled experiment was performed to assess the usefulness of portions of a usability-supporting architectural pattern (USAP) in modifying the design of software architectures to support a specific usability concern. Results showed that participants using a complete USAP produced modified designs of significantly higher quality than participants using only a usability scenario. Comparison of solution quality ratings with a quantitative measure of responsibilities considered in the solution showed positive correlation between the measures. Implications for software development are that usability concerns can be included at architecture design time, and that USAPs can significantly help software architects to produce better designs to address usability concerns. Implications for empirical software engineering are that validated quantitative measures of software architecture quality may potentially be substituted for costly and often elusive expert assessment.",2005,0, 1480,Determining how much software assurance is enough? A value-based approach,"A classical problem facing many software projects is how to determine when to stop testing and release the product for use. On the one hand, we have found that risk analysis helps to address such ""how much is enough?"" questions, by balancing the risk exposure of doing too little with the risk exposure of doing too much. In some cases, it is difficult to quantify the relative probabilities and sizes of loss in order to provide practical approaches for determining a risk-balanced ""sweet spot"" operating point. However, we have found some particular project situations in which tradeoff analysis helps to address such questions. In this paper, we provide a quantitative approach based on the COCOMO II cost estimation model and the COQUALMO quality estimation model. We also provide examples of its use under the differing value profiles characterizing early startups, routine business operations, and high-finance operations in a marketplace competition situation. We also show how the model and approach can assess the relative payoff of value-based testing compared to value-neutral testing based on some empirical results.
Furthermore, we propose a way to perform cost/schedule/reliability tradeoff analysis using COCOMO II to determine the appropriate software assurance level in order to finish the project on time or within budget.",2005,0, 1481,OBDD-based evaluation of reliability and importance measures for multistate systems subject to imperfect fault coverage,"Algorithms for evaluating the reliability of a complex system such as a multistate fault-tolerant computer system have become more important. They are designed to obtain the complete results quickly and accurately even when there exist a number of dependencies such as shared loads (reconfiguration), degradation, and common-cause failures. This paper presents an efficient method based on ordered binary decision diagram (OBDD) for evaluating the multistate system reliability and the Griffith's importance measures which can be regarded as the importance of a system-component state of a multistate system subject to imperfect fault-coverage with various performance requirements. This method combined with the conditional probability methods can handle the dependencies among the combinatorial performance requirements of system modules and find solutions for multistate imperfect coverage model. The main advantage of the method is that its time complexity is equivalent to that of the methods for perfect coverage model and it is very helpful for the optimal design of a multistate fault-tolerant system.",2005,0, 1482,Refactoring the aspectizable interfaces: an empirical assessment,"Aspect oriented programming aims at addressing the problem of the crosscutting concerns, i.e., those functionalities that are scattered among several modules in a given system. Aspects can be defined to modularize such concerns. In this work, we focus on a specific kind of crosscutting concerns, the scattered implementation of methods declared by interfaces that do not belong to the principal decomposition. We call such interfaces aspectizable. All the aspectizable interfaces identified within a large number of classes from the Java Standard Library and from three Java applications have been automatically migrated to aspects. To assess the effects of the migration on the internal and external quality attributes of these systems, we collected a set of metrics and we conducted an empirical study, in which some maintenance tasks were executed on the two alternative versions (with and without aspects) of the same system. In this paper, we report the results of such a comparison.",2005,0, 1483,Checking inside the black box: regression testing by comparing value spectra,"Comparing behaviors of program versions has become an important task in software maintenance and regression testing. Black-box program outputs have been used to characterize program behaviors and they are compared over program versions in traditional regression testing. Program spectra have recently been proposed to characterize a program's behavior inside the black box. Comparing program spectra of program versions offers insights into the internal behavioral differences between versions. In this paper, we present a new class of program spectra, value spectra, that enriches the existing program spectra family. We compare the value spectra of a program's old version and new version to detect internal behavioral deviations in the new version. We use a deviation-propagation call tree to present the deviation details. 
Based on the deviation-propagation call tree, we propose two heuristics to locate deviation roots, which are program locations that trigger the behavioral deviations. We also use path spectra (previously proposed program spectra) to approximate the program states in value spectra. We then similarly compare path spectra to detect behavioral deviations and locate deviation roots in the new version. We have conducted an experiment on eight C programs to evaluate our spectra-comparison approach. The results show that both value-spectra-comparison and path-spectra-comparison approaches can effectively expose program behavioral differences between program versions even when their program outputs are the same, and our value-spectra-comparison approach reports deviation roots with high accuracy for most programs.",2005,0, 1484,Studying the fault-detection effectiveness of GUI test cases for rapidly evolving software,"Software is increasingly being developed/maintained by multiple, often geographically distributed developers working concurrently. Consequently, rapid-feedback-based quality assurance mechanisms such as daily builds and smoke regression tests, which help to detect and eliminate defects early during software development and maintenance, have become important. This paper addresses a major weakness of current smoke regression testing techniques, i.e., their inability to automatically (re)test graphical user interfaces (GUIs). Several contributions are made to the area of GUI smoke testing. First, the requirements for GUI smoke testing are identified and a GUI smoke test is formally defined as a specialized sequence of events. Second, a GUI smoke regression testing process called daily automated regression tester (DART) that automates GUI smoke testing is presented. Third, the interplay between several characteristics of GUI smoke test suites including their size, fault detection ability, and test oracles is empirically studied. The results show that: 1) the entire smoke testing process is feasible in terms of execution time, storage space, and manual effort, 2) smoke tests cannot cover certain parts of the application code, 3) having comprehensive test oracles may make up for not having long smoke test cases, and 4) using certain oracles can make up for not having large smoke test suites.",2005,0, 1485,Empirical validation of object-oriented metrics on open source software for fault prediction,"Open source software systems are becoming increasingly important these days. Many companies are investing in open source projects and lots of them are also using such software in their own work. But, because open source software is often developed with a different management style than the industrial ones, the quality and reliability of the code needs to be studied. Hence, the characteristics of the source code of these projects need to be measured to obtain more information about it. This paper describes how we calculated the object-oriented metrics given by Chidamber and Kemerer to illustrate how fault-proneness detection of the source code of the open source Web and e-mail suite called Mozilla can be carried out. We checked the values obtained against the number of bugs found in its bug database - called Bugzilla - using regression and machine learning methods to validate the usefulness of these metrics for fault-proneness prediction. 
We also compared the metrics of several versions of Mozilla to see how the predicted fault-proneness of the software system changed during its development cycle.",2005,0, 1486,A predictive QoS control strategy for wireless sensor networks,"The number of active sensors in a wireless sensor network has been proposed as a measure, albeit limited, for quality of service (QoS) for it dictates the spatial resolution of the sensed parameters. In very large sensor network applications, the number of sensor nodes deployed may exceed the number required to provide the desired resolution. Herein we propose a method, dubbed predictive QoS control (PQC), to manage the number of active sensors in such an over-deployed network. The strategy is shown to obtain near lifetime and variance performance in comparison to a Bernoulli benchmark, with the added benefit of not requiring the network to know the total number of sensors available. This benefit is especially relevant in networks where sensors are prone to failure due to not only energy exhaustion but also environmental factors and/or those networks where nodes are replenished over time. The method also has advantages in that only transmitting sensors need to listen for QoS control information and thus enabling inactive sensors to operate at extremely low power levels",2005,0, 1487,A software-based concurrent error detection technique for power PC processor-based embedded systems,"This paper presents a behavior-based error detection technique called control flow checking using branch trace exceptions for powerPC processors family (CFCBTE). This technique is based on the branch trace exception feature available in the powerPC processors family for debugging purposes. This technique traces the target addresses of program branches at run-time and compares them with reference target addresses to detect possible violations caused by transient faults. The reference target addresses are derived by a preprocessor from the source program. The proposed technique is experimentally evaluated on a 32-bit powerPC microcontroller using software implemented fault injection (SWIFI). The results show that this technique detects about 91% of the injected control flow errors. The memory overhead is 39.16% on average, and the performance overhead varies between 110% and 304% depending on the workload used. This technique does not modify the program source code.",2005,0, 1488,A model of soft error effects in generic IP processors,"When designing reliability-aware digital circuits, either hardware or software techniques may be adopted to provide a certain degree of failure detection/tolerance, caused by either hardware faults or soft-errors. These techniques are quite well established when working at a low abstraction level, whereas are currently under investigation when moving to higher abstraction levels, in order to cope with the increasing complexity of the systems being designed. This paper presents a model of soft error effects to be adopted when defining software-only techniques to achieve fault detection capabilities. The work identifies on a generic IP processor the misbehaviors caused by soft errors, classifies and analyzes them with respect to the possibility of detecting them by means of previously published approaches. 
An experimental validation of the proposed model is carried out on the Leon2 processor.",2005,0, 1489,Multiple transient faults in logic: an issue for next generation ICs?,"In this paper, we first evaluate whether or not a multiple transient fault (multiple TF) generated by the hit of a single cosmic ray neutron can give rise to a bidirectional error at the circuit output (that is an error in which all erroneous bits are 1s rather than 0s, or vice versa, within the same word, but not both). By means of electrical level simulations, we show that this can be the case. Then, we present a software tool that we have developed in order to evaluate the likelihood of occurrence of such bidirectional errors for very deep submicron (VDSM) ICs. The application of this tool to benchmark circuits has proven that such a probability cannot be neglected for several benchmark circuits. Finally, we evaluate the behavior of conventional self-checking circuits (generally designed accounting only for single TFs) with respect to such events. We show that the modifications generally introduced to their functional blocks in order to avoid output bidirectional errors due to single TFs (as required when an AUED code is implemented) can also significantly reduce (by up to 40%) the probability of bidirectional errors because of multiple TFs.",2005,0, 1490,An integrated approach for increasing the soft-error detection capabilities in SoCs processors,"Software implemented hardware fault tolerance (SIHFT) techniques are able to detect most of the transient and permanent faults during the usual system operations. However, these techniques are not capable of detecting some transient faults affecting processor memory elements such as state registers inside the processor control unit, or temporary registers inside the arithmetic and logic unit. In this paper, we propose an integrated (hardware and software) approach to increase the fault detection capabilities of software techniques by introducing a limited hardware redundancy. Experimental results are reported showing the effectiveness of the proposed approach in covering soft-errors affecting the processor memory elements and escaping purely software approaches.",2005,0, 1491,On the transformation of manufacturing test sets into on-line test sets for microprocessors,"In software-based self-test (SBST), a microprocessor executes a set of test programs devised for detecting the highest possible percentage of faults. The main advantages of this approach are its high defect fault coverage (being performed at-speed) and the reduced cost (since it does not require any change in the processor hardware). SBST can also be used for online test of a microprocessor-based system. However, some additional constraints exist in this case (e.g. in terms of test length and duration, as well as intrusiveness). This paper addresses the issue of automatically transforming a test set devised for manufacturing test into a test set suitable for online test. Experimental results are reported on an Intel 8051 microcontroller.",2005,0, 1492,"Analyzing the yield of ExScal, a large-scale wireless sensor network experiment","Recent experiments have taken steps towards realizing the vision of extremely large wireless sensor networks, the largest of these being ExScal, in which we deployed about 1200 nodes over a 1.3 km by 300 m open area.
Such experiments remain especially challenging because of: (a) prior observations of failure of sensor network protocols to scale, due to network faults and their spatial and temporal variability, (b) complexity of protocol interaction, (c) lack of sufficient data about faults and variability, even at smaller scales, and (d) current inadequacy of simulation and analytical tools to predict sensor network protocol behavior. In this paper, we present detailed data about faults, both anticipated and unanticipated, in ExScal. We also evaluate the impact of these faults on ExScal as well as the design principles that enabled it to satisfy its application requirements despite these faults. We describe the important lessons learnt from the ExScal experiment and suggest services and tools as a further aid to future large scale network deployments.",2005,0, 1493,Camera-marker alignment framework and comparison with hand-eye calibration for augmented reality applications,"An integral part of every augmented reality system is the calibration between camera and camera-mounted tracking markers. Accuracy and robustness of the AR overlay process is greatly influenced by the quality of this step. In order to meet the very high precision requirements of medical skill training applications, we have set up a calibration environment based on direct sensing of LED markers. A simulation framework has been developed to predict and study the achievable accuracy of the backprojection needed for the scene augmentation process. We demonstrate that the simulation is in good agreement with experimental results. Even if a slight improvement of the precision has been observed compared to well-known hand-eye calibration methods, the subpixel accuracy required by our application cannot be achieved even when using commercial tracking systems providing marker positions within very low error limits.",2005,0, 1494,Safety analysis of software product lines using state-based modeling,"The analysis and management of variations (such as optional features) are central to the development of safety-critical, software product lines. However, the difficulty of managing variations, and the potential interactions among them, across an entire product line currently hinders safety analysis in such systems. The work described here contributes to a solution by integrating safety analysis of a product line with model-based development. This approach provides a structured way to construct a state-based model of a product line having significant, safety-related variations. The process described here uses and extends previous work on product-line software fault tree analysis to explore hazard-prone variation points. The process then uses scenario-guided executions to exercise the state model over the variations as a means of validating the product-line safety properties. Using an available tool, relationships between behavioral variations and potentially hazardous states are systematically explored and mitigation steps are identified. The paper uses a product line of embedded medical devices to demonstrate and evaluate the process and results",2005,0, 1495,Error propagation in the reliability analysis of component based systems,"Component based development is gaining popularity in the software engineering community. The reliability of components affects the reliability of the system. Different models and theories have been developed to estimate system reliability given the information about system architecture and the quality of the components. 
Almost always in these models a key attribute of component-based systems, the error propagation between the components, is overlooked and not taken into account in the reliability prediction. We extend our previous work on Bayesian reliability prediction of component based systems by introducing the error propagation probability into the model. We demonstrate the impact of the error propagation in a case study of an automated personnel access control system. We conclude that error propagation may have a significant impact on the system reliability prediction and, therefore, future architecture-based models should not ignore it",2005,0, 1496,Study of the impact of hardware fault on software reliability,"As software plays increasingly important roles in modern society, reliable software becomes desirable for all stakeholders. One of the root causes of software failure is the failure of the computer hardware platform on which the software resides. Traditionally, fault injection has been utilized to study the impact of these hardware failures. One issue raised with respect to the use of fault injection is the lack of prior knowledge on the faults injected, and the fact that, as a consequence, the failures observed may not represent actual operational failures. This paper proposes a simulation-based approach to explore the distribution of hardware failures caused by three primary failure mechanisms intrinsic to semiconductor devices. A dynamic failure probability for each hardware unit is calculated. This method is applied to an example Z80 system and two software segments. The results lead to the conclusion that the hardware failure profile is location related, time dependent, and software-specific",2005,0, 1497,Providing test quality feedback using static source code and automatic test suite metrics,"A classic question in software development is ""How much testing is enough?"" Aside from dynamic coverage-based metrics, there are few measures that can be used to provide guidance on the quality of an automatic test suite as development proceeds. This paper utilizes the software testing and reliability early warning (STREW) static metric suite to provide a developer with indications of changes and additions to their automated unit test suite and code for added confidence that product quality will be high. Retrospective case studies to assess the utility of using the STREW metrics as a feedback mechanism were performed in academic, open source and industrial environments. The results indicate at statistically significant levels the ability of the STREW metrics to provide feedback on important attributes of an automatic test suite and corresponding code",2005,0, 1498,Assessing the crash-failure assumption of group communication protocols,"Designing and correctly implementing group communication systems (GCSs) is notoriously difficult. Assuming that processes fail only by crashing provides a powerful means to simplify the theoretical development of these systems. When making this assumption, however, one should not forget that clean crash failures provide only a coarse approximation of the effects that errors can have in distributed systems. Ignoring such a discrepancy can lead to complex GCS-based applications that pay a large price in terms of performance overhead yet fail to deliver the promised level of dependability.
This paper provides a thorough study of error effects in real systems by demonstrating an error-injection-driven design methodology, where error injection is integrated in the core steps of the design process of a robust fault-tolerant system. The methodology is demonstrated for the Fortika toolkit, a Java-based GCS. Error injection enables us to uncover subtle reliability bottlenecks both in the design of Fortika and in the implementation of Java. Based on the obtained insights, we enhance Fortika's design to reduce the identified bottlenecks. Finally, a comparison of the results obtained for Fortika with the results obtained for the OCAML-based Ensemble system in a previous work allows us to investigate the reliability implications that the choice of the development platform (Java versus OCAML) can have",2005,0, 1499,Forecasting field defect rates using a combined time-based and metrics-based approach: a case study of OpenBSD,"Open source software systems are critical infrastructure for many applications; however, little has been precisely measured about their quality. Forecasting the field defect-occurrence rate over the entire lifespan of a release before deployment for open source software systems may enable informed decision-making. In this paper, we present an empirical case study of ten releases of OpenBSD. We use the novel approach of predicting model parameters of software reliability growth models (SRGMs) using metrics-based modeling methods. We consider three SRGMs, seven metrics-based prediction methods, and two different sets of predictors. Our results show that accurate field defect-occurrence rate forecasts are possible for OpenBSD, as measured by the Theil forecasting statistic. We identify the SRGM that produces the most accurate forecasts and subjectively determine the preferred metrics-based prediction method and set of predictors. Our findings are steps towards managing the risks associated with field defects",2005,0, 1500,A novel method for early software quality prediction based on support vector machine,"The software development process imposes major impacts on the quality of software at every development stage; therefore, a common goal of each software development phase concerns how to improve software quality. Software quality prediction thus aims to evaluate software quality level periodically and to indicate software quality problems early. In this paper, we propose a novel technique to predict software quality by adopting support vector machine (SVM) in the classification of software modules based on complexity metrics. Because only limited information of software complexity metrics is available in the early software life cycle, ordinary software quality models cannot make good predictions generally. It is well known that SVM generalizes well even in high dimensional spaces under small training sample conditions. We consequently propose an SVM-based software classification model, whose characteristic is appropriate for early software quality predictions when only a small number of sample data are available. Experimental results with medical imaging system software metrics data show that our SVM prediction model achieves better software quality prediction than some commonly used software quality prediction models",2005,1, 1501,A light-weight proactive software change impact analysis using use case maps,"Changing customer needs and technology are driving factors influencing software evolution.
Consequently, there is a need to assess the impact of these changes on existing software systems. For many users, technology is no longer the main problem, and it is likely to become a progressively smaller problem as standard solutions are provided by technology vendors. Instead, research will focus on the interface of the software with business practices. There exists a need to raise the level of abstraction further by analyzing and predicting the impact of changes at the specification level. In this research, we present a lightweight approach to identify the impact of requirement changes at the specification level. We use specification information included in use case maps to analyze the potential impact of requirement changes on a system.",2005,0, 1502,Empirical assessment of machine learning based software defect prediction techniques,"The wide variety of real-time software systems, including telecontrol/telepresence systems, robotic systems, and mission planning systems, can entail dynamic code synthesis based on runtime mission-specific requirements and operating conditions. This necessitates the need for dynamic dependability assessment to ensure that these systems perform as specified and not fail in catastrophic ways. One approach in achieving this is to dynamically assess the modules in the synthesized code using software defect prediction techniques. Statistical models, such as stepwise multi-linear regression models and multivariate models, and machine learning approaches, such as artificial neural networks, instance-based reasoning, Bayesian-belief networks, decision trees, and rule inductions, have been investigated for predicting software quality. However, there is still no consensus about the best predictor model for software defects. In this paper, we evaluate different predictor models on four different real-time software defect data sets. The results show that a combination of IR and instance-based learning along with the consistency-based subset evaluation technique provides a relatively better consistency in accuracy prediction compared to other models. The results also show that ""size"" and ""complexity"" metrics are not sufficient for accurately predicting real-time software defects.",2005,1, 1503,Towards self-healing systems via dependable architecture and reflective middleware,"Self-healing systems focus on how to reduce the complexity and cost of the management of dependability policies and mechanisms without human intervention. This position paper proposes a systematic approach to self-healing systems via dependable architecture and reflective middleware. Firstly, the differences between self-healing systems and traditional dependable systems are illustrated via the construction of a dependable computing model. Secondly, reflective middleware is incorporated into the dependable computing model for investigating the feasibility and benefits of implementing self-healing systems by reflective middleware. Thirdly, dependable architectures are introduced for providing application specific knowledge related to self-healing. Fourthly, an architecture based deployment tool is implemented for deploying dependable architectures into heterogeneous and distributed environments.
Finally, an architecture based reflective J2EE application server, called PKUAS, is implemented for interpreting and enforcing dependable architectures at runtime, that is, discovering or predicting and recovering or preventing failures automatically under the guidance of dependable architectures.",2005,0, 1504,Hierarchical behavior organization,"In most behavior-based approaches, implementing a broad set of different behavioral skills and coordinating them to achieve coherent complex behavior is an error-prone and very tedious task. Concepts for organizing reactive behavior in a hierarchical manner are rarely found in behavior-based approaches, and there is no widely accepted approach for creating such behavior hierarchies. Most applications of behavior-based concepts use only a few behaviors and do not seem to scale well. Reuse of behaviors for different application scenarios or even on different robots is very rare, and the integration of behavior-based approaches with planning is unsolved. This paper discusses the design, implementation, and performance of a behavior framework that addresses some of these issues within the context of behavior-based and hybrid robot control architectures. The approach presents a step towards more systematic software engineering of behavior-based robot systems.",2005,0, 1505,Unraveling ancient mysteries: reimagining the past using evolutionary computation in a complex gaming environment,"In this paper, we use principles from game theory, computer gaming, and evolutionary computation to produce a framework for investigating one of the great mysteries of the ancient Americas: why did the pre-Hispanic Pueblo (Anasazi) peoples leave large portions of their territories in the late A.D. 1200s? The gaming concept is overlaid on a large-scale agent-based simulation of the Anasazi. Agents in this game use a cultural algorithm framework to modify their finite-state automata (FSA) controllers following the work of Fogel (1966). In the game, there can be two kinds of active agents: scripted and unscripted. Unscripted agents attempt to maximize their survivability, whereas scripted agents can be used to test the impact that various pure and compound strategies for cooperation and defection have on the social structures produced by the overall system. The goal of our experiments here is to determine the extent to which cooperation and competition need to be present among the agent households in order to produce a population structure and spatial distribution similar to what has been observed archaeologically. We do this by embedding a ""trust in networks"" game within the simulation. In this game, agents can choose from three pure strategies: defect, trust, and inspect. This game does not have a pure Nash equilibrium but instead has a mixed strategy Nash equilibrium such that a certain proportion of the population uses each at every time step, where the proportion relates to the quality of the signal used by the inspectors to predict defection. We use the cultural algorithm to help us determine what the mix of strategies might have been like in the prehistoric population. The simulation results indeed suggest that a mixed strategy consisting of defectors, inspectors, and trustors was necessary to produce results compatible with the archaeological data.
It is suggested that the presence of defectors derives from the unreliability of the signal, which increases under drought conditions and produced increased stress on Anasazi communities and may have contributed to their departure.",2005,0, 1506,Technology of detecting GIC in power grids & its monitoring device,"Magnetic storms induce geomagnetically induced currents (GIC) in transmission lines. GIC occurs at random, with frequencies between 0.001 Hz and 0.1 Hz, and lasts from several minutes to several hours. Based on an elaboration of the mechanism and characteristics of GIC in grids and its influence on China's power grids, the article conducts research on GIC monitoring technology and investigates the GIC data sampling method, the measurement algorithm and a new monitoring device. The simulated test shows that the monitoring device can effectively measure GIC, which is a random quasi-direct-current signal, and that it has the advantages of handling little data and needing little memory space, etc",2005,0, 1507,Experiences of PD Diagnosis on MV Cables using Oscillating Voltages (OWTS),"Detecting, locating and evaluating partial discharges (PD) in the insulating material, terminations and joints provides the opportunity for a quality control after installation and preventive detection of arising service interruption. A sophisticated evaluation is necessary between PD in several insulating materials and also in different types of terminations and joints. For a most precise evaluation of the degree and risk caused by PD it is suggested to use a test voltage shape that is preferably like the same under service conditions. Only under these requirements the typical PD parameters like inception and extinction voltage, PD level and PD pattern correspond to significant operational values. On the other hand the stress on the insulation should be limited during the diagnosis to not create irreversible damage and thereby worsening the condition of the test object. The paper introduces an oscillating wave test system (OWTS), which meets these mentioned demands well. The design of the system, its functionality and especially the operating software are made for convenient field application. Field data and experience reports will be presented and discussed. These field data also serve as a good guide for the level of danger to the different insulating systems due to partial discharges",2005,0, 1508,The GNAM monitoring system and the OHP histogram presenter for ATLAS,"ATLAS is one of the four experiments under construction along the Large Hadron Collider at CERN. During the 2004 combined test beam, the GNAM monitoring system and the OHP histogram presenter were widely used to assess both the hardware setup and the data quality. GNAM is a modular framework where detector specific code can be easily plugged in to obtain online low-level monitoring applications. It is based on the monitoring tools provided by the ATLAS trigger and data acquisition (TDAQ) software. OHP is a histogram presenter, capable of performing both as a configurable display and as a browser.
From OHP, requests to execute simple interactive operations (such as reset, rebin or update) on histograms can be sent to GNAM",2005,0, 1509,FUMS™ artificial intelligence technologies including fuzzy logic for automatic decision making,"Advances in sensing technologies and aircraft data acquisition systems have resulted in generating huge aircraft data sets, which can potentially offer significant improvements in aircraft management, affordability, availability, airworthiness and performance (MAAAP). In order to realise these potential benefits, there is a growing need for automatically trending/mining these data and fusing the data into information and decisions that can lead to MAAAP improvements. Smiths has worked closely with the UK Ministry of Defence (MOD) to evolve Flight and Usage Management Software (FUMS™) to address this need. FUMS™ provides a single fusion and decision support platform for helicopters, aeroplanes and engines. FUMS™ tools have operated on existing aircraft data to provide an affordable framework for developing and verifying diagnostic, prognostic and life management approaches. Whilst FUMS™ provides automatic analysis and trend capabilities, it fuses the condition indicators (CIs) generated by aircraft health and usage monitoring systems (HUMS) into decisions that can increase fault detection rates and reduce false alarm rates. This paper reports on a number of decision-making processes including logic, Bayesian belief networks and fuzzy logic. The investigation presented in this paper has indicated that decision-making based on logic and fuzzy logic can offer verifiable techniques. The paper also shows how Smiths has successfully applied fuzzy logic to the Chinook HUMS CIs. Fuzzy logic has also been applied to detect sensor problems causing long-term data corruptions.",2005,0, 1510,Metrics for ontologies,"The success of the semantic Web has been linked with the use of ontologies on the semantic Web. Given the important role of ontologies on the semantic Web, the need for domain ontology development and management is becoming more and more important to most kinds of knowledge-driven applications. More and more these ontologies are being used for information exchange. Information exchange technology should foster knowledge exchange by providing tools to automatically assess the characteristics and quality of an ontology. The scarcity of theoretically and empirically validated measures for ontologies has motivated our investigation. From this investigation a suite of quality metrics has been developed and implemented as a plug-in to the ontology editor Protege so that any ontology specified in a standard Web ontology language such as RDFS or OWL may have a quality assessment analysis performed.",2005,0, 1511,HUGE: an integrated system for human understandable granule extraction,"An integrated system for the extraction of interpretable information granules from data is presented. The system, called HUGE (human understandable granule extraction), is designed as a highly reusable software framework that can embody several techniques for information granulation within a homogeneous user interface. HUGE includes a set of tools that enable both qualitative and quantitative evaluation of the results of a granulation technique. Visualization tools allow the user to assess graphically the properties of extracted information granules. Evaluation tools give the user the possibility to estimate numerically the quality of the derived granules.
The usefulness of HUGE in supporting the user in a granulation process is shown through a case study concerning the design of an interpretable fuzzy inference system for predicting automobile fuel consumption.",2005,0, 1512,Outage probability lower bound in CDMA systems with lognormal-shadowed multipath Rayleigh-faded and noise-corrupted links,"The outage probability is one of the common metrics used in performance evaluation of cellular networks. In this paper, we derive a lower bound on the outage probability in CDMA systems where the communication links are disturbed by co-channel interference as well as additive noise. Each link is assumed to be faded according to both a lognormal distribution and a multipath Rayleigh distribution where the former represents the effect of shadowing while the latter represents the effect of short-term fading. The obtained lower bound is given in terms of a single-fold integral that can be easily computed using any modern software package. We present numerical results for the derived bound and compare them with the outage probability obtained by means of Monte Carlo simulations. Based on our results, we conclude that the proposed bound is relatively tight in a wide range of situations, particularly in the case of a small to moderate number of interferers and small to moderate shadowing standard deviation values.",2005,0, 1513,Delay-Centric Link Quality Aware OLSR,"This paper introduces a delay-centric link quality aware routing protocol, LQOLSR (link quality aware optimized link state routing). The LQOLSR protocol finds fast and high quality routes in mobile ad hoc networks (MANET). LQOLSR predicts a packet transmission delay according to multiple transmission rates in IEEE 802.11 and selects the fastest route from source to destination by estimating relative transmission delay between nodes. We implement the LQOLSR protocol by modifying the basic OLSR (optimized link state routing) protocol. We evaluate and analyze the performance in a real testbed established in an office building",2005,0, 1514,Filtering of shrew DDoS attacks in frequency domain,"The shrew distributed denial of service (DDoS) attacks are periodic, bursty, and stealthy in nature. They are also known as reduction of quality (RoQ) attacks. Such attacks could be even more detrimental than the widely known flooding DDoS attacks because they damage the victim servers for a long time without being noticed, thereby denying new visitors to the victim servers, which are mostly e-commerce sites. Thus, in order to minimize the huge monetary losses, there is a pressing need to effectively detect such attacks in real-time. Unfortunately, effective detection of shrew attacks remains an open problem. In this paper, we meet this challenge by proposing a new signal processing approach to identifying and detecting the attacks by examining the frequency-domain characteristics of incoming traffic flows to a server. A major strength of our proposed technique is that its detection time is less than a few seconds. Furthermore, the technique entails simple software or hardware implementations, making it easily deployable in a real-life network environment",2005,0, 1515,A novel tuneable low-intensity adversarial attack,"Currently, denial of service (DoS) attacks remain amongst the most critical threats to Internet applications. The goal of the attacker in a DoS attack is to overwhelm a shared resource by sending a large amount of traffic, thus rendering the resource unavailable to other legitimate users.
In this paper, we expose a novel contrasting category of attacks that is aimed at exploiting the adaptive behavior exhibited by several network and system protocols such as TCP. The goal of the attacker in this case is not to entirely disable the service but to inflict sufficient degradation to the service quality experienced by legitimate users. An important property of these attacks is the fact that the desired adversarial impact can be achieved by using a non-suspicious low-rate attack stream, which can easily evade detection. Further, by tuning various parameters of the attack traffic stream, the attacker can inflict varying degrees of service degradation and at the same time make it extremely difficult for the victim to detect attacker presence. Our simulation based experiments validate our observations and demonstrate that an attacker can significantly degrade the performance of the TCP flows by inducing low-rate attack traffic which is co-ordinated to exploit the congestion control behavior of TCP",2005,0, 1516,Pedagogic data as a basis for Web service fault models,This paper outlines our method for deriving fault models for use with our WS-FIT tool that can be used to assess the dependability of SOA. Since one of the major issues with extracting these heuristic rules and fault models is the availability of software systems we examine the use of systems constructed through pedagogic activities to provide one source of information.,2005,0, 1517,Numerical software quality control in object oriented development,"This paper proposes a new method to predict the number of the remaining bugs at the delivery inspection applied to every iteration of OOD, object oriented development. Our method consists of two parts. The first one estimates the number of the remaining bugs by applying the Gompertz curve. The second one uses the interval estimation called OOQP, object oriented quality probe. The basic idea of OOQP is to randomly extract a relatively small number of test cases, usually 10 to 20% of the entire test cases, and to execute them in the actual operation environment. From the test result of OOQP, we can efficiently predict the number of the remaining bugs by the interval estimation. The premier problem of OOQP is that OOD is forced to use the system design specification document, whose contents, like UML, tend to be ambiguous. Our estimation method works well at a matrix-typed organization where a QA team and a development team collaboratively work together to improve the software quality.",2005,0, 1518,Web service group testing with windowing mechanisms,"ASTRAR provides a framework for testing Web services (WS) using the group testing technique. This paper extends the basic two-phase testing process and introduces the windowing mechanism to further improve testing efficiency. Rather than testing a large number of WS simultaneously, WS are divided into subsets called windows and testing is exercised window by window. Testing results are analyzed for different strategies such as using all of the historical data, using the most recent windows, and using the current window only. Based on the results, test cases are ranked according to their potency to detect faults; and oracles and the confidence level of each oracle are established for individual test cases at runtime. In addition, different strategies are proposed to determine the optimal window size at runtime.
By incorporating the windowing mechanism, the two-phase training and volume testing process becomes a continuous learning process and the basic group testing process becomes more adaptive to a dynamically changing environment.",2005,0, 1519,How software can help or hinder human decision making (and vice-versa),"Summary form only given. Developments in computing offer experts in many fields specialised support for decision making under uncertainty. However, the impact of these technologies remains controversial. In particular, it is not clear how advice of variable quality from a computer may affect human decision makers. Here the author reviews research showing strikingly diverse effects of computer support on expert decision-making. Decision support can both systematically improve or damage the performance of decision makers in subtle ways depending on the decision maker's skills, variation in the difficulty of individual decisions and the reliability of advice from the support tool. In clinical trials decision support technologies are often assessed in terms of their average effects. However, this methodology overlooks the possibility of differential effects on decisions of varying difficulty, on decision makers of varying competence, of computer advice of varying accuracy and of possible interactions among these variables. Research that has teased apart aggregated clinical trial data to investigate these possibilities has discovered that computer support was less useful for - and sometimes hindered - professional experts who were relatively good at difficult decisions without support; at the same time the same computer support tool helped those experts who were less good at relatively easy decisions without support. Moreover, inappropriate advice from the support tool could bias decision makers' decisions and, predictably, depending on the type of case, improve or harm the decisions.",2005,0, 1520,Predictors of customer perceived software quality,"Predicting software quality as perceived by a customer may allow an organization to adjust deployment to meet the quality expectations of its customers, to allocate the appropriate amount of maintenance resources, and to direct quality improvement efforts to maximize the return on investment. However, customer perceived quality may be affected not simply by the software content and the development process, but also by a number of other factors including deployment issues, amount of usage, software platform, and hardware configurations. We predict customer perceived quality as measured by various service interactions, including software defect reports, requests for assistance, and field technician dispatches using the aforementioned and other factors for a large telecommunications software system. We employ the non-intrusive data gathering technique of using existing data captured in automated project monitoring and tracking systems as well as customer support and tracking systems. We find that the effects of deployment schedule, hardware configurations, and software platform can increase the probability of observing a software failure by more than 20 times. Furthermore, we find that the factors affect all quality measures in a similar fashion.
Our approach can be applied at other organizations, and we suggest methods to independently validate and replicate our results.",2005,0, 1521,A quality-driven systematic approach for architecting distributed software applications,"Architecting distributed software applications is a complex design activity. It involves making decisions about a number of inter-dependent design choices that relate to a range of design concerns. Each decision requires selecting among a number of alternatives, each of which impacts differently on various quality attributes. Additionally, there are usually a number of stakeholders participating in the decision-making process with different, often conflicting, quality goals, and project constraints, such as cost and schedule. To facilitate the architectural design process, we propose a quantitative quality-driven approach that attempts to find the best possible fit between conflicting stakeholders' quality goals, competing architectural concerns, and project constraints. The approach uses optimization techniques to recommend the optimal candidate architecture. Applicability of the proposed approach is assessed using a real system.",2005,0, 1522,Use of relative code churn measures to predict system defect density,"Software systems evolve over time due to changes in requirements, optimization of code, fixes for security and reliability bugs, etc. Code churn, which measures the changes made to a component over a period of time, quantifies the extent of this change. We present a technique for early prediction of system defect density using a set of relative code churn measures that relate the amount of churn to other variables such as component size and the temporal extent of churn. Using statistical regression models, we show that while absolute measures of code churn are poor predictors of defect density, our set of relative measures of code churn is highly predictive of defect density. A case study performed on Windows Server 2003 indicates the validity of the relative code churn measures as early indicators of system defect density. Furthermore, our code churn metric suite is able to discriminate between fault and not fault-prone binaries with an accuracy of 89.0 percent.",2005,0, 1523,Main effects screening: a distributed continuous quality assurance process for monitoring performance degradation in evolving software systems,"Developers of highly configurable performance-intensive software systems often use a type of in-house performance-oriented ""regression testing"" to ensure that their modifications have not adversely affected their software's performance across its large configuration space. Unfortunately, time and resource constraints often limit developers to in-house testing of a small number of configurations and unreliable extrapolation from these results to the entire configuration space, which allows many performance bottlenecks and sources of QoS degradation to escape detection until systems are fielded. To improve performance assessment of evolving systems across large configuration spaces, we have developed a distributed continuous quality assurance (DCQA) process called main effects screening that uses in-the-field resources to execute formally designed experiments to help reduce the configuration space, thereby allowing developers to perform more targeted in-house QA. We have evaluated this process via several feasibility studies on several large, widely-used performance-intensive software systems.
Our results indicate that main effects screening can detect key sources of performance degradation in large-scale systems with significantly less effort than conventional techniques.",2005,0, 1524,Is mutation an appropriate tool for testing experiments? [software testing],"The empirical assessment of test techniques plays an important role in software testing research. One common practice is to instrument faults, either manually or by using mutation operators. The latter allows the systematic, repeatable seeding of large numbers of faults; however, we do not know whether empirical results obtained this way lead to valid, representative conclusions. This paper investigates this important question based on a number of programs with comprehensive pools of test cases and known faults. It is concluded that, based on the data available thus far, the use of mutation operators is yielding trustworthy results (generated mutants are similar to real faults). Mutants appear however to be different from hand-seeded faults that seem to be harder to detect than real faults.",2005,0, 1525,Static analysis tools as early indicators of pre-release defect density,"During software development it is helpful to obtain early estimates of the defect density of software components. Such estimates identify fault-prone areas of code requiring further testing. We present an empirical approach for the early prediction of pre-release defect density based on the defects found using static analysis tools. The defects identified by two different static analysis tools are used to fit and predict the actual pre-release defect density for Windows Server 2003. We show that there exists a strong positive correlation between the static analysis defect density and the pre-release defect density determined by testing. Further, the predicted pre-release defect density and the actual pre-release defect density are strongly correlated at a high degree of statistical significance. Discriminant analysis shows that the results of static analysis tools can be used to separate high and low quality components with an overall classification rate of 82.91%.",2005,0, 1526,Checkers using a co-evolutionary on-line evolutionary algorithm,"The game of checkers has been well studied and many computer players exist. The vast majority of these 'software opponents' use a minimax strategy combined with an evaluation function to expand game tree for a number of moves ahead and estimate the quality of the pending moves. In this paper, an alternative approach is described where an on-line evolutionary algorithm is used to co-evolve move sets for both players in the game, playing the entire length of the game tree for each evaluation, thus avoiding the need for the minimax strategy or an evaluation function. The on-line evolutionary algorithm operates in essence as a 'directed' Monte-Carlo search process and although demonstrated on the game of checkers, could potentially be used to play games with a larger branching factor such as Go.",2005,0, 1527,The application of evolutionary computation to the analysis of the profiles of elliptical galaxies: a maximum likelihood approach,Genetic programming technique has been found to be suitable in scenarios where the formulation of models is a data driven process. Evolutionary programming provides a way of searching for parameters in a model without being prone to fall in local minima. A review of how these techniques have been applied to the analysis of elliptical galaxies is given. 
The effectiveness of a maximum likelihood based fitness function is asserted and is applied to the parameter fitting using evolutionary programming. A maximum likelihood based function is found to show consistent and significant improvement over a hit-based fitness function for modeling the profiles of elliptical galaxies. It is asserted that such a function would potentially improve the quality of model produced by symbolic regression using genetic programming.,2005,0, 1528,Design-level performance prediction of component-based applications,"Server-side component technologies such as Enterprise JavaBeans (EJBs), .NET, and CORBA are commonly used in enterprise applications that have requirements for high performance and scalability. When designing such applications, architects must select a suitable component technology platform and application architecture to provide the required performance. This is challenging as no methods or tools exist to predict application performance without building a significant prototype version for subsequent benchmarking. In this paper, we present an approach to predict the performance of component-based server-side applications during the design phase of software development. The approach constructs a quantitative performance model for a proposed application. The model requires inputs from an application-independent performance profile of the underlying component technology platform, and a design description of the application. The results from the model allow the architect to make early decisions between alternative application architectures in terms of their performance and scalability. We demonstrate the method using an EJB application and validate predictions from the model by implementing two different application architectures and measuring their performance on two different implementations of the EJB platform.",2005,0, 1529,Strategy for mutation testing using genetic algorithms,"In this paper, we propose a model to reveal faults and kill mutants using genetic algorithms. The model first instruments the source and mutant program and divides them into small units. Instead of checking the entire program, it tries to find a fault in each unit or kill each mutant unit. If any unit survives, new test data is generated using a genetic algorithm with a special fitness function. The output of each test for each unit is recorded to detect the faulty unit. In this strategy, the source program and the mutant are instrumented in such a way that the input and output behavior of each unit can be traced. A checker module is used to compare and trace the output of each unit. A complete architecture of the model is proposed in the paper",2005,0, 1530,Web engineering: a new emerging discipline,"Web engineering, an emerging new discipline, advocates a process and a systematic approach to development of high quality Web based systems. In contrast, commercial practice still remains ad-hoc. Although Web based systems have become increasingly complex, the development process is still un-engineered. """"There are very few standard methods for the Web developers to use. Hence, there is a strong need to understand and undertake Web engineering """". (Y. Deshpande and M. Gaedke, 2005) This paper is a result of our extensive survey of literature, our work and interaction with developers.
This paper gives an introductory overview of Web engineering; it assesses similarities and differences between development of traditional software and Web based systems, and reviews some of the ongoing work in this area. We discuss the need for development of a process model for Web based applications (WBA), such as is available for conventional software, with an overview of our work on development of such a process framework (R. Ahmad, et al, 2005). This paper also attempts to highlight the areas that need further study.",2005,0, 1531,Complexity signatures for system health monitoring,"The ability to assess risk in complex systems is one of the fundamental challenges facing the aerospace industry in general, and NASA in particular. First, such an ability allows for quantifiable trade-offs during the design stage of a mission. Second, it allows the monitoring of the health of the system while in operation. Because many of the difficulties in complex systems arise from the interactions among the subsystems, system health monitoring cannot solely focus on the health of those subsystems. Instead, system level signatures that encapsulate the complex system interactions are needed. In this work, we present the entropy-scale (ES) and entropy-resolution (ER) system-level signatures that are both computationally tractable and encapsulate many of the salient characteristics of a system. These signatures are based on the change of entropy as a system is observed across different resolutions and scales. We demonstrate the use of the ES and ER signatures on artificial data streams and simple dynamical systems and show that they allow the unambiguous clustering of many types of systems, and therefore are good indicators of system health. We then show how these signatures can be applied to graphical data as well as data strings by using a simple """"graph-walking"""" method. This method extracts a data stream from a graphical system representation (e.g., fault tree, software call graph) that conserves the properties of the graph. Finally we apply these signatures to analysis of software packages, and show that they provide significantly better correlation with risk markers than many standard metrics. These results indicate that proper system level signatures, coupled with detailed component-level analysis, enable the automatic detection of potentially hazardous subsystem interactions in complex systems before they lead to system deterioration or failures",2005,0, 1532,Validation and verification of prognostic and health management technologies,"Impact Technologies and the Georgia Institute of Technology are developing a Web-based software application that will provide JSF (F-35) system suppliers with a comprehensive set of PHM verification and validation (V&V) resources which will include: standards and definitions, V&V metrics for detection, diagnosis, and prognosis, access to costly seeded fault data sets and example implementations, a collaborative user forum for the exchange of information, and an automated tool for impartially evaluating the performance and effectiveness of PHM technologies. This paper presents the development of the prototype software product to illustrate the feasibility of the techniques, methodologies, and approaches needed to verify and validate PHM capabilities. A team of JSF system suppliers has been assembled to contribute, provide feedback and make recommendations to the product under development.
The approach being pursued for assessing the overall PHM system accuracy is to quantify the associated uncertainties at each of the individual levels of a PHM system, and build up the accumulated inaccuracies as information is processed through the PHM architecture",2005,0, 1533,Application of General Perception-Based QoS Model to Find Providers' Responsibilities. Case Study: User Perceived Web Service Performance.,"This paper presents a comprehensive model intended to analyze quality of service in telecommunications services and its causes. Although many works have been published in this area, both from a technical viewpoint as well as taking into consideration subjective concerns, they have not resulted in a unique methodology to assess the experienced quality. While most of the studies consider the quality of service only from a technical and end-to-end point of view, we try to analyze quality of service as a general gauge of final users' satisfaction. The proposed model allows us to estimate the quality experienced by end users, while offering detailed results regarding the responsibility of the different agents involved in the service provision. Once we overview the most significant elements of the model, an in-depth analytical study is detailed. Finally, we illustrate a practical study for the Web browsing service in order to validate the theoretical model",2005,0, 1534,Software package for equipment management learning,"Among other subjects, equipment management is to be included within a degree in industrial engineering. In particular, industrial engineering practitioners usually have to deal to some extent with decisions related to maintenance and renewal policies: how to devise, select and assess them. For this particular purpose, some mathematical models had been previously developed. A software package has been designed to facilitate the teaching and learning of both the mathematical models and the subjects themselves (evaluation and selection of maintenance and renewal policies). This paper describes the main features of this package. It also examines the major advantages derived from its use both in explanations in class and during the students' homework time on their own. Different experiences carried out with the package are presented in the Universidad Politecnica de Madrid.",2005,0, 1535,Fault analysis study using modeling and simulation tools for distribution power systems,"This article describes a fault analysis study using some of the best available simulation and modeling tools for electrical distribution power systems. Several software tools were identified and assessed in L. Nastac, et al (2005). The fault analysis was conducted with the assessed software tools using the recorded fault data from a real circuit system. The recorded fault data including the topology and the line data with more than 1000 elements were provided by Detroit Edison (DTE) Energy for validation purposes. The effects of pre-fault loading and arcing impedance on the predicted fault current values were also investigated.
Then, to ensure that the validated software tools are indeed capable of analyzing circuits with DCs, fault management and relay protection problems were developed and solved using a modified IEEE 34-bus feeder with addition of DCs.",2005,0, 1536,The improvement on the measuring precision of detecting fault composite insulators by using electric field mapping,"According to the statistics of the fault composite insulators, most of the defects take place at the high voltage (HV) end of the insulators. At present, the minimum defect length detected possibly at HV end of the insulators is about 7 cm by using electric field mapping device, which could not indicate the defects located between the last shed and the HV electrode. Therefore, it is important to improve the measuring precision that is suitable for indicating the defects less than 7 cm in inspecting the fault composite insulators based on electric field mapping device. In order to enhance the measuring precision of the device, we analyzed the electric field distribution along with an insulator by using the commercial software ANSYS. We found that a 5 cm defect can be found if we collect two to three electric field data between the two sheds. Therefore, we added a photoelectric cell array to trigger the device for collecting more data between the two sheds. The tests were conducted in our laboratory by using our new device. The results from our experiments show that the sensitivity of detecting the defects is increased and our new device can indicate the defects less than 5 cm at the HV end without grading rings.",2005,0, 1537,A query system for spatiotemporal database applications,"In recent years, an increasing number of database applications deal with continuously changing data objects (CCDOs). In these applications, the underlying data management system must support new types of spatiotemporal queries that refer to CCDOs. The expressive power of the supported query language and the query processing algorithms determine the quality and the efficiency of the query system. In contrast to traditional data objects, CCDOs change continuously. Therefore, the relation between two CCDOs may change over time. We define a motion relation as a sequence of one or more distinct topological relations. This paper surveys existing spatiotemporal query types and proposes a powerful spatiotemporal database query language by defining a complete set of motion predicates that can fill the gap between space and time. In addition, the paper discusses how to efficiently process the proposed queries by utilizing existing indexing schemes.",2005,0, 1538,A multi-agent based fault tolerance system for distributed multimedia object oriented environment: MAFTS,"This paper presents the design and implementation of the MAFTS (a multi-agent based fault-tolerance system), which is running on distributed multimedia object oriented environment. DOORAE (distributed object oriented collaboration environment) is a good example of the foundation technology for a computer-based multimedia collaborative work that allows development of required application by combining many agents composed of units of functional module when user wishes to develop a new application field. MAFTS has been designed and implemented in DOORAE environment. It is a multi-agent system that is implemented with object oriented concept. The main idea is to detect an error by using polling method. This system detects an error by polling periodically processes with relation to sessions. 
It also classifies the types of errors automatically by using learning rules. A characteristic of this system is that it restores a session using the same method by which it creates one.",2005,0, 1539,An adaptive fault tolerance for situation-aware ubiquitous computing,"Since ubiquitous applications need situation-aware middleware services and the computing environment (e.g., resources) keeps changing as the applications change, it is challenging to detect errors and recover from them in order to provide seamless services and avoid a single point of failure. This paper proposes an adaptive fault tolerance (AFT) algorithm in a situation-aware middleware framework and presents its simulation model of AFT-based agents.",2005,0, 1540,A quantitative supplement to the definition of software quality,"This paper proposes a new quantitative definition for software quality. The definition is based on the Taguchi philosophy for assessing and improving the quality of manufacturing processes. The Taguchi approach, originally developed for manufacturing processes, defines quality in terms of """"loss imparted to society """" by a product after delivery of the product to the end user. To facilitate the use of the Taguchi definition, several """"loss functions"""" have been developed. These loss functions allow quality to be quantitatively measured in monetary values (e.g. US dollars). To illustrate the application of the Taguchi definition to a software product, examples that utilize some of the loss functions are presented. The proposed definition of software quality shows good correlation to other popular qualitative and quantitative definitions for software quality.",2005,0, 1541,Design of opportunity tree framework for effective process improvement based on quantitative project performance,"Nowadays the IT industry strives to improve the software process for marketing and financial benefits. For efficient process improvement, work performance should be enhanced in line with the organization's vision by identifying weaknesses for improvement and risks with process assessment results and then mapping them in the software development environment. According to the organization's vision, plans should be developed for marketing and financial strategic objectives. For each plan, improvement strategies should be developed for each work performance unit such as quality, delivery, cycle time, and waste. Process attributes in each unit should be identified and improvement methods shall be determined for them. In order to suggest a PPM (project performing measure) model to quantitatively measure organization's project performing capability and make an optimal decision for process improvement, this paper statistically analyzes SPICE assessment results of 2,392 weaknesses for improvement by process for 49 appraisals and 476 processes which were assessed through KASPA (Korea Association of Software Process Assessors) from 1999 to 2004 and then makes SEF (SPICE experience factory). It also presents scores on project performing capability and improvement effects by level, and presents weaknesses for improvement by priority in the performance unit by level. And finally, this paper suggests an OTF (opportunity tree framework) model to show optimal process improvement strategies.",2005,0, 1542,Design of SPICE experience factory model for accumulation and utilization of process assessment experience,"With growing interest in software process improvement (SPI), many companies are introducing international process models and standards.
SPICE is the most widely used process assessment model in SPI work today. In the process of introducing and applying SPICE, practical experiences contribute to enhancing project performance. The experience helps people to make decisions under uncertainty, and to find better compromises. This paper suggests a SPICE experience factory (SEF) model to use SPICE assessment experience. For this, we collected SPICE assessment results which were conducted in Korea from 1999 to 2004. The collected data does not only contain rating information but also specifies strengths and improvement points for each assessed company and its process. To use this assessment result more efficiently, root words were derived from each result item. The root words were classified into four categories: 1) measurement, 2) work product, 3) process performance, and 4) process definition and deployment. A database was designed and constructed to store all analyzed data in the form of root words. The database was also designed to efficiently search for the information an organization needs by strength/improvement point, or by root word for each level. This paper describes the procedures of the SEF model and presents methods to utilize it. By using the proposed SEF model, even organizations which plan to undergo SPICE assessment for the first time can establish the optimal improvement strategies.",2005,0, 1543,Requirements for CBD products and process quality,"This paper presents arguments for including the properties of processes involved in various approaches to component-based software development in predicting systems properties. It discusses how processes impact on system properties and relates the issues raised to standards that already address process and product quality. Although many standards still apply, CBD changes interpretations and emphases.",2005,0, 1544,Countering trusting trust through diverse double-compiling,"An air force evaluation of Multics, and Ken Thompson's famous Turing award lecture """"reflections on trusting trust, """" showed that compilers can be subverted to insert malicious Trojan horses into critical software, including themselves. If this attack goes undetected, even complete analysis of a system's source code cannot find the malicious code that is running, and methods for detecting this particular attack are not widely known. This paper describes a practical technique, termed diverse double-compiling (DDC), that detects this attack and some compiler defects as well. Simply recompile the source code twice: once with a second (trusted) compiler, and again using the result of the first compilation. If the result is bit-for-bit identical with the untrusted binary, then the source code accurately represents the binary. This technique has been mentioned informally, but its issues and ramifications have not been identified or discussed in a peer-reviewed work, nor has a public demonstration been made. This paper describes the technique, justifies it, describes how to overcome practical challenges, and demonstrates it",2005,0, 1545,Verify results of network intrusion alerts using lightweight protocol analysis,"We propose a method to verify the result of attacks detected by signature-based network intrusion detection systems using lightweight protocol analysis. The observation is that network protocols often have short meaningful status codes saved at the beginning of server responses upon client requests.
A successful intrusion that alters the behavior of a network application server often results in an unexpected server response, which does not contain the valid protocol status code. This can be used to verify the result of the intrusion attempt. We then extend this method to verify the result of attacks that still generate valid protocol status code in the server responses. We evaluate this approach by augmenting Snort signatures and testing on real world data. We show that some simple changes to Snort signatures can effectively verify the result of attacks against the application servers, thus significantly improve the quality of alerts",2005,0, 1546,Predicting software escalations with maximum ROI,"Enterprise software vendors often have to release software products before all reported defects are corrected, and a small number of these reported defects will be escalated by customers whose businesses are seriously impacted. Escalated defects must be quickly resolved at a high cost by the software vendors. The total costs can be even greater, including loss of reputation, satisfaction, loyalty, and repeat revenue. In this paper, we develop an Escalation Prediction (EP) system to mine historic defect report data and predict the escalation risk of current defect reports for maximum ROI (Return On Investment). More specifically, we first describe a simple and general framework to convert the maximum ROI problem to cost-sensitive learning. We then apply and compare several best-known cost-sensitive learning approaches for EP. The EP system has produced promising results, and has been deployed in the product group of an enterprise software vendor. Conclusions drawn from this study also provide guidelines for mining imbalanced datasets and cost-sensitive learning.",2005,0, 1547,Audio scene analysis as a control system for hearing aids,"It is well known that simple amplification cannot help many hearing-impaired listeners. As a consequence of this, numerous signal enhancement algorithms have been proposed for digital hearing aids. Many of these algorithms are only effective in certain environments. The ability to quickly and correctly detect elements of the auditory scene can permit the selection/parameterization of enhancement algorithms from a library of available routines. In this work, the authors examine the real time parameterization of a frequency-domain compression algorithm which preserves formant ratios and thus enhances speech understanding for some individuals with severe sensorineural hearing loss in the 2-3 kHz range. The optimal compression ratio is dependent upon qualities of the acoustical signal. We briefly review the frequency-compression technology and describe a Gaussian mixture model classifier which can dynamically set the frequency compression ratio according to broad acoustic categories which we call cohorts. We discuss the results of a prototype simulator which has been implemented on a general purpose computer.",2005,0, 1548,Slingshot: Time-CriticalMulticast for Clustered Applications,"Datacenters are complex environments consisting of thousands of failure-prone commodity components connected by fast, high capacity interconnects. The software running on such datacenters typically uses multicast communication patterns involving multiple senders. We examine the problem of time-critical multicast in such settings, and propose Slingshot, a protocol that uses receiver-based FEC to recover lost packets quickly. 
Slingshot offers probabilistic guarantees on timeliness by having receivers exchange FEC packets in an initial phase, and optional complete reliability on packets not recovered in this first phase. We evaluate an implementation of Slingshot against SRM, a well-known multicast protocol, and show that it achieves two orders of magnitude faster recovery in datacenter settings",2005,0, 1549,An experimental study of soft errors in microprocessors,"The issue of soft errors is an important emerging concern in the design and implementation of future microprocessors. The authors examine the impact of soft errors on two different microarchitectures: a DLX processor for embedded applications and a high-performance alpha processor. The results contrast the impact of soft errors on combinational and sequential logic, identify the most vulnerable units, and assess soft error impact on the application.",2005,0, 1550,"TRUSS: a reliable, scalable server architecture","Traditional techniques that mainframes use to increase reliability - special hardware or custom software - are incompatible with commodity server requirements. The Total Reliability Using Scalable Servers (TRUSS) architecture, developed at Carnegie Mellon, aims to bring reliability to commodity servers. TRUSS features a distributed shared-memory (DSM) multiprocessor that incorporates computation and memory storage redundancy to detect and recover from any single point of transient or permanent failure. Because its underlying DSM architecture presents the familiar shared-memory programming model, TRUSS requires no changes to existing applications and only minor modifications to the operating system to support error recovery.",2005,0, 1551,A quality of service mechanism for IEEE 802.11 wireless network based on service differentiation,"This paper introduces an analytical model for wireless local area networks with priority-scheme-based service differentiation. This model can predict station performance from the number of access stations and the traffic type before the wireless channel condition changes. Then a new algorithm, DTCWF (dynamic tuning of contention window with fairness), is proposed to modify protocol options to limit end to end delay and loss rate of high priority traffic and maximize throughput of other traffic. Simulations validate this model and the comparison between DTCWF, DCF, and EDCA shows that our algorithm can improve quality of service for real-time traffic.",2005,0, 1552,Information extraction system in large-scale Web,"Manually querying search engines in order to acquire a large body of related information is a tedious, error-prone process. Search engines retrieve and rank potentially relevant documents for human perusal, but do not extract facts, assess confidence, or fuse information from multiple documents. This paper presents an information extraction system that aims to automate the tedious process of extracting large collections of facts in a large-scale, domain-independent, and scalable manner. The paper focuses on four major components: search engine interface, extractor, assessor, database, and further analyzes system architecture and reports on simulation results with large-scale information extraction systems.",2005,0, 1553,A dsPic-based measurement system for the evaluation of voltage sag severity through new power quality indexes,"In this paper the authors describe the implementation of a microcontroller-based smart sensor for the extraction of newly conceived power quality indexes.
The work is carried out starting from the improvement of three indexes presented in a previous work (De Capua et al., 2004) for exhaustively detecting voltage sags. After an examination of the loads which could be more susceptible to the duration or to the depth of the sag, an ANOVA analysis has been conducted in order to evaluate the indexes' sensitivity to these characteristics of the sag. Then a new measurement algorithm has been implemented, which detects a sag occurrence more rapidly. Finally a dsPic-based smart sensor has been realized, to monitor the voltage RMS value and extract the index values. These values are transmitted to software located on an external peripheral, through serial communication, for the subsequent data processing stage.",2005,0, 1554,Locating where faults will be [software testing],"The goal of this research is to allow software developers and testers to become aware of which files in the next release of a large software system are likely to contain the largest numbers of faults or the highest fault densities in the next release, thereby allowing testers to focus their efforts on the most fault-prone files. This is done by developing a negative binomial regression model to help predict characteristics of new releases of a software system, based on information collected about prior releases and the new release under development. The same prediction model was also used to allow a tester to select the files of a new release that collectively contain any desired percentage of the faults. The benefit of being able to make these sorts of predictions accurately should be clear: if we know where to look for bugs, we should be able to target our testing efforts there and, as a result, find problems more quickly and therefore more economically. Two case studies using large industrial software systems are summarized. The first study used seventeen consecutive releases of a large inventory system, representing more than four years of field exposure. The second study used nine releases of a service provisioning system with two years of field experience.",2005,0, 1555,Visualizing the evolution of Web services using formal concept analysis,"The service-oriented paradigm constitutes a promising technology that allows many software systems to benefit from interesting mechanisms such as late binding and automatic discovery. From a service integrator's perspective, it is relevant to understand service evolution, to assess which could be its impact on his/her own system or, eventually, to change the bindings between the system and the services. Given the lack of source code availability, this task is, however, limited to understanding how service interfaces evolve. We propose an approach, based on formal concept analysis, to understand how relationships between sets of services change across service evolution. The concept lattice is able to highlight hierarchy relationships and, in general, to identify commonalities and differences between services. Examples built upon real sets of services show the feasibility of the proposed approach.",2005,0, 1556,Change impact analysis for requirement evolution using use case maps,"Changing customer needs and computer technology are the driving factors influencing software evolution. There is a need to assess the impact of these changes on existing software systems. Requirement specification is gaining increasing attention as a critical phase of the software systems development process.
In particular for larger systems, it quickly becomes difficult to comprehend what impact a requirement change might have on the overall system or parts of the system. Thus, the development of techniques and tools to support the evolution of requirement specifications becomes an important issue. In this paper we present a novel approach to change impact analysis at the requirement level. We apply both slicing and dependency analysis at the use case map specification level to identify the potential impact of requirement changes on the overall system. We illustrate our approach and its applicability with a case study conducted on a simple telephony system.",2005,0, 1557,Experimental assessment of the time transfer capability of precise point positioning (PPP),"In recent years, many national timing laboratories have installed geodetic global positioning system (GPS) receivers together with their traditional GPS/GLONASS common view (CV) receivers and two way satellite time and frequency transfer (TWSTFT) equipment. A method called precise point positioning (PPP) is in use in the geodetic community allowing precise recovery of geodetic GPS receiver position, clock phase and tropospheric delay by taking advantage of the International GNSS Service (IGS) precise products. Natural Resources Canada (NRCan) has developed software implementing the PPP and a previous assessment of the PPP as a promising time transfer method was carried out at Istituto Elettrotecnico Nazionale (IEN) in 2003. This paper reports on a more systematic work performed at IEN and NRCan to further characterize the PPP method for time transfer application, involving data from nine national timing laboratories. Dual-frequency GPS observations (pseudorange and carrier phase) over the last ninety days of year 2004 were processed using the NRCan PPP software to recover receiver clock estimates at five minute intervals, using the IGS final satellite orbit and clock products. The quality of these solutions is evaluated mainly in terms of short-term noise. In addition, the time and frequency transfer capability of the PPP method were assessed with respect to independent techniques, such as TWSTFT, over a number of European and Transatlantic baselines",2005,0, 1558,Performance evaluation of agent-based material handling systems using simulation techniques,"The increasing influence of global economy is changing the conventional approach to managing manufacturing companies. Real-time reaction to changes in shop-floor operations, quick and quality response in satisfying customer requests, and reconfigurability in both hardware equipment and software modules, are already viewed as essential characteristics for next generation manufacturing systems. Part of a larger research that employs agent-based modeling techniques in manufacturing planning and control, this work proposes an agent-based material handling system and contrasts the centralized and decentralized scheduling approaches for allocation of material handling operations to the available resources in the system. To justify the use of the decentralized agent-based approach and assess its performance compared to conventional scheduling systems, a series of validation tests and a simulation study are carried out. 
As illustrated by the preliminary results obtained in the simulation study, the decentralized agent-based approach can give good feasible solutions in a short amount of time.",2005,0, 1559,A State Machine for Detecting C/C++ Memory Faults,"Memory faults are major forms of software bugs that severely threaten system availability and security in C/C++ programs. Many tools and techniques are available to check memory faults, but few provide systematic full-scale research and quantitative analysis. Furthermore, most of them produce a high noise ratio of warning messages that require many human hours to review and eliminate false-positive alarms. And thus, they cannot locate the root causes of memory faults precisely. This paper provides an innovative state machine to check memory faults, which has three main contributions. Firstly, five concise formulas describing memory faults are given to make the mechanism of the state machine simple and flexible. Secondly, the state machine has the ability to locate the root causes of the memory faults. Finally, a case study applied to an embedded software system written in 50 thousand lines of C code shows that it can provide useful data to evaluate the reliability and quality of software",2005,0, 1560,Design of a software distributed shared memory system using an MPI communication layer,"We designed and implemented a software distributed shared memory (DSM) system, SCASH-MPI, by using MPI as the communication layer of the SCASH DSM. With MPI as the communication layer, we could use high-speed networks with several clusters and high portability. Furthermore, SCASH-MPI can use high-speed networks with MPI, which is the most commonly available communication library. On the other hand, existing software DSM systems usually use a dedicated communication layer, TCP, or UDP-Ethernet. SCASH-MPI avoids the need for a large amount of pin-down memory for shared memory use that has limited the applications of the original SCASH. In SCASH-MPI, a thread is created to support remote memory communication using MPI. An experiment on a 4-node Itanium cluster showed that the Laplace Solver benchmark using SCASH-MPI achieves a performance comparable to the original SCASH. Performance degradation is only 6.3% in the NPB BT benchmark Class B test. In SCASH-MPI, page transfer does not start until a page fault is detected. To hide the latency of page transmission, we implemented a prefetch function. The latency in BT Class B was reduced by 64% when the prefetch function was used.",2005,0, 1561,Experimental evaluation of FSM-based testing methods,"The development of test cases is an important issue for testing software, communication protocols and other reactive systems. A number of methods are known for the development of a test suite based on a formal specification given in the form of a finite state machine. Well-known methods are called the W, Wp, UIO, UIOv, DS, H and HIS test derivation methods. These methods have been extensively used by the research community in recent years; however no proper comparison has been made between them. In this paper, we experiment with these methods to assess their complexity, applicability, completeness, fault detection capability, length and derivation time of their test suites.
The experiments are conducted on randomly generated specifications and on a realistic protocol called the simple connection protocol.",2005,0, 1562,Formal verification of dead code elimination in Isabelle/HOL,"Correct compilers are a vital precondition to ensure software correctness. Optimizations are the most error-prone phases in compilers. In this paper, we formally verify dead code elimination (DCE) within the theorem prover Isabelle/HOL. DCE is a popular optimization in compilers which is typically performed on the intermediate representation. In our work, we reformulate the algorithm for DCE so that it is applicable to static single assignment (SSA) form which is a state of the art intermediate representation in modern compilers, thereby showing that DCE is significantly simpler on SSA form than on classical intermediate representations. Moreover, we formally prove our algorithm correct within the theorem prover Isabelle/HOL. Our program equivalence criterion used in this proof is based on bisimulation and, hence, captures also the case of non-termination adequately. Finally we report on our implementation of this verified DCE algorithm in the industrial-strength scale compiler system.",2005,0, 1563,Test case generation by OCL mutation and constraint solving,"Fault-based testing is a technique where testers anticipate errors in a system under test in order to assess or generate test cases. The idea is to have enough test cases capable of detecting these anticipated errors. This paper presents a method of fault-based test case generation for pre- and postcondition specifications. Here, errors are anticipated on the specification level by mutating the pre- and postconditions. We present the underlying theory by giving test cases a formal semantics and translate this general testing theory to a constraint satisfaction problem. A prototype test case generator serves to demonstrate the automatization of the method. The current tool works with OCL specifications, but the theory and method are general and apply to many state-based specification languages.",2005,0, 1564,Ontology based requirements analysis: lightweight semantic processing approach,"We propose a software requirements analysis method based on domain ontology technique, where we can establish a mapping between a software requirements specification and the domain ontology that represents semantic components. Our ontology system consists of a thesaurus and inference rules and the thesaurus part comprises domain specific concepts and relationships suitable for semantic processing. It allows requirements engineers to analyze a requirements specification with respect to the semantics of the application domain. More concretely, we demonstrate following three kinds of semantic processing through a case study, (1) detecting incompleteness and inconsistency included in a requirements specification, (2) measuring the quality of a specification with respect to its meaning and (3) predicting requirements changes based on semantic analysis on a change history.",2005,0, 1565,Domain consistency in requirements specification,"Fixing requirements errors that are detected late in the software development life cycle can be extremely costly. So, finding problems in requirements specification early in the development cycle is critical and crucial. A formal specification can reduce errors by reducing ambiguity and imprecision and by making some instances of inconsistency and incompleteness obvious. 
In this paper, with an example of a moderately complex system of the mobile computing domain, we discuss how the consistency conditions found during initial abstract formal specification help in detecting logical errors during early stages of system development. We also discuss the importance of consistency conditions while modelling the domain of a complex system and show how the identified consistency conditions help in better understanding the specification and in gaining confidence in the correctness of the specification. We use a combination of techniques, like specification inspection and testing the executable specification of a prototype using test cases, to validate the specification against the requirements as well as to ensure that the specified consistency conditions are respected and maintained by the operations defined in the specification.",2005,0, 1566,A metamorphic approach to integration testing of context-sensitive middleware-based applications,"During the testing of context-sensitive middleware-based software, the middleware identifies the current situation and invokes the appropriate functions of the applications. Since the middleware remains active and the situation may continue to evolve, however, the conclusion of some test cases may not be easily identified. Moreover, failures appearing in one situation may be superseded by subsequent correct outcomes and may, therefore, be hidden. We alleviate the above problems by making use of a special kind of situation, which we call checkpoints, such that the middleware will not activate the functions under test. We propose to generate test cases that start at a checkpoint and end at another. We identify functional relations that associate different execution sequences of a test case. Based on a metamorphic approach, we check the results of the test case to detect any contravention of such relations. We illustrate our technique with an example that shows how re-hidden failures may be detected.",2005,0, 1567,A method of generating massive virtual clients and model-based performance test,"Testing the performance of a server that handles massive connections requires generating massive virtual client connections and modeling realistic traffic. In this paper, we propose a novel approach to generate massive virtual clients and realistic traffic. Our approach exploits the Windows I/O completion port (IOCP), which is the Windows NT operating system support for developing a scalable, high throughput server, and model-based testing scenarios. We describe implementation details of the proposed approach. Through analysis and experiments, we prove that the proposed method can predict and evaluate performance data more accurately in a cost-effective way.",2005,0, 1568,Towards a metamorphic testing methodology for service-oriented software applications,"Testing applications in service-oriented architecture (SOA) environments needs to deal with issues like the unknown communication partners until the service discovery, the imprecise black-box information of software components, and the potential existence of non-identical implementations of the same service. In this paper, we exploit the benefits of the SOA environments and metamorphic testing (MT) to alleviate the issues. We propose an MT-oriented testing methodology in this paper. It formulates metamorphic services to encapsulate services as well as the implementations of metamorphic relations.
Test cases for the unit test phase are proposed to generate follow-up test cases for the integration test phase. The metamorphic services invoke relevant services to execute test cases and use their metamorphic relations to detect failures. It has the potential to shift the testing effort from the construction of the integration test sets to the development of metamorphic relations.",2005,0, 1569,Asymptotic Performance of a Multichart CUSUM Test Under False Alarm Probability Constraint,"Traditionally the false alarm rate in change point detection problems is measured by the mean time to false detection (or between false alarms). The large values of the mean time to false alarm, however, do not generally guarantee small values of the false alarm probability in a fixed time interval for any possible location of this interval. In this paper we consider a multichannel (multi-population) change point detection problem under a non-traditional false alarm probability constraint, which is desirable for a variety of applications. It is shown that in the multichart CUSUM test this constraint is easy to control. Furthermore, the proposed multichart CUSUM test is shown to be uniformly asymptotically optimal when the false alarm probability is small: it minimizes an average detection delay, or more generally, any positive moment of the stopping time distribution for any point of change.",2005,0, 1570,Optimal Admission Control for a Markovian Queue Under the Quality of Service Constraint,"We study the optimal admission of arriving customers to a Markovian finite-capacity queue, e.g. M/M/c/N queue, with several customer types. The system managers are paid for serving customers and penalized for rejecting them. The rewards and penalties depend on customer types. The goal is to maximize the average rewards per unit time subject to the constraint on the average penalties per unit time. We provide a solution to this problem through a Linear Programming transformation and characterize the structure of optimal policies based on Lagrangian optimization. For a feasible problem, we show the existence of a 1-randomized trunk reservation optimal policy with the acceptance thresholds for different customer types ordered according to a linear combination of the service rewards and rejection costs. In addition, we prove that any 1-randomized optimal policy has this structure. In particular, we establish the structure of an optimal policy that maximizes the average rewards per unit time subject to the constraint on the blocking probability for one of the customer types or for a group of customer types pooled together, i.e., the QoS (Quality of Service) constraint. In the end, we also formulate the problem with multiple constraints and similar results hold.",2005,0, 1571,"Optimized reasoning-based diagnosis for non-random, board-level, production defects","The """"back-end"""" costs associated with debug of functional test failures can be one of the highest cost adders in the manufacturing process. As boards become more dense and more complex, debug of functional failures will become more and more difficult. Test strategies try to detect and diagnose failures early on in the test process (component and structural tests), but inevitably some defects are not detected until functional testing is done on the board. Finding these defects usually requires an """"expert"""", with engineering level skills in both hardware and software.
Depending on the complexity of the product, it could take several months (even years) to develop this level of expertise. During the initial product ramp, this expertise is usually most needed and often unavailable. Debug time is usually very long and scrap rates are generally high. This paper will provide an overview of reasoning-based diagnosis techniques and how they can significantly decrease debug time, especially during new product introduction. Because these engines are """"model-based"""", there is no guarantee how they will perform in real life. In almost all cases, the reasoning engine will have to be modified based on instances where the reasoning engine could not correctly identify the failing component. Making these adjustments to the reasoning is a very complex and sometimes risky endeavor. While the new model may correctly identify the previously missed failure, the reasoning may have been altered to a point where several other diagnoses have now been unknowingly compromised. This paper will propose enhancements to the reasoning engine that will allow a simpler approach to adapting to diagnostic escapes without risking compromises to the original diagnostic engine",2005,0, 1572,The case for outsourcing DFT,"The author discusses about outsourcing analog/mixed-signal DFT. At present we still lack a """"SAF"""" metric for measuring analog IC fault coverage, as most analog faults that are found by testing are of a parametric variety, and can not be measured or scored (as in the SAF coverage grade) by using Boolean techniques. To analyze analog and mixed-signal (A/MS) logic for testability, one has to know what the analog failures are that need to be detected, what the capability of the test equipment will be for these measurements, what the error or repeatability will be, and what the trade off is going to be between increased test accuracy and test time",2005,0, 1573,Application of a Robust and Efficient ICP Algorithm for Fitting a Deformable 3D Human Torso Model to Noisy Data,"We investigate the use of an iterative closest point (ICP) algorithm in the alignment of a point distribution model (PDM) of 3D human female torsos to sample female torso data. An approximate k-d tree procedure for efficient ICP is tested to assess whether it improves the speed of the alignment process. The use of different error norms, namely L2 and L1, are compared to ascertain if either offers an advantage in terms of convergence and in the quality of the final fit when the sample data is clean, noisy or has some data missing. It is found that the performance of the ICP algorithm used is improved in both speed of convergence and accuracy of fit through the combined use of an approximate and exact k-d tree search procedure and with the minimisation of the L1 norm even when up to 50% of the data is noisy or up to 25% is missing. We demonstrate the use of this algorithm in providing, via a fitted torso PDM, smooth surfaces for noisy torso data and valid data points for torsos with missing data.",2005,0, 1574,Performance Model Building of Pervasive Computing,"Performance model building is essential to predict the ability of an application to satisfy given levels of performance or to support the search for viable alternatives. Using automated methods of model building is becoming of increasing interest to software developers who have neither the skills nor the time to do it manually. 
This is particularly relevant in pervasive computing, where the large number of software and hardware components requires models of so large a size that using traditional manual methods of model building would be error prone and time consuming. This paper deals with an automated method to build performance models of pervasive computing applications, which require the integration of multiple technologies, including software layers, hardware platforms and wired/wireless networks. The considered performance models are of extended queueing network (EQN) type. The method is based on a procedure that receives as input the UML model of the application to yield as output the complete EQN model, which can then be evaluated by use of any evaluation tool.",2005,0, 1575,New approach for selfish nodes detection in mobile ad hoc networks,"A mobile ad hoc network (MANET) is a temporary infrastructureless network, formed by a set of mobile hosts that dynamically establish their own network on the fly without relying on any central administration. Mobile hosts used in MANET have to ensure the services that were ensured by the powerful fixed infrastructure in traditional networks; packet forwarding is one of these services. The resource limitation of nodes used in MANET, particularly in energy supply, along with the multi-hop nature of this network may cause new phenomena which do not exist in traditional networks. To save its energy a node may behave selfishly and use the forwarding service of other nodes without correctly forwarding packets for them. This deviation from the correct behavior represents a potential threat against the quality of service (QoS), as well as the service availability, one of the most important security requirements. Some solutions have been recently proposed, but almost all these solutions rely on the watchdog technique as stated in S. Marti et al. (2000) in their monitoring components, which suffers from many problems. In this paper we propose an approach to mitigate some of these problems, and we assess its performance by simulation.",2005,0, 1576,"Model-based Testing Considering Cost, Reliability and Software Quality","The important objectives of software engineering are developing software with high quality, low cost and high reliability. People therefore pay more and more attention to the completeness and effectiveness of the techniques used, in order to increase developers' confidence in software quality. So, the focus of this paper is to minimize the cost of software development during the testing and maintenance stage. Model-based testing and maintenance have been provided in many software development systems. Most software reliability growth models (SRGMs) are typically based on failure data such as number of failures, time of occurrence, failure severity, or the interval between two consecutive failures, whereas other models describe the relationship among the calendar testing, the amount of testing-effort, and the number of software faults detected by testing. In this paper, we propose a new software reliability growth model (SRGM) based on a non-homogeneous Poisson process (NHPP) model for reliability growth during the development test phase. The results prove to yield a shorter schedule, lower cost and higher quality",2005,0, 1577,Testing FPGAs using JBits RTP cores,"In this paper, we present a fault-testing technique for field programmable gate arrays (FPGAs) that is based on the features offered by Java Bits (JBits).
Our technique can detect single and multiple stuck-at faults, and is capable of detecting the faulty CLB within the FPGA. The algorithm proposed for testing the faults in the CLB utilizes the unified-library primitives and the run-time parameterizable (RTP) cores of the JBits programming language. The method also explores the object-oriented approach of the Java programming language used in JBits. It has the capability of providing run-time fault avoidance in FPGAs based on the faults detected during the testing process. Since JBits involves programming directly at the bitstream level, the proposed method offers additional advantages over traditional testing techniques",2005,0, 1578,Archeology of code duplication: recovering duplication chains from small duplication fragments,"Code duplication is a common problem, and a well-known sign of bad design. As a result of that, in the last decade, the issue of detecting code duplication led to various solutions and tools that can automatically find duplicated blocks of code. However, duplicated fragments rarely remain identical after they are copied; they are oftentimes modified here and there. This adaptation usually """"scatters"""" the duplicated code block into a large amount of small """"islands"""" of duplication, which, detected and analyzed separately, hide the real magnitude and impact of the duplicated block. In this paper we propose a novel, automated approach for recovering duplication blocks, by composing small isolated fragments of duplication into larger and more relevant duplication chains. We validate both the efficiency and the scalability of the approach by applying it on several well known open-source case-studies and discussing some relevant findings. By recovering such duplication chains, the maintenance engineer is provided with additional cases of duplication that can lead to relevant refactorings, and which are usually missed by other detection methods.",2005,0, 1579,An approach to fault-tolerant mobile agent execution in distributed systems,"Mobile agents are no longer a theoretical issue since different architectures for their realization have been proposed. With the increasing market of electronic commerce it becomes an interesting aspect to use autonomous mobile agents for electronic business transactions. Being involved in money transactions, supplementary security features for mobile agent systems have to be ensured. Fault-tolerance is fundamental to the further development of mobile agent applications. In the context of mobile agents, fault-tolerance prevents a partial or complete loss of the agent, i.e., ensures that the agent arrives at its destination. Simple approaches such as checkpointing are prone to blocking. Replication can in principle improve solutions based on checkpointing. However, existing solutions in this context either assume a perfect failure detection mechanism (which is not realistic in an environment such as the Internet), or rely on complex solutions based on leader election and distributed transactions, where only a subset of solutions prevents blocking. This paper proposes a novel approach to fault-tolerant mobile agent execution, which is based on modeling agent execution as a sequence of agreement problems. Each agreement problem is one instance of the well-understood consensus problem.
Our solution does not require a perfect failure detection mechanism, while preventing blocking and ensuring that the agent is executed exactly once.",2005,0, 1580,Dynamic characterization study of flip chip ball grid array (FCBGA) on peripheral component interconnect (PCI) board application,"This paper outlines and discusses the new mechanical characterization metrologies applied on PCI board envelope. The dynamic responses of PCI board were monitored and characterized using accelerometer and strain gauges. PCI board performances were analyzed to differentiate its high risk areas through analysis of board strain responses to solder joint crack. Board ""strain states"" analysis methodology was introduced to provide immediate accurate board bending modes and deflection associated with experimental results. Using this methodology, it eases the board bend mode analysis which can capture the board strain performance limit at the same time. In addition, high speed camera (HSC) tool was incorporated into the evaluation to understand the board's bend history under shock test. This allows better view of the bending moment and matching to defect locations for corrective action implementation. Detailed failure analysis mapping of solder joint crack percentages was successfully gathered to support those findings. Key influences, such as thermal/mechanical enabling preload masses and shock input profiles on solder joint crack severity were conducted as well to understand the potential risk modulators for SJR performance. Furthermore, commercial simulation software analysis tool was applied to correlate the board's bend modes and predict the high risk solder joint location, which is important for product enabling solutions design. As a result, a system level stiffener solution was designed. Hence, with this characterization and validation concept, a practical stiffener solution for PCI application was validated through a special case study to improve the board SJR performance in its use condition.",2005,0, 1581,Design Phase Analysis of Software Reliability Using Aspect-Oriented Programming,"A software system may have various nonfunctional requirements such as reliability, security, performance and schedulability. If we can predict how well the system will meet such requirements at an early phase of software development, we can significantly save the total development cost and time. Among non-functional requirements, reliability is commonly required as the essential property of the system being developed. Therefore, many analysis methods have been proposed but methods that can be practically performed in the design phase are rare. In this paper we show how design-level aspects can be used to separate reliability concerns from essential functional concerns during software design. The aspect-oriented design technique described in this paper allows one to independently specify fault tolerance and essential functional concerns, and then weave the specifications to produce a design model that reflects both concerns. We illustrate our approach using an example.",2005,0, 1582,A new wavelet-based method for detection of high impedance faults,"Detecting high impedance faults is one of the challenging issues for electrical engineers. Over-current relays can only detect some of the high impedance faults. Distance relays are unable to detect faults with impedance over 100 Omega. In this paper, by using an accurate model for high impedance faults, a new wavelet-based method is presented.
The proposed method, which employs a 3 level neural network system, can successfully differentiate high impedance faults from other transients. The paper also thoroughly analyzes the effect of choice of mother wavelet on the detection performance. Simulation results which are carried out using PSCAD/EMTDC software are summarized",2005,0, 1583,A real-time computer vision system for detecting defects in textile fabrics,"This paper proposes a real-time computer vision system for detecting defects in textile fabrics. The developments of both the hardware and software platforms are presented. The design of the prototyped defect detection system ensures that the fabric moves smoothly and evenly so that high quality images can be captured. The paper also proposes a new filter selection method to detect fabric defects, which can automatically tune the Gabor functions to match with the texture information. The filter selection method is further developed into a new defect segmentation algorithm. The scheme is tested both on-line and off-line by using a variety of homogeneous textile images with different defects. The results exhibit accurate defect detection with low false alarm, thus confirming the robustness and effectiveness of the proposed system",2005,0, 1584,Spare Line Borrowing Technique for Distributed Memory Cores in SoC,"In this paper, a new architecture of distributed embedded memory cores for SoC is proposed and an effective memory repair method by using the proposed spare line borrowing (software-driven reconfiguration) technique is investigated. It is known that faulty cells in memory core show spatial locality, also known as fault clustering. This physical phenomenon tends to occur more often as deep submicron technology advances due to defects that span multiple circuit elements and sophisticated circuit design. The combination of new architecture & repair method proposed in this paper ensures fault tolerance enhancement in SoC, especially in case of fault clustering. This fault tolerance enhancement is obtained through optimal redundancy utilization: spare redundancy in a fault-resistant memory core is used to fix the fault in a fault-prone memory core. The effect of spare line borrowing technique on the reliability of distributed memory cores is analyzed through modeling and extensive parametric simulation",2005,0, 1585,Developing distributed applications rapidly and reliably using the TENA middleware,"The test and training enabling architecture (TENA) middleware is the result of a joint interoperability initiative of the Director, Operational Test and Evaluation (DOT&E) of the Office of the Secretary of Defense (OSD). The goals of the initiative are to enable interoperability among ranges, facilities, and simulations in a quick and cost-efficient manner, and to foster reuse of range assets and future range system developments. The TENA middleware uses Unified Modeling Language (UML)-based model-driven code generation to automatically create a complex Common Object Request Broker Architecture (CORBA) application. This model-driven automatic code-generation greatly reduces the amount of software that must be hand-written and tested. Furthermore, the TENA middleware combines distributed shared memory, anonymous publish-subscribe, and model-driven distributed object-oriented programming paradigms into a single distributed middleware system. 
This unique combination yields a powerful middleware system that enables its users to rapidly develop sophisticated yet understandable distributed applications. The TENA middleware offers powerful programming abstractions that are not present in CORBA alone and provides a strongly-typed application programmer interface (API) that is much less error-prone than the existing CORBA API. These high-level, easy-to-understand programming abstractions combined with an API designed to reduce programming errors enable users to quickly and correctly express the concepts of their applications. Re-usable standardized objects further simplify the development of applications. The net result of this combination of features is a significant reduction of application programming errors yielding increased overall reliability and decreased overall development time. Distributed applications developed using the TENA middleware exchange data using the publish-subscribe paradigm. Although many publish-subscribe systems exist, the TENA middleware represents a significant advance in the field due to the many high-level, model-driven programming abstractions it presents to the programmer. The TENA middleware API relies heavily on compile-time type-safety to help ensure reliable behavior at runtime. Careful API design allows a great number of potential errors to be detected at compile-time that might otherwise go unnoticed until run-time - where the cost of an error could be extremely high! The implementation of the TENA middleware uses C++, as well as a real-time CORBA ORB. The TENA middleware is currently in use at dozens of Department of Defense (DoD) testing and training range facilities across the country and has been used to support major test and training events such as Joint Red Flag '05. The TENA Middleware is available at http://www.tena-sda.org/",2005,0, 1586,Model checking class specifications for Web applications,This paper proposes an approach for verifying class specifications of Web applications using model checking. We first present a method to model a dynamic behavior of a Web application from a class specification. We next propose two methods to verify consistencies of the class specification and other design specifications: (1) a page flow diagram which is one of the most essential specifications for Web applications and (2) a behavior diagram such as a UML activity diagram. We applied the proposed methods to real specifications of a Web application designed by a certain company and found several faults of the specifications that had not been detected in actual reviews.,2005,0, 1587,An approach to validation of software architecture model,"Software architectures shift developers' focus from lines-of-code to coarser-grained architectural elements and their interconnection structure. However, the benefits of architecture description languages (ADLs) cannot be fully captured without an automated realization of software architecture designs because manually shifting from a model to its implementation is error-prone. We propose an integrated approach for automatically translating software architecture design models to an implementation and validating the translation as well as the implementation by exploring runtime verification technique and aspect-oriented programming. Specifically, system properties are not only verified against design models, but also verified during the execution of the generated implementation of software architecture design.
A prototype tool, SAM Parser, is developed to demonstrate the approach on SAM (Software Architecture Model). In SAM Parser, all the realization and verification code can be automatically generated without human intervention. In this paper, we first briefly describe the approach and then report on a case study conducted on an e-commerce scenario, an online shopping system, to assess the benefits of automated realization of software architecture design and validation in a Web service domain.",2005,0, 1588,"Simulation-based validation and defect localization for evolving, semi-formal requirements models","When requirements models are developed in an iterative and evolutionary way, requirements validation becomes a major problem. In order to detect and fix problems early, the specification should be validated as early as possible, and should also be revalidated after each evolutionary step. In this paper, we show how the ideas of continuous integration and automatic regression testing in the field of coding can be adapted for simulation-based, automatic revalidation of requirements models after each incremental step. While the basic idea is fairly obvious, we are confronted with a major obstacle: requirements models under development are incomplete and semi-formal most of the time, while classic simulation approaches require complete, formal models. We present how we can simulate incomplete, semi-formal models by interactively recording missing behavior or functionality. However, regression simulations must run automatically and do not permit interactivity. We therefore have developed a technique where the simulation engine automatically resorts to the interactively recorded behavior in those cases where it does not get enough information from the model during a regression simulation run. Finally, we demonstrate how the information gained from model evolution and regression simulation can be exploited for locating defects in the model.",2005,0, 1589,Identifying error proneness in path strata with genetic algorithms,"In earlier work we have demonstrated that GA can successfully identify error prone paths that have been weighted according to our weighting scheme. In this paper we investigate whether the depth of strata in the software affects the performance of the GA. Our experiments show that the GA performance changes throughout the paths. It performs better in the upper, less in the middle and best in the lower layer of the paths. Although various methods have been applied for detecting and reducing errors in software, little research has been done into partitioning a system into smaller, error prone domains for software quality assurance. To identify error proneness in software paths is important because by identifying them, they can be given priority in code inspections or testing. Our experiments observe to what extent the GA identifies errors seeded into paths using several error seeding strategies. We have compared our GA performance with random path selection.",2005,0, 1590,Ontology-based active requirements engineering framework,"Software-intensive systems are systems of systems that rely on complex interdependencies among themselves as well as with their operational environment to satisfy the required behavior. As we integrate such systems to create information infrastructures that are critical to the quality of our lives and the businesses they support, the need to effectively predict, control and evolve their behavior is ever increasing.
To deal with their complexity, an important first step is to understand and model software-intensive systems, their environments and the interdependencies among them at different levels of abstractions from multiple dimensions. In this paper, we present an ontology-based active requirements engineering (Onto-ActRE) framework that adopts a mixed-initiative approach to elicit, represent and analyze the diversity of factors associated with software-intensive systems. The Onto-ActRE framework integrates various RE modeling techniques with complementary semantics in a unifying ontological engineering process. We also present examples from the practice of our framework with appropriate tool support that combines theoretical and practical aspects.",2005,0, 1591,An integrated solution for testing and analyzing Java applications in an industrial setting,"Testing a large-scale, real-life commercial software application is a very challenging task due to the constant changes in the software, the involvement of multiple programmers and testers, and a large amount of code. Integrating testing with development can help find program bugs at an earlier stage and hence reduce the overall cost. In this paper, we report our experience on how to apply eXVantage (a tool suite for code coverage testing, debugging, performance profiling, etc.) to a large, complex Java application at the implementation and unit testing phases in Avaya. Our results suggest that programmers and testers can benefit from using eXVantage to monitor the testing process, gain confidence on the quality of their software, detect bugs which are otherwise difficult to reveal, and identify performance bottlenecks in terms of which part of code is most frequently executed.",2005,0, 1592,Aspect-oriented modularization of assertion crosscutting objects,"Assertion checking is a powerful tool to detect software faults during debugging, testing and maintenance. Although assertion documents the behavior of one component, it is hard to document relations and interactions among several objects since such assertion statements are spread across the modules. Therefore, we propose to modularize such assertion as an aspect in order to improve software maintainability. In this paper, taking Observer pattern as an example, we point out that some assertions tend to be crosscutting, and propose a modularization of such assertion with aspect-oriented language. We show a limitation of traditional assertion and effectiveness of assertion aspect through the case study, and discuss various situations to which assertion aspects are applicable.",2005,0, 1593,Identifying noise in an attribute of interest,"One of the most significant issues facing the data mining community is that of low-quality data. Real-world datasets are often inundated with various types of data integrity issues, particularly noisy data. In response to the difficulties created by low-quality data, we propose a novel technique to detect noisy instances relative to an attribute of interest (AOI). Any attribute in the dataset can be defined by the user as the attribute of interest. A noise ranking of instances relative to the chosen attribute is output. This approach can be iterated for any number of user-specified attributes of interest. The case study described in this work demonstrates how our technique may be used to detect class noise, which occurs when errors are present in the class or dependent variable. 
In this scenario the class is declared to be the attribute of interest and an instance noise ranking relative to the class is provided. Our technique is compared to the well-known ensemble and classification filters which have been previously proposed for class noise detection. The results of this study demonstrate the effectiveness of our approach and show that our procedure is a useful tool for improving data quality.",2005,0, 1594,Predicting software suitability using a Bayesian belief network,"The ability to reliably predict the end quality of software under development presents a significant advantage for a development team. It provides an opportunity to address high risk components earlier in the development life cycle, when their impact is minimized. This research proposes a model that captures the evolution of the quality of a software product, and provides reliable forecasts of the end quality of the software being developed in terms of product suitability. Development team skill, software process maturity, and software problem complexity are hypothesized as driving factors of software product quality. The cause-effect relationships between these factors and the elements of software suitability are modeled using Bayesian belief networks, a machine learning method. This research presents a Bayesian network for software quality, and the techniques used to quantify the factors that influence and represent software quality. The developed model is found to be effective in predicting the end product quality of small-scale software development efforts.",2005,0, 1595,Bayesian networks modeling for software inspection effectiveness,"Software inspection has been broadly accepted as a cost effective approach for defect removal during the whole software development lifecycle. To keep inspection under control, it is essential to measure its effectiveness. As a human-oriented activity, inspection effectiveness is due to many uncertain factors that make such study a challenging task. Bayesian networks modeling is a powerful approach for the reasoning under uncertainty and it can describe inspection procedure well. With this framework, some extensions have been explored in this paper. The number of remaining defects in the software is proposed to be incorporated into the framework, with expectation to provide more information on the dynamic changing status of the software. In addition, a different approach is adopted to elicit the prior belief of related probability distributions for the network. Sensitivity analysis is developed with the model to locate the important factors to inspection effectiveness.",2005,0, 1596,Bi-objective model for test-suite reduction based on modified condition/decision coverage,"It is evident that modified condition/decision coverage (MC/DC) is an effective verification method and can help to detect safety faults despite its expensive cost. In regression testing, it is quite costly to rerun all of the test cases in a test suite because new test cases are added to the test suite as the software evolves. Therefore, it is necessary to reduce the test suite to improve test efficiency and save test cost. Many existing test-suite reduction techniques are not effective at reducing MC/DC test suites. This paper proposes a new test-suite reduction technique for MC/DC: a bi-objective model that considers both the coverage degree of test cases for test requirements and the capability of test cases to reveal errors.
Our experimental results show that the technique both reduces the size of the test suite and better ensures the effectiveness of the test suite to reveal errors.",2005,0, 1597,Reliability prediction and assessment of fielded software based on multiple change-point models,"In this paper, we investigate some techniques for reliability prediction and assessment of fielded software. We first review how several existing software reliability growth models based on non-homogeneous Poisson processes (NHPPs) can be readily derived based on a unified theory for NHPP models. Furthermore, based on the unified theory, we can incorporate the concept of multiple change-points into software reliability modeling. Some models are proposed and discussed under both ideal and imperfect debugging conditions. A numerical example by using real software failure data is presented in detail and the result shows that the proposed models can provide fairly good capability to predict software operational reliability.",2005,0, 1598,Metamaterials with multiband AMC and EBG properties,"Complex surfaces that perform as artificial magnetic conductor (AMC) and as electromagnetic band gap (EBG) structures in a selection of predefined frequency bands are presented. This is achieved by introducing defected (or perturbed) arrays. In this paper we demonstrate that the absence of vias, which eases the fabrication process, does not prevent the AMC and EBG operation from coinciding in the frequency domain. Method of moments based software has been developed for the fast, accurate and simultaneous investigation of the multi-band AMC and EBG properties of such arrays. Furthermore, the angular stability of the multiband AMC surface has also been assessed. Experimental results that demonstrate a dual-band metamaterial surface with simultaneous AMC and EBG properties are presented.",2005,0, 1599,Software review for automatic test equipment,"The nature of test set programming can be tedious and repetitive. A test engineer can often fall victim to putting blinders on when programming by overlooking errors when reviewing their own work. To avoid this, it makes sense to treat software like a published work where a reviewer, independent of the original programming team, checks the software for design, quality, and errors. This paper describes a disciplined and consistent process for reviewing Automatic Test Equipment (ATE) software. This type of independent review process is comprised of four major steps: Receiving, Processing, Reporting, and Following-Up. It can be conducted and repeated throughout the development life cycle to improve the quality of the software. Early involvement can influence design changes that could lead to simpler and more manageable software. Several errors can be detected prior to its release by reviewing the software with software tools such as PC-Lint™ or Understand for C++™. Having the discipline to follow this simple process can bring about software manageability for future modifications, easier to read software, and software that contains fewer errors.",2005,0, 1600,Translating Atlas 416 to Fortran77 using CMMI processes,"Due to the increasing costs in software engineering and support of ATE, several processes have emerged to enable software improvement and incorporate an error or defect predicting component into software development. Chief among them is the DOD sponsored CMMI (capability maturity model integration).
ATE development and support is an excellent arena to test new process systems and models because of its diversity and the increasing trend of using ATE support engineers as integrators. During 2004 and 2005 WRALC/MASTF incorporated CMMI into an ATLAS to FORTRAN77 rehost project and had great success. This paper will demonstrate that by using CMMI processes, a customer will receive a superior, repeatable and more reliable software product, as well as superior system support for the future. In addition, training of new personnel is repeatable, thus resulting in more competent personnel and long-term organizational memory.",2005,0, 1601,On effectiveness of pairwise methodology for testing network-centric software,"Pairwise testing, which can be complemented with partial or full N-wise testing, is a technique which guarantees that all important parametric value pairs are included in a test suite. A percentage of N-wise testing is also included. We conjecture that N-wise enhanced pairwise testing can be used as a black-box testing method to increase effectiveness of random testing in exposing unusual or unexpected behaviors, such as security failures in network-centric software. This testing can also be quite cost-efficient since small N test suites grow linearly with the number of parameters. This paper explains the results of random testing of a simulation in which about 20% of the defects with probabilities of occurrence less than 50% are never exposed. This supports the premise that if the unusual or unexpected behaviors are based on defects which are less likely to occur, then random testing needs to be enhanced, especially if those unexposed defects could cause erratic or even critical behaviors to the system. Higher system complexities may indicate higher numbers of unusual or unexpected behaviors. It may be difficult to use the traditional operational profile information to determine the amount of testing for unusual behaviors since the operational usage may be 0 or close to it. Another interesting problem is that some testers lack the experience necessary to effectively analyze the results of a test run. It is important to compensate for the lack of experience so that novice testers are able to test comparatively as effectively as more experienced testers. It is believed that if the size of the test suite is relatively small, then it may be easier to pinpoint the source of a failure. The research presented in this paper is aimed at addressing some of these issues of random testing via enhanced pairwise testing and N-wise testing in general. It is possible that more complex systems, such as those that rely a great deal on a network, would require higher numbers of interactions to combat unexpected combinations for use in some testing instances such as security testing or high assurance testing. A tool is being developed concurrently to help automate a part of the test generation process",2005,0, 1602,Evaluating Web applications testability by combining metrics and analogies,"This paper introduces an approach to describe a Web application through an object-oriented model and to study application testability using a quality model focused on the use of object-oriented metrics and software analogies analysis. The proposed approach uses traditional Web and object-oriented metrics to describe structural properties of Web applications and to analyze them.
These metrics are useful to measure some important software attributes, such as complexity, coupling, size, cohesion, reliability, defect density, and so on. Furthermore, the presented quality model uses these object-oriented metrics to describe applications in order to predict some software quality factors (such as test effort, reliability, error proneness, and so on) through an instance-based classification system. The approach uses a classification system to study software analogies and to define a set of information then used as the basis for applications quality factors prediction and evaluation. The presented approaches are applied in the WAAT (Web Applications Analysis and Testing) project",2005,0, 1603,Enhancing Internet robustness against malicious flows using active queue management,"Attackers can easily modify the TCP control protocols of host computers to inject malicious flows into the Internet. Including DDoS and worm attack flows, these malicious flows are unresponsive to the congestion control mechanism which is necessary to the equilibrium of the whole Internet. In this paper, a new scheme against large-scale malicious flows is proposed based on the principles of TCP congestion control. The kernel is to implement a new scheduling algorithm named CCU (compare and control unresponsive flows) which is one sort of active queue management (AQM). According to the unresponsive characteristic of malicious flows, the CCU algorithm relies on two processes applied to malicious flows - detection and punishment. The elastic control mechanism for unresponsive flows gives the AQM high performance and enhances the Internet robustness against malicious flows. The network resource can be regulated for the basic quality of service (QoS) demands of legal users. The experiments prove that CCU can detect and restrain responsive flows more accurately compared to other AQM algorithms.",2005,0, 1604,Managing MPICH-G2 Jobs with WebCom-G,"This paper discusses the use of WebCom-G to handle the management & scheduling of MPICH-G2 (MPI) jobs. Users can submit their MPI applications to a WebCom-G portal via a Web interface. WebCom-G then selects the machines to execute the application on, depending on the machines available to it and the number of machines requested by the user. WebCom-G automatically & dynamically constructs a RSL script with the selected machines and schedules the job for execution on these machines. Once the MPI application has finished executing, results are stored on the portal server, where the user can collect them. A main advantage of this system is fault survival: if any of the machines fail during the execution of a job, WebCom-G can automatically handle such failures. Following a machine failure, WebCom-G can create a new RSL script with the failed machines removed, incorporate new machines (if they are available) to replace the failed ones and re-launch the job without any intervention from the user. The probability of failures in a grid environment is high, so fault survival becomes an important issue",2005,0, 1605,Automatic detection of local and global software failures,"The problem of automatic detection of failures of reactive, session-oriented software programs is described.
Detection of failures is carried out by a separate unit, which observes the inputs and outputs of the target program and reports the failures detected.",2005,0, 1606,Managing a project course using Extreme Programming,"Shippensburg University offers an upper division project course in which the students use a variant of Extreme Programming (XP) including: the Planning Game, the Iteration Planning Game, test driven development, stand-up meetings and pair programming. We start the course with two weeks of controlled lab exercises designed to teach the students about test driven development in JUnit/Eclipse and designing for testability (with the humble dialog box design pattern) while practicing pair programming. The rest of our semester is spent in three four-week iterations developing a product for a customer. Our teams are generally large (14-16 students) so that the projects can be large enough to motivate the use of configuration management and defect tracking tools. The requirement of pair programming limits the amount of project work the students can do outside of class, so class time is spent on the projects and teaching is on-demand individual mentoring with lectures/labs inserted as necessary. One significant challenge in managing this course is tracking individual responsibilities and activities to ensure that all of the students are fully engaged in the project. To accomplish this, we have modified the story and task cards from XP to provide feedback to the students and track individual performance against goals as part of the students' grades. The resulting course has been well received by the students. This paper will describe this course in more detail and assess its effect on students' software engineering background through students' feedback and code metrics",2005,0, 1607,A Simulation Task to Assess Students Design Process Skill,"Research has shown that the quality of one's design process is an important ingredient in expertise. Assessing design process skill typically requires a performance assessment in which students are observed (either directly or by videotape) completing a design and assessed using an objective scoring system. This is impractical in course-based assessment. As an alternative, we developed a computer-based simulation task, in which the student respondent ""watches"" a team develop a design (in this instance a software design) and makes recommendations as to how they should proceed. The specific issues assessed by the simulation were drawn from the research literature. For each issue the student is asked to describe, in words, what the team should do next and then asked to choose among alternatives that the ""team"" has generated. Thus, the task can be scored qualitatively and quantitatively. The paper describes the task and its uses in course-based assessment",2005,0, 1608,Work in Progress - Computer Software for Predicting Steadiness of the Students,"The paper presents a study which identifies a series of factors influencing students' steadiness in their option for engineering training and the final aim is to elaborate an IT system for monitoring the quality of educational offer. This aim is reached through a research developed in three stages. Only the first and the second stages were described here. The last one is in work.
So, the first stage is materialized in elaborating and validating a questionnaire structured on three dimensions: finding the expectations, diagnosis of initial motivation for initiating students in engineering, specifying identity information and elements of personal history from the student's educational experience. The sample is randomly chosen and the students from the research group belong to Technical University ""Gh. Asachi"" Iassy, Romania, attending first, second and third year of study. The second stage of the scientific research establishes the relations between the identified expectations, initial motivation of students for engineering training and personal history in educational area on the one hand and students' educational performance on the other hand. Afterwards, the results of the first two stages represent the starting point for planning computer software to predict the steadiness of students in their professional choice",2005,0, 1609,Prostatectomy Evaluation using 3D Visualization and Quantitation,"Prostate cancer is a disease with a long natural history. Differences in survival outcomes as indicators of inappropriate surgery would take decades to appear. Therefore, the evaluation of the excised specimen according to defined parameters provides a more reasonable and timely assessment of surgical quality. There are currently a number of very different surgical approaches. Some uniform guidelines and quality assessment measuring readily available parameters would be desirable to establish a standard for comparison of surgical approaches and for individual surgical performance. In this paper, we present a novel methodology to objectively quantify the assessment process utilizing a 3D reconstructed model for the prostate gland. To this end, we discuss the development of a process employing image reconstruction and analysis techniques to assess the percent of capsule covered by soft tissue. A final goal is to develop software for the purpose of a quality assurance assessment for pathologists and surgeons to evaluate the adequacy/appropriateness of each surgical procedure; laparoscopic versus open perineal or retropubic prostatectomy. Results from applying this technique are presented and discussed",2005,0, 1610,MagIC System: a New Textile-Based Wearable Device for Biological Signal Monitoring. Applicability in Daily Life and Clinical Setting,"The paper presents a new textile-based wearable system for the unobtrusive recording of cardiorespiratory and motion signals during spontaneous behavior along with the first results concerning the application of this device in daily life and in a clinical environment. The system, called MagIC (Maglietta Interattiva Computerizzata), is composed of a vest, including textile sensors for detecting ECG and respiratory activity, and a portable electronic board for motion detection, signal preprocessing and wireless data transmission to a remote monitoring station. The MagIC system has been tested in freely moving subjects at work, at home, while driving and cycling and in microgravity condition during a parabolic flight. Applicability of the system in cardiac in-patients is now under evaluation. Preliminary data derived from recordings performed on patients in bed and during physical exercise showed 1) good signal quality over most of the monitoring periods, 2) a correct identification of arrhythmic events, and 3) a correct estimation of the average beat-by-beat heart rate.
These positive results support further developments of the MagIC system, aimed at tuning this approach for routine use in clinical practice and in daily life",2005,0, 1611,Dynamic User Interface Adaptation for Mobile Computing Devices,"A large number of heterogeneous and mobile computing devices nowadays are employed by users to access services they have subscribed to. The work of application developers, who have to maintain several versions of the user interface for a single application, is becoming more and more difficult, error-prone and time consuming. New software development models, able to easily adapt the application to the client execution context, have to be exploited. In this work we present a framework that allows developers to specify the user interaction with the application, in an independent manner with respect to the specific execution context, by using an XML-based language. Starting from such a specification, the system will subsequently ""render"" the actual user application interface on a specific execution environment, adapting it to the used terminal characteristics.",2005,0, 1612,Software Architecture Reliability Analysis Using Failure Scenarios,We propose a Software Architecture Reliability Analysis (SARA) approach that benefits from both reliability engineering and scenario-based software architecture analysis to provide an early reliability analysis of the software architecture. SARA makes use of failure scenarios that are prioritized with respect to the user-perception in order to provide a severity analysis for the software architecture and the individual components.,2005,0, 1613,Change Propagation for Assessing Design Quality of Software Architectures,"The study of software architectures is gaining importance due to its role in various aspects of software engineering such as product line engineering, component based software engineering and other emerging paradigms. With the increasing emphasis on design patterns, the traditional practice of ad-hoc software construction is slowly shifting towards pattern-oriented development. Various architectural attributes like error propagation, change propagation, and requirements propagation, provide a wealth of information about software architectures. In this paper, we show that change propagation probability (CP) is helpful and effective in assessing the design quality of software architectures. We study two different architectures (one that employs patterns versus one that does not) for the same application. We also analyze and compare change propagation metric with respect to other coupling-based metrics.",2005,0, 1614,Predicting Architectural Styles from Component Specifications,"Software Product Lines (SPL), Component Based Software Engineering (CBSE) and Commercial Off The Shelf (COTS) components provide a rich supporting base for creating software architectures. Further, they promise significant improvements in the quality of software configurations that can be composed from pre-built components. Software architectural styles provide a way for achieving a desired coherence for such component-based architectures. This is because the different architectural styles enforce different quality attributes for a system. If the architectural style of an emergent system could be predicted in advance, a System Integrator could make necessary changes to ensure that the quality attributes dictated by the system requirements were satisfied before the actual system was deployed and tested.
In this paper we propose a model for predicting architectural styles based on use cases that need to be met by a system configuration. Moreover, our technique can be used to determine stylistic conformance and hence indicate the presence or absence of architectural drift",2005,0, 1615,Simulation of partial discharge propagation and location in Abetti winding based on structural data,"Power transformer monitoring as a reliable tool for maintenance purposes of this valuable asset of power systems has always comprised partial discharge offline measurements and online monitoring. The reason lies in the non-destructive nature of PD monitoring. Partial discharge monitoring helps to detect incipient insulation faults and prevent insulation failure of power transformers. This paper introduces a software package developed based on structural data of power transformer and discusses the results of the simulation on Abetti winding, which might be considered as a basic layer winding. A hybrid model is used to model the transformer winding, which has been developed by the first author. Firstly, the winding is modeled by the ladder network method to determine model parameters and then a multi-conductor transmission line model is utilized to work out voltage and current vectors and study partial discharge propagation as well as its localization. The utilized method of modeling makes it possible to simulate a transformer winding over a frequency range from a few hundred kHz to a few tens of MHz. The results take advantage of the accurate modeling method and provide a reasonable interpretation as to PD propagation and location studies",2005,0, 1616,An Improved Algorithm for Deadlock Detection and Resolution in Mobile Agent Systems,"Mobile agent systems have been proved to be the best paradigm for distributed applications. They have the potential to provide convenient, efficient and high-performance distributed applications. Many solutions for problems in distributed systems such as deadlock detection rely on assumptions such as data location and message passing mechanism and static network topology that cannot be applied to mobile agent systems. In this paper an improved distributed deadlock detection and resolution algorithm is proposed. The algorithm is based on Ashfield et al.'s process. There are some cases in which the original algorithm detects false deadlocks or does not detect global deadlocks. The proposed algorithm eliminates the original algorithm's deficiencies and improves its performance. It also minimizes the detection agent's travels through the communication network. Also it has a major impact on improving the performance of mobile agent systems",2005,0, 1617,UVSD: software for detection of color underwater features,"Underwater Video Spot Detector (UVSD) is a software package designed to analyze underwater video for continuous spatial measurements (path traveled, distance to the bottom, roughness of the surface etc.) Laser beams of known geometry are often used in underwater imagery to estimate the distance to the bottom. This estimation is based on the manual detection of laser spots which is labor intensive and time consuming so usually only a few frames can be processed this way. This allows for spatial measurements on single frames (distance to the bottom, size of objects on the sea-bottom), but not for the whole video transect.
We propose algorithms and a software package implementing them for the semi-automatic detection of laser spots throughout a video which can significantly increase the effectiveness of spatial measurements. The algorithm for spot detection is based on the support vector machines approach to artificial intelligence. The user is only required to specify on certain frames the points he or she thinks are laser dots (to train an SVM model), and then this model is used by the program to detect the laser dots on the rest of the video. As a result the precise (precision is only limited by quality of the video) spatial scale is set up for every frame. This can be used to improve video mosaics of the sea-bottom. The temporal correlation between spot movements changes and their shape provides the information about sediment roughness. Simultaneous spot movements indicate changing distance to the bottom; while uncorrelated changes indicate small local bumps. UVSD can be applied to quickly identify and quantify seafloor habitat patches, help visualize habitats and benthic organisms within large-scale landscapes, and estimate transect length and area surveyed along video transects.",2005,0, 1618,Collaborative sensing using uncontrolled mobile devices,"This paper considers how uncontrolled mobiles can be used to collaboratively accomplish sensing tasks. Uncontrolled mobiles are mobile devices whose movements cannot be easily controlled for the purpose of achieving a task. Examples include sensors mounted on mobile vehicles of people to monitor air quality and to detect potential airborne nuclear, biological, or chemical agents. We describe an approach for using uncontrolled mobile devices for collaborative sensing. Considering the potentially large number of mobile sensors that may be required to monitor a large geographical area such as a city, a key issue is how to achieve a proper balance between performance and costs. We present analytical results on the rate of information reporting by uncontrolled mobile sensors needed to cover a given geographical area. We also present results from testbed implementations to demonstrate the feasibility of using existing low-cost software technologies and platforms with existing standard protocols for information reporting and retrieval to support a large system of uncontrolled mobile sensors",2005,0, 1619,On-demand overlay networking of collaborative applications,"We propose a new overlay network, called Generic Identifier Network (GIN), for collaborative nodes to share objects with transactions across affiliated organizations by merging the organizational local namespaces upon mutual agreement. Using local namespaces instead of a global namespace can avoid excessive dissemination of organizational information, reduce maintenance costs, and improve robustness against external security attacks. GIN can forward a query with an O(1) latency stretch with high probability and achieve high performance. In the absence of a complete distance map, its heuristic algorithms for self configuration are scalable and efficient. Routing tables are maintained using soft-state mechanisms for fault tolerance and adapting to performance updates of network distances. 
Thus, GIN has significant new advantages for building an efficient and scalable distributed hash table for modern collaborative applications across organizations",2005,0, 1620,An immune model and its application to a mobile robot simulator,"Immune computation is a burgeoning bioinformatics technique inspired by the natural immune system and can solve information security problems such as antivirus and fault detection. The immune model is a crucial problem of the artificial immune system. In this paper, an immune model was proposed for the application of a mobile robot simulator, which was infected by some worms, such as the love worm and the happy-time worm. The immune model was comprised of three tiers, including the inherent immune tier, the adaptive immune tier and the parallel computing tier. This immune model was built on the theories of the natural immune system and had many excellent features, such as adaptability, immunity, memory, learning, and robustness. The application example of the immune model in the mobile robot simulator showed that the artificial immune system can detect, recognize, learn and eliminate computer viruses, and can detect and repair faults such as software bugs, and so immune computation is an excellent approach for antivirus security. Moreover, the application fields and prospects of immune computation would be rich and successful in the near future",2005,0, 1621,Development of On-Line Diagnostics and Real Time Early Warning System for Vehicles,"An on-board diagnostics (OBD) system is developed to detect vehicle system errors and malfunctions for health diagnosis; OBD generates warning signals to vehicle operators as well as the maintenance engineers. However, once a warning signal is being generated, most operators are not knowledgeable enough to take any action on it. Data acquisition has to rely on maintenance engineers using special tools. Based on such a practical demand, this paper presents a new vehicle on-line diagnosis and real time early warning system to acquire OBD signals and transmit them to a Server of the Maintenance Center via GPRS mobile communication for immediate actions. In this paper, hardware and software in both design and implementation are discussed with preliminary tests. The test functions of the proposed system fulfil the rising requirements for modern vehicle systems",2005,0, 1622,Reliability and Sensitivity Analysis of Embedded Systems with Modular Dynamic Fault Trees,"Fault tree theories have been used for years because they can easily provide a concise representation of failure behavior of general non-repairable fault-tolerant systems. But the defect of traditional fault trees is a lack of accuracy when modeling dynamic failure behavior of certain systems with a fault-recovery process. A solution to this problem is called behavioral decomposition. A system will be divided into several dynamic or static modules, and each module can be further analyzed using BDD or Markov chains separately. In this paper, we show a decomposition scheme in which independent subtrees of a dynamic module are detected and solved hierarchically to save the computation time of solving Markov chains without an unacceptable loss of accuracy when assessing component sensitivities.
In the end, we present our analyzing software toolkit that implements our enhanced methodology.",2005,0, 1623,Software Reliability Modeling withWeibull-type Testing-Effort and Multiple Change-Points,"Software reliability is defined as the probability of failure-free software operation for a specified period of time in a specified environment. Over the past 30 years, many software reliability growth models (SRGMs) have been proposed for estimation of reliability growth of products during software development processes. SRGMs proposed in the literature took into consideration the amount of testing-effort spent on software testing which can be depicted as a Weibull-type curve. However, in reality, the consumption rate of testing-effort expenditures may not be a constant and could be changed at some time points. Therefore, in this paper, we will incorporate the concept of multiple change-points into the Weibull-type testing-effort function. New model is proposed and the applicability of proposed model is demonstrated through real software failure data set. Our experimental results show that the proposed model has a fairly accurate prediction capability.",2005,0, 1624,Developing the DigiQUAL protocol for digital library evaluation,"The distributed, project-oriented nature of digital libraries (DLs) has made them difficult to evaluate in aggregate. By modifying the methods and tools used to evaluate physical libraries' content and services, measures can be developed whose results can be used across a variety of DLs. The DigiQUAL protocol being developed by the Association of Research Libraries (ARL) has the potential to provide the National Science Digital Library (NSDL) with a standardized methodology and survey instrument with which to evaluate not only its distributed projects but also to gather data to assess the value and impact of the NSDL",2005,0, 1625,Project Management Trends of Pakistani Software Industry,"The ability to produce cost effective and quality software is heavily dependent on the maturity of the processes used to build the software. Software industry in general and Pakistani software industry in particular emerged and witnesses turmoil in the recent years. Now it is the right time to assess the project management capabilities, trends and requirements of the local industry. To assess the level of maturity of Pakistan software industry in software project management, an interview-based survey of the some Islamabad based well-known companies was carried out. The results of the survey highlight the trends and practices of the industry regarding project management, and its level of maturity",2005,0, 1626,On Image Compression using Digital Curvelet Transform,"This paper describes a novel approach to digital image compression using a new mathematical transform: the curvelet transform. The transform has shown promising results over wavelet transform for 2D signals. Wavelets, though well suited to point singularities have limitations with orientation selectivity, and therefore, do not represent two-dimensional singularities (e.g. smooth curves) effectively. This paper employs the curvelet transform for image compression, exhibiting good approximation properties for smooth 2D functions. Curvelet improves wavelet by incorporating a directional component. The curvelet transform finds a direct discrete-space construction and is therefore computationally efficient. In this paper, we divided 2D spectrum into fine slices using iterated tree structured filter bank. 
Different amount of quantized curvelet coefficients were then selected for lossy compression and entropy encoding. A comparison with wavelet based compression was made for standard images like Lena, Barbara, etc. Curvelet transform has resulted in high quality image compression for natural images. Our implementation offers exact reconstruction, prone to perturbations, ease of implementation and low computational complexity. The algorithm works fairly well for grayscale and colored images",2005,0, 1627,On the Effect of Ontologies on Quality of Web Applications,"The semantic Web can be seen as means to improve qualitative characteristics of Web applications. Ontologies play a key role in the semantic Web and, therefore, are expected to have a profound effect on quality of a Web application. We apply the Quint2 model to predict an impact of an ontology on a number of quality dimensions of a Web application. We estimate that an ontology is likely to significantly improve the functionality and maintainability dimensions. The usability dimension is affected to a lesser extent. We explain the expected increase of quality with improved effectiveness of development process caused primarily by application domain ontologies",2005,0, 1628,A New Allocation Scheme for Parallel Applications with Deadline and Security Constraints on Clusters,"Parallel applications with deadline and security constraints are emerging in various areas like education, information technology, and business. However, conventional job schedulers for clusters generally do not take security requirements of realtime parallel applications into account when making allocation decisions. In this paper, we address the issue of allocating tasks of parallel applications on clusters subject to timing and security constraints in addition to precedence relationships. A task allocation scheme, or TAPADS (task allocation for parallel applications with deadline and security constraints), is developed to find an optimal allocation that maximizes quality of security and the probability of meeting deadlines for parallel applications. In addition, we proposed mathematical models to describe a system framework, parallel applications with deadline and security constraints, and security overheads. Experimental results show that TAPADS significantly improves the performance of clusters in terms of quality of security and schedulability over three existing allocation schemes",2005,0, 1629,Adaptive Checkpointing for Master-Worker Style Parallelism,"We present a transparent, system-level checkpointing solution for master-worker parallelism that automatically adapts, upon restore, to the number of processor nodes available. We call this adaptive checkpointing. This is important, since nodes in a cluster fail. It also allows one to adapt to using mutliple cluster partitions, as they become available. Checkpointing a master-worker computation has the additional advantage of needing to checkpoint only the master process. This is both fast (0.05 s in our case), and more economical of disk space. We describe a system-level solution. The application writer does not declare what data structures to checkpoint. Furthermore, the solution is transparent. The application writer need not add code to request a checkpoint at appropriate locations. 
The system-level strategy avoids the labor-intensive and error-prone work of explicitly checkpointing the many data structures of a large program",2005,0, 1630,A Novel Technique for Modeling Radiation Effects in Solar Cells Utilizing SILVACO Virtual Wafer Fabrication Software,"A novel technique for modeling advanced solar cells using Silvaco virtual wafer fabrication software has been previously introduced. Over the past three years the new modeling approach has been extended to cover modeling of advanced multijunction cells, design and optimization of these devices, as well as design of new quad-junction cells. In this paper, the ATLAS device simulator from Silvaco International has been demonstrated to have the potential for predicting the effects of electron radiation in solar cells by modeling material defects. A gallium arsenide solar cell was simulated in ATLAS and compared to an actual cell with radiation defects identified using deep level transient spectroscopy techniques. The cell data were compared for various fluence levels of 1 MeV electron radiation and the results have shown an average of only five percent difference between experimental and simulated cell output characteristics. These results demonstrate that ATLAS software can be a viable tool for predicting solar cell degradation due to electron radiation.",2005,0, 1631,N-Version Software Systems Design,"The problem of developing an optimal structure for an N-version software system presents a very complex kind of optimization problem. This makes the use of deterministic optimization methods inappropriate for solving the stated problem. In this view, exploiting heuristic strategies looks more rational. In the field of pseudo-Boolean optimization theory, the so-called method of varied probabilities (MVP) has been developed to solve problems with a large dimensionality. Some additional modifications of MVP have been made to solve the problem of N-version systems design. Those algorithms take into account the discovered specific features of the objective function. The practical experiments have shown the advantage of using these algorithm modifications because they reduce the search space.",2005,0, 1632,Failure's Identification for Electromechanical Systems with Induction Motor,"In this paper the new approach for guaranteed state estimation for electromechanical systems with an induction motor is implemented for failure identification in such systems. The approach is based on the matrix comparison systems method, and a comparison with some other estimation methods was carried out in the MATLAB package. The experimental unit built for practical validation of the identification algorithms is presented together with the experimental data obtained. The new approach (matrix comparison systems method) for guaranteed state estimation of an induction motor (IM) and fault detection was presented in [1, 2]. State estimation is based on a discrete-time varying linear model of the IM [3, 4] with uncertainties and perturbations, which belong to known bounded sets with no hypothesis on their distribution inside these sets. The approach uses the measurement results.
A recursive and explicit algorithm is presented and illustrated by an example with real IM parameters, implemented in the MATLAB package.",2005,0, 1633,Diagnostics using airborne survey and fault location systems as the means to increase OHTL reliability,"Airborne survey application for diagnostics of overhead transmission lines (OHTL) is quite relevant for power utilities of industrialized countries, where the power network spans thousands or millions of km, of which a considerable part has reached a lifetime of 30-50 years or more. Airborne survey based on aerial scanning, as a method of OHTL condition monitoring, is an efficient instrument for detecting deviations of line elements from their regular condition and serves as a convenient facility for network utility inventory. The advantage of aerial scanning is a combination of high survey accuracy with high work productivity. Processing of digital survey data allows essential data required for OHTL reliability analysis to be obtained: precise span lengths, sag and tension values, conductor clearance to ground, crossed and adjacent objects, clearance to vegetation, and distance to nearby trees that may damage the OHTL if fallen. For analysis of OHTL reliability, existing software packages allow modeling of the condition of separate elements and the entire line under extreme ice and wind loads, and checking the safety of conductor clearance to ground and crossed lines under conditions of significant conductor overheating determined by the necessity to ensure transmission under long-term or short-term (but considerable) load increases. Collection, storage, systemizing and practical use of survey data for developing and implementing management decisions and rational usage of network resources are best accomplished with a specialized information system. The information system helps to integrate OHTL monitoring data, modules for recording and analyzing the technical condition of separate components and the entire line, and 2D and 3D representation of objects with high georeference accuracy. One negative example of insufficient OHTL reliability is a fault current caused by lightning, conductor or insulator mechanical damage, etc. The duration of an OHTL malfunction, and the timing and success of emergency elimination, depend greatly on the accuracy of fault location (FL) on the line. An advanced FL system allows faults to be located with an accuracy of 5 to 150 m. Combining this with aerial scanning data and visualizing the line section detected by the FL system essentially improves the efficiency of service technology and emergency recovery of the electric network by the maintenance crew, and hence increases the system reliability of power objects.",2005,0, 1634,PD diagnosis on medium voltage cables with Oscillating Voltage (OWTS),"Detecting, locating and evaluating partial discharges (PD) in the insulating material, terminations and joints provides the opportunity for quality control after installation and preventive detection of arising service interruptions. A sophisticated evaluation is necessary to distinguish between PD in several insulating materials and also in different types of terminations and joints. For the most precise evaluation of the degree and risk caused by PD, it is suggested to use a test voltage shape that is preferably similar to that under service conditions. Only under these requirements do the typical PD parameters, such as inception and extinction voltage, PD level and PD pattern, correspond to significant operational values.
On the other hand, the stress on the insulation should be limited during the diagnosis so as not to create irreversible damage and thereby worsen the condition of the test object. The paper introduces an Oscillating Wave Test System (OWTS), which meets these mentioned demands well. The design of the system, its functionality and especially the operating software are made for convenient field application. Field data and experience reports will be presented and discussed. These field data also serve as a good guide to the level of danger to the different insulating systems due to partial discharges.",2005,0,1285 1635,On-line detection of stator winding faults in controlled induction machine drives,"The operation of induction machines with fast switching power electric devices puts additional stress on the stator windings, which leads to an increased probability of machine faults. These faults can cause considerable damage and repair costs and - if not detected at an early stage - may end up in a total destruction of the machine. To reduce maintenance and repair costs, many methods have been developed and presented in the literature for an early detection of machine faults. This paper gives an overview of today's detection techniques and divides them into three major groups according to their underlying methodology. The focus will be on methods which are applicable to today's inverter-fed machines. In that case, and especially if operated in controlled mode, the behavior of the machine with respect to the fault is different from that for grid supply. This behavior is discussed and suitable approaches for fault detection are presented. Which method to eventually choose will depend on the application and the available sensors as well as hardware and software resources, always considering that the additional effort for the fault detection algorithm has to be kept as low as possible. The applicability of the presented fault detection techniques is also confirmed with practical measurements.",2005,0, 1636,Resource mapping and scheduling for heterogeneous network processor systems,"Task to resource mapping problems are encountered during (i) hardware-software co-design and (ii) performance optimization of Network Processor systems. The goal of the first problem is to find the task to resource mapping that minimizes the design cost subject to all design constraints. The goal of the second problem is to find the mapping that maximizes the performance, subject to all architectural constraints. To meet the design goals in performance, it may be necessary to allow multiple packets to be inside the system at any given instance of time, and this may give rise to resource contention between packets. In this paper, a Randomized Rounding (RR) based solution is presented for the task to resource mapping and scheduling problem. We also propose two techniques to detect and eliminate the resource contention. We evaluate the efficacy of our RR approach through extensive simulation. The simulation results demonstrate that this approach produces near optimal solutions in almost all instances of the problem in a fraction of the time needed to find the optimal solution. The quality of the solution produced by this approach is also better than the often-used list scheduling algorithm for the task to resource mapping problem.
Finally, we demonstrate, through a case study, the results of a Network Processor design and scheduling problem using our techniques.",2005,0, 1637,Software Engineering Education From Indian Perspective,"Software is omnipresent in today's world. India is a hub for more than 1000 software companies. The software industry is a major employment-providing industry in India. As a wholly intellectual artifact, software development is among the most labor-demanding, intricate, and error-prone technologies in human history. Software's escalating, vital role in systems of pervasive impact presents novel challenges for the education of software engineers. This paper focuses on the current status of software engineering education in India and suggestions for improvement so as to best suit the software industry's needs",2005,0, 1638,A Case Study: GQM and TSP in a Software Engineering Capstone Project,"This paper presents a case study, describing the use of a hybrid version of the team software process (TSP) in a capstone software engineering project. A mandatory subset of TSP scripts and reporting mechanisms were required, primarily for estimating the size and duration of tasks and for tracking project status against the project plan. These were supplemented by metrics and additional processes developed by students. Metrics were identified using the goal-question-metric (GQM) process and used to evaluate the effectiveness of project management roles assigned to each member of the project team. TSP processes and specific TSP forms are identified as evidence of learning outcome attainment. The approach allowed for student creativity and flexibility and limited the perceived overhead associated with use of the complete TSP. Students felt that the experience enabled them to further develop and demonstrate teamwork and leadership skills. However, limited success was seen with respect to defect tracking, risk management, and process improvement. The case study demonstrates that the approach can be used to assess learning outcome attainment and highlights for students the significance of software engineering project management",2005,0, 1639,Medical device software standards,"Much medical device software is safety-related, and therefore needs to have high integrity (in other words, its probability of failure has to be low). There is a consensus that if you want to develop high-integrity software, you need a quality system. This is because software is a complex product that is easy to change and difficult to test, and the management system that handles these issues must include such quality system elements as: detailed traceable specifications, disciplined processes, planned verification and validation, and a comprehensive configuration management and change control system. It is also agreed that software quality management systems need specific processes which are different from and additional to more general quality management systems such as that required by EN 13485. Historically, ISO 9000-3, Part 3: Guidelines for the application of ISO 9001:1994 to the development, supply, installation and maintenance of computer software, states these additional processes very clearly, but is not mandatory. (This is now ISO 90003.) In both Europe and the USA there is therefore a gap in both regulations and standards for medical devices. There is no comprehensive requirement specifically for software development methods.
In Europe, IEC 60601-1, Medical electrical equipment, Part 1: General requirements for safety and essential performance, has specific requirements for software in section 14, Programmable Electrical Medical Systems (PEMS). This requires (at a fairly abstract level) some basic processes and documents, and includes an invocation of the risk management process of ISO 14971, Medical devices: Application of risk management to medical devices. In the US, there is an FDA regulation requiring Good Manufacturing Practice, with guidance on software development methods (strangely entitled Software Validat",2005,0, 1640,Finds in testing experiments for model evaluation,"To evaluate the fault location and the failure prediction models, simulation-based and code-based experiments were conducted to collect the required failure data. The PIE model was applied to simulate failures in the simulation-based experiment. Based on syntax and semantic level fault injections, a hybrid fault injection model is presented. To analyze the injected faults, the difficulty to inject (DTI) and difficulty to detect (DTD) are introduced and are measured from the programs used in the code-based experiment. Three interesting results were obtained from the experiments: 1) Failures simulated by the PIE model without consideration of the program and testing features are unreliably predicted; 2) There is no obvious correlation between the DTI and DTD parameters; 3) The DTD for syntax level faults changes in a different pattern to that for semantic level faults when the DTI increases. The results show that the parameters have a strong effect on the failures simulated, and the measurement of DTD is not strict.",2005,0, 1641,iOptimize: A software capability for analyzing and optimizing connection-oriented data networks in real time,"This paper describes a service called iOptimize that analyzes and optimizes service providers' connection-oriented data networks. In these networks, online connection routing is used to set up connections quickly, but the simple path selection scheme and the limited information available for online routing can cause network capacity to be used inefficiently. This may result in the loss or disruption of user data transmissions, leading to a degraded quality of service (QoS) in a network capable of supporting much higher throughput. The objective of iOptimize is to detect and analyze such inefficiencies and to offset them by the occasional rerouting of a selected set of connections, while causing little or no disruption of the existing data services and network operations. By helping service providers support data transport services at the QoS levels desired, iOptimize lets them make the most of their existing network resources and helps them defer capital expenditure. Moreover, by tuning their networks to perform at optimal efficiency, it improves end users' experiences, resulting in lower churn/turnover and reduced maintenance costs.",2005,0, 1642,Defect Classification and Analysis,"Analyses of discovered defects and related information from quality assurance (QA) activities can help both developers and testers to detect and remove potential defects, and help other project personnel to improve the development process, to prevent injection of similar defects and to manage risk better by planning early for product support and services.
We next discuss these topics, and illustrate them through several case studies analyzing defects from system testing for some IBM products, and web-related defects for www.seas.smu.edu, the official web site for the School of Engineering and Applied Science, Southern Methodist University (SMU/SEAS).",2005,0, 1643,Testing and Configuration Management,"Software is our lifeblood and the source of profound advances, but no one can deny that much of it is error-prone and likely to become more so with increasing complexity. Useful software is the abstraction of a problem and its solution, conditionally stable for the operational range that has been tested. That definition has spawned thousands of viewgraphs and millions of words, but still the stuff hangs and crashes. It may be obvious, but not trivial, to restate that untested systems will not work. Today testing is an art, whether it is the fine, meticulous art of debugging or the broader-brush scenario testing. The tools already exist for moving test design theory from an art form to the scientific role of
The Price of Quality
Unit Testing
Integration Testing
System Testing
Reliability Testing
Stress Testing
Robust Testing
Robust Design
Prototypes
Identify Expected Results
Orthogonal Array Test Sets (OATS)
Testing Techniques
One-Factor-at-a-Time
Exhaustive
Deductive Analytical Method
Random/Intuitive Method
Orthogonal Array-Based Method
Defect Analysis
Case Study: The Case of the Impossible Overtime
Cooperative Testing
Graphic Footprint
Testing Strategy
Test Incrementally
Test Under No Load
Test Under Expected Load
Test Under Heavy Load
Test Under Overload
Test the Error Recovery Code
Diabolic Testing
Reliability Tests
Footprint
Regression Tests
Software Hot Spots
Software Manufacturing Defined
Configuration Management
Outsourcing
Test Modules
Faster Iteration
Meaningful Test Process Metrics",2005,0, 1644,An investigation of the effect of module size on defect prediction using static measures,"We used several machine learning algorithms to predict the defective modules in five NASA products, namely, CM1, JM1, KC1, KC2, and PC1. A set of static measures were employed as predictor variables. While doing so, we observed that a large portion of the modules were small, as measured by lines of code (LOC). When we experimented on the data subsets created by partitioning according to module size, we obtained higher prediction performance for the subsets that include larger modules. We also performed defect prediction using class-level data for KC1 rather than the method-level data. In this case, the use of class-level data resulted in improved prediction performance compared to using method-level data. These findings suggest that quality assurance activities can be guided even better if defect prediction is performed by using data that belong to larger modules.",2005,1, 1645,Assessment of a New Three-Group Software Quality Classification Technique: An Empirical Case Study,"The primary aim of risk-based software quality classification models is to detect, prior to testing or operations, components that are most likely to be of high risk. Their practical usage as quality assurance tools is gauged by the prediction-accuracy and cost-effectiveness aspects of the models. Classifying modules into two risk groups is the more commonly practiced trend. Such models assume that all modules predicted as high-risk will be subjected to quality improvements. Due to the always-limited reliability improvement resources and the variability of the quality risk-factor, a more focused classification model may be desired to achieve cost-effective software quality assurance goals. In such cases, calibrating a three-group (high-risk, medium-risk, and low-risk) classification model is more rewarding. We present an innovative method that circumvents the complexities, computational overhead, and difficulties involved in calibrating pure or direct three-group classification models. With the application of the proposed method, practitioners can utilize an existing two-group classification algorithm thrice in order to yield the three risk-based classes. An empirical approach is taken to investigate the effectiveness and validity of the proposed technique. Some commonly used classification techniques are studied to demonstrate the proposed methodology. They include the C4.5 decision tree algorithm, discriminant analysis, and case-based reasoning. For the first two, we compare the three-group model calibrated using the respective techniques with the one built by applying the proposed method. Any two-group classification technique can be employed by the proposed method, including those that do not provide a direct three-group classification model, e.g., logistic regression and certain binary classification trees, such as CART. Based on a case study of a large-scale industrial software system, it is observed that the proposed method yielded promising results. For a given classification technique, the expected cost of misclassification of the proposed three-group models was generally significantly better when compared to the technique's direct three-group model.
In addition, the proposed method is also evaluated against an alternate indirect three-group classification method.",2005,1, 1646,A quorum-based protocol for searching objects in peer-to-peer networks,"A peer-to-peer (P2P) system is an overlay network of peer computers without centralized servers, and many applications, such as file sharing systems, have been developed for such networks. Because the set of peers changes dynamically, design and verification of efficient protocols is a challenging task. In this paper, we consider an object searching problem under a resource model in which there are some replicas in the system and a lower bound on the ratio n'/n is known in advance, where n' is a lower bound on the number of peers that hold an original or replica of any object type and n is the total number of peers. In addition, we consider object searching with probabilistic success, i.e., for each object search, the object must be found with at least a specified probability strictly between 0 and 1. To solve such a problem efficiently, we propose a new communication structure, named probabilistic weak quorum systems (PWQS), which is an extension of the coterie. Then, we propose a fault-tolerant protocol for searching for objects in a P2P system. In our method, each peer does not maintain global information such as the set of all peers and a logical topology with global consistency. In our protocol, each peer communicates with only a small part of the peer set and, thus, our protocol is adaptive for huge-scale P2P networks.",2006,0, 1647,Rate-distortion performance of H.264/AVC compared to state-of-the-art video codecs,"In the domain of digital video coding, new technologies and solutions are emerging at a fast pace, targeting the needs of the evolving multimedia landscape. One of the questions that arises is how to assess these different video coding technologies in terms of compression efficiency. In this paper, several compression schemes are compared by means of peak signal-to-noise ratio (PSNR) and just noticeable difference (JND). The codecs examined are XviD 0.9.1 (conforming to the MPEG-4 Visual Simple Profile), DivX 5.1 (implementing the MPEG-4 Visual Advanced Simple Profile), Windows Media Video 9, MC-EZBC and H.264/AVC AHM 2.0 (version JM 6.1 of the reference software, extended with rate control). The latter plays a key role in this comparison because the H.264/AVC standard can be considered as the de facto benchmark in the field of digital video coding. The obtained results show that H.264/AVC AHM 2.0 outperforms current proprietary and standards-based implementations in almost all cases. Another observation is that the choice of a particular quality metric can influence general statements about the relation between the different codecs.",2006,0, 1648,Gate sizing to radiation harden combinational logic,"A gate-level radiation hardening technique for cost-effective reduction of the soft error failure rate in combinational logic circuits is described. The key idea is to exploit the asymmetric logical masking probabilities of gates, hardening gates that have the lowest logical masking probability to achieve cost-effective tradeoffs between overhead and soft error failure rate reduction. The asymmetry in the logical masking probabilities at a gate is leveraged by decoupling the physical from the logical (Boolean) aspects of soft error susceptibility of the gate.
Gates are hardened to single-event upsets (SEUs) with specified worst case characteristics in increasing order of their logical masking probability, thereby maximizing the reduction in the soft error failure rate for specified overhead costs (area, power, and delay). Gate sizing for radiation hardening uses a novel gate (transistor) sizing technique that is both efficient and accurate. A full set of experimental results for process technologies ranging from 180 to 70 nm demonstrates the cost-effective tradeoffs that can be achieved. On average, the proposed technique has a radiation hardening overhead of 38.3%, 27.1%, and 3.8% in area, power, and delay for worst case SEUs across the four process technologies.",2006,0, 1649,Helping small companies assess software processes,"A first step toward process improvement is identifying the strengths and weaknesses of an organization's software processes to determine effective improvement actions. An assessment can help an organization examine its processes against a reference model to determine the processes' capability or the organization's maturity, to meet quality, cost, and schedule goals, but small companies have difficulty running them. MARES, a set of guidelines for conducting 15504-conformant software process assessment, focuses on small companies",2006,0, 1650,Assessing the Quality of Collaborative Processes,"Use of effective and efficient collaboration is important for organizations to survive and thrive in today's competitive world. This paper presents quality constructs that can be used to evaluate the success of a collaboration process. Two types of collaboration processes are identified: 1) processes that are designed and executed by the same facilitator who designed them, and 2) processes that are designed by a collaboration engineer and executed many times by practitioners. Accordingly, the quality constructs have been divided into two categories. Constructs within the first category apply to both types of collaboration processes. This category includes constructs such as process effectiveness and efficiency, results quantity, results quality, satisfaction, and usability. The second category contains constructs that are useful from the perspective of the collaboration engineering approach: repeatable collaboration processes executed by practitioners. The three constructs important for this perspective are reusability, predictability, and transferability.",2006,0, 1651,Measuring the Quality of Ideation Technology and Techniques,"Ideation is an essential component of creativity and problem-solving. Researchers have measured the quality of ideation treatments by assigning quality scores to each unique idea generated in each session and then by calculating one or more of the sum-of-scores, average-quality-score, or count-of-good-ideas measures. We discuss the validity of these three measures and the potential biases associated with the sum-of-scores and average-quality measures. An experimental study comparing multiple levels of social comparison was used to illustrate the differences in the quality measures and the results revealed that research conclusions were dependent on the quality measure used.
Implications for future research are discussed, including a recommendation that future ideation research adopt the count-of-good-ideas measure for assessing ideation quality.",2006,0, 1652,Measurement Framework for Assessing Risks in Component-Based Software Development,"As component-based software development (CBSD) is gaining popularity and is being considered an efficient and effective approach to building large software applications and systems, potential risks within component-based practice areas such as quality assessment, complexity estimation, performance prediction, configuration, and application management should not be taken lightly. In the existing literature there is a lack of systematic work in identifying and assessing these risks. In particular, there is a lack of a structuring framework that could be helpful for related CBSD stakeholders to measure these risks. In this research we examine prior related research work in software measurement and aim to develop a practical risk measurement framework that classifies potential CBSD risks and related metrics and provides practical guidance for CBSD stakeholders.",2006,0, 1653,CrossTalk: cross-layer decision support based on global knowledge,"The dynamic nature of ad hoc networks makes system design a challenging task. Mobile ad hoc networks suffer from severe performance problems due to the shared, interference-prone, and unreliable medium. Routes can be unstable due to mobility, and energy can be a limiting factor for typical devices such as PDAs, mobile phones, and sensor nodes. In such environments cross-layer architectures are a promising new approach, as they can adapt protocol behavior to changing networking conditions. This article introduces CrossTalk, a cross-layer architecture that aims at achieving global objectives with local behavior. It further compares CrossTalk with other proposed cross-layer architectures. Finally, it analyzes the quality of the information provided by the architecture and presents a reference application to demonstrate the effectiveness of the general approach.",2006,0, 1654,Advancing candidate link generation for requirements tracing: the study of methods,"This paper addresses the issues related to improving the overall quality of dynamic candidate link generation for the requirements tracing process for verification and validation and independent verification and validation analysts. The contribution of the paper is four-fold: we define goals for a tracing tool based on analyst responsibilities in the tracing process, we introduce several new measures for validating that the goals have been satisfied, we implement analyst feedback in the tracing process, and we present a prototype tool that we built, RETRO (REquirements TRacing On-target), to address these goals. We also present the results of a study used to assess RETRO's support of goals and goal elements that can be measured objectively.",2006,0, 1655,Covering arrays for efficient fault characterization in complex configuration spaces,"Many modern software systems are designed to be highly configurable so they can run on and be optimized for a wide variety of platforms and usage scenarios. Testing such systems is difficult because, in effect, you are testing a multitude of systems, not just one. Moreover, bugs can and do appear in some configurations, but not in others.
Our research focuses on a subset of these bugs that are ""option-related"": those that manifest with high probability only when specific configuration options take on specific settings. Our goal is not only to detect these bugs, but also to automatically characterize the configuration subspaces (i.e., the options and their settings) in which they manifest. To improve efficiency, our process tests only a sample of the configuration space, which we obtain from mathematical objects called covering arrays. This paper compares two different kinds of covering arrays for this purpose and assesses the effect of sampling strategy on fault characterization accuracy. Our results strongly suggest that sampling via covering arrays allows us to characterize option-related failures nearly as well as if we had tested exhaustively, but at a much lower cost. We also provide guidelines for using our approach in practice.",2006,0, 1656,A new insight into postsurgical objective voice quality evaluation: application to thyroplastic medialization,"This paper aims at providing new objective parameters and plots, easily understandable and usable by clinicians and logopaedicians, in order to assess voice quality recovery after vocal fold surgery. The proposed software tool performs presurgical and postsurgical comparison of the main voice characteristics (fundamental frequency, noise, formants) by means of robust analysis tools, specifically devoted to dealing with highly degraded speech signals such as those under study. Specifically, we address the problem of quantifying voice quality, before and after medialization thyroplasty, for patients affected by glottis incompetence. Functional evaluation after thyroplastic medialization is commonly based on several approaches: videolaryngostroboscopy (VLS), for morphological aspects evaluation; the GRBAS scale and Voice Handicap Index (VHI), relative to perceptive and subjective voice analysis respectively; and the Multi-Dimensional Voice Program (MDVP), which provides objective acoustic parameters. While GRBAS has the drawback of relying entirely on the perceptive evaluation of trained professionals, MDVP often fails in performing analysis of highly degraded signals, thus preventing presurgical/postsurgical comparison in such cases. On the contrary, the new tool, being capable of dealing with severely corrupted signals, always allows a complete objective analysis. The new parameters are compared to scores obtained with the GRBAS scale and to some MDVP parameters, suitably modified, showing good correlation with them. Hence, the new tool could successfully replace or integrate existing ones. With the proposed approach, deeper insight into voice recovery and its possible changes after surgery can thus be obtained and easily evaluated by the clinician.",2006,0, 1657,Telephony-based voice pathology assessment using automated speech analysis,"A system for remotely detecting vocal fold pathologies using telephone-quality speech is presented. The system uses a linear classifier, processing measurements of pitch perturbation, amplitude perturbation and harmonic-to-noise ratio derived from digitized speech recordings. Voice recordings from the Disordered Voice Database Model 4337 system were used to develop and validate the system. Results show that while a sustained phonation, recorded in a controlled environment, can be classified as normal or pathologic with an accuracy of 89.1%, telephone-quality speech can be classified as normal or pathologic with an accuracy of 74.2%, using the same scheme.
Amplitude perturbation features prove most robust for telephone-quality speech. The pathologic recordings were then subcategorized into four groups, comprising normal, neuromuscular pathologic, physical pathologic and mixed (neuromuscular with physical) pathologic. A separate classifier was developed for classifying the normal group from each pathologic subcategory. Results show that neuromuscular disorders could be detected remotely with an accuracy of 87%, physical abnormalities with an accuracy of 78% and mixed pathology voice with an accuracy of 61%. This study highlights the real possibility for remote detection and diagnosis of voice pathology.",2006,0, 1658,Software defect association mining and defect correction effort prediction,"Much current software defect prediction work focuses on the number of defects remaining in a software system. In this paper, we present association rule mining based methods to predict defect associations and defect correction effort. This is to help developers detect software defects and assist project managers in allocating testing resources more effectively. We applied the proposed methods to the SEL defect data consisting of more than 200 projects over more than 15 years. The results show that, for defect association prediction, the accuracy is very high and the false-negative rate is very low. Likewise, for the defect correction effort prediction, the accuracy for both defect isolation effort prediction and defect correction effort prediction is also high. We compared the defect correction effort prediction method with other types of methods - PART, C4.5, and Naive Bayes - and show that accuracy has been improved by at least 23 percent. We also evaluated the impact of support and confidence levels on prediction accuracy, false-negative rate, false-positive rate, and the number of rules. We found that higher support and confidence levels may not result in higher prediction accuracy, and a sufficient number of rules is a precondition for high prediction accuracy.",2006,1, 1659,A Search Theoretical Approach to P2P Networks: Analysis of Learning,"One of the main characteristics of peer-to-peer systems is the highly dynamic nature of the users present in the system. In such a rapidly changing environment, end-user guarantees become hard to handle. In this paper, we propose a search-theoretic view for performing lookups. We define a new search mechanism with cost analysis for refining the lookups by predicting the arrival and departure possibilities of the users. Our system computes a threshold for the number of times that a user has to perform. We compare our results with the naive approach of accepting the first set of results as the basis.",2006,0, 1660,Business Processes Characterisation Through Definition of Structural and Non-Structural Criteria,"Workflow and Web Services play the main role in the development and realisation of B2B architectures. In this context, the principal target is to compose many services supplied by different providers, creating new value-added services. The Web Services technology provides the base for realising complex business processes through the composition of Web Services: the literature proposes, at the moment, two principal approaches to the coordination of network services: orchestration and choreography. In this paper we propose a framework for characterising the components of a business process which can be detected inside existing workflows.
We define a collection of structural and non-structural criteria, which allow the constitutive parts (components) of a workflow to be characterised. Targets can be different: these criteria can be used to search for reusable components in existing workflows, but also to verify if a given business process is able to support specific missions.",2006,0, 1661,Detecting move operations in versioning information,"Recently, there has been increasing research interest in mining versioning information, i.e. the analysis of the transactions made on version systems to understand how and when a software system evolves. One particular area of interest is the identification of move operations as these are key indicators for refactorings. Unfortunately, there exists no evaluation which identifies the quality (expressed in precision and recall) of the most commonly used detection technique and its underlying principle of name identity. To overcome this problem, the paper compares the precision and recall values of the name-based technique with two alternative techniques, one based on line matching and one based on identifier matching, by means of two case studies. From the results of these studies we conclude that the name-based technique is very precise, yet misses a significant number of move operations (low recall value). To improve the trade-off it is worthwhile to consider the line-based technique since it detects more matches with a slightly worse precision, or to use the number of overlapping identifiers when combined with an additional filter",2006,0, 1662,Software test cases: is one ever enough?,"In this paper, software testing theory was examined as it pertains to one test at a time. In doing so, the author hopes to highlight some useful facts about testing theory that are somewhat obvious but often overlooked. Some precise statements about how bad the one-test policy can be were also made",2006,0, 1663,Model-based system development for embedded mobile platforms,"With the introduction and popularity of wireless devices, the diversity of the platforms has also increased. There are different platforms and tools from different vendors such as Microsoft, Sun, Nokia, SonyEricsson and many more. Because of the relatively low-level programming interface, software development for the Symbian platform is a tiresome and error-prone task, whereas .NET CF contains higher level structures. This paper introduces the problem of software development for incompatible mobile platforms; moreover, it provides a model-driven architecture (MDA) and Domain Specific Modeling Language (DSML)-based solution. We also discuss the relevance of the model-based approach, which facilitates more efficient software development, because reuse and generative techniques are key characteristics of model-based computing. In the presented approach, the platform-independence lies in the graph rewriting-driven visual model transformation. This paper illustrates the creation of model compilers on a metamodeling basis by a software package called Visual Modeling and Transformation System (VMTS), which is an n-layer multipurpose modeling and metamodel-based transformation system.
A case study is also presented on how model compilers can be used to generate user interface handler code for different mobile platforms from the same platform-independent input models",2006,0, 1664,An approach to simplify the design of IFFT/FFT cores for OFDM systems,"In this paper we present an approach to simplify the design of IFFT/FFT cores for OFDM applications. A novel software tool, called AFORE, is proposed. It is able to generate efficient single- and multiple-mode IFFT/FFT processors. AFORE employs a parallel architecture, where the degree of parallelism can be varied. This way, the tool can find a trade-off between area and processing time to meet the system specification. In order to assess the quality of the proposed approach, results are provided for some of the most widely used OFDM standards, such as WLAN 802.11a/g, WMAN 802.16a, DVB-T.",2006,0, 1665,Modeling and analysis of functionality in eHome systems: dynamic rule-based conflict detection,"The domain of eHome systems is a special application area for pervasive computing. Many different kinds of devices are introduced to the home area to provide functionality for enhanced comfort or security. A similar level of heterogeneity can be found at the software level: many different vendors supply eHome systems with drivers and services, which are intended to compute sensor information and trigger devices in the eHome. This multi-level heterogeneity leads to system faults in terms of deadlocks and unpredictable or disillusioning behavior. We call these error conditions conflicts. Pervasive systems, especially eHome systems, are only useful, and thus successful, if such conflicts can be handled properly. In this paper, we analyze eHome systems with respect to types of conflicts and discuss how conflicts can be detected. We show that dynamic conflict detection is reasonable and possible by means of rule-based conflict detection. The detection is well-founded on a formal specification and is seamlessly integrated into the paradigm of component-based software construction",2006,0, 1666,Mission dependability modeling and evaluation of repairable systems considering maintenance capacity,"The mission dependability of repairable systems depends not only on the mission reliability and capacity of the system, but also on the maintenance capacity during the whole mission. The probability of successful mission completion is one of the important performance measures. For a complex mission that has many sub-missions of various kinds, its success probability is associated with the ready and execution duration, maintenance conditions in the working field and success requirements of each sub-mission. Maintenance conditions in the sub-mission working field mainly include replacement and repair of the failed components. According to these different maintenance conditions, we classify all sub-missions into three classes. By analyzing the state transitions during the ready period and execution period of each sub-mission, this paper presents a dependability model to evaluate the probability of mixed multi-mission success of repairable systems considering maintenance capacity. A simple example is provided to show the application of the model",2006,0, 1667,The accuracy of fault prediction in modified code - statistical model vs. expert estimation,"Fault prediction models still seem to be more popular in academia than in industry. In industry, expert estimations of fault proneness are the most popular methods of deciding where to focus the fault detection efforts.
In this paper, we present a study in which we empirically evaluate the accuracy of fault prediction offered by statistical models as compared to expert estimations. The study is industry-based. It involves a large telecommunication system and experts that were involved in the development of this system. Expert estimations are compared to simple prediction models built on another large system, also from the telecommunication domain. We show that the statistical methods clearly outperform the expert estimations. As the main reason for the superiority of the statistical models we see their ability to cope with large datasets, which results in their ability to perform reliable predictions for a larger number of components in the system, as well as the ability to perform prediction at a more fine-grained level, e.g., at the class instead of at the component level",2006,0, 1668,Automated translation of C/C++ models into a synchronous formalism,"For complex systems that are reusing intellectual property components, functional and compositional design correctness are an important part of the design process. Common system-level capture in software programming languages such as C/C++ allows for a comfortable design entry and simulation, but mere simulation is not enough to ensure proper design integration. Validating that reused components are properly connected to each other and function correctly has become a major issue for such designs and requires the use of formal methods. In this paper, we propose an approach in which we automatically translate C/C++ programs into the synchronous formalism SIGNAL, hence enabling the application of formal methods without having to deal with the complex and error-prone task of building formal models by hand. The main benefit of considering the model of SIGNAL for C/C++ languages lies in the formal nature of the synchronous language SIGNAL, which supports verification and optimization techniques. The C/C++ into SIGNAL transformation process is performed in two steps. We first translate C/C++ programs into an intermediate Static Single Assignment form, and next we translate this into SIGNAL programs. Our implementation of the SIGNAL generation is inserted in the GNU compiler collection source code as an additional front-end optimization pass. It thus benefits from both GCC code optimization techniques as well as the optimizations of the SIGNAL compiler",2006,0, 1669,A routing methodology for achieving fault tolerance in direct networks,"Massively parallel computing systems are being built with thousands of nodes. The interconnection network plays a key role in the performance of such systems. However, the high number of components significantly increases the probability of failure. Additionally, failures in the interconnection network may isolate a large fraction of the machine. It is therefore critical to provide an efficient fault-tolerant mechanism to keep the system running, even in the presence of faults. This paper presents a new fault-tolerant routing methodology that does not degrade performance in the absence of faults and tolerates a reasonably large number of faults without disabling any healthy node. In order to avoid faults, for some source-destination pairs, packets are first sent to an intermediate node and then from this node to the destination node. Fully adaptive routing is used along both subpaths. The methodology assumes a static fault model and the use of a checkpoint/restart mechanism.
However, there are scenarios where the faults cannot be avoided solely by using an intermediate node. Thus, we also provide some extensions to the methodology. Specifically, we propose disabling adaptive routing and/or using misrouting on a per-packet basis. We also propose the use of more than one intermediate node for some paths. The proposed fault-tolerant routing methodology is extensively evaluated in terms of fault tolerance, complexity, and performance.",2006,0, 1670,Performance analysis of the FastICA algorithm and Cramér-Rao bounds for linear independent component analysis,"The FastICA or fixed-point algorithm is one of the most successful algorithms for linear independent component analysis (ICA) in terms of accuracy and computational complexity. Two versions of the algorithm are available in the literature and in software: a one-unit (deflation) algorithm and a symmetric algorithm. The main results of this paper are analytic closed-form expressions that characterize the separating ability of both versions of the algorithm in a local sense, assuming a ""good"" initialization of the algorithms and long data records. Based on the analysis, it is possible to combine the advantages of the symmetric and one-unit version algorithms and predict their performance. To validate the analysis, a simple check of the saddle points of the cost function is proposed that allows a global minimum of the cost function to be found in almost 100% of simulation runs. Second, the Cramér-Rao lower bound for linear ICA is derived as an algorithm-independent limit of the achievable separation quality. The FastICA algorithm is shown to approach this limit in certain scenarios. Extensive computer simulations supporting the theoretical findings are included.",2006,0, 1671,Agent-based self-healing protection system,"This paper proposes an agent-based paradigm for self-healing protection systems. Numerical relays implemented with intelligent electronic devices are designed as relay agents to perform a protective relaying function in cooperation with other relay agents. A graph-theory-based expert system, which can be integrated with a supervisory control and data acquisition system, has been developed to divide the power grid into primary and backup protection zones online, and all relay agents are assigned to specific zones according to the system topological configuration. In order to facilitate a more robust, less vulnerable protection system, predictive and corrective self-healing strategies are implemented as guideline regulations of the relay agent, and the relay agents within the same protection zone communicate and cooperate to detect, locate, and trip faults precisely with primary and backup protection. Performance of the proposed protection system has been simulated with cascading faults, failures in communication and protection units, and compared with a coordinated directional overcurrent protection system.",2006,0, 1672,Software-based transparent and comprehensive control-flow error detection,"Shrinking microprocessor feature size and growing transistor density may increase the soft-error rates to unacceptable levels in the near future. While reliable systems typically employ hardware techniques to address soft errors, software-based techniques can provide a less expensive and more flexible alternative. This paper presents a control-flow error classification and proposes two new software-based comprehensive control-flow error detection techniques.
The new techniques are better than the previous ones in the sense that they detect errors in all the branch-error categories. We implemented the techniques in our dynamic binary translator so that the techniques can be applied to existing x86 binaries transparently. We compared our new techniques with the previous ones and we show that our methods cover more errors while having similar performance overhead.",2006,0, 1673,An Agglomerative Clustering Methodology For Data Imputation,"The prediction of accurate effort estimates from software project data sets still remains a challenging problem. Major amounts of data are frequently found missing in these data sets that are utilized to build effort/cost/time prediction models. Current techniques used in the industry ignore all the missing data and provide estimates based on the remaining complete information. Thus, the very estimates are error-prone. In this paper, we investigate the design and application of a hybrid methodology on six real-time software project data sets in order to better the prediction accuracies of the estimates. We perform useful experimental analyses and evaluate the impact of the methodology. Finally, we discuss the findings and elaborate on the appropriateness of the methodology",2006,0, 1674,A fault tolerance mechanism for network intrusion detection system based on intelligent agents (NIDIA),"The intrusion detection system (IDS) has as its objective to identify individuals that try to use a system in an unauthorized way, or those that have authorization to use it but abuse their privileges. To accomplish its function, the IDS must, in some way, guarantee reliability and availability for its own application, so that it can maintain continuity of the services even in the case of faults, mainly faults caused by malicious agents. This paper proposes an adaptive fault tolerance mechanism for a network intrusion detection system based on intelligent agents. We propose the creation of a society of agents that monitors a system to collect information related to agents and hosts. Using the information which is collected, it is possible to detect which agents are still active, which agents should be replicated and which strategy should be used. The process of replication depends on each type of agent, and its importance to the system at different moments of processing. We use some agents as sentinels for monitoring, thus allowing us to accomplish some important tasks such as load balancing, migration, and detection of malicious agents, to guarantee the security of the IDS itself",2006,0, 1675,Persee: addressing the needs of the digitalisation and online accessibility of back collections through robust and integrated tools,"This paper covers the way in which the Persee program has addressed the issue of digitalisation and online accessibility of back collections of journals in social and human sciences. It depicts the main features of the project, considering both its volumetric and qualitative aspects. Emphasis is laid on two main points: on the one hand, the dematerialisation of the document - enhancement in the quality of online images and optical character recognition - used as an assistance to documentation as well as an enrichment of online information. On the other hand, the XML structure of the digitalised issues, allowing, amongst other features, a strict respect of intellectual property.
As a conclusion, the results of the first year of production of Persee will be assessed",2006,0, 1676,Improving the quality of degraded document images,"It is common for libraries to provide public access to historical and ancient document image collections. It is common for such document images to require specialized processing in order to remove background noise and become more legible. In this paper, we propose a hybrid binarization approach for improving the quality of old documents using a combination of global and local thresholding. First, a global thresholding technique specifically designed for old document images is applied to the entire image. Then, the image areas that still contain background noise are detected and the same technique is re-applied to each area separately. Hence, we achieve better adaptability of the algorithm in cases where various kinds of noise coexist in different areas of the same image, while avoiding the computational and time cost of applying local thresholding to the entire image. Evaluation results based on a collection of historical document images indicate that the proposed approach is effective in removing background noise and improving the quality of degraded documents, while documents already in good condition are not affected",2006,0, 1677,Enabling quality and schedule predictability in SoC design using HandoffQC,"Design of state-of-the-art SoCs often requires multiple design data handoffs between the sub-teams involved in their development. Handoff quality issues account for a significant portion of the wasted effort during SoC development, principally due to the completeness, correctness and consistency of different elements of the handoff. Such issues impact silicon quality and schedule predictability, due to the re-work effort involved. HandoffQC has been developed as an integrated QC system to qualify incoming handoffs, which ensures handoff and silicon issues are detected and fixed up-front. HandoffQC also allows for applying learnings from one design to the next, promoting continuous process improvement. The system has been architected to be easily extensible in terms of the quality checks, is user configurable and can easily be integrated into design flows. HandoffQC has been deployed on many production designs where it has successfully identified several handoff and potential silicon issues before they resulted in downstream design re-work",2006,0, 1678,Influence of adaptive data layouts on performance in dynamically changing storage environments,"For most of today's IT environments, the tremendous need for storage capacity in combination with a required minimum I/O performance has become highly critical. In dynamically growing environments, a storage management solution's underlying data distribution scheme has a great impact on the overall system I/O performance. The evaluation of a number of open system storage virtualization solutions and volume managers has shown that all of them lack the ability to automatically adapt to changing access patterns and storage infrastructures; many of them require an error-prone manual re-layout of the data blocks, or rely on a very time-consuming re-striping of all available data. This paper evaluates the performance of conventional data distribution approaches compared to the adaptive virtualization solution V:DRIVE in dynamically changing storage environments. Changes of the storage infrastructure are normally not considered in benchmark results, but can have a significant impact on storage performance.
Using synthetic benchmarks, V:DRIVE is compared in such changing environments with the non-adaptive Linux logical volume manager (LVM). The performance results of our tests clearly outline the necessity of adaptive data distribution schemes.",2006,0, 1679,A framework to support run-time assured dynamic reconfiguration for pervasive computing environments,"With the increasing use of pervasive computing environments (PCEs), developing dynamic reconfigurable software in such environments becomes an important issue. The ability to change software components in running systems has advantages such as building adaptive, long-life, and self-reconfigurable software as well as increasing invisibility in PCEs. As dynamic reconfiguration is performed in error-prone wireless mobile systems frequently, it can threaten system safety. Therefore, a mechanism to support assured dynamic reconfiguration at run-time for PCEs is required. In this paper, we propose an assured dynamic reconfiguration framework (ADRF) with emphasis on assurance analysis. The framework is implemented and is available for further research. To evaluate the framework, an abstract case study including reconfigurations has been applied using our own developed simulator for PCEs. Our experience shows that ADRF properly preserves reconfiguration assurance.",2006,0, 1680,Model-based runtime analysis of distributed reactive systems,"Reactive distributed systems have pervaded everyday life and objects, but often lack measures to ensure adequate behaviour in the presence of unforeseen events or even errors at runtime. As interactions and dependencies within distributed systems increase, the problem of detecting failures which depend on the exact situation and environment conditions they occur in grows. As a result, not only the detection of failures is increasingly difficult, but also the differentiation between the symptoms of a fault, and the actual fault itself, i.e., the cause of a problem. In this paper, we present a novel and efficient approach for analysing reactive distributed systems at runtime, in that we provide a framework for detecting failures as well as identifying their causes. Our approach is based upon monitoring safety-properties, specified in the linear time temporal logic LTL (respectively, TLTL) to automatically generate monitor components which detect violations of these properties. Based on the results of the monitors, a dedicated diagnosis is then performed in order to identify explanations for the misbehaviour of a system. These may be used to store detailed log files, or to trigger recovery measures. Our framework is built modular, layered, and uses merely a minimal communication overhead - especially when compared to other, similar approaches. Further, we sketch first experimental results from our implementations, and describe how it can be used to build a variety of distributed systems using our techniques.",2006,0, 1681,Evaluating software refactoring tool support,"Up to 75% of the costs associated with the development of software systems occur post-deployment during maintenance and evolution. Software refactoring is a process that can significantly reduce the costs associated with software evolution. Refactoring is defined as internal modification of source code to improve system quality, without change to observable behaviour. Tool support for software refactoring attempts to further reduce evolution costs by automating manual, error-prone and tedious tasks. 
Although the process of refactoring is well-defined, tools supporting refactoring do not support the full process. Existing tools suffer from issues associated with the level of automation, the stages of the refactoring process supported or automated, the subset of refactorings that can be applied, and complexity of the supported refactorings. This paper presents a framework for evaluating software refactoring tool support based on the DESMET method. For the DESMET application, a functional analysis of the requirements for supporting software refactoring is used in conjunction with a case study. This evaluation was completed to assess the support provided by six Java refactoring tools and to evaluate the efficacy of using the DESMET method for evaluating refactoring tools.",2006,0, 1682,An automated system interoperability test bed for WPA and WPA2,"The discovery of several attacks on WEP during the past few years has rendered the first WLAN security standard useless. Thus, new mechanisms had to be defined to protect current and future wireless infrastructures. However, some parts of the new standards WPA and WPA2/IEEE802.11i respectively require changes in the used hardware. To ensure interoperability between different vendor's products the Wi-Fi alliance provides a certificate that can be obtained by passing several fixed tests. Unfortunately, there exists no standard solution so far to get your products ready for the certification process. Each vendor has to do his homework by hand. To overcome this manual and error-prone process we have developed a test environment for conducting automated system interoperability tests. In this paper we outline the Wi-Fi certification process and categorize necessary test requirements to be fulfilled. We further discuss our solution, i.e., the setup of our test environment and selected implementation details of the associated control software.",2006,0, 1683,Will Johnny/Joanie Make a Good Software Engineer? Are Course Grades Showing the Whole Picture?,"Predicting future success of students as software engineers is an open research area. We posit that current grading means do not capture all the information that may predict whether students will become good software engineers. We use one such piece of information, traceability of project artifacts, to illustrate our argument. Traceability has been shown to be an indicator of software project quality in industry. We present the results of a case study of a University of Waterloo graduate-level software engineering course where traceability was examined as well as course grades (such as mid-term, project grade, etc.). We found no correlation between the presence of good traceability and any of the course grades, lending support to our argument",2006,0, 1684,Using Change Propagation Probabilities to Assess Quality Attributes of Software Architectures 1,"
First Page of the Article
",2006,0, 1685,Can Cohesion Predict Fault Density?,"
First Page of the Article
",2006,0, 1686,Video transcoding using network processors to support dynamically adaptive video multicast,"Heterogeneity of networks and end systems poses challenges for multicast based collaborative applications. In traditional multicasting, the sender transmits video at the same rate to all receivers independent of their network connection, end system equipment, and users' preferences. This wastes resources and may also result in some receivers having their quality expectations unsatisfied. This problem can be addressed, near the network edge, by applying dynamic, in-network transcoding of video streams. In this paper, we design, implement, and assess a network processor (NP) based video transcoding system using the Intel IXP1200. Experiments suggest that our system can adapt the video rate of MPEG-1 streams to a desired level on a per packet basis for moderate traffic levels.",2006,0, 1687,"An """"intent-oriented"""" approach for Multi-Device User Interface Design","A large number of heterogeneous and computing devices, such as PCs, PDAs, and cell phones, nowadays are used to access the same information. Currently, designers designing such multi-device user interfaces have to design a UI separately for each device, which is a time consuming and error prone activity. This paper discusses our approach to the multi-device interface development. In particular we describe how abstract UI descriptions and task model management systems can be combined to develop adaptive UIs for a wide range of devices. The designed software framework allows generating the concrete user interface at runtime, by adapting it to the client's execution environment. As shown in the example application, three different environments have been the target of our implementation work: standard PCs, PDAs and mobile phones equipped with Java micro edition",2006,0, 1688,Detecting anomaly and failure in Web applications,"Improving Web application quality will require automated evaluation tools. Many such tools are already available either as commercial products or research prototypes. The authors use their automated evaluation tools, ReWeb and TestWeb, for Web analysis and testing that improves Web pages and applications and to find some anomalies and failures in four case studies.",2006,0, 1689,Effects of hardware imperfection on six-port direct digital receivers calibrated with three and four signal standards,"Online calibration is essential for the proper operation of six-port digital receivers in communication systems, as such calibration cancels out receiver ageing and manufacturing defects. Simple calibration methods, using only three or four signal standards (SS), were reported in a previous paper, where these methods were assessed using an ADS software simulation for an ideal six-port circuit. A unified and general theory is presented for examining the effects of hardware imperfection on the performance of a six-port receiver (SPR) calibrated using these simplified techniques. This can be used to establish permissible hardware tolerances for proper operation of SPRs in different digital modulations.",2006,0, 1690,Pixy: a static analysis tool for detecting Web application vulnerabilities,"The number and the importance of Web applications have increased rapidly over the last years. At the same time, the quantity and impact of security vulnerabilities in such applications have grown as well. Since manual code reviews are time-consuming, error-prone and costly, the need for automated solutions has become evident. 
In this paper, we address the problem of vulnerable Web applications by means of static source code analysis. More precisely, we use flow-sensitive, interprocedural and context-sensitive dataflow analysis to discover vulnerable points in a program. In addition, alias and literal analysis are employed to improve the correctness and precision of the results. The presented concepts are targeted at the general class of taint-style vulnerabilities and can be applied to the detection of vulnerability types such as SQL injection, cross-site scripting, or command injection. Pixy, the open source prototype implementation of our concepts, is targeted at detecting cross-site scripting vulnerabilities in PHP scripts. Using our tool, we discovered and reported 15 previously unknown vulnerabilities in three Web applications, and reconstructed 36 known vulnerabilities in three other Web applications. The observed false positive rate is at around 50% (i.e., one false positive for each vulnerability) and is therefore low enough to permit effective security audits",2006,0, 1691,Monitoring Computer Interactions to Detect Early Cognitive Impairment in Elders,"Maintaining cognitive performance is a key factor influencing elders' ability to live independently with a high quality of life. We have been developing unobtrusive measures to monitor cognitive performance and potentially predict decline using information from routine computer interactions in the home. Early detection of cognitive decline offers the potential for intervention at a point when it is likely to be more successful. This paper describes recommendations for the conduct of studies monitoring cognitive function based on routine computer interactions in elders' home environments",2006,0, 1692,Test system for device drivers of embedded systems,"Device drivers are difficult to write and error-prone, and thus constitute the main portion of system failures. Therefore, to ensure that device drivers can run properly, their quality has to be assured. In this paper, we suggest an architecture of a test system for device drivers. This architecture is designed to reflect embedded systems whose resources are usually limited. We also propose a reusable test case generation method for device drivers. We hope our method reduces the high cost of testing device drivers",2006,0, 1693,Risk assessment based on weak information using belief functions: a case study in water treatment,"Whereas probability theory has been very successful as a conceptual framework for risk analysis in many areas where a lot of experimental data and expert knowledge are available, it presents certain limitations in applications where only weak information can be obtained. One such application investigated in this paper is water treatment, a domain in which key information such as input water characteristics and failure rates of various chemical processes is often lacking. An approach to handle such problems is proposed, based on the Dempster-Shafer theory of belief functions. Belief functions are used to describe expert knowledge of treatment process efficiency, failure rates, and latency times, as well as statistical data regarding input water quality. Evidential reasoning provides mechanisms to combine this information and assess the plausibility of various noncompliance scenarios. This methodology is shown to boil down to the probabilistic one where data of sufficient quality are available. 
This case study shows that belief function theory may be considered as a valuable framework for risk analysis studies in ill-structured or poorly informed application domains",2006,0, 1694,SHARP: a new real-time scheduling algorithm to improve security of parallel applications on heterogeneous clusters,"This paper addresses the problem of improving quality of security for real-time parallel applications on heterogeneous clusters. We propose a new security- and heterogeneity-driven scheduling algorithm (SHARP for short), which strives to maximize the probability that parallel applications are executed in time without any risk of being attacked. Because of high security overhead in existing clusters, an important step in scheduling is to guarantee jobs' security requirements while minimizing overall execution times. The SHARP algorithm accounts for security constraints in addition to different processing capabilities of each node in a cluster. We introduce two novel performance metrics, degree of security deficiency and risk-free probability, to quantitatively measure quality of security provided by a heterogeneous cluster. Both security and performance of SHARP are compared with two well-known scheduling algorithms. Extensive experimental studies using real-world traces confirm that the proposed SHARP algorithm significantly improves security and performance of parallel applications on heterogeneous clusters",2006,0, 1695,A QoS-negotiable middleware system for reliably multicasting messages of arbitrary size,"E-business organizations commonly trade services together with quality of service (QoS) guarantees that are often dynamically agreed upon prior to service provisioning. Violating agreed QoS levels incurs penalties and hence service providers agree to QoS requests only after assessing the resource availability. Thus the system should, in addition to providing the services: (i) monitor resource availability, (ii) assess the affordability of a requested QoS level, and (iii) adapt autonomically to QoS perturbations which might undermine any assumptions made during assessment. This paper will focus on building such a system for reliably multicasting messages of arbitrary size over a loss-prone network of arbitrary topology such as the Internet. The QoS metrics of interest will be reliability, latency and relative latency. We meet the objectives (i)-(iii) by describing a network monitoring scheme, developing two multicast protocols, and by analytically estimating the achievable latencies and reliability in terms of controllable protocol parameters. Protocol development involves extending in two distinct ways an existing QoS-adaptive protocol designed for a single packet. Analytical estimation makes use of experimentally justified approximations and their impact is evaluated through simulations. As the protocol extension approaches are complementary in nature, so are the application contexts they are found best suited to; e.g., one is suited to small messages while the other to large messages",2006,0, 1696,Integrated Verification Approach during ADL-Driven Processor Design,"Nowadays, architecture description languages (ADLs) are getting popular to achieve quick and optimal design convergence during the development of application specific instruction-set processors (ASIPs). Verification, in various stages of such ASIP development, is a major bottleneck hindering widespread acceptance of ADL-based processor design approach. 
Traditional verification of processors is only applied at register transfer level (RTL) or below. In the context of ADL-based ASIP design, this verification approach is often inconvenient and error-prone, since design and verification are done at different levels of abstraction. In this paper, this problem is addressed by presenting an integrated verification approach during ADL-driven processor design. Our verification flow includes the idea of automatic assertion generation during high-level synthesis and support for automatic test-generation utilizing the ADL-framework for ASIP design. We show the benefit of our approach by trapping errors in a pipelined SPARC-compliant processor architecture",2006,0, 1697,Byzantine Anomaly Testing for Charm++: Providing Fault Tolerance and Survivability for Charm++ Empowered Clusters,"Recent shifts in high-performance computing have increased the use of clusters built around cheap commodity processors. A typical cluster consists of individual nodes, containing one or several processors, connected together with a high-bandwidth, low-latency interconnect. There are many benefits to using clusters for computation, but also some drawbacks, including a tendency to exhibit low Mean Time To Failure (MTTF) due to the sheer number of components involved. Recently, a number of fault-tolerance techniques have been proposed and developed to mitigate the inherent unreliability of clusters. These techniques, however, fail to address the issue of detecting non-obvious faults, particularly Byzantine faults. At present, effectively detecting Byzantine faults is an open problem. We describe the operation of ByzwATCh, a module for run-time detection of Byzantine hardware errors as part of the Charm++ parallel programming framework",2006,0, 1698,Survival of the Internet applications: a cluster recovery model,"Internet applications are becoming increasingly widely used by millions of people around the world, while accidents or disruptions of service are also dramatically increasing. Accidents or disruptions occur either because of disasters or because of malicious attacks. Disasters cannot be completely prevented. Prevention is a necessary but not a sufficient component of disaster handling. In this case, we have to prepare thoroughly to reduce the recovery time and get the users back to work faster. In this paper, we present a cluster recovery model to increase the survivability level of Internet applications. We construct a state transition model to describe the behaviors of cluster systems. By mapping recovery actions onto this transition model as a stochastic process, we capture system behaviors and obtain mathematical steady-state solutions of that chain. We first analyze steady-state behaviors, leading to measures like steady-state availability. By transforming this model with the system states, we compute a system measure, the mean time to repair (MTTR), and also compute probabilities of cluster system failures in the face of disruptions. Our model with the recovery actions has several benefits, which include reducing the time to get the users back to work and making recovery performance insensitive to the selection of a failure treatment parameter",2006,0, 1699,Building very high reliability into the design and manufacture of relays,"Protection relays are essential devices in detecting power system faults, instructing circuit breakers when to trip. 
It is thus essential that the relays offer the level of dependability (assured trip operation for an in-zone fault) and security (stability when no trip operation is required) demanded in power system applications. The lecture investigates typical processes within the design cycle, and manufacturing operations, which seek to ensure compliance. The lecture recaps the evolution of protection relay technologies, from electromechanical to numerical, highlighting possible failure modes and setting/commissioning errors. Special focus is given on modern numerical (digital) relays, presenting the typical hardware and software build which together create the functional device. A case study design cycle is outlined, showing how the process can be controlled. Verification and validation testing, certification/approval testing, and regression test concepts are introduced, especially focusing on real-time digital simulator testing. Manufacturing issues assuring reliability are introduced, right from component sourcing strategies through the serial production stages. The pros and cons of other test/inspection philosophies are presented, such as accelerated testing (eg. heatsoaking).",2006,0, 1700,The hidden cost of mismanaging calculations [engineering calculations engineering computing],"Engineering calculations programmed into custom software may execute efficiently, but tend to be hard to use. Thus, they are virtually useless for managing engineering information. Spreadsheets are a big part of the problem. They are more about crunching numbers than documenting context, so they can be a risky tool for managing calculations. Spreadsheets show answers but omit context and are error prone. They are unsuited to the task of modelling, analysing and documenting engineering designs. An electronic calculation 'worksheet' is a good solution for effectively documenting design and engineering processes. Unlike spreadsheets, they employ real mathematical notation and capture - in human-readable text - the assumptions, methods and critical data behind every calculation. They may also include illustrative graphs, annotations and sketches - in essence, knowledge captured in a shareable form. Organisations can build on the value of these worksheets by organising, tracking, and controlling and sharing them in a Web-based repository. Calculations can be retrieved any time for reuse, validation, refinement, reporting and publishing - all in their proper context.",2006,0, 1701,How Developers Copy,"Copy-paste programming is dangerous as it may lead to hidden dependencies between different parts of the system. Modifying clones is not always straight forward, because we might not know all the places that need modification. This is even more of a problem when several developers need to know about how to change the clones. In this paper, we correlate the code clones with the time of the modification and with the developer that performed the modification to detect patterns of how developers copy from one another. We develop visualization, named clone evolution view, to represent the evolution of the duplicated code. 
We show the relevance of our approach on several large case studies and we distill our experience in the form of interesting copy patterns",2006,0, 1702,Leveraged Quality Assessment using Information Retrieval Techniques,"The goal of this research is to apply language processing techniques to extend human judgment into situations where obtaining direct human judgment is impractical due to the volume of information that must be considered. One aspect of this is leveraged quality assessment, which can be used to evaluate third-party coded subsystems, to track quality across the versions of a program, to assess the comprehension effort (and subsequent cost) required to make a change, and to identify parts of a program in need of preventative maintenance. A description of the QALP tool, its output from just under two million lines of code, and an experiment aimed at evaluating the tool's use in leveraged quality assessment are presented. Statistically significant results from this experiment validate the use of the QALP tool in human-leveraged quality assessment",2006,0, 1703,A Metric-Based Heuristic Framework to Detect Object-Oriented Design Flaws,"One of the important activities in the re-engineering process is detecting design flaws. Such design flaws prevent efficient maintenance and further development of a system. This research proposes a novel metric-based heuristic framework to detect and locate object-oriented design flaws from the source code. This is accomplished by evaluating the design quality of an object-oriented system through quantifying deviations from good design heuristics and principles. While design flaws can occur at any level, the proposed approach assesses the design quality of the internal and external structure of a system at the class level, which is the most fundamental level of a system. In a nutshell, design flaws are detected and located systematically in two phases using a generic OO design knowledge-base. In the first phase, hotspots are detected by primitive classifiers via measuring metrics indicating a design feature (e.g. complexity). In the second phase, individual design flaws are detected by composite classifiers using a proper set of metrics. We have chosen the JBoss application server as the case study, due to its pure OO, large-size structure, and its success as an open source J2EE platform among developers",2006,0, 1704,Identification of Design Roles for the Assessment of Design Quality in Enterprise Applications,"The software industry is increasingly confronted with the issues of understanding and maintaining a special type of object-oriented systems, namely enterprise applications (EA). In recent years, many specific rules and patterns for the design of such applications were proposed. These new specific principles of EA design define precise roles (patterns) for classes and methods, and then describe """"good-design"""" rules in terms of such roles. Yet, these roles are rarely explicitly documented; therefore, due to their importance for an efficient understanding and assessment of EA design, they must be identified and localized in the source code based on their specificities. In this paper we define a suite of techniques for the identification and location of four such roles, all related to the data source layer of an EA. 
Using the knowledge about these roles, we show how this can improve the accuracy of formerly defined techniques for detecting two well-known design problems (i.e., data class and feature envy), making them more applicable for usage on enterprise systems. Based on an experimental study conducted on three EAs, we prove the feasibility of the approach, discuss its benefits, and touch on the issues that need to be addressed in the future",2006,0, 1705,Towards a Client Driven Characterization of Class Hierarchies,"Object-oriented legacy systems are hard to maintain because they are hard to understand. One of the main understanding problems is revealed by the so-called """"yo-yo effect"""" that appears when a developer or maintainer wants to track a polymorphic method call. At least part of this understanding problem is due to the dual nature of the inheritance relation, i.e., the fact that it can be used both as a code and/or an interface reuse mechanism. Unfortunately, in order to find out the original intention for a particular hierarchy, it is not enough to look at the hierarchy itself; rather, an in-depth analysis of the hierarchy's clients is required. In this paper we introduce a new metrics-based approach that helps us characterize the extent to which a base class was intended for interface reuse, by analyzing how clients use the interface of that base class. The idea of the approach is to quantify the extent to which clients treat uniformly the instances of the descendants of the base class, when invoking methods belonging to this common interface. We have evaluated our approach on two medium-sized case studies and we have found that the approach does indeed help to characterize the nature of a base class with respect to interface reuse. Additionally, the approach can be used to detect some interesting patterns in the way clients actually use the descendants through the interface of the base class",2006,0, 1706,Fault evaluation for security-critical communication devices,"Communications devices for government or military applications must keep data secure, even when their electronic components fail. Combining information flow and risk analyses could make fault-mode evaluations for such devices more efficient and cost-effective. Conducting high-grade information security evaluations for computer communications devices is intellectually challenging, time-consuming, costly, and error prone. We believe that our structured approach can reveal potential fault modes because it simplifies evaluating a device's logical design and physical construction. By combining information-flow and risk-analysis techniques, evaluators can use the process to produce a thorough and transparent security argument. In other work, we have applied static analysis techniques to the evaluation problem, treating a device's schematic circuitry diagram as an information flow graph. This work shows how to trace information flow in different operating modes by representing connectivity between components as being conditional on specific device states. We have also developed a way to define the security-critical region of components with particular security significance by identifying components that lie on a path from a high-security data source to a low-security sink. 
Finally, to make these concepts practical, we have implemented them in an interactive analysis tool that reads schematic diagrams written in the very high speed integrated circuit (VHSIC) hardware description language.",2006,0, 1707,QoS assessment via stochastic analysis,"Using a stochastic modeling approach based on the Unified Modeling Language and enriched with annotations that conform to the UML profile for schedulability performance, and time, the authors propose a method for assessing quality of service (QoS) in fault-tolerant (FT) distributed systems. From the UML system specification, they produce a generalized stochastic Petri net (GSPN) performance model for assessing an FT application's QoS via stochastic analysis. The ArgoSPE tool provides support for the proposed technique, helping to automatically produce the GSPN model",2006,0, 1708,Automatic Instruction-Level Software-Only Recovery,"As chip densities and clock rates increase, processors are becoming more susceptible to transient faults that can affect program correctness. Computer architects have typically addressed reliability issues by adding redundant hardware, but these techniques are often too expensive to be used widely. Software-only reliability techniques have shown promise in their ability to protect against soft-errors without any hardware overhead. However, existing low-level software-only fault tolerance techniques have only addressed the problem of detecting faults, leaving recovery largely unaddressed. In this paper, we present the concept, implementation, and evaluation of automatic, instruction-level, software-only recovery techniques, as well as various specific techniques representing different trade-offs between reliability and performance. Our evaluation shows that these techniques fulfill the promises of instruction-level, software-only fault tolerance by offering a wide range of flexible recovery options",2006,0, 1709,BlueGene/L Failure Analysis and Prediction Models,"The growing computational and storage needs of several scientific applications mandate the deployment of extreme-scale parallel machines, such as IBM's BlueGene/L which can accommodate as many as 128 K processors. One of the challenges when designing and deploying these systems in a production setting is the need to take failure occurrences, whether it be in the hardware or in the software, into account. Earlier work has shown that conventional runtime fault-tolerant techniques such as periodic checkpointing are not effective to the emerging systems. Instead, the ability to predict failure occurrences can help develop more effective checkpointing strategies. Failure prediction has long been regarded as a challenging research problem, mainly due to the lack of realistic failure data from actual production systems. In this study, we have collected RAS event logs from BlueGene/L over a period of more than 100 days. We have investigated the characteristics of fatal failure events, as well as the correlation between fatal events and non-fatal events. Based on the observations, we have developed three simple yet effective failure prediction methods, which can predict around 80% of the memory and network failures, and 47% of the application I/O failures",2006,0, 1710,"Performance Assurance via Software Rejuvenation: Monitoring, Statistics and Algorithms","We present three algorithms for detecting the need for software rejuvenation by monitoring the changing values of a customer-affecting performance metric, such as response time. 
Applying these algorithms can improve the values of this customer-affecting metric by triggering rejuvenation before performance degradation becomes severe. The algorithms differ in the way they gather and use sample values to arrive at a rejuvenation decision. Their effectiveness is evaluated for different sets of control parameters, including sample size, using simulation. The results show that applying the algorithms with suitable choices of control parameters can significantly improve system performance as measured by the response time",2006,0, 1711,Content browsing and semantic context viewing through JPEG 2000-based scalable video summary,"The paper presents a novel method and software platform for remote and interactive browsing of a summary of long video sequences as well as revealing the semantic links between shots and scenes in their temporal context. The solution is based on interactive navigation in a scalable mega image resulting from a JPEG 2000 coded key-frame-based video summary. Each key-frame could represent an automatically detected shot, event or scene, which is then properly annotated using some semi-automatic tools or learning methods. The presented system is compliant with the new JPEG 2000 Part 9 'JPIP - JPEG 2000 interactivity, API and protocols', which lends itself to working under varying transmission channel conditions such as GPRS or 3G wireless networks. While keeping the advantages of a single 2D video summary, like the limited storage cost, the flexibility offered by JPEG 2000 allows the application to highlight interactively key-frames corresponding to the desired content first within a low-quality and low-resolution version of the full video summary. It then offers fine grain scalability for a user to navigate and zoom into particular scenes or events represented by the key-frames. This possibility of visualising key-frames of interest and playing back the corresponding video shots within the context of the whole sequence (e.g. an episode of a media file) enables the user to understand the temporal relations between semantically related events/actions/physical settings, providing a new way to present and search for contents in video sequences.",2006,0, 1712,Smart laser vision sensors simplify inspection,"For all in-process and finished product applications, laser sensors are used in the rubber and tire industry to enhance competitiveness by improving productivity. The basic benefits of using laser sensors for quality control include increasing yield and productivity, increasing quality by providing 100% product inspection, reducing scrap production and rejects, and in-process inspection to detect and correct trends quickly before production of scrap. New developments in laser-based measuring systems can now provide high-speed digital data communications, eliminating the effects of errors from electrical noise and eliminating the need for A/D converters. New smart sensor developments allow application specific analysis software to run inside the sensor, simplifying operation, improving reliability, and reducing cost by eliminating the need for external signal processing hardware.",2006,0, 1713,Schedules with minimized access latency for disseminating dependent information on multiple channels,"In wireless mobile environments, data broadcasting is an effective approach to disseminate information to mobile clients. In some applications, the access pattern of all the data to be downloaded can be represented by a DAG. 
In this paper, we consider the problem of efficiently generating the broadcast schedule on multiple channels when the data set has a DAG access pattern. We prove that it is NP-hard to find an optimal broadcast schedule which not only minimizes the latency but also satisfies the ancestor property which preserves the data dependency. We further rule out a condition for the input DAGs under which one can generate an optimal broadcast schedule in linear time and propose an algorithm, LBS, to generate the schedule under such a condition. For general DAGs, we provide three heuristics: the first one uses the overall access probability of each vertex; the second one considers the total access probability of a vertex's descendants; the third one combines the above two heuristics. We analyze these three heuristics and compare them through experiments. Our result shows that the third one can achieve a better broadcast schedule in terms of overall latency but costs more running time",2006,0, 1714,Detection and repair of software errors in hierarchical sensor networks,"Sensor networks are being increasingly deployed for collecting critical data in various applications. Once deployed, a sensor network may experience faults at the individual node level or at an aggregate network level due to design errors in the protocol, implementation errors, or deployment conditions that are significantly different from the target environment. In many applications, the deployed system may fail to collect data in an accurate, complete, and timely manner due to such errors. If the network produces incorrect data, the resulting decisions on the data may be incorrect, and negatively impact the application. Hence, it is important to detect and diagnose these faults through run-time observation. Existing technologies face difficulty with wireless sensor networks due to the large scale of the networks, the resource constraints of bandwidth and energy on the sensing nodes, and the unreliability of the observation channels for recording the behavior. This paper presents a semi-automatic approach named H-SEND (hierarchical sensor network debugging) to observe the health of a sensor network and to remotely repair errors by reprogramming through the wireless network. In H-SEND, a programmer specifies correctness properties of the protocol (""""invariants""""). These invariants are associated with conditions (the """"observed variables"""") of individual nodes or the network. The compiler automatically inserts checking code to ensure that the observed variables satisfy the invariants. The checking can be done locally or remotely, depending on the nature of the invariant. In the latter case, messages are generated automatically. If an error is detected at run-time, the logs of the observed variables are examined to analyze and correct the error. After errors are corrected, new programs or patches can be uploaded to the nodes through the wireless network. We construct a prototype to demonstrate the benefit of run-time detection and correction",2006,0, 1715,Industry-oriented software-based system for quality evaluation of vehicle audio environments,"A new set of integrated software tools are proposed for the evaluation of vehicle audio quality for industrial purposes, taking advantage of the auralization approach that allows to simulate the binaural listening experience outside the cockpit. Two main cooperating tools are implemented. 
The first fulfills the function of acquiring relevant data for system modeling and for canceling the undesired effects of the acquisition chain. The second offers a user-friendly interface for real-time simulation of different car audio systems and the consequent evaluation of both objective and subjective performances. In the latter case, the listening procedure is directly experienced at the PC workplace, leading to a significant simplification of the audio-quality assessment task for comparing the selected systems. Moreover, this kind of subjective evaluation allowed us to validate the proposed approach through a complete set of experiments (developed by means of a dedicated software environment) based on appropriate ITU recommendations.",2006,0, 1716,A probabilistic approach for fault tolerant multiprocessor real-time scheduling,"In this paper we tackle the problem of scheduling a periodic real-time system on identical multiprocessor platforms; moreover, the tasks considered may fail with a given probability. For each task we compute its duplication rate in order to (1) given a maximum tolerated probability of failure, minimize the size of the platform such that at least one replica of each job meets its deadline (and does not fail) using a variant of EDF, namely EDF(k), or (2) given the size of the platform, achieve the best possible reliability with the same constraints. Thanks to our probabilistic approach, no assumption is made on the number of failures which can occur. We propose several approaches to duplicate tasks and we show that we are able to find solutions always very close to the optimal one",2006,0, 1717,A configurable framework for stream programming exploration in baseband applications,"This paper presents a configurable framework to be used for rapid prototyping of stream based languages. The framework is based on a set of design patterns defining the elementary structure of a domain specific language for high-performance signal processing. A stream language prototype for baseband processing has been implemented using the framework. We introduce language constructs to efficiently handle dynamic reconfiguration of distributed processing parameters. It is also demonstrated how new language specific primitive data types and operators can be used to efficiently and machine independently express computations on bitfields and data-parallel vectors. These types and operators yield code that is readable, compact and amenable to a stricter type checking than is common practice. They make it possible for a programmer to explicitly express parallelism to be exploited by a compiler. In short, they provide a programming style that is less error prone and has the potential to lead to more efficient implementations",2006,0, 1718,Analysis of checksum-based execution schemes for pipelined processors,"The performance requirements for contemporary microprocessors are increasing as rapidly as their number of applications grows. By accelerating the clock, performance can be gained easily but only with high additional power consumption. The electrical potential between logic `0' and `1' is decreased as integration and clock rates grow, leading to a higher susceptibility to transient faults, caused e.g. by power fluctuations or single event upsets (SEUs). We introduce a technique which is based on the well-known cyclic redundancy check codes (CRCs) to secure the pipelined execution of common microprocessors against transient faults. 
This is done by computing signatures over the control signals of each pipeline stage, including dynamic out-of-order scheduling. To correctly compute the checksums, we resolve the time-dependency of instructions in the pipeline. We first discuss important physical properties of single event upsets (SEUs). Then we present a model of a simple processor with the applied scheme as an example. The scheme is extended to support n-way simultaneous multithreaded systems, resulting in two basic schemes. A cost analysis of the proposed SEU-detection schemes leads to the conclusion that both schemes are applicable at reasonable costs for pipelines with 5 to 10 stages and a maximum of 4 hardware threads. A worst-case simulation using software fault-injection of transient faults in the processor model showed that errors can be detected with an average of 83% even at a fault rate of 10^-2. Furthermore, the scheme is able to detect an error within an average of only 5.05 cycles",2006,0, 1719,Predicting failures of computer systems: a case study for a telecommunication system,"The goal of online failure prediction is to forecast imminent failures while the system is running. This paper compares similar events prediction (SEP) with two other well-known techniques for online failure prediction: a straightforward method that is based on a reliability model, and the dispersion frame technique (DFT). SEP is based on recognition of failure-prone patterns utilizing a semi-Markov chain in combination with clustering. We applied the approaches to real data of a commercial telecommunication system. Results are presented in terms of precision, recall, F-measure and accumulated runtime cost. The results suggest a significantly improved forecasting performance.",2006,0, 1720,Bilayer Segmentation of Live Video,"This paper presents an algorithm capable of real-time separation of foreground from background in monocular video sequences. Automatic segmentation of layers from colour/contrast or from motion alone is known to be error-prone. Here motion, colour and contrast cues are probabilistically fused together with spatial and temporal priors to infer layers accurately and efficiently. Central to our algorithm is the fact that pixel velocities are not needed, thus removing the need for optical flow estimation, with its tendency to error and computational expense. Instead, an efficient motion vs nonmotion classifier is trained to operate directly and jointly on intensity-change and contrast. Its output is then fused with colour information. The prior on segmentation is represented by a second order, temporal, Hidden Markov Model, together with a spatial MRF favouring coherence except where contrast is high. Finally, accurate layer segmentation and explicit occlusion detection are efficiently achieved by binary graph cut. The segmentation accuracy of the proposed algorithm is quantitatively evaluated with respect to existing ground-truth data and found to be comparable to the accuracy of a state of the art stereo segmentation algorithm. Foreground/background segmentation is demonstrated in the application of live background substitution and shown to generate convincingly good quality composite video.",2006,0, 1721,Image Matching Using Photometric Information,"Image matching is an essential task in many computer vision applications. It is obvious that thorough utilization of all available information is critical for the success of matching algorithms. 
However, most popular matching methods do not effectively incorporate photometric data. Some algorithms are based on geometric, color-invariant features, thus completely neglecting available photometric information. Others assume that color does not differ significantly in the two images; that assumption may be wrong when the images are not taken at the same time, for example when a recently taken image is compared with a database. This paper introduces a method for using color information in image matching tasks. Initially, the images are segmented using an off-the-shelf segmentation process (EDISON). No assumptions are made on the quality of the segmentation. Then the algorithm employs a model for natural illumination change to define the probability that two segments originate from the same surface. When additional information is supplied (for example suspected corresponding point features in both images), the probabilities are updated. We show that the probabilities can easily be utilized in any existing image matching system. We propose a technique to make use of them in a SIFT-based algorithm. The technique's capabilities are demonstrated on real images, where it causes a significant improvement over the original SIFT results in the percentage of correct matches found.",2006,0, 1722,Application of set membership identification for fault detection of MEMS,"In this article, a set membership (SM) identification technique is tailored to detect faults in microelectromechanical systems. The SM-identifier estimates an orthotope which contains the system's parameter vector. Based on this orthotope, the system's output interval is predicted. If the actual output is outside of this interval, then a fault is detected. Utilization of this scheme can discriminate mechanical-component faults from electronic component variations frequently encountered in MEMS. For testing the suggested algorithm's performance in simulation studies, an interface between classical control software (MATLAB) and circuit emulation (HSPICE) is developed",2006,0, 1723,Establishing software product quality requirements according to international standards,"Software product quality is an important concern in the computing environment, and its immediate results are appreciated in all the activities where computers are used. The ISO/IEC 9126 standard series establishes a software product quality model and, for example, in its annex, identifies quality requirements as a necessary step for product quality. However, the standard does not include how to obtain quality requirements, nor how to establish metric levels. Establishing quality requirements and metric levels seem to be simple activities, but they can be annoying and prone to errors if there is not a systematic approach for the process. This article presents a proposal for establishing product quality requirements according to the ISO/IEC 9126 standard.",2006,0, 1724,Usability measures for software components,"The last decade marked the first real attempt to turn software development into engineering through the concepts of Component-Based Software Development (CBSD) and Commercial Off-The-Shelf (COTS) components, with the goal of creating high-quality parts that could be joined together to form a functioning system. One of the most critical processes in CBSD is the selection of the software components (from either in-house or external repositories) that fulfill some architectural and user-defined requirements. 
However, there is currently a lack of quality models and metrics that can help evaluate the quality characteristics of software components during this selection process. This paper presents a set of measures to assess the Usability of software components, and describes the method followed to obtain and validate them.",2006,0, 1725,Fully distributed three-tier active software replication,"Keeping strongly consistent the state of the replicas of a software service deployed across a distributed system prone to crashes and with highly unstable message transfer delays (e.g., the Internet), is a real practical challenge. The solution to this problem is subject to the FLP impossibility result, and thus there is a need for """"long enough"""" periods of synchrony with time bounds on process speeds and message transfer delays to ensure deterministic termination of any run of agreement protocols executed by replicas. This behavior can be abstracted by a partially synchronous computational model. In this setting, before reaching a period of synchrony, the underlying network can arbitrarily delay messages and these delays can be perceived as false failures by some timeout-based failure detection mechanism leading to unexpected service unavailability. This paper proposes a fully distributed solution for active software replication based on a three-tier software architecture well-suited to such a difficult setting. The formal correctness of the solution is proved by assuming the middle-tier runs in a partially synchronous distributed system. This architecture separates the ordering of the requests coming from clients, executed by the middle-tier, from their actual execution, done by replicas, i.e., the end-tier. In this way, clients can show up in any part of the distributed system and replica placement is simplified, since only the middle-tier has to be deployed on a well-behaving part of the distributed system that frequently respects synchrony bounds. This deployment permits a rapid timeout tuning reducing thus unexpected service unavailability",2006,0, 1726,Detecting computer-induced errors in remote-sensing JPEG compression algorithms,"The JPEG image compression standard is very sensitive to errors. Even though it contains error resilience features, it cannot easily cope with induced errors from computer soft faults prevalent in remote-sensing applications. Hence, new fault tolerance detection methods are developed to sense the soft errors in major parts of the system while also protecting data across the boundaries where data flow from one subsystem to the other. The design goal is to guarantee no compressed or decompressed data contain computer-induced errors without detection. Detection methods are expressed at the algorithm level so that a wide range of hardware and software implementation techniques can be covered by the fault tolerance procedures while still maintaining the JPEG output format. The major subsystems to be addressed are the discrete cosine transform, quantizer, entropy coding, and packet assembly. Each error detection method is determined by the data representations within the subsystem or across the boundaries. They vary from real number parities in the DCT to bit-level residue codes in the quantizer, cyclic redundancy check parities for entropy coding, and packet assembly. 
The simulation results verify detection performances even across boundaries while also examining roundoff noise effects in detecting computer-induced errors in processing steps.",2006,0, 1727,GNAM: a low-level monitoring program for the ATLAS experiment,"During the last years, many test-beam sessions were carried out on each ATLAS subdetector in order to assess its performance in standalone mode. During these tests, different monitoring programs were developed to ease the setup of correct running conditions and the assessment of data quality. The experience has converged into a common effort to develop a monitoring program which aims to be exploitable by various subdetector groups. The requirements which drove the design of the program, as well as its architecture, are discussed in this paper. Characteristic features of the application are a modular software design based on a Finite State Machine core to implement the synchronization with the data acquisition system, and the use of the ROOT Tree as transient data store. The first version of this monitoring program was used for the 2004 ATLAS Combined Test Beam.",2006,0, 1728,An Architecture for Visualisation and Interactive Analysis of Proteins,"Data sets in the biological domain are often semantically complex and difficult to integrate and visualise. Converting between the file formats required by interactive analysis tools and those used by the global databases is a costly and error-prone process. This paper describes a data model designed to enable efficient rendering of and interaction with biological data, and two demonstrator applications from different fields of protein analysis that provide co-ordinated views of data held in the underlying model",2006,0, 1729,The Power of the Defender,"We consider a security problem on a distributed network. We assume a network whose nodes are vulnerable to infection by threats (e.g. viruses), the attackers. A piece of system security software, the defender, is available in the system. However, due to the network's size and to economic and performance reasons, it is capable of providing safety, i.e. cleaning nodes of the possible presence of attackers, only to a limited part of it. The objective of the defender is to place itself in such a way as to maximize the number of attackers caught, while each attacker aims not to be caught. In [7], a basic case of this problem was modeled as a non-cooperative game, called the Edge model. There, the defender could protect a single link of the network. Here, we consider a more general case of the problem where the defender is able to scan and protect a set of k links of the network, which we call the Tuple model. It is natural to expect that this increased power of the defender should result in a better quality of protection for the network. Ideally, this would be achieved at little expense to the existence and complexity of Nash equilibria (profiles where no entity can improve its local objective unilaterally by switching placements on the network). In this paper we study pure and mixed Nash equilibria in the model. In particular, we propose algorithms for computing such equilibria in polynomial time, and we provide a polynomial-time transformation of a special class of Nash equilibria, called matching equilibria, between the Edge model and the Tuple model, and vice versa. 
Finally, we establish that the increased power of the defender results in higher-quality protection of the network.",2006,0, 1730,Service Plans for Context- and QoS-aware Dynamic Middleware,"State-of-the-art context- and QoS-aware dynamic middleware platforms use information about the environment, in order to evaluate alternative configurations of an application and select the one that best meets the user's QoS requirements. The specification of the alternatives is prepared at design time and associated with the software during deployment. From the information and requirements in the specification, the middleware can synthesize, filter, and compare the alternative application configurations. This paper presents a platform independent specification, referred to as service plan, which contains information elements for specifying configurations, dependencies on the environment, and QoS characteristics. The service plan is specified at a conceptual level to ensure that it can be implemented in a wide range of middleware platforms. The paper describes how the concept is used during deployment, instantiation, and reconfiguration. From the implementation and validation, the expressiveness and usefulness of the service plan concept are assessed.",2006,0, 1731,Productivity and code quality improvement of mixed-signal test software by applying software engineering methods,Typical present-day mixed-signal ICs are approaching 1000 or even more parametric tests. These tests are usually coded in a procedural or a semi-object oriented language. The huge code base of the programs is a significant challenge for maintaining code quality which inherently translates into outgoing quality. The paper presents software metrics of typical mixed-signal power management and audio devices with regard to the number of tests conducted. It is shown that classical ways to handle test programs are error prone and tend to systematically repeat known mistakes. The adoption of selected software engineering methods can avoid such mistakes and improve the productivity of the mixed-signal test generation. Results of a pilot project show significant productivity improvement. Open-source based software is employed to provide the necessary tool support. They establish a potential roadmap to get away from proprietary tester-specific tool sets,2006,0, 1732,Gate Layout Improvement Aimed at Testability,In the presented paper the improvement of the layout of complex standard gates from the industrial cell library aimed at decreasing the probability of occurrence of undetectable faults is considered. Such improvement allows us to determine the defect coverage table correctly and as a result to estimate properly the optimal sequence of input test patterns for defect detection. The ability of gate layout improvement is based on the results of defect probability determination and identification of functional faults caused by these defects. The results are obtained by the FIESTA-Extra software tool,2006,0, 1733,A Classification Scheme for Evaluating Management Instrumentation in Distributed Middleware Infrastructure,"Management instrumentation is an integrated capability of a software system that enables an external observer to monitor the system's availability, performance, and reliability during operation. It is highly useful for taking both proactive and reactive actions to keep a software system operational in mission-critical environments where tolerance for an unavailable or poor-performing system is very low.
Middleware infrastructure components have taken important positions in distributed software systems due to various benefits related to the development, deployment, and runtime operations. Keeping these components highly available and up to the expected performance requires integrated capabilities that allow regular monitoring of critical functionality, measurement of Quality of Service (QoS), debugging and troubleshooting, and health-checks in the context of actual business processes. Yet, currently there is no approach that enables systematic evaluation of the relative strengths and weaknesses of a middleware component's management instrumentation. In this paper, we will present an approach to evaluating management instrumentation of middleware infrastructure components. We use a classification-based scheme that has a functional dimension called Capability and two main quality dimensions called Usability and Precision. We further categorize each dimension into smaller, more precise instrumentation features, such as Tracing, Distributed Correlation and Granularity. In presenting our approach, we hope to achieve the following: i) educate middleware users on how to systematically assess or compare the overall manageability of a MidIn component using the classification scheme, and ii) share with middleware researchers the importance of good integrated manageability in middleware infrastructure.",2006,0, 1734,Improving Accuracy of Multiple Regression Analysis for Effort Prediction Model,"In this paper, we outline the effort prediction model and the evaluation experiment. In addition, we explore the parameters in the model. The model predicts the effort of embedded software developments via multiple regression analysis using collaborative filtering. Recently, companies have focused on methods to predict project effort, which help prevent project failures such as exceeding deadlines and cost, because embedded software has become more complex with the evolution of performance and function enhancements. In the model, we have fixed two parameters named k and ampmax, which would influence the accuracy of predicting effort. Hence, we investigate their tendency in the model and find the optimum value",2006,0, 1735,Distributed dynamic event tree generation for reliability and risk assessment,"Level 2 probabilistic risk assessments of nuclear plants (analysis of radionuclide release from containment) may require hundreds of runs of severe accident analysis codes such as MELCOR or RELAP/SCDAP to analyze possible sequences of events (scenarios) that may follow given initiating events. With the advances in computer architectures and ubiquitous networking, it is now possible to utilize multiple computing and storage resources for such computational experiments. This paper presents a system software infrastructure that supports execution and analysis of multiple dynamic event-tree simulations on distributed environments. The infrastructure allows for 1) the testing of event tree completeness, and 2) the assessment and propagation of uncertainty on the plant state in the quantification of event trees",2006,0, 1736,PalProtect: A Collaborative Security Approach to Comment Spam,"Collaborative security is a promising solution to many types of security problems. Organizations and individuals often have a limited amount of resources to detect and respond to the threat of automated attacks.
Enabling them to take advantage of the resources of their peers by sharing information related to such threats is a major step towards automating defense systems. In particular, comment spam posted on blogs as a way for attackers to do search engine optimization (SEO) is a major annoyance. Many measures have been proposed to thwart such spam, but all such measures are currently enacted and operate within one administrative domain. We propose and implement a system for cross-domain information sharing to improve the quality and speed of defense against such spam",2006,0, 1737,Resource Availability Prediction in Fine-Grained Cycle Sharing Systems,"Fine-grained cycle sharing (FGCS) systems aim at utilizing the large amount of computational resources available on the Internet. In FGCS, host computers allow guest jobs to utilize the CPU cycles if the jobs do not significantly impact the local users of a host. A characteristic of such resources is that they are generally provided voluntarily and their availability fluctuates highly. Guest jobs may fail because of unexpected resource unavailability. To provide fault tolerance to guest jobs without adding significant computational overhead, it requires to predict future resource availability. This paper presents a method for resource availability prediction in FGCS systems. It applies a semi-Markov Process and is based on a novel resource availability model, combining generic hardware-software failures with domain-specific resource behavior in FGCS. We describe the prediction framework and its implementation in a production FGCS system named iShare. Through the experiments on an iShare testbed, we demonstrate that the prediction achieves accuracy above 86% on average and outperforms linear time series models, while the computational cost is negligible. Our experimental results also show that the prediction is robust in the presence of irregular resource unavailability",2006,0, 1738,Market-Based Resource Allocation using Price Prediction in a High Performance Computing Grid for Scientific Applications,"We present the implementation and analysis of a market-based resource allocation system for computational grids. Although grids provide a way to share resources and take advantage of statistical multiplexing, a variety of challenges remain. One is the economically efficient allocation of resources to users from disparate organizations who have their own and sometimes conflicting requirements for both the quantity and quality of services. Another is secure and scalable authorization despite rapidly changing allocations. Our solution to both of these challenges is to use a market-based resource allocation system. This system allows users to express diverse quantity- and quality-of-service requirements, yet prevents them from denying service to other users. It does this by providing tools to the user to predict and tradeoff risk and expected return in the computational market. In addition, the system enables secure and scalable authorization by using signed money-transfer tokens instead of identity-based authorization. This removes the overhead of maintaining and updating access control lists, while restricting usage based on the amount of money transferred. 
We examine the performance of the system by running a bioinformatics application on a fully operational implementation of an integrated grid market",2006,0, 1739,Ensuring numerical quality in grid computing,"We propose an approach which gives the user valuable information on the various platforms avail able in a grid in order to assess the numerical quality of an algorithm run on each of these platforms. In this manner, the user is provided with at least very strong hints whether a program performs reliably in a grid before actually executing it. Our approach extends IeeeCC754 by two """"grid-enabled"""" modes: The first mode calculates a """"numerical checksum"""" on a specific grid host and executes the job only if the check sum is identical to a locally generated one. The second mode provides the user with information on the reliability and IEEE 754-conformity of the underlying floating-point implementation of various platforms. In addition, it can help to find a set of compiler options to optimize the application's performance while retaining numerical stability",2006,0, 1740,Dialogue-Based Authoring of Units of Learning,"Authoring learning content along with specifying its relationship to pedagogical scenarios is a tedious and error-prone task. This paper describes DBAT-LD (dialogue-based authoring of learning designs), which is a chat bot (natural language dialogue system) that interacts with authors of units of learning and secures the content of the dialogue in an XML-based target format of choice. DBAT-LD can be geared towards different specifications thereby contributing to interoperability in e-learning. In the case of IMS-LD, DBAT-LD elicits and reconstructs an account of learning activities along with the pedagogical support activities by Koper, R. & Olivier, B. (2004) and delivers a Level A description of a unit of learning",2006,0, 1741,CEDA: control-flow error detection through assertions,"This paper presents an efficient software technique, control flow error detection through assertions (CEDA), for online detection of control flow errors. Extra instructions are automatically embedded into the program at compile time to continuously update run-time signatures and to compare them against pre-assigned values. The novel method of computing run-time signatures results in a huge reduction in the performance overhead, as well as the ability to deal with complex programs and the capability to detect subtle control flow errors. The widely used C compiler, GCC, has been modified to implement CEDA, and the SPEC benchmark programs were used as the target to compare with earlier techniques. Fault injection experiments were used to evaluate the fault detection capabilities. Based on a new comparison metric, method efficiency, which takes into account both error coverage and performance overhead, CEDA is found to be much better than previously proposed methods",2006,0, 1742,Software-based adaptive and concurrent self-testing in programmable network interfaces,"Emerging network technologies have complex network interfaces that have renewed concerns about network reliability. In this paper, we present an effective low-overhead failure detection technique, which is based on a software watchdog timer that detects network processor hangs and a self-testing scheme that detects interface failures other than processor hangs. 
The proposed adaptive and concurrent self-testing scheme achieves failure detection by periodically directing the control flow to go through only active software modules in order to detect errors that affect instructions in the local memory of the network interface. The paper shows how this technique can be made to minimize the performance impact on the host system and be completely transparent to the user",2006,0, 1743,Ontological approach to improving design quality,"The creation of quality software depends on the existence of a quality software design. In particular, it is important to identify inconsistencies that might be injected at the design level. We have developed a common ontology that integrates software specification knowledge and software design knowledge in order to facilitate the interoperability of formal requirements modeling tools and software design tools with the end goal of detecting errors in software designs. Our approach focuses initially on the integration of unified modeling language (UML) with the formal requirements modeling language, knowledge acquisition in automated specification (KAOS), in order to help automate the detection of inconsistencies in UML designs thereby enhancing the quality of the original design and ultimately integrating the multiple views inherent in UML. We demonstrate the integration of UML and KAOS with an elevator system case study",2006,0, 1744,Recent case studies in bearing fault detection and prognosis,"This paper updates current efforts by the authors to develop fully-automated, online incipient fault detection and prognosis algorithms for drivetrain and engine bearings. The authors have developed and evolved ImpactEnergy™, a feature extraction and analysis driven system that integrates high frequency vibration/acoustic emission data, collected using accelerometers and other sensors such as a laser interferometer, to assess the health of bearings and gearboxes in turbine engines. ImpactEnergy combines advanced diagnostic features derived from waveform analysis, high-frequency enveloping, and more traditional time domain processing like root mean square (RMS) and kurtosis with classification techniques to provide bearing health information. The adaptable algorithm suite has been applied across numerous air vehicle relevant programs for the Air Force, Navy, Army, and DARPA. The techniques presented in this paper are tested and validated in a laboratory environment by monitoring multiple bearings on test rigs that replicate the operational loads of a turbomachinery environment. The capability of the software on full-scale test rigs at major OEM (original equipment manufacturer) locations will be shown with specific data results. The team will review developments across these multiple programs and discuss specific implementation efforts to transition to the fleet in a variety of manned and unmanned platforms",2006,0, 1745,Designing for recovery [software design],"How should you design your software to detect, react, and recover from exceptional conditions? If you follow Jim Shore's advice and design with a fail fast attitude, you won't expend any effort recovering from failures. Shore argues that a """"patch up and proceed"""" strategy often obfuscates problems. Shore's simple design solution is to write code that checks for expected values upon entry and returns failure notifications when it can't fulfil its responsibilities.
He argues that careful use of assertions allows for early and visible failure, so you can quickly identify and correct problems",2006,0, 1746,Practical application of model-based programming and state-based architecture to space missions,"Innovative systems and software engineering solutions are required to meet the increasingly challenging demands of deep-space robotic missions. While recent advances in the development of integrated systems and software engineering approaches have begun to address some of these issues, these methods are still at the core highly manual and, therefore, error-prone. This paper describes a task aimed at infusing MIT's model-based executive, Titan, into JPL's Mission Data System (MDS), a unified state-based architecture, systems engineering process, and supporting software framework. Results of the task are presented, including a discussion of the benefits and challenges associated with integrating mature model-based programming techniques and technologies into a rigorously-defined domain specific architecture",2006,0, 1747,A core plug and play architecture for reusable flight software systems,"The Flight Software Branch, at Goddard Space Flight Center (GSFC), has been working on a run-time approach to facilitate a formal software reuse process. The reuse process is designed to enable rapid development and integration of high-quality software systems and to more accurately predict development costs and schedule. Previous reuse practices have been somewhat successful when the same teams are moved from project to project. But this typically requires taking the software system in an all-or-nothing approach where useful components cannot be easily extracted from the whole. As a result, the system is less flexible and scalable with limited applicability to new projects. This paper will focus on the rationale behind, and implementation of, the run-time executive. This executive is the core for the component-based flight software commonality and reuse process adopted at Goddard",2006,0, 1748,Probabilistic fusion of stereo with color and contrast for bilayer segmentation,"This paper describes models and algorithms for the real-time segmentation of foreground from background layers in stereo video sequences. Automatic separation of layers from color/contrast or from stereo alone is known to be error-prone. Here, color, contrast, and stereo matching information are fused to infer layers accurately and efficiently. The first algorithm, layered dynamic programming (LDP), solves stereo in an extended six-state space that represents both foreground/background layers and occluded regions. The stereo-match likelihood is then fused with a contrast-sensitive color model that is learned on-the-fly and stereo disparities are obtained by dynamic programming. The second algorithm, layered graph cut (LGC), does not directly solve stereo. Instead, the stereo match likelihood is marginalized over disparities to evaluate foreground and background hypotheses and then fused with a contrast-sensitive color model like the one used in LDP. Segmentation is solved efficiently by ternary graph cut. Both algorithms are evaluated with respect to ground truth data and found to have similar performance, substantially better than either stereo or color/contrast alone. However, their characteristics with respect to computational efficiency are rather different. 
The algorithms are demonstrated in the application of background substitution and shown to give good quality composite video output",2006,0, 1749,Data Warehousing Process Maturity: An Exploratory Study of Factors Influencing User Perceptions,"This paper explores the factors influencing perceptions of data warehousing process maturity. Data warehousing, like software development, is a process, which can be expressed in terms of components such as artifacts and workflows. In software engineering, the Capability Maturity Model (CMM) was developed to define different levels of software process maturity. We draw upon the concepts underlying CMM to define different maturity levels for a data warehousing process (DWP). Based on the literature in software development and maturity, we identify a set of features for characterizing the levels of data warehousing process maturity and conduct an exploratory field study to empirically examine if those indeed are factors influencing perceptions of maturity. Our focus in this paper is on managerial perceptions of DWP. The results of this exploratory study indicate that several factors-data quality, alignment of architecture, change management, organizational readiness, and data warehouse size-have an impact on DWP maturity, as perceived by IT professionals. From a practical standpoint, the results provide useful pointers, both managerial and technological, to organizations aspiring to elevate their data warehousing processes to more mature levels. This paper also opens up several areas for future research, including instrument development for assessing DWP maturity",2006,0, 1750,Assessing the effectiveness of static code analysis,For complex systems identifying and mitigating a gap between suppliers provided software and customer certification needs is difficult. Getting it wrong can cause program delays or even project failure. A mitigation strategy is to carry out additional assurance analysis such as static code analysis (SCA). This can add significantly to the procurement expense and may require repeating with new software upgrades. The purpose of this paper is to present an analysis of the effectiveness of nearly 10 years efforts of additional independent SCA assurance on a large software intensive project. The evidence presented also is supported by SCA findings on other projects conducting additional SCA. The analysis work was carried out for a Ministry of Defence Integrated Project Team as part of their continual assessment and improvement of safety.,2006,0, 1751,Building statistical test-cases for smart device software - an example,"Statistical testing (ST) of software or logic-based components can produce dependability information on such components by yielding an estimate for their probability of failure on demand. An example of software-based components that are increasingly used within safety-related systems e.g. in the nuclear industry, are smart devices. Smart devices are devices with intelligence, capable of more than merely representing correctly a sensed quantity but of functionality such as processing data, self-diagnosis and possibly exchange of data with other devices. Examples are smart transmitters or smart sensors. If such devices are used in a safety-related context, it is crucial to assess whether they fulfil the dependability requirements posed on them to ensure they are dependable enough to be used within the specific safety-related context. 
This involves making a case for the probability of systematic failure of the smart device. This failure probability is related to faults present in the logic or software-based part of the device. In this paper we look at a technique that can be used to establish a probability of failure for the software part of a smart monitoring unit. This technique is """"statistical testing"""" (ST). Our aim is to share our own experience with ST and to describe some of the issues we have encountered so far on the way to perform ST on this device software.",2006,0, 1752,Enabling Self-Managing Applications using Model-based Online Control Strategies,"The increasing heterogeneity, dynamism and uncertainty of emerging DCE (Distributed Computing Environment) systems imply that an application must be able to detect and adapt to changes in its state, its requirements and the state of the system to meet its desired QoS constraints. As system and application scales increase, ad hoc heuristic-based approaches to application adaptation and self-management quickly become insufficient. This paper builds on the Accord programming system for rule-based self-management and extends it with model-based control and optimization strategies. This paper also presents the development of a self-managing data streaming service based on online control using Accord. This service is part of a Grid-based fusion simulation workflow consisting of long-running simulations, executing on remote supercomputing sites and generating several terabytes of data, which must then be streamed over a wide-area network for live analysis and visualization. The self-managing data streaming service minimizes data streaming overheads on the simulations, adapts to dynamic network bandwidth and prevents data loss. An evaluation of the service demonstrating its feasibility is presented.",2006,0, 1753,Visualization for a Multi-Sensor Data Analysis,"This paper describes our efforts in creating the software in order to analyze the multi-sensor data for gas transmission pipeline inspection. The amount of data is usually considerable because the hardware system that consists of multiple heterogeneous sensors records multi-sensor values for long-distance inspection. It imposes a heavy burden on the operators who must sift through the huge and complex data, detect features of the pipeline and decide whether a feature is a significant defect. In our system, the virtual 3D pipeline helps the user to examine the inside of the pipeline intuitively by navigating according to the realistic pipeline trajectory. We mapped the geographical data of the pipeline and heterogeneous sensor data on the virtual 3D pipeline. Moreover, our system offers various feature detail views to help users make rapid and precise decisions. Users can switch the navigation mode and the feature detail mode easily. Consequently, the virtual pipeline plays a role as an intuitive interaction metaphor for pipeline inspection",2006,0, 1754,Continuous geodetic time-transfer analysis methods,"We address two issues that limit the quality of time and frequency transfer by carrier phase measurements from the Global Positioning System (GPS). The first issue is related to inconsistencies between code and phase observations. We describe and classify several types of events that can cause inconsistencies and observe that some of them are related to the internal clock of the GPS receiver. Strategies to detect and overcome time-code inconsistencies have been developed and implemented into the Bernese GPS software package.
For the moment, only inconsistencies larger than the 20 ns code measurement noise level can be detected automatically. The second issue is related to discontinuities at the day boundaries that stem from the processing of the data in daily batches. Two new methods are discussed: clock handover and ambiguity stacking. The two approaches are tested on data obtained from a network of stations, and the results are compared with an independent time-transfer method. Both methods improve the stability of the transfer for short averaging times, but there is no benefit for averaging times longer than 8 days. We show that continuous solutions are sufficiently robust against modeling and preprocessing errors to prevent the solution from accumulating a permanent bias.",2006,0, 1755,An Efficient Radio Admission Control Algorithm for 2.5G/3G Cellular Networks,"We design an efficient radio admission control algorithm that minimizes blocking probability subject to the condition that the overload probability is smaller than a pre-specified threshold. Our algorithm is quite general and can be applied to both TDMA-based cellular technologies, such as GPRS and EDGE, and CDMA-based technologies, such as UMTS and CDMA2000. We extend prior work in measurement-based admission control in wireline networks to wireless cellular networks and to heterogeneous users. We take the variance of the resource requirement into account while making the admission decision. Using simulation results, we show that our admission control algorithm is able to meet the target overload probability over a range of call arrival rates and radio conditions. We also compare our scheme with a simple admission control algorithm and show how to use our approach for the carrier selection problem",2006,0, 1756,VoIP service quality monitoring using active and passive probes,"Service providers and enterprises all over the world are rapidly deploying Voice over IP (VoIP) networks because of reduced capital and operational expenditure, and easy creation of new services. Voice traffic has stringent requirements on the quality of service, like strict delay and loss requirements, and 99.999% network availability. However, IP networks have not been designed to easily meet the above requirements. Thus, service providers need service quality management tools that can proactively detect and mitigate service quality degradation of VoIP traffic. In this paper, we present active and passive probes that enable service providers to detect service impairments. We use the probes to compute the network parameters (delay, loss and jitter) that can be used to compute the call quality as a Mean Opinion Score using a voice quality metric, the E-model. These tools can be used by service providers and enterprises to identify network impairments that cause service quality degradation and take corrective measures in real time so that the impact on the degradation perceived by end-users is minimal",2006,0, 1757,Outlier Detection in Wireless Sensor Networks using Bayesian Belief Networks,"Data reliability is an important issue from the user's perspective, in the context of streamed data in wireless sensor networks (WSN). Reliability is affected by the harsh environmental conditions, interference in the wireless medium and usage of low quality sensors. Due to these conditions, the data generated by the sensors may get corrupted resulting in outliers and missing values.
Deciding whether an observation is an outlier or not depends on the behavior of the neighbors' readings as well as the readings of the sensor itself. This can be done by capturing the spatio-temporal correlations that exists among the observations of the sensor nodes. By using naive Bayesian networks for classification, we can estimate whether an observation belongs to a class or not. If it falls beyond the range of the class, then it can be detected as an outlier. However naive Bayesian networks do not consider the conditional dependencies among the observations of sensor attributes. So, we propose an outlier detection scheme based on Bayesian belief networks, which captures the conditional dependencies among the observations of the attributes to detect the outliers in the sensor streamed data. Applicability of this scheme as a plug-in to the component oriented middleware for sensor networks (COMiS) of our early research work is also presented",2006,0, 1758,A Low-level Simulation Study of Prioritization in IEEE 802.11e Contention-based Networks,"This work deals with the performance evaluation of the IEEE 802.11e EDCA proposal for service prioritization in wireless LANs. A large amount of study has been carried out in the scientific community to evaluate the performance of the EDCA proposal, mainly in terms of throughput and access delay differentiation. However, we argue that further performance insights are needed in order to fully understand the principles behind the EDCA prioritization mechanisms. To this purpose, rather than limit our investigation on throughput and delay performance figures, we take a closer look to their operation also in terms of low-level performance metrics (such as probability of accessing specific channel slots). The paper contribution is threefold: first, we specify a detailed NS2 simulation model by enlightening the typical mis-configuration and errors that may occur when NS2 is used as simulation platform for WLANs and we cross-validate the simulation results with our custom-made C++ simulation tool; second, we describe some performance figures related to the different forms of prioritization provided by the EDCA mechanisms; finally, we verify some assumptions commonly used in the EDCA analytical models",2006,0, 1759,Case study of ANSI standard delta-wye distribution transformer,"An over current relay (51N) of a distribution feeder that feeds a branchy overhead line failed to eliminate a fault, which resulted from a fallen conductor at the primary side of the delta-wye distribution transformer. This was linked to the earth's high resistance. In the first part of this paper, the aforementioned reason will be proven to be not credible and that the load-side fallen conductor resulted in a very small return current that cannot be detected. In the second part of this paper, the fact that the voltage will build up at the distribution transformer primary side in case the transformer has a primary single phase blown fuse and unbalance loads will be presented",2006,0, 1760,Photovoltaic Power Conditioning System With Line Connection,"A photovoltaic (PV) power conditioning system (PCS) with line connection is proposed. Using the power slope versus voltage of the PV array, the maximum power point tracking (MPPT) controller that produces a smooth transition to the maximum power point is proposed. The dc current of the PV array is estimated without using a dc current sensor. 
A current controller is suggested to provide power to the line with an almost-unity power factor that is derived using the feedback linearization concept. The disturbance of the line voltage is detected using a fast sensing technique. All control functions are implemented in software with a single-chip microcontroller. Experimental results obtained on a 2-kW prototype show high performance such as an almost-unity power factor, a power efficiency of 94%, and a total harmonic distortion (THD) of 3.6%",2006,0, 1761,Intrusion-tolerant middleware: the road to automatic security,"The pervasive interconnection of systems throughout the world has given computer services a significant socioeconomic value that both accidental faults and malicious activity can affect. The classical approach to security has mostly consisted of trying to prevent bad things from happening-by developing systems without vulnerabilities, for example, or by detecting attacks and intrusions and deploying ad hoc countermeasures before any part of the system is damaged. Building an intrusion-tolerant system to arrive at some notion of intrusion-tolerant middleware for application support presents multiple challenges. Surprising as it might seem, intrusion tolerance isn't just another instantiation of accidental fault tolerance",2006,0, 1762,ReStore: Symptom-Based Soft Error Detection in Microprocessors,"Device scaling and large-scale integration have led to growing concerns about soft errors in microprocessors. To date, in all but the most demanding applications, implementing parity and ECC for caches and other large, regular SRAM structures have been sufficient to stem the growing soft error tide. This will not be the case for long and questions remain as to the best way to detect and recover from soft errors in the remainder of the processor - in particular, the less structured execution core. In this work, we propose the ReStore architecture, which leverages existing performance enhancing checkpointing hardware to recover from soft error events in a low cost fashion. Error detection in the ReStore architecture is novel: symptoms that hint at the presence of soft errors trigger restoration of a previous checkpoint. Example symptoms include exceptions, control flow misspeculations, and cache or translation look-aside buffer misses. Compared to conventional soft error detection via full replication, the ReStore framework incurs little overhead, but sacrifices some amount of error coverage. These attributes make it an ideal means to provide very cost effective error coverage for processor applications that can tolerate a nonzero, but small, soft error failure rate. Our evaluation of an example ReStore implementation exhibits a 2times increase in MTBF (mean time between failures) over a standard pipeline with minimal hardware and performance overheads. The MTBF increases by 20times if ReStore is coupled with protection for certain particularly vulnerable pipeline structures",2006,0, 1763,A behavior-based process for evaluating availability achievement risk using stochastic activity networks,"With the increased focus on the availability of complex, multifunction systems, modeling processes and analysis tools are needed that help the availability systems engineer understand the impact of architectural and logistics design choices concerning system availability. 
Because many fielded systems are required to achieve a specified minimal availability over a short measurement period, a modeling methodology must also support computation of the distribution of operational availability for the specified measurement period. This paper describes a two-part behavior-based availability achievement risk methodology that starts with a description of the system's availability related behavior followed by a stochastic activity network-based simulation to obtain a numeric estimate of expected availability and the distribution of availability over a selected time frame. The process shows how the system engineer is freed to explore complex behavior not possible with combinatorial estimation methods in wide use today",2006,0, 1764,Methodology for maintainability-based risk assessment,"A software product spends more than 65% of its lifecycle in maintenance. Software systems with good maintainability can be easily modified to fix faults or to adapt to a changing environment. We define maintainability-based risk as a product of two factors: the probability of performing maintenance tasks and the impact of performing these tasks. In this paper, we present a methodology for assessing maintainability-based risk to account for changes in the system requirements. The proposed methodology depends on the architectural artifacts and their evolution through the life cycle of the system. We illustrate the methodology on a case study using UML models",2006,0, 1765,Risk assessment of real time digital control systems,"This paper describes stochastic methods for assessing risk in integrated hardware and software systems. The methods evaluate availability, outage probabilities, and effectiveness-weighted degraded states based on data from measurements with a specified confidence level. System-level reliability/availability models can also identify the elements where failure rate, recovery probability, or recovery time improvement will provide the greatest benefit. The validity of this approach is determined by the extent to which the system failure behavior conforms to a stochastic process (i.e., random, non-deterministic failures). Evidence from large studies of other high availability computer systems provides substantial evidence of such behavior in mature systems. The approach is limited to systems with failure rates higher than 10^-6 per hour and availability below 0.999999, i.e., below safety grade. To assess safety critical systems, the risk assessment method described here can be used as an adjunct to other approaches described in various industry standards that are intended to minimize the likelihood that deterministic defects are introduced into the system design",2006,0, 1766,Modeling and analysis of causes and consequences of failures,"This paper presents a computer-supported method for modeling and analyzing causes and consequences of failures. The developed method is one of the main results from a nine-year research project, which was completed in February 2005 and carried out by Tampere University of Technology. The applicability of the developed methods and software has been tested in the companies which have been involved in the research project. The participating companies are both manufacturers and users in metal, energy, process and electronics industries. Their products and systems have to respond to high safety and reliability demands.
Most of the participating companies have started to apply the proposed method and software for modeling and analysis of failure logic for their products and systems. The application of the method forces experts to identify all potential component hardware failures, human errors, possible disturbances and deviations in the process, and environmental conditions related to the selected TOP-event. Based on experience, and with the help of the methods, it is possible to find out those problem areas of the design stage, which can delay product development and/or reduce safety and reliability",2006,0, 1767,The risks of applying qualitative reliability prediction methods: a case study,"The fast technological innovation of the past decades contributed to an increasing complexity in products. This increased product complexity together with four different business drivers (time, profitability, functionality and quality) have an important influence on the reliability strategies used within companies. New methods are necessary to predict reliability in product design. In current business processes qualitative reliability prediction methods are often applied to estimate the reliability risks present in products and processes. An example of a popular qualitative reliability prediction method is the so-called failure mode and effects analysis (FMEA). Many successful implementations of the FMEA method are described in literature from various professional fields. On the other hand, several setbacks of the traditional FMEA approach are described in literature. Most of these drawbacks result from the qualitative analysis approach. Nevertheless, the FMEA reliability prediction method is probably the most implemented method in practice. Present-day companies do not seem to take notice of the drawbacks of qualitative reliability prediction methods as described in literature. A convincing reason for this is the fact that no proven alternatives exist for these qualitative methods. Therefore the goal of this paper is to illustrate the risks of applying qualitative reliability prediction methods in practice and make suggestions for improving the application of these methods. This illustration is based on a complete reliability prediction approach named ROMDA. This ROMDA approach adopts FMEA to predict product reliability and will be presented in the second section. Subsequently this ROMDA approach is applied in a practical situation after which the reliability predictions are evaluated. Based on this evaluation, general conclusions and recommendations are described in order to improve the application of qualitative reliability prediction methods in practice",2006,0, 1768,A practical mtbf estimate for pcb design considering component and non-component failures,"Accurate reliability prediction for MTBF (mean-time-between-failures) is always desirable and anticipated before the new product is ramped up for customer shipment. In reality it is often difficult to obtain the accurate MTBF estimate for a new product due to the lack of the testing time in pilot line and limited field failure data. In this paper, a practical reliability prediction model is presented for predicting the MTBF of PCB (printed-circuit-board) in the design phase. Unlike conventional reliability prediction models, which usually focus on parts failure rates, the method presented here not only incorporates component failures but also non-component failures which include design, software, manufacturing and process issues. 
Component failure rates are computed using either historical data or the nominal failure rates together with operating conditions such as temperature and electrical derating. Triangular distributions are used to model non-component failure rates due to design errors, software bugs, manufacturing and handling problems. Finally, the confidence intervals for the new product MTBF are obtained based on six-sigma criteria. The method was applied to the design of a DC/analog instrument board that is used in the semiconductor testing equipment",2006,0, 1769,Toward Formal Verification of 802.11 MAC Protocols: a Case Study of Applying Petri-nets to Modeling the 802.11 PCF,"Centralized control functions for the IEEE 802.11 family of WLAN standards are vital for the distribution of traffic with stringent quality of service (QoS) requirements. These centralized control functions overlay a time-based organizational """"super-frame"""" structure on the medium, allocating part of the super-frame to polling traffic and part to contending traffic. This allocation directly determines how well the two forms of traffic are supported. Given the vital role of this allocation in the success of a system, we must have confidence in the configuration used, beyond that provided by empirical simulation results. Formal mathematical methods are a means to conduct rigorous analysis that will permit us such confidence, and the Petri-net formalism offers an intuitive representation with formal semantics. We present an extended Petri-net model of the super-frame, and use this model to assess the performance of different super-frame configurations and the effects of different traffic patterns. We believe that using such a model to analyze performance in this manner is new in itself",2006,0, 1770,Dependency Analysis and Default Tolerance in BHDL,"Most co-design verification methods depend on co-simulation of two or more types of components that are designed by different technologies during the last steps of design. Systems are getting more complex so the necessary time for simulation, detecting and correcting faults increases. BHDL project uses a formal method, B method, at the very early stage of design in order to produce a correct by design multitechnology system. Furthermore, BHDL can take in account the possibility to describe a fault scenario with a suitable correction in order to satisfy an ideal system specification",2006,0, 1771,Component Reusability and Cohesion Measures in Object-Oriented Systems,"In software component reuse processing, the success of software systems is decided by the quality of components. One important characteristic to measure quality of components is component reusability. Component reusability measures how easily the component can be reused in a new environment. This paper provides a new measure of cohesion developed to assess the reusability of Java components retrieved from the Internet by a search engine. This measure differs from the majority of established metrics in two respects: it reflects the degree of similarity between classes quantitatively, and they also take account of indirect similarities. An empirical comparison of the new measure with the established metrics is described. 
The new measures are shown to be consistently superior at ranking components according to their reusability",2006,0, 1772,Fault Tolerance in Mobile Agent Systems by Cooperating the Witness Agents,"Mobile agents travel through servers to perform their programs, and fault tolerance is fundamental and important along their itinerary. In this paper, existing methods of fault tolerance for mobile agents are considered and described. Then a method is considered that uses cooperating agents for fault tolerance and for detecting server and agent failures, with three types of agents involved: the actual agent, which performs programs for its owner; the witness agent, which monitors the actual agent and the witness agent after itself; and the probe, which is sent by a witness agent to recover the actual agent or another witness agent. As the actual agent travels through servers, it creates witness agents. Scenarios of failure and recovery of servers and agents are discussed in the method. While the actual agent executes, the number of witness agents grows as servers are added. The proposed scheme minimizes the number of witness agents as far as possible, because analysis and comparison show that keeping witness agents on all of the initial servers is not necessary. Simulation of this method is done by C-Sim",2006,0, 1773,CCK: An Improved Coordinated Checkpoint/Rollback Protocol for Dataflow Applications in Kaapi,Fault tolerance protocols play an important role in today's long-running scientific parallel applications because the probability of failure may be significant due to the number of unreliable components involved during simulation. In this paper we present our approach and preliminary results about a new checkpoint/recovery protocol based on a coordinated scheme. This protocol is highly coupled to the availability of an abstract representation of the execution,2006,0, 1774,Two architectures for testing distributed real-time systems,"A real-time system is a system that is required to react to stimuli from the environment within time intervals dictated by the environment. In real-time applications, the timing requirements are the main constraints and their mastering is the predominant factor for assessing the quality of service. The safety-critical nature of their domain and their inherent complexity advocate the use of formal methods in the software development process. Testing is one of the formal techniques that can be used to ensure the quality of real-time systems. This paper addresses and proposes a centralized architecture and a distributed architecture for the execution of test cases on distributed real-time systems. These two architectures are implemented in CORBA and Java. The specification model used is n-ports timed input output automata, a variant of timed automata of Alur and Dill (1994)
Further, we show how the model and approach can assess the relative payoff of value-based testing as compared to value-neutral testing",2006,0, 1776,Software Reliability Analysis by Considering Fault Dependency and Debugging Time Lag,"Over the past 30 years, many software reliability growth models (SRGM) have been proposed. Often, it is assumed that detected faults are immediately corrected when mathematical models are developed. This assumption may not be realistic in practice because the time to remove a detected fault depends on the complexity of the fault, the skill and experience of personnel, the size of debugging team, the technique(s) being used, and so on. During software testing, practical experiences show that mutually independent faults can be directly detected and removed, but mutually dependent faults can be removed iff the leading faults have been removed. That is, dependent faults may not be immediately removed, and the fault removal process lags behind the fault detection process. In this paper, we will first give a review of fault detection & correction processes in software reliability modeling. We will then illustrate the fact that detected faults cannot be immediately corrected with several examples. We also discuss the software fault dependency in detail, and study how to incorporate both fault dependency and debugging time lag into software reliability modeling. The proposed models are fairly general models that cover a variety of known SRGM under different conditions. Numerical examples are presented, and the results show that the proposed framework to incorporate both fault dependency and debugging time lag for SRGM has a better prediction capability. In addition, an optimal software release policy for the proposed models, based on cost-reliability criterion, is proposed. The main purpose is to minimize the cost of software development when a desired reliability objective is given",2006,0, 1777,A New Methodology for Predicting Software Reliability in the Random Field Environments,"This paper presents a new methodology for predicting software reliability in the field environment. Our work differs from some existing models that assume a constant failure detection rate for software testing and field operation environments, as this new methodology considers the random environmental effects on software reliability. Assuming that all the random effects of the field environments can be captured by a unit-free environmental factor, eta, which is modeled as a random-distributed variable, we establish a generalized random field environment (RFE) software reliability model that covers both the testing phase and the operating phase in the software development cycle. Based on the generalized RFE model, two specific random field environmental reliability models are proposed for predicting software reliability in the field environment: the gamma-RFE model, and the beta-RFE model. A set of software failure data from a telecommunication software application is used to illustrate the proposed models, both of which provide very good fittings to the software failures in both testing and operation environments. This new methodology provides a viable way to model the user environments, and further makes adjustments to the reliability prediction for similar software products. 
Based on the generalized software reliability model, further work may include the development of software cost models and the optimum software release policies under random field environments",2006,0, 1778,Reliability Growth Modeling for Software Fault Detection Using Particle Swarm Optimization,"Modeling the software testing process to obtain the predicted faults (failures) depends mainly on representing the relationship between execution time (or calendar time) and the failure count or accumulated faults. A number of unknown function parameters such as the mean failure function μ(t;β) and the failure intensity function λ(t;β) are estimated using either least-square or maximum likelihood estimation techniques. Unfortunately, the model parameters are normally in nonlinear relationships. This makes traditional parameter estimation techniques suffer many problems in finding the optimal parameters to tune the model for a better prediction. In this paper, we explore our preliminary idea in using particle swarm optimization (PSO) technique to help in solving the reliability growth modeling problem. The proposed approach will be used to estimate the parameters of the well known reliability growth models such as the exponential model, power model and S-shaped models. The results are promising.",2006,0, 1779,IMPRES: integrated monitoring for processor reliability and security,"Security and reliability in processor based systems are concerns requiring adroit solutions. Security is often compromised by code injection attacks, jeopardizing even 'trusted software'. Reliability is of concern where unintended code is executed in modern processors with ever smaller feature sizes and low voltage swings causing bit flips. Countermeasures by software-only approaches increase code size by large amounts and therefore significantly reduce performance. Hardware assisted approaches add extensive amounts of hardware monitors and thus incur unacceptably high hardware cost. This paper presents a novel hardware/software technique at the granularity of micro-instructions to reduce overheads considerably. Experiments show that our technique incurs an additional hardware overhead of 0.91% and clock period increase of 0.06%. Average clock cycle and code size overheads are just 11.9% and 10.6% for five industry standard application benchmarks. These overheads are far smaller than have been previously encountered",2006,0, 1780,Signature-based workload estimation for mobile 3D graphics,"Until recently, most 3D graphics applications had been regarded as too computationally intensive for devices other than desktop computers and gaming consoles. This notion is rapidly changing due to improving screen resolutions and computing capabilities of mass-market handheld devices such as cellular phones and PDAs. As the mobile 3D gaming industry is poised to expand, significant innovations are required to provide users with high-quality 3D experience under limited processing, memory and energy budgets that are characteristic of the mobile domain. Energy saving schemes such as dynamic voltage and frequency scaling (DVFS), as well as system-level power and performance optimization methods for mobile devices require accurate and fast workload prediction. In this paper, we address the problem of workload prediction for mobile 3D graphics. We propose and describe a signature-based estimation technique for predicting 3D graphics workloads.
By analyzing a gaming benchmark, we show that monitoring specific parameters of the 3D pipeline provides better prediction accuracy over conventional approaches. We describe how signatures capture such parameters concisely to make accurate workload predictions. Signature-based prediction is computationally efficient because first, signatures are compact, and second, they do not require elaborate model evaluations. Thus, they are amenable to efficient, real-time prediction. A fundamental difference between signatures and standard history-based predictors is that signatures capture previous outcomes as well as the cause that led to the outcome, and use both to predict future outcomes. We illustrate the utility of signature-based workload estimation technique by using it as a basis for DVFS in 3D graphics pipelines",2006,0, 1781,QoS aware CORBA Middleware for Bluetooth,"The wireless nature and the mobility of Bluetooth enabled devices combined with the heterogeneity of the wide range of hardware and software capabilities present in those devices makes Bluetooth connection and resource management very complicated and error prone. To manage such diversity of software and hardware, middleware technologies masking the underlying platforms have been designed. One such middleware solution for Bluetooth based on common object request broker architecture (CORBA) is introduced and the mapping of GIOP messages to Bluetooth logical link control and adaptation protocol (L2CAP) links is explained in detail. The paper also describes how CORBA policy objects influence object reference creation and service contexts in the request/reply sequences and how client-server transport level quality of services (QoS) negotiations are achieved through QoS information embedded in object references and service contexts",2006,0, 1782,Self-adjusting Component-Based Fault Management,"The Trust4All project aims to define an open, component-based framework for the middleware layer in high-volume embedded appliances that enables robust and reliable operation, upgrading and extension. To improve availability of each individual application in a Trust4All system, we propose a runtime configurable fault management mechanism (FMM) which detects deviations from given service specifications by intercepting interface calls. There are two novel contributions associated with FMM. First, when repair is necessary, FMM picks a repair action that incurs the best tradeoff between the success rate and the cost of repair. Second, considering that it is rather difficult to obtain sufficient information about third party components during their early stage of usage, FMM is designed to be able to accumulate appropriate knowledge, e.g. the success rate of a specific repair action in the past and rules that can avoid a specific failure, and self-adjust its capability accordingly",2006,0, 1783,State of the Art and Practice of OpenSource Component Integration,"The open source software (OSS) development approach has become a remarkable option to consider for cost-efficient, high quality software development. Utilizing OSS as part of an in-house software application requires the software company to take the role of a component integrator. In addition, integrating OSS as part of in-house software has a few differences compared to integrating closed source software and in-house software, such as access to source code and the fact that OSS evolves differently than closed source software. 
This paper describes the current state of the art and practice of open source integration techniques. The main observations are that the lack of documentation and heterogeneity of platforms are problems that neither the state of the art or practice could solve. In addition, although literature provides techniques and methods for predicting and solving both architecture and component level integration problems, these were not used in practice. Instead, companies relied on experience and rules of thumb",2006,0, 1784,All Things Considered: Inspecting Statecharts by Model Transformation,"Inspections are a cost-effective way of finding errors. However, checklist-based inspections of statecharts can only find a limited class of flaws while scenario-based inspections can never practically traverse the vast numbers of possible combinations of states in complex models made up of multiple communicating finite state machines. A technique for systematic and comprehensive validation of such models is described, based on partitioning the overall behaviour into sets of transitions which show the system-level response in a simple and explicit way. This process is supported by a tool, Statestep, which helps the user to deal methodically and thoroughly with (for example) millions of possibilities. As an example, a subtle error is exposed in a small but non-trivial published statechart design. The technique offers the possibility of detecting any error, no matter how obscure the scenario in which it occurs",2006,0, 1785,Practical Use of Software Reliability Methods in New Product Development,"This paper presents seven software reliability estimation methods studied in the Nokia case unit, which operates in turbulent telecommunications business environment characterised by uncertainty and inability to predict the future. Tens of software reliability models have been developed since the beginning of 1970's. However, a few - if not any - of them have worked optimally across projects. This paper focuses on investigating the practical use of the methods in real-life complex development situations and demonstrates how the methods can be applied to new product development (NPD) process in the case unit. The results show that none of the methods operate alone but need to be combined with each other. Finally, ideas for further research are proposed",2006,0, 1786,Characterization of a Real Internet Radio Service,"The increase in the number of Web-pages where links to Internet-radios are offered has made these services one of the most popular in nowadays Internet. This popularity has motivated the interest of the scientific community and a lot of research has been carried out in order to improve and study these services. This paper presents the analysis of the Internet-radio hosted by the www.asturies.com digital newspaper. The study has been performed thanks to a log database stored over a period of almost two years. The traffic between every service device has been studied and different elements about users' behaviour have been analyzed. The conclusions are essential to improve the configuration of one of these services. Service models for Internet-radio services can be developed and help managers to test different configurations or predict future situations, avoiding problems in advance",2006,0, 1787,A Resilient Telco Grid Middleware,"Grid computing can exploit distributed, underutilized or not, resources to provide massive parallel CPU capacity. 
Load balancing, application sharing, and geographically dispersed databases are other aspects of the Grid that are of interest to a telecommunications operator (Telco). Building Grid middleware in order to implement Telco services is thus a way to assess the validity of this type of architecture for future applications. To achieve a trustworthy platform, the middleware needs to take into account accidental or malicious faults which can impact different resilience aspects. This paper describes a secure and highly available architecture which, besides traditional Grid middleware functionalities (resource broker, job mapping, system monitoring, ...), makes use of fault-tolerant mechanisms (process duplication, failure handling, ...) to guarantee the QoS defined in the service level agreement. Security is addressed by analyzing each node's defense capability and finding a suitable solution to match it with the appropriate user's job.",2006,0, 1788,Hybrid Prediction Model for improving Reliability in Self-Healing System,"In ubiquitous environments, which involve an even greater number of computing devices with more informal modes of operation, this type of problem has rather serious consequences. In order to solve these problems when they arise, effective and reliable systems are required. Also, system management is changing from conventional central administration to autonomic computing. However, most existing research focuses on healing after a problem has already occurred. In order to solve this problem, a prediction model is required to recognize operating environments and predict error occurrence. In this paper, a hybrid prediction model based on four algorithms supporting self-healing in autonomic computing is proposed. This prediction model adopts a selective healing model according to the system situation, for self-diagnosis and prediction of problems using the four algorithms. The hybrid prediction model is then applied to evaluate the proposed approach in a self-healing system. In addition, the prediction is compared with existing research and its effectiveness is demonstrated by experiment",2006,0, 1789,Risk Management through Architecture Design,"Management of risks is a critical issue in project management, and it is important to ensure that risk management is done in a sensible way. Many techniques to manage and reduce risks have been proposed previously, but only a few have addressed design analysis to reduce risk, and none have attempted to use software architecture analysis and design to manage risks. In this paper we try to find a solution through various software architectural design patterns. We present results of an experiment comparing this new technique with a software risk evaluation (SRE) tool for creating test cases and detecting faults. However, the risks detected may differ, suggesting that these two are different approaches to the same problem",2006,0, 1790,"Monitoring and Improving the Quality of ODC Data using the """"ODC Harmony Matrices"""": A Case Study","Orthogonal defect classification (ODC) is an advanced software engineering technique to provide in-process feedback to developers and testers using defect data. ODC institutionalization in a large organization involves some challenging roadblocks such as the poor quality of the collected data leading to wrong analysis. In this paper, we have proposed a technique ('Harmony Matrix') to improve the data collection process. The ODC Harmony Matrix has useful applications.
At the individual defect level, results can be used to raise alerts to practitioners at the point of data collection if a low probability combination is chosen. At the higher level, the ODC Harmony Matrix helps in monitoring the quality of the collected ODC data. The ODC Harmony Matrix complements other approaches to monitor and enhances the ODC data collection process and helps in successful ODC institutionalization, ultimately improving both the product and the process. The paper also describes precautions to take while using this approach",2006,0, 1791,Predicting return-on-investment for product line generations,"The decision of an organization to introduce product line engineering depends on a sound and careful analysis of risks and return on investment. The latter is computed by an economic model, which relies on high quality input and must reflect the envisioned migration strategy sufficiently. To facilitate risk analysis, this paper applies Monte-Carlo simulation to an existing product line economic model. Additionally, the model is extended by the support of product line generations that is, considering the degeneration of product line infrastructures and taking reinvestment into an existing product line into account. The practical application of the model is demonstrated by an industrial case study",2006,0, 1792,Experiences with product line development of embedded systems at Testo AG,"Product line practices are increasingly becoming popular in the domain of embedded software systems. This paper presents results of assessing success, consistency, and quality of Testo's product line of climate and flue gas measurement devices after its construction and the delivery of three commercial products. The results of the assessment showed that the incremental introduction of architecture-centric product line development can be considered successful even though there is no quantifiable reduction of time-to-market as well as development and maintenance costs so far. The success is mainly shown by the ability of Testo to develop more complex products and the satisfaction of the involved developers. A major issue encountered is ensuring the quality of reusable components and the conformance of the products to the architecture during development and maintenance",2006,0, 1793,Generating a Test Strategy with Bayesian Networks and Common Sense,"Testing still represents an important share of the overall development effort and, coming late in the software life cycle, it is on the critical path both from a schedule and quality perspective. In an effort to conduct smarter software testing, Motorola Labs have developed the Bayesian test assistant (BTA), an advanced decision support tool to optimize all verification and validation activities, in development and system testing. With Bayesian networks, the theory underlying BTA, Motorola Labs built a library of causal models to predict, from key process, people and product factors, the quality of artefacts at each step of the software development. In this paper we present how BTA links the predictions from development models by mapping dependencies between components or subsystems to predict the level of risk in each system feature. As a result, and well before system testing starts, BTA generates a test strategy that optimizes the writing of test cases. During system test, BTA scores test cases to select an optimum set for each test step, leading to a faster discovery of defects. 
We also describe how BTA was deployed on large telecomm system releases in several Motorola organizations and the improvement driven so far in system testing",2006,0, 1794,On the Automation of Software Fault Prediction,"This paper discusses the issues involved in building a practical automated tool to predict the incidence of software faults in future releases of a large software system. The possibility of creating such a tool is based on the authors' experience in analyzing the fault history of several large industrial software projects, and constructing statistical models that are capable of accurately predicting the most fault-prone software entities in an industrial environment. The emphasis of this paper is on the issues involved in the tool design and construction and an assessment of the extent to which the entire process can be automated so that it can be widely deployed and used by practitioners who do not necessarily have any particular statistical or modeling expertise",2006,0, 1795,Testing the Implementation of Business Rules Using Intensional Database Tests,"One of the key roles of any information system is to enforce the business rules and policies set by the owning organisation. As for any important functionality, it is necessary to verify the implementation of any business rule carefully, through thorough testing. However, business rules have some specific features which make testing a particular challenge. They represent a more fine-grained unit of functionality than is usually considered by testing tools (programs, module, UML models, etc.) and their implementations are typically spread across a system (or perhaps some specific layer of a system). There is no convenient one-to-one relationship between programs and business rules that can facilitate their testing. To the best of our knowledge, no tools, methods or guidelines exist for helping software developers to test the implementation of business rules. Standard testing tools can help to a certain extent, but they leave the rule-specific work entirely in the programmer's hands. In this paper, we discuss the problems of testing business rules, and elicit the key features of a good test suite for a collection of business rules. We focus in particular on constraint business rules - an important class of rule that is commonly applied to the persistent data managed by the information system. We show how intensional database tests provide a suitable platform on which to implement business rule tests rapidly, and show how existing intensional test suites can be automatically adapted to test business rules. We have applied these ideas in a case study, which has allowed us to compare the relative costs of creating and executing these augmented test suites, as well as providing some evidence of their ability to detect faults in business rule implementations",2006,0, 1796,Designing an Architecture for Delivering Mobile Information Services to the Rural Developing World,"Paper plays a crucial role in many developing world information practices. However, paper-based records are inefficient, error-prone and difficult to aggregate. Therefore we need to link paper with the flexibility of online information systems. A mobile phone is the perfect bridging device. Long battery life, connectivity, solid-state memory, low price and immediate utility make it better suited to developing world conditions than a PC. 
However, mobile software platforms are difficult to use, difficult to develop for, and make the assumption of ubiquitous connectivity. To address these limitations we present CAM - a framework for developing mobile applications for the rural developing world. CAM applications are accessed by capturing barcodes using the phone camera, or by entering numeric strings with the keypad. Supporting minimal navigation, direct linkage to paper practices and offline multimedia interaction, CAM is uniquely adapted to rural user, application and infrastructure constraints",2006,0, 1797,Transient fault-tolerance through algorithms,"This article describes that single-version enhanced processing logic or algorithms can be very effective in gaining dependable computing through hardware transient fault tolerance (FT) in an application system. Transients often cause soft errors in a processing system resulting in mission failure. Errors in program flow, instruction codes, and application data are often caused by electrical fast transients. However, firmware and software fixes can have an important role in designing an ESD, or EMP-resistant system and are more cost effective than hardware. This technique is useful for detecting and recovering transient hardware faults or random bit errors in memory while an application is in execution. The proposed single-version software fix is a practical, useful, and economic tool for both offline and online memory scrubbing of an application system without using conventional N versions of software (NVS) and hardware redundancy in an application like a frequency measurement system",2006,0, 1798,Performance evaluation of maximal ratio combining diversity over the Weibull fading channel in presence of co-channel interference,"The Weibull distribution has recently attracted much attention among the radio community as a statistical model that better describes the fading phenomenon on wireless channels. In this paper, we consider a multiple access system in which each of the desired signal as well as the co-channel interferers are subject to Weibull fading. We analyze the performance of the L-branch maximal ratio combining (MRC) receiver in terms of the outage probability in such scenario in the two cases where the diversity branches are assumed to be independent or correlated. The analysis is also applicable to the cases where the diversity branches and/or the interferers fading amplitudes are non-identically distributed. Due to the difficulty of handling the resulting outage probability expressions numerically using the currently available mathematical software packages, we alternatively propose using Pade approximation (PA) to make the results numerically tractable. We provide numerical results for different number of interferers, different number of diversity branches as well as different degrees of correlation and power unbalancing between diversity branches. All our numerical results are verified by means of Monte-Carlo simulations and excellent agreement between the two sets is noticed",2006,0, 1799,A Software Simulation Study of a MD DS/SSMA Communication System with Adaptive Channel Coding,"Studies have shown that adaptive forward error correction (FEC) coding schemes enable a communication system to take advantage of varying channel conditions by switching to less powerful and/or less redundant FEC channel codes when conditions are good, thus enabling an increase in system throughput. 
The focus of this study is the simulation performance of a complete simulated multi-dimensional (MD) direct-sequence spread spectrum multiple access (DS/SSMA) communication system that employs an advanced adaptive channel coding scheme. The system is simulated and evaluated over a fully user-definable software-based multi-user (MU) multipath fading channel simulator (MFCS). Channel conditions are varied and the switching and adaptation performance of the system is monitored and evaluated. Sensing for adaptation is made possible by a sophisticated quality-of-service monitoring unit (QoSMU) that uses a sophisticated pseudo-error-rate (PER) extrapolation technique to estimate the system's true probability-of-error in real-time, without the need for known transmitted data. The system attempts to keep the estimated bit-error-rate (BER) performance within a predetermined range by switching between different FEC codes as conditions change. This paper commences with a short overview of each of the functional units of the system. Lastly, the simulation results for the coded and uncoded BER performances, as well as the real-time adaptation performance of the system are presented and discussed. This paper conclusively proves that adaptive coded systems have large throughput utilization advantages over that of fixed coded systems",2006,0, 1800,AutoTest: A Tool for Automatic Test Case Generation in Spreadsheets,"In this paper we present a system that helps users test their spreadsheets using automatically generated test cases. The system generates the test cases by backward propagation and solution of constraints on cell values. These constraints are obtained from the formula of the cell that is being tested when we try to execute all feasible DU associations within the formula. AutoTest generates test cases that execute all feasible DU pairs. If infeasible DU associations are present in the spreadsheet, the system is capable of detecting and reporting all of these to the user. We also present a comparative evaluation of our approach against the """"Help Me Test"""" mechanism in Forms/3 and show that our approach is faster and produces test suites that give better DU coverage",2006,0, 1801,A New Genetic Algorithm Approach for Secure JPEG Steganography,"Steganography is the act of hiding a message inside another message in such a way that can only be detected by its intended recipient. Naturally, there are security agents who would like to fight these data hiding systems by steganalysis, i.e. discovering covered messages and rendering them useless. There is currently no steganography system which can resist all steganalysis attacks. In this paper we propose a novel GA evolutionary process to make a secure steganographic encoding on JPEG images. Our steganography step is based on OutGuess which is proved to be the least vulnerable steganographic system. A combination of OutGuess steganalysis approach and maximum absolute difference (MAD) for the image quality are used as the GA fitness function. The model presented here is based on JPEG images; however, the idea can potentially be used in other multimedia steganography as well",2006,0, 1802,Lightweight Fault Localization with Abstract Dependences,"Locating faults is one of the most time consuming tasks in today's fast paced economy. Testing and formal verification techniques like model-checking are usually used for detecting faults but do not attempt to locate the root-cause for the detected faulty behavior. 
This article makes use of abstract dependences between program variables for locating faults in programs. We discuss the basic ideas, the underlying theory, and first experimental results, as well our model's limitations. Our fault localization model is based on a previous work that uses the abstract dependences for fault detection. First case studies indicate our model's practical applicability",2006,0, 1803,Analysis of Restart Mechanisms in Software Systems,"Restarts or retries are a common phenomenon in computing systems, for instance, in preventive maintenance, software rejuvenation, or when a failure is suspected. Typically, one sets a time-out to trigger the restart. We analyze and optimize time-out strategies for scenarios in which the expected required remaining time of a task is not always decreasing with the time invested in it. Examples of such tasks include the download of Web pages, randomized algorithms, distributed queries, and jobs subject to network or other failures. Assuming the independence of the completion time of successive tries, we derive computationally attractive expressions for the moments of the completion time, as well as for the probability that a task is able to meet a deadline. These expressions facilitate efficient algorithms to compute optimal restart strategies and are promising candidates for pragmatic online optimization of restart timers",2006,0, 1804,Design by Contract to Improve Software Vigilance,"Design by contract is a lightweight technique for embedding elements of formal specification (such as invariants, pre and postconditions) into an object-oriented design. When contracts are made executable, they can play the role of embedded, online oracles. Executable contracts allow components to be responsive to erroneous states and, thus, may help in detecting and locating faults. In this paper, we define vigilance as the degree to which a program is able to detect an erroneous state at runtime. Diagnosability represents the effort needed to locate a fault once it has been detected. In order to estimate the benefit of using design by contract, we formalize both notions of vigilance and diagnosability as software quality measures. The main steps of measure elaboration are given, from informal definitions of the factors to be measured to the mathematical model of the measures. As is the standard in this domain, the parameters are then fixed through actual measures, based on a mutation analysis in our case. Several measures are presented that reveal and estimate the contribution of contracts to the overall quality of a system in terms of vigilance and diagnosability",2006,0, 1805,Using Mutation Analysis for Assessing and Comparing Testing Coverage Criteria,"The empirical assessment of test techniques plays an important role in software testing research. One common practice is to seed faults in subject software, either manually or by using a program that generates all possible mutants based on a set of mutation operators. The latter allows the systematic, repeatable seeding of large numbers of faults, thus facilitating the statistical analysis of fault detection effectiveness of test suites; however, we do not know whether empirical results obtained this way lead to valid, representative conclusions. Focusing on four common control and data flow criteria (block, decision, C-use, and P-use), this paper investigates this important issue based on a middle size industrial program with a comprehensive pool of test cases and known faults. 
Based on the data available thus far, the results are very consistent across the investigated criteria as they show that the use of mutation operators is yielding trustworthy results: generated mutants can be used to predict the detection effectiveness of real faults. Applying such a mutation analysis, we then investigate the relative cost and effectiveness of the above-mentioned criteria by revisiting fundamental questions regarding the relationships between fault detection, test suite size, and control/data flow coverage. Although such questions have been partially investigated in previous studies, we can use a large number of mutants, which helps decrease the impact of random variation in our analysis and allows us to use a different analysis approach. Our results are then; compared with published studies, plausible reasons for the differences are provided, and the research leads us to suggest a way to tune the mutation analysis process to possible differences in fault detection probabilities in a specific environment",2006,0, 1806,Towards Regulatory Compliance: Extracting Rights and Obligations to Align Requirements with Regulations,"In the United States, federal and state regulations prescribe stakeholder rights and obligations that must be satisfied by the requirements for software systems. These regulations are typically wrought with ambiguities, making the process of deriving system requirements ad hoc and error prone. In highly regulated domains such as healthcare, there is a need for more comprehensive standards that can be used to assure that system requirements conform to regulations. To address this need, we expound upon a process called semantic parameterization previously used to derive rights and obligations from privacy goals. In this work, we apply the process to the privacy rule from the U.S. Health Insurance Portability and Accountability Act (HIPAA). We present our methodology for extracting and prioritizing rights and obligations from regulations and show how semantic models can be used to clarify ambiguities through focused elicitation and to balance rights with obligations. The results of our analysis can aid requirements engineers, standards organizations, compliance officers, and stakeholders in assuring systems conform to policy and satisfy requirements",2006,0, 1807,Matching Antipatterns to Improve the Quality of Use Case Models,"Use case modeling is an effective technique used to capture functional requirements. Use case models are mainly composed of textual descriptions written in natural language and simple diagrams that adhere to a few syntactic rules. This simplicity can be deceptive as many modelers create use case models that are incorrect, inconsistent, and ambiguous and contain restrictive design decisions. In this paper, a new methodology is described that utilizes antipatterns to detect potentially defective areas in use case models. This paper introduces the tool ARBIUM, which will support the proposed technique and aid analysts to improve the quality of their models. ARBIUM presents a framework that will allow developers to define their own antipatterns using OCL and textual descriptions. The proposed approach and tool are applied to a distributed biodiversity database use case model to demonstrate its feasibility. 
Our results indicate that they can improve the overall clarity and precision of use case models",2006,0, 1808,New generator split-phase transverse differential protection based on wavelet transform,"This paper presents a new split-phase transverse differential protection for a large generator based on wavelet transform. Research results show that there is almost no harmonic component on normal conditions, but it will produce great high-frequency current component when an internal fault occurs, which can be used to detect generator internal fault. With decomposition and reconstruction of the transient currents with wavelet transform, the high-frequency band fault currents are exploited in the new scheme. And the realization of the proposed protection device is also described in this paper, including the relay software and hardware design. The results from the experimental and field tests demonstrate that the new scheme is successful in detecting the generator internal fault. It has higher sensitivity and selectivity than the traditional protection scheme",2006,0, 1809,Development of circuit models for extractor components in high power microwave sources,"Summary form only given. The state-of-the-art in high power microwave (HPM) sources has greatly improved in recent years, in part due to advances in the computational tools available to analyze such devices. Chief among these advances is the widespread use of parallel particle-in-cell (PIC) techniques. Parallel PIC software allows high fidelity, three-dimensional, electromagnetic simulations of these complex devices to be performed. Despite these advances, however, parallel PIC software could be greatly supplemented by fast-running parametric codes specifically designed to mimic the behavior of the source in question. These tools can then be used to develop zero-order point designs for eventual assessment via full PIC simulation. One promising technique for these parametric formulations is circuit models, where in the full field description is reduced to capacitances, inductances, and resistances that can be quickly solved to yield small signal growth rates, resonant frequencies, quality factors, and potentially efficiencies. Building on the extensive literature from the vacuum electronics community, this poster will investigate the circuit models associated with the purely electromagnetic components of the extractor in the absence of space charge. Specifically, three-dimensional time-domain computational electromagnetics (AFRL's ICEPIC software) will be used to investigate the modification of the resonant frequencies and mode quality factors as a function of slot and load geometry. These field calculations will be reduced to circuit parameters for potential inclusion in parametric models, and the fidelity of the resulting description will be assessed",2006,0, 1810,On the Use of Mutation Faults in Empirical Assessments of Test Case Prioritization Techniques,"Regression testing is an important activity in the software life cycle, but it can also be very expensive. To reduce the cost of regression testing, software testers may prioritize their test cases so that those which are more important, by some measure, are run earlier in the regression testing process. One potential goal of test case prioritization techniques is to increase a test suite's rate of fault detection (how quickly, in a run of its test cases, that test suite can detect faults). 
Previous work has shown that prioritization can improve a test suite's rate of fault detection, but the assessment of prioritization techniques has been limited primarily to hand-seeded faults, largely due to the belief that such faults are more realistic than automatically generated (mutation) faults. A recent empirical study, however, suggests that mutation faults can be representative of real faults and that the use of hand-seeded faults can be problematic for the validity of empirical results focusing on fault detection. We have therefore designed and performed two controlled experiments assessing the ability of prioritization techniques to improve the rate of fault detection of test case prioritization techniques, measured relative to mutation faults. Our results show that prioritization can be effective relative to the faults considered, and they expose ways in which that effectiveness can vary with characteristics of faults and test suites. More importantly, a comparison of our results with those collected using hand-seeded faults reveals several implications for researchers performing empirical studies of test case prioritization techniques in particular and testing techniques in general",2006,0, 1811,Dependability analysis: performance evaluation of environment configurations,Prototyping-based fault injection environments are employed to perform dependability analysis and thus predict the behavior of circuits in presence of faults. A novel environment has been recently proposed to perform several types of dependability analyses in a common optimized framework. The approach takes advantage of hardware speed and of software flexibility to achieve optimized trade-offs between experiment duration and processing complexity. This paper discusses the possible repartition of tasks between hardware and embedded software with respect to the type of circuit to analyze and to the instrumentation achieved. The performances of the approach are evaluated for each configuration of the environment,2006,0, 1812,Practical application of probabilistic reliability analyses,"Liberalization of the energy markets has increased the cost pressure on network operators significantly. Corresponding cost saving measures in general will have negative effects on quality of supply, especially on supply reliability. On the other hand, a decrease of supply reliability is not acceptable for customers and politicians - and the public awareness was focused by several blackouts in the European and American transmission systems. In order to handle this delicate question of balancing network costs and supply reliability, detailed and above all quantitative information is required in the network planning process. A suitable tool for this task is probabilistic reliability analysis, which has already been in use for several years successfully. The method and possible applications are briefly described here and application is demonstrated with various examples from practical studies focusing on distribution systems. The results prove that reliability analyses are becoming an indispensable component of customer-oriented network planning",2006,0, 1813,Use of wavelets for out of step blocking function of distance relays,"Out of step blocking function in distance relays is required to distinguish between a power swing and a fault. Detection of symmetrical faults during power swings can present a challenge. 
In cases when the values of the apparent impedances seen by a distance relay before and after a three-phase fault during a power swing are very close, most of the proposed schemes can run into problems. If such values fall into zone-1 of the relay, not only the detection, but the speed of detection is also crucial. This paper introduces wavelet analysis to detect power swings as well as reliably and quickly detect a symmetrical fault during a power swing. Total number of dyadic wavelet levels of voltage/current waveforms and the choice of particular levels for such detection are carefully studied. Different power swing conditions and fault instants are simulated with PSCAD/EMTDCreg software to test the proposed methodology",2006,0, 1814,Reliability analysis of protective relays in fault information processing system in China,"The reliability indices of protective relays are first put forward in this paper. A Markov probability model is then established to evaluate the reliability of relay protection. With the state space analytical method, all the steady state probabilities and state transition probabilities can be calculated utilizing the data stored in the fault information processing system. We can get an equation that represents the influence of routine test intervals on relay unavailability. Based on this, the optimum routine test interval for protective relays can be determined. This paper also proposes an efficient method of processing large amount of information by the fault information processing system and evaluating the reliability of protective relays with it, and the corresponding software package is also developed. The application of it to an actual power system in China proves the method to be correct and effective",2006,0, 1815,Investigation of radiometric partial discharge detection for use in switched HVDC testing,"This paper reports on initial trials of a non-contact radio frequency partial discharge detection technique that has potential for use within fast switching HVDC test systems. Electromagnetic environments of this type can arise within important electrical transmission nodes such converter stations, so the methods described could in future be useful for condition monitoring purposes. The radiometric technique is outlined and the measurement system and its components are described. Preliminary field trials are reported and results presented for a discharge detected in part of the HV test system during set-up for long-term testing of a reactor. The calculated and observed locations of the discharge were in agreement to within 60 cm inside a test housing of diameter 5 m and height 8 m. Techniques for improving the location accuracy are discussed. The issue of data volume presents a considerable challenge for the RF measurement techniques. On the basis of observations, strategies for moving towards automated interpretation of the partial discharge signals are proposed, which will make use of intelligent software techniques",2006,0, 1816,Sonar Power Amplifier Testing System Based on Virtual Instrument,"This paper presents an intelligent test system for power amplifiers by using virtual instrument technology. The automatic range switching circuit and Hall current transducers are applied in this system, realizing effectively the measurement of wide-voltage signal and large current. 
LabVIEW, a graphical programming language, and expert system technology are employed to develop the testing software, implementing performance tests of sonar power amplifier parts, measurement of technical indices, and fault diagnosis. The proposed system has been used in a certain type of sonar power amplifier system. The test results show that the flexibility and data processing capacity of the test instruments are greatly improved, which can satisfy more measurement needs, and the proposed system can detect and locate the fault position quickly",2006,0, 1817,The Application of Evidence Theory in the Field of Equipment Fault Diagnosis,"In this paper, we briefly explain information fusion technology, and discuss in detail the general procedure and the combination rule for equipment fault diagnosis based on the D-S evidence theory. We express the relation between the basic probability assignment and a matrix through the location operations of the C language, and obtain the basic probability assignment with the Matlab software, which makes the matrix operations easier. We diagnose the fault of the voltage transformer using the D-S evidence theory",2006,0, 1818,Fabrication of SiGe-On-Insulator by Improved Ge Condensation Technique,"Silicon germanium on insulator (SGOI) is a straightforward material for ultimate device scaling. This substrate combines two advantages: the high carrier velocity of the Si1-xGex alloy and the low parasitic capacitance due to the presence of a buried oxide. Several fabrication techniques for SGOI substrates, such as SMOX, SMART-CUT or liquid phase epitaxy, have been proposed. Tezuka et al. present a new approach involving epitaxial growth of a low Ge content SiGe alloy on an SOI substrate followed by a high temperature oxidation. By selective oxidation of silicon and diffusion of germanium within the remaining SGOI layer, the Ge content increases. A high Ge concentration SGOI layer is then obtained. The Ge condensation technique is based on two competitive mechanisms: silicon oxidation involving Ge pile-up at the oxide interface, and Ge diffusion within the SiGe layer. Both take place during the high temperature oxidation. To favour Ge diffusion and obtain homogeneous SGOI profiles, we propose an improved Ge condensation technique with a multi-step oxidation. Samples have been characterized by spectroscopic ellipsometry (SE), X-ray reflection (XRR), X-ray fluorescence (XRF), secondary ion mass spectroscopy (SIMS) and transmission electron microscopy (TEM) to assess the process quality for uniform SGOI fabrication with different Ge contents. Relaxation of SGOI layers has been observed either by atomic force microscopy or Raman spectroscopy. The influence of oxidation time has been studied and well defined oxidation recipes are proposed to obtain SGOI substrates with different Ge contents. Numerical simulations including the initial parameters, i.e., top SOI thickness, SiGe grown layer thickness and compositions, have been studied with the Athena software",2006,0, 1819,Data Analysis and Confidence based on SVM Density Estimation,"Data-driven models are frequently used in industry to predict various characteristics of processes. In order to build a robust model, the quality of the data needs to be analysed. These models are also required to associate a level of confidence with their predictions. In a high-dimensional setting it is important to incorporate data density information when analyzing the quality of the data and determining the confidence in a prediction.
The SVM density estimation together with results from the typicalness framework forms a powerful tool that is effective for industrial applications.",2006,0, 1820,Utilizing Computational Intelligence in Estimating Software Readiness,"Defect tracking using computational intelligence methods is used to predict software readiness in this study. By comparing the predicted number of faults and the number of faults discovered in testing, software managers can decide whether the software is ready to be released or not. Our predictive models can predict: (i) the number of faults (defects), (ii) the amount of code changes required to correct a fault and (iii) the amount of time (in minutes) to make the changes in respective object classes using software metrics as independent variables. The use of a neural network model with a genetic training strategy is introduced to improve prediction results for estimating software readiness in this study. Existing object-oriented metrics and complexity software metrics are used in the Business Tier neural network based prediction model. New sets of metrics have been defined for the Presentation Logic Tier and Data Access Tier.",2006,0, 1821,Empirical Analysis of Object-Oriented Design Metrics for Predicting High and Low Severity Faults,"In the last decade, empirical studies on object-oriented design metrics have shown some of them to be useful for predicting the fault-proneness of classes in object-oriented software systems. This research did not, however, distinguish among faults according to the severity of impact. It would be valuable to know how object-oriented design metrics and class fault-proneness are related when fault severity is taken into account. In this paper, we use logistic regression and machine learning methods to empirically investigate the usefulness of object-oriented design metrics, specifically, a subset of the Chidamber and Kemerer suite, in predicting fault-proneness when taking fault severity into account. Our results, based on a public domain NASA data set, indicate that 1) most of these design metrics are statistically related to fault-proneness of classes across fault severity, and 2) the prediction capabilities of the investigated metrics greatly depend on the severity of faults. More specifically, these design metrics are able to predict low severity faults in fault-prone classes better than high severity faults in fault-prone classes",2006,1, 1822,A Classification-Based Fault Detection and Isolation Scheme for the Ion Implanter,"We propose a classification-based fault detection and isolation scheme for the ion implanter. The proposed scheme consists of two parts: 1) the classification part and 2) the fault detection and isolation part. In the classification part, we propose a hybrid classification tree (HCT) with learning capability to classify the recipe of a working wafer in the ion implanter, and a k-fold cross-validation error is treated as the accuracy of the classification result. In the fault detection and isolation part, we propose warning signal generation criteria based on the classification accuracy for fault detection, and a fault isolation scheme based on the HCT to isolate the actual fault of an ion implanter.
We have compared the proposed classifier with the existing classification software and tested the validity of the proposed fault detection and isolation scheme for real cases to obtain successful results",2006,0, 1823,Emulation of Software Faults: A Field Data Study and a Practical Approach,"The injection of faults has been widely used to evaluate fault tolerance mechanisms and to assess the impact of faults in computer systems. However, the injection of software faults is not as well understood as other classes of faults (e.g., hardware faults). In this paper, we analyze how software faults can be injected (emulated) in a source-code independent manner. We specifically address important emulation requirements such as fault representativeness and emulation accuracy. We start with the analysis of an extensive collection of real software faults. We observed that a large percentage of faults falls into well-defined classes and can be characterized in a very precise way, allowing accurate emulation of software faults through a small set of emulation operators. A new software fault injection technique (G-SWFIT) based on emulation operators derived from the field study is proposed. This technique consists of finding key programming structures at the machine code-level where high-level software faults can be emulated. The fault-emulation accuracy of this technique is shown. This work also includes a study on the key aspects that may impact the technique accuracy. The portability of the technique is also discussed and it is shown that a high degree of portability can be achieved",2006,0, 1824,Novelty Detection Based Machine Health Prognostics,In this paper we present a new novelty detection algorithm for continuous real time monitoring of machine health and prediction of potential machine faults. The kernel of the system is a generic evolving model that is not dependent on the specific measured parameters determining the health of a particular machine. Two alternative strategies are introduced in order to predict abrupt and gradually developing (incipient) changes. This algorithm is realized as an autonomous software agent that continuously updates its decision model implementing an unsupervisory recursive learning algorithm. Results of validation of the proposed algorithm by accelerated testing experiments are also discussed,2006,0, 1825,Modeling the Reliability of Existing Software using Static Analysis,"Software unreliability represents an increasing risk to overall system reliability. As systems become larger and more complex, mission critical and safety critical systems have had increasing functionality controlled exclusively through software. This change, coupled with generally increasing reliability in hardware modules, has resulted in a shift of the root cause of systems failure from hardware to software. Market forces, including decreased time to market, reduced development team sizes, and other factors, have encouraged projects to reuse existing software as well as to purchase COTS software solutions. This has made the usage of the more than 200 existing software reliability models increasingly difficult. Traditional software reliability models require significant testing data to be collected during software development in order to estimate software reliability. If this data is not collected in a disciplined manner or is not made available to software engineers, these modeling techniques can not be applied. 
It is imperative that practical reliability modeling techniques be developed to address these issues. It is on this premise that an appropriate software reliability model combining static analysis of existing source code modules, limited testing with path capture, and Bayesian belief networks is presented. Static analysis is used to detect faults within the source code which may lead to failure. Code coverage is used to determine which paths within the source code are executed as well as how often they execute. Finally, Bayesian belief network is then used to combine these parameters and estimate the resulting software reliability",2006,0, 1826,Weighted Proportional Sampling : AGeneralization for Sampling Strategies in Software Testing,"Current activities to measure the quality of software products rely on software testing. The size and complexity of software systems make it almost impossible to perform complete coverage testing. During the past several years, many techniques to improve the test effectiveness (i.e., the ability to find faults) have been proposed to address this issue. Two examples of such strategies are random testing and partition testing. Both strategies follow an input domain sampling to perform the testing process. The procedure and assumptions for selecting these points seem to be different for both strategies: random testing considers only the probability of each sub-domain (i.e. uniform sampling) while partition testing considers only the sampling rate of each sub-domain (i.e., proportional sampling). This paper describes a more general sampling strategy, named weighted proportional sampling strategy. This strategy unifies both strategies into a general model that encompasses both of them as special cases. This paper also proposes an optimization model to determine the number of sampled points depending on the sampling strategy",2006,0, 1827,Estimating the Heavy-Tail Index for WWW Traces,"Heavy-tailed behavior of WWW traffic has serious implications for the design and performance analysis of computer networks. This behavior gives rise to rare events which could be catastrophic for the QoS of an application. Thus, an accurate detection and quantification of the degree of thickness of a distribution is required. In this paper we detect and quantify the degree of tail-thickness for the file size and transfer times distributions of several WWW traffic traces. For accomplishing the above, the behavior of four estimators in real WWW traces characteristics is studied. We show that Hill-class estimators present varying degrees of accuracy and should be used as a first step towards the estimation of the tail-index. The QQ estimator, on the other hand, is shown to be more robust and adaptable, thus giving rise to more confident point estimates",2006,0, 1828,Toward Component Non-functional Interoperability Analysis: A UML-based and Goal-Oriented Approach,"Component-based development (CBD) has a great potential of reducing development cost and time by integrating existing software components. But it also faces many challenges one of which is ensuring interoperability of the components that may have been developed with different functional and non-functional goals. The software community has traditionally focused more on the functional aspect of the interoperability such as syntactic and semantic compatibility. However, incompatibility from the non-functional aspect could lead to poor quality such as insufficient security or even inoperable system. 
This paper presents a preliminary framework for analyzing non-functional requirements (NFRs) defined for the component required and provided interfaces. The components are considered non-functionally interoperable when they agree on the definition and implementation techniques used to achieve the NFRs. Any detected mismatches can be resolved using a combination of the three presented tactics, including replacing the server component, negotiating for more attainable NFRs, or using an adapter component to bridge the non-functional differences. A running example based on a simplified Web-based conference management system is used to illustrate the application of this framework",2006,0, 1829,Proactive maintenance with variant workload under Distributed Multimedia Application Systems,"Distributed multimedia applications such as video on demand (VoD), require dynamic quality of service (QoS) guarantee from service servers for their continuous multimedia streams. In this paper, we build the service availability Markovian model for unified failure-recovery mechanism under variant load scenarios, by calculating local and global kernels of the Markovain model, we get the steady-state availability and unavailability probabilities. Numerical results show that there exist differences between the system availability and the request perceived availability under variant load state (the nature of this kind of system's scenario). This provides strategies for improving VoD system's availability",2006,0, 1830,Automated Information Aggregation for Scaling Scale-Resistant Services,"Machine learning provides techniques to monitor system behavior and predict failures from sensor data. However, such algorithms are """"scale resistant"""" $high computational complexity and not parallelizable. The problem then becomes identifying and delivering the relevant subset of the vast amount of sensor data to each monitoring node, despite the lack of explicit """"relevance"""" labels. The simplest solution is to deliver only the """"closest"""" data items under some distance metric. We demonstrate a better approach using a more sophisticated architecture: a scalable data aggregation and dissemination overlay network uses an influence metric reflecting the relative influence of one node's data on another, to efficiently deliver a mix of raw and aggregated data to the monitoring components, enabling the application of machine learning tools on real-world problems. We term our architecture level of detail after an analogous computer graphics technique",2006,0, 1831,Programming Language Inherent Support for Constrained XML Schema Definition Data Types and OWL DL,"Recently, the Web Ontology Language (OWL) and XML schema definition (XSD) have become ever more important when it comes to conceptualize knowledge and to define programming language independent type systems. However, writing software that operates on ontological data and on XML instance documents still suffers from a lack of compile time support for OWL and XSD. Especially, obeying lexical- and value space constraints that may be imposed on XSD simple data types and preserving the consistency of assertional ontological knowledge is still error prone and laborious. Validating XML instance documents and checking the consistency of ontological knowledge bases according to given XML schema definitions and ontological terminologies, respectively, requires significant amounts of code. 
This paper presents novel compile time- and code generation features, which were implemented as an extension of the C# programming language. Zhi# provides compile time-and runtime support for constrained XML schema definition simple data types and it guarantees terminological validity for modifications of assertional ontological data",2006,0, 1832,Software Library Usage Pattern Extraction Using a Software Model Checker,"The need to manually specify temporal properties of software systems is a major barrier to wider adoption of software model checking, because the specification of software temporal properties is a difficult, time-consuming, and error-prone process. To address this problem, we propose to automatically extract software library usage patterns, which are one type of temporal specifications. Our approach uses a model checker to check a set of software library usage pattern candidates against existing programs using that library, and identifies valid patterns based on model checking results. These valid patterns can help programmers learn about common software library usage. They can also be used to check new programs using the same library. We applied our approach to C programs using the OpenSSL library and the C standard library, and extracted valid usage patterns using BLAST. We also successfully used the extracted valid usage patterns to detect an error in an open source project hosted by SourceForge.net",2006,0, 1833,Multilevel Modelling Software Development,"Different from other engineering areas, the level of reuse in software engineering is very low. Also, developing large-scale applications which involve thousands of software elements such as classes and thousands of interactions among them is a complex and error-prone task. Industry currently lacks modelling practices and modelling tool support to tackle these issues. Model driven development (MDD) has emerged as an approach to diminishing software development complexity. We claim that models alone are not enough to tackle low reuse and complexity. Our contribution is a multilevel modelling development (MMD) framework whereby models are defined at different abstraction levels. A modelling level is constructed out by assembling software elements defined at the adjacent lower-level. MMD effectively diminish development complexity and facilitates large-scale reuse",2006,0, 1834,Learning Bayesian Networks for Systems Diagnosis,"This paper proposes the construction of a Bayesian network for failure diagnosis in industrial systems. We built this network considering the plant mathematical model and it includes parameters and structure learning through the Beta Dirichlet distributions. We experience the previous methodology by means of a case study, where we simulate some failures that can occurs in the valves used to interconnect a deposits system. With those failures information, we train the network and this way we learn the structure and parameters of the Bayesian network. Once obtained the network, we design the diagnosis probabilistic inference through the poly-trees algorithm. It will give us the valves failure probabilities according to the evidences that show up in our entrance sensors. In this work, we try the existent uncertainty in the diagnosis variables through the probabilistic and fuzzy approach. Since the information provided by our sensors (diagnosis variables) is represented in a fuzzy logic form, for then to be converted to probability intervals, generalizing the Dempster-Shafer theory to fuzzy sets. 
After that, we spread this information in interval form throughout the diagnosis Bayesian network to get our diagnosis results. The probability interval is more advisable in the taking decisions that a singular value",2006,0, 1835,Analyzing and Extending MUMCUT for Fault-based Testing of General Boolean Expressions,"Boolean expressions are widely used to model decisions or conditions of a specification or source program. The MUMCUT, which is designed to detect seven common faults where Boolean expressions under test are assumed to be in Irredundant Disjunctive Normal Form (IDNF), is an efficient fault-based test case selection strategy in terms of the fault-detection capacity and the size of selected test suite. Following up our previous work that reported the fault-detection capacity of the MUMCUT when it is applied to general form Boolean expressions, in this paper we present the characteristic of the types of single faults committed in general Boolean expressions that a MUMCUT test suite fails to detect, analyze the certainty why a MUMCUT test suite fails to detect these types of undetected faults, and provide some extensions to enhance the detection capacity of the MUMCUT for these types of undetected faults.",2006,0, 1836,Predication of Software Reliability Based on Grey System,"There are lots of factors influencing software reliability in course of developing software, so scientists have to omit minor factors and to preserve key factors. Furthermore, artificially simplify and limit the ways affecting the preserved factors, which is used as the basis to establish mathematic model and predict software reliability. As same as other software reliability models, there are also some assumptions or limitations in this paper. However, theses assumptions or limitations are originated from basic assumption of grey system. Most traditional software reliability models are derived from probability method, but grey system is not. Therefore, this paper will only make a comparison between our mathematic model derived by grey system and that derived by probability method. Finally, the result was that both models are limitary discrete monotone increasing exponential function. However, precision of reliability predication will be dependent on testing of bigger real cases for a long time. This paper, based on grey system, simply has put forward our viewpoint.",2006,0, 1837,Ontology Based Software Reconfiguration in a Ubiquitous Computing Environment,"A middleware in ubiquitous computing environment (UbiComp) is required to support seamless on-demand services over diverse resource situations in order to meet various user requirements [1]. Since UbiComp applications need situation-aware middleware services in this environment. In this paper, we propose a semantic middleware architecture to support dynamic software component reconfiguration based fault and service ontology to provide fault-tolerance in a ubiquitous computing environment. Our middleware includes autonomic management to detect faults, analyze causes of them, and plan semantically meaningful strategies to deal with a problem with associating fault and service ontology trees. We implemented a referenced prototype, Web-service based Application Execution Environment (Wapee), as a proof-of-concept, and showed the efficiency in runtime recovery.",2006,0, 1838,Risk: A Good System Security Measure,"What gets measured gets done. Security engineering as a discipline is still in its infancy. 
The field is hampered by its lack of adequate measures of goodness. Without such a measure, it is difficult to judge progress and it is particularly difficult to make engineering trade-off decisions when designing systems. The qualities of a good metric include that it: (1) measures the right thing, (2) is quantitatively measurable, (3) can be measured accurately, (4) can be validated against ground truth, and (5) be repeatable. By """"measures the right thing"""", the author means that it measures some set of attributes that directly correlates to closeness to meeting some stated goal. For system security, the author sees the right goal as """"freedom from the possibility of suffering damage or loss from malicious attack."""" Damage or loss applies to the mission effectiveness of the information infrastructure of a system. The mission can be maximizing profits while making quality cars or it could be defending an entire nation against foreign incursion",2006,0, 1839,On the Distribution of Property Violations in Formal Models: An Initial Study,"Model-checking techniques are successfully used in the verification of both hardware and software systems of industrial relevance. Unfortunately, the capability of current techniques is still limited and the effort required for verification can be prohibitive (if verification is possible at all). As a complement, fast, but incomplete, search tools may provide practical benefits not attainable with full verification tools, for example, reduced need for manual abstraction and fast detection of property violations during model development. In this report we investigate the performance of a simple random search technique. We conducted an experiment on a production-sized formal model of the mode-logic of a flight guidance system. Our results indicate that random search quickly finds the vast majority of property violations in our case-example. In addition, the times to detect various property violations follow an acutely right-skewed distribution and are highly biased toward the easy side. We hypothesize that the observations reported here are related to the phase transition phenomenon seen in Boolean satisfiability and other NP-complete problems. If so, these observations could be revealing some of the fundamental aspects of software (model) faults and have implications on how software engineering activities, such as analysis, testing, and reliability modeling, should be performed",2006,0, 1840,Scale Free in Software Metrics,"Software has become a complex piece of work by the collective efforts of many. And it is often hard to predict what the final outcome will be. This transition poses new challenge to the software engineering (SE) community. By employing methods from the study of complex network, we investigate the object oriented (OO) software metrics from a different perspective. We incorporate the weighted methods per class (WMC) metric into our definition of the weighted OO software coupling network as the node weight. Empirical results from four open source OO software demonstrate power law distribution of weight and a clear correlation between the weight and the out degree. According to its definition, it suggests uneven distribution of function among classes and a close correlation between the functionality of a class and the number of classes it depending on. Further experiment shows similar distribution also exists between average LCOM and WMC as well as out degree. 
These discoveries will help uncover the underlying mechanisms of software evolution and will be useful for SE to cope with the emerged complexity in software as well as efficient test cases design",2006,0, 1841,Security Consistency in UML Designs,"Security attacks continually threaten distributed systems, disrupting both individuals and organizations economically and physically. In the software lifecycle, early detection and correction of security flaws in the design phase can reduce overall costs associated with maintenance. Current software development methodologies such as the model driven architecture rely on quality Unified Modeling Language (UML) design models. Often these models are complex and consist of many structural and behavioral views. This can lead to inconsistencies between views. Existing approaches remedy many of these inconsistencies but do not address security consistency across design views. This paper presents an approach to detecting and resolving security faults in UML designs. The approach defines the notion of security inconsistency in designs, analyzes UML views for security inconsistencies, and generates a set of recommended design changes that include Object Constraint Language (OCL) expressions. The OCL can be used as a test oracle in both the design and implementation phases of the software life-cycle",2006,0, 1842,Proportional Intensity-Based Software Reliability Modeling with Time-Dependent Metrics,"The black-box approach based on stochastic software reliability models is a simple methodology with only software fault data in order to describe the temporal behavior of fault-detection processes, but fails to incorporate some significant development metrics data observed in the development process. In this paper we develop proportional intensity-based software reliability models with time-dependent metrics, and propose a statistical framework to assess the software reliability with the time-dependent covariate as well as the software fault data. The resulting models are similar to the usual proportional hazard model, but possess somewhat different covariate structure from the existing one. We compare these metrics-based software reliability models with some typical non-homogeneous Poisson process models, which are the special cases of our models, and evaluate quantitatively the goodness-of-fit from the viewpoint of information criteria. As an important result, the accuracy on reliability assessment strongly depends on the kind of software metrics used for analysis and can be improved by incorporating the time-dependent metrics data in modeling",2006,0, 1843,On Detection Conditions of Double FaultsRelated to Terms in Boolean Expressions,"Detection conditions of specific classes of faults have recently been studied by many researchers. Under the assumption that at most one of these faults occurs in the software under test, these fault detection conditions were mainly used in two ways. First, they were used to develop test case selection strategies for detecting corresponding classes of faults. Second, they were used to study fault class hierarchies, where a test case that detects a particular class of faults can also detect some other classes of faults. In this paper, we study detection conditions of double faults. Besides developing new test case selection strategies and studying new fault class hierarchies, our analysis provides further insights to the effect of fault coupling. 
Moreover, these fault detection conditions can be used to compare effectiveness of existing test case selection strategies (which were originally developed for the detection of single occurrence of certain classes of faults) in detecting double faults that may be present in the software",2006,0, 1844,Test Case Prioritization Using Relevant Slices,"Software testing and retesting occurs continuously during the software development lifecycle to detect errors as early as possible. The sizes of test suites grow as software evolves. Due to resource constraints, it is important to prioritize the execution of test cases so as to increase chances of early detection of faults. Prior techniques for test case prioritization are based on the total number of coverage requirements exercised by the test cases. In this paper, we present a new approach to prioritize test cases based on the coverage requirements present in the relevant slices of the outputs of test cases. We present experimental results comparing the effectiveness of our prioritization approach with that of existing techniques that only account for total requirement coverage, in terms of ability to achieve high rate of fault detection. Our results present interesting insights into the effectiveness of using relevant slices for test case prioritization",2006,0, 1845,Traceability between Software Architecture Models,"Software architecture (SA) is the blueprint of the software system and considered as one of the most important artifacts in component based development. The design and analysis of SA can be very complex. Under the inspiration of model-driven development, the design of SA has been no more constrained in one stage. It is a trend to construct multiple SA models in multiple stages during the software life cycle. Thus, the traceability between these SA models becomes a new challenge. The information between these SA models in deferent stages is usually not recorded well and easy to be lost lately, which makes the maintenance and evolution difficult and error-prone. In this paper, we present an approach to recording the information between SA models via a traceability model for reducing the loss of design decisions and helping developers understand the software system well",2006,0, 1846,A Technique to Reduce the Test Case Suites for Regression Testing Based on a Self-Organizing Neural Network Architecture,"This paper presents a technique to select subsets of the test cases, reducing the time consumed during the evaluation of a new software version and maintaining the ability to detect defects introduced. Our technique is based on a model to classify test case suites by using an ART-2A self-organizing neural network architecture. Each test case is summarized in a feature vector, which contains all the relevant information about the software behavior. The neural network classifies feature vectors into clusters, which are labeled according to software behavior. The source code of a new software version is analyzed to determine the most adequate clusters from which the test case subset will be selected. Experiments compared feature vectors obtained from all-uses code coverage information to a random selection approach. Results confirm the new technique has improved the precision and recall metrics adopted",2006,0, 1847,Automated Health-Assessment of Software Components using Management Instrumentatio,"Software components are regularly reused in many large-scale, mission-critical systems where the tolerance for poor performance is quite low. 
As new components are integrated within an organization's computing infrastructure, it becomes critical to ensure that these components continue to meet the expected quality of service (QoS) requirements. Management instrumentation is an integrated capability of a software system that enables an external entity to assess that system's internals, such as its operational states, execution traces, and various quality attributes during runtime. In this paper, we present an approach that enables the efficient generation, measurement, and assessment of various QoS attributes of software components during runtime using management instrumentation. Monitoring the quality of a component in this fashion has many benefits, including the ability to proactively detect potential QoS-related issues within a component to avoid potentially expensive downtime of the overall environment. The main contributions of our approach consist of three parts: a lightweight component instrumentation framework that transparently generates a pre-defined set of QoS-related diagnostic data when integrated within a component, a method to formally define the health state of a component in terms of the expected QoS set forth by the target environment, and finally a method for publishing the QoS-related diagnostic data during runtime so that an external entity can measure the current health of a component and take appropriate actions. The main QoS types that we consider are: performance, reliability, availability, throughput, and resource usage. Experimentation results show that our approach can be efficiently utilized in large mission-critical systems",2006,0, 1848,Consensus ontology generation in a socially interacting multiagent system,"This paper presents an approach for building consensus ontologies from the individual ontologies of a network of socially interacting agents. Each agent has its own conceptualization of the world. The interactions between agents are modeled by sending queries and receiving responses and later assessing each other's performance based on the results. This model enables us to measure the quality of the societal beliefs in the resources which we represent as the expertise in each domain. The dynamic nature of our system allows us to model the emergence of consensus that mimics the evolution of language. We present an algorithm for generating the consensus ontologies which makes use of the authoritative agent's conceptualization in a given domain. As the expertise of agents change after a number of interactions, the consensus ontology that we build based on the agents' individual views evolves. We evaluate the consensus ontologies by using different heuristic measures of similarity based on the component ontologies",2006,0, 1849,QoS-Based Service Composition,"QoS has been one of the major challenges in Web services area. Though negotiated in the contract, service quality usually can not be guaranteed by providers. Therefore, the service composer is obligated to detect real quality status of component services. Local monitoring cannot fulfil this task. We propose a Probe-based architecture to address this problem. 
By running light weighted test cases, the Probe can collect accurate quality data to support runtime service composition and composition re-planning",2006,0, 1850,A fault tolerant VoIP implementation based on open standards,"This paper highlights the design and implementation aspects for making voice over IP softswitches more dependable on commercial of-the-shelf telecommunication platforms. As a proof-of-concept, the open source Asterisk Private Branch Exchange application was made fault tolerant by using high availability middleware based on the Service Availability Forum's application interface specifications (AIS). The prototype was implemented on Intel x86 architecture blade servers running Carrier Grade Linux in an active/hot-standby configuration. Primarily, the Asterisk application was re-engineered and adapted to use AIS defined interfaces and model. In case of application, component or node failures, the middleware detects and triggers the application failover to the hot-standby node. The Asterisk application on the hot-standby node detects it is now the active instance, so it retrieves the checkpoint data and immediately continues to service both existing and new call-sessions thus improving overall availability",2006,0, 1851,"Improving access to relevant data on faults, errors and failures in real systems","In order to be able to test the effectiveness and verify proposed techniques for enhanced availability based on field data from systems it is important to have reliability data of the components and the information necessary to characterize or model the system. This includes inter alia the type and number of components, their protection and dependency relations as well the automatic recovery mechanisms built into the system. An important benefit of making system models and logs available to the research community in a standard format is that it opens up the possibility for creating tools to assess and optimize deployed as well as hypothetical system configurations. Specialized tools for on-line and off-line analysis and classification of reliability data also become viable. Availability modeling tools could be benchmarked against actual data. Depending on the usefulness of such tools and the level of adoption of standard models and formats in the industry a market for reliability data analysis tools could emerge over time. These tools could be used during the design, deployment and operation phases of a system in order to predict or enhance the availability of the services it provides",2006,0, 1852,Rephrasing Rules for Off-The-Shelf SQL Database Servers,"We have reported previously (Gashi et al., 2004) results of a study with a sample of bug reports from four off-the-shelf SQL servers. We checked whether these bugs caused failures in more than one server. We found that very few bugs caused failures in two servers and none caused failures in more than two. This would suggest a fault-tolerant server built with diverse off-the-shelf servers would be a prudent choice for improving failure detection. To study other aspects of fault tolerance, namely failure diagnosis and state recovery, we have studied the """"data diversity"""" mechanism and we defined a number of SQL rephrasing rules. These rules transform a client sent statement to an additional logically equivalent statement, leading to more results being returned to an adjudicator. 
These rules therefore help to increase the probability of a correct response being returned to a client and maintain a correct state in the database",2006,0, 1853,Model-Based Testing of Community-Driven Open-Source GUI Applications,"Although the World-Wide-Web (WWW) has significantly enhanced open-source software (OSS) development, it has also created new challenges for quality assurance (QA), especially for OSS with a graphical-user interface (GUI) front-end. Distributed communities of developers, connected by the WWW, work concurrently on loosely-coupled parts of the OSS and the corresponding GUI code. Due to the unprecedented code churn rates enabled by the WWW, developers may not have time to determine whether their recent modifications have caused integration problems with the overall OSS; these problems can often be detected via GUI integration testing. However, the resource-intensive nature of GUI testing prevents the application of existing automated QA techniques used during conventional OSS evolution. In this paper we develop new process support for three nested techniques that leverage developer communities interconnected by the WWW to automate model-based testing of evolving GUI-based OSS. The """"innermost"""" technique (crash testing) operates on each code check-in of the GUI software and performs a quick and fully automatic integration test. The second technique {smoke testing) operates on each day's GUI build and performs functional """"reference testing"""" of the newly integrated version of the GUI. The third (outermost) technique (comprehensive GUI testing) conducts detailed integration testing of a major GUI release. An empirical study involving four popular OSS shows that (1) the overall approach is useful to detect severe faults in GUI-based OSS and (2) the nesting paradigm helps to target feedback and makes effective use of the WWW by implicitly distributing QA",2006,0, 1854,Improving Effectiveness of Automated Software Testing in the Absence of Specifications,"Program specifications can be valuable in improving the effectiveness of automated software testing in generating test inputs and checking test executions for correctness. Unfortunately, specifications are often absent from programs in practice. We present a framework for improving effectiveness of automated testing in the absence of specifications. The framework supports a set of related techniques, including redundant-test detection, non-redundant-test generation, test selection, test abstraction, and program-spectra comparison. The framework has been implemented and empirical results have shown that the developed techniques within the framework improve the effectiveness of automated testing by detecting high percentage of redundant tests among test inputs generated by existing tools, generating non-redundant test inputs to achieve high structural coverage, reducing inspection efforts for detecting problems in the program, and exposing behavioral differences during regression testing",2006,0, 1855,Towards Portable Metrics-based Models for Software Maintenance Problems,"The usage of software metrics for various purposes has become a hot research topic in academia and industry (e.g. detecting design patterns and bad smells, studying change-proneness, quality and maintainability, predicting faults). Most of these topics have one thing in common: they are all using some kind of metrics-based models to achieve their goal. 
Unfortunately, only few researchers have tested these models on unknown software systems so far. This paper tackles the question, which metrics are suitable for preparing portable models (which can be efficiently applied to unknown software systems). We have assessed several metrics on four large software systems and we found that the well-known RFC and WMC metrics differentiate the analyzed systems fairly well. Consequently, these metrics cannot be used to build portable models, while the CBO, LCOM and LOC metrics behave similarly on all systems, so they seem to be suitable for this purpose",2006,0, 1856,A Method for an Accurate Early Prediction of Faults in Modified Classes,"In this paper we suggest and evaluate a method for predicting fault densities in modified classes early in the development process, i.e., before the modifications are implemented. We start by establishing methods that according to literature are considered the best for predicting fault densities of modified classes. We find that these methods can not be used until the system is implemented. We suggest our own methods, which are based on the same concept as the methods suggested in the literature, with the difference that our methods are applicable before the coding has started. We evaluate our methods using three large telecommunication systems produced by Ericsson. We find that our methods provide predictions that are of similar quality to the predictions based on metrics available after the code is implemented. Our predictions are, however, available much earlier in the development process. Therefore, they enable better planning of efficient fault prevention and fault detection activities",2006,0, 1857,Predicting for MTBF Failure Data Series of Software Reliability by Genetic Programming Algorithm,"At present, most of software reliability models have to build on certain presuppositions about software fault process, which also brings on the incongruence of software reliability models application. To solve these problems and cast off traditional models' multi-subjective assumptions, this paper adopts genetic programming (GP) evolution algorithm to establishing software reliability model based on mean time between failures' (MTBF) time series. The evolution model of GP is then analyzed and appraised according to five characteristic criteria for some common-used software testing cases. Meanwhile, we also select some traditional probability models and the neural network model to compare with the new GP model separately. The result testifies that the new model evolved by GP has the higher prediction precision and better applicability, which can improve the applicable inconsistency of software reliability modeling to some extent",2006,0, 1858,Fault Detection and Analysis of Control Software for a Mobile Robot,"In certain circumstances mobile robots are unreachable from human being, for example Mars exploration rover. So robots should detect and handle faults of control software themselves. This paper is intended to detect faults of control software by computers. Support vector machine (SVM) based classification is applied to fault diagnostics of control software for a mobile robot. Both training and testing data are sampled by simulating several fault software strategies and recording the operation parameters of the robot. 
The correct classification percentages for different situations are discussed",2006,0, 1859,A Fast and Reliable Segmentation Method Based on Active Contour Model and Edgeflow,"In this paper, we proposed a new method to segment a given image, based on curve evolution and edgeflow techniques. The approach automatically detect boundaries, and change of topology in terms of the edgeflow fields. We present the numerical implementation and the experimental results based on the semi-implicit method. Experimental results show that one can obtains a high quality edge contour",2006,0, 1860,Software Project Management Using Decision Networks,"The Bayesian networks support resource allocation in software project and also help in analyzing trade-offs among resources. The model predicts the probability distribution of every variable given incomplete data. Even though the Bayesian networks conveniently facilitate scenario-based analysis, they do not support finding an optimal solution in multi-criteria decision making. This paper proposes extending the Bayesian networks into the decision networks to optimize an organizational target and to handle the multi-criteria environment of software project management. Specifically, the decision networks are used to find an optimal set of software activities under constraints of software cost and quality. The preliminary results demonstrate that the Bayesian networks can be easily extended into the decision networks, which then allow for optimization. The proposed methodology provides a flexible process for utilizing the encoded knowledge within the Bayesian networks to facilitate decision making, which could be applicable in other domains of problems",2006,0, 1861,Video Stream Annotations for Energy Trade-offs in Multimedia Applications,"Recent applications for distributed mobile devices, including multimedia video/audio streaming, typically process streams of incoming data in a regular, predictable way. The behavior of these applications during runtime can be accurately predicted most of the time by analyzing the data to be processed and annotating the stream with the information collected. We introduce an annotation-based approach to power-quality trade-offs and demonstrate its application on CPU frequency scaling during video decoding, for an improved user experience on portable devices. Our experiments show that up to 50% of the power consumed by the CPU during video decoding can be saved with this approach",2006,0, 1862,Disaster Hardening for Software Systems,"Summary form only given. We can treat the software system development as a 'disaster-prone' system. We consider a crash as an example of a disaster. We consider the minimum infrastructural requirements based on the application, and the operational and user environments. We review the strategies of disaster awareness, anticipation, proactive pre-emption, and precaution to prevent and/or mitigate the effects of the major or minor catastrophes. We survey methods used in software quality improvements and show their applicability to activities in disaster mitigation and control. We illustrate these with examples using the CMM, sigma six and Taguchi-based ideas which show that mutual exchange of concepts enrich the important fields of software development and disaster control and mitigation. We conclude by identifying the important overlap between the two areas of software development and disaster mitigation. 
The software change management and maintenance methods can greatly benefit by ideas from disaster technology. The software developed under damage and disaster control techniques can be robust, resilient and long lasting",2006,0, 1863,Metrics-Based Software Reliability Models Using Non-homogeneous Poisson Processes,"The traditional software reliability models aim to describe the temporal behavior of software fault-detection processes with only the fault data, but fail to incorporate some significant test-metrics data observed in software testing. In this paper we develop a useful modeling framework to assess the quantitative software reliability with time-dependent covariate as well as software-fault data. The basic ideas employed here are to introduce the discrete proportional hazard model on a cumulative Bernoulli trial process, and to represent a generalized fault-detection processes having time-dependent covariate structure. The resulting stochastic models are regarded as combinations of the proportional hazard models and the familiar non-homogeneous Poisson processes. We compare these metrics-based software reliability models with some typical non-homogeneous Poisson process models, and evaluate quantitatively both goodness-of-fit and predictive performances from the viewpoint of information criteria. As an important result, the accuracy on reliability assessment strongly depends on the kind of software metrics used for analysis and can be improved by incorporating time-dependent metrics data in modeling",2006,0, 1864,Metamodel-based Test Generation for Model Transformations: an Algorithm and a Tool,"In a model-driven development context (MDE), model transformations allow memorizing and reusing design know-how, and thus automate parts of the design and refinement steps of a software development process. A model transformation program is a specific program, in the sense it manipulates models as main parameters. Each model must be an instance of a """"metamodel"""", a metamodel being the specification of a set of models. Programming a model transformation is a difficult and error-prone task, since the manipulated data are clearly complex. In this paper, we focus on generating input test data (called test models) for model transformations. We present an algorithm to automatically build test models from a metamodel",2006,0, 1865,"Studying the Characteristics of a """"Good"""" GUI Test Suite","The widespread deployment of graphical-user interfaces (GUIs) has increased the overall complexity of testing. A GUI test designer needs to perform the daunting task of adequately testing the GUI, which typically has very large input interaction spaces, while considering tradeoffs between GUI test suite characteristics such as the number of test cases (each modeled as a sequence of events), their lengths, and the event composition of each test case. There are no published empirical studies on GUI testing that a GUI test designer may reference to make decisions about these characteristics. Consequently, in practice, very few GUI testers know how to design their test suites. This paper takes the first step towards assisting in GUI test design by presenting an empirical study that evaluates the effect of these characteristics on testing cost and fault detection effectiveness. The results show that two factors significantly effect the fault-detection effectiveness of a test suite: (1) the diversity of states in which an event executes and (2) the event coverage of the suite. 
Test designers need to improve the diversity of states in which each event executes by developing a large number of short test cases to detect the majority of """"shallow"""" faults, which are artifacts of modern GUI design. Additional resources should be used to develop a small number of long test cases to detect a small number of """"deep"""" faults",2006,0, 1866,"Tail-Splitting"""" to Predict Failing Software Modules - with a Case Study on an Operating Systems Product","Tail-splitting"""" is a new technique to identify defect prone modules by enhancing the focus of the Pareto distribution by a development process factor. The simple yet powerful influence of a varying tail membership as a function of development process phases is captured by the tail-split-string which tags each module. The case studies on an operating systems product demonstrate that the tail-split-string identifies a small set of modules with a high probability of field failure. The tail-boundary in the algorithm provides for a natural tuning parameter to control the size of the identified set to suit the resources available for rework. Release managers have found that the method is particularly useful to sift modules, with low false positive, for late stage rework",2006,0, 1867,"Adequacy, Accuracy, Scalability, and Uncertainty of Architecture-based Software Reliability: Lessons Learned from Large Empirical Case Studies","Our earlier research work on applying architecture-based software reliability models on a large scale case study allowed us to test how and when they work, to understand their limitations, and to outline the issues that need future research. In this paper we first present an additional case study which confirms our earlier findings. Then, we present uncertainty analysis of architecture-based software reliability for both case studies. The results show that Monte Carlo method scales better than the method of moments. The sensitivity analysis based on Monte Carlo method shows that (1) small number of parameters contribute to the most of the variation in system reliability and (2) given an operational profile, components' reliabilities have more significant impact on system reliability than transition probabilities. Finally, we summarize the lessons learned from conducting large scale empirical case studies for the purpose of architecture-based reliability assessment and uncertainty analysis",2006,0, 1868,Assessing the Relationship between Software Assertions and Faults: An Empirical Investigation,"The use of assertions in software development is thought to help produce quality software. Unfortunately, there is scant empirical evidence in commercial software systems for this argument to date. This paper presents an empirical case study of two commercial software components at Microsoft Corporation. The developers of these components systematically employed assertions, which allowed us to investigate the relationship between software assertions and code quality. We also compare the efficacy of assertions against that of popular bug finding techniques like source code static analysis tools. We observe from our case study that with an increase in the assertion density in a file there is a statistically significant decrease in fault density. 
Further, the usage of software assertions in these components found a large percentage of the faults in the bug database",2006,0, 1869,"Tool-Supported Verification of Contingency Software Design in Evolving, Autonomous Systems","Advances in software autonomy can support system robustness to a broader range of operational anomalies, called contingencies, than ever before. Contingency management includes, but goes beyond, traditional fault protection. Increased autonomy to achieve contingency management brings with it the challenge of how to verify that the software can detect and diagnose contingencies when they occur. The approach used in this work to investigate the verification was two-fold: (1) to integrate in a single model the representation of the contingencies and of the data signals and software monitors required to identify those contingencies, and (2) to use tool-supported verification of the diagnostics design to identify gaps in coverage of the contingencies. Results presented here indicate that tool-supported verification of the adequacy and correct behavior of such diagnostic software for contingency management can improve on-going contingency analysis, thereby reducing the risk that change has introduced gaps in the contingency software",2006,0, 1870,On the Effect of Fault Removal in Software Testing - Bayesian Reliability Estimation Approach,"In this paper, we propose some reliability estimation methods in software testing. The proposed methods are based on the familiar Bayesian statistics, and can be characterized by using test outcomes in input domain models. It is shown that the resulting approaches are capable of estimating software reliability in the case where the detected software faults are removed. In numerical examples, we compare the proposed methods with the existing method, and investigate the effect of fault removal on the reliability estimation in software testing. We show that the proposed methods can give more accurate estimates of software reliability",2006,0, 1871,Building Phase-Type Software Reliability Models,"This paper presents a unified framework for software reliability modeling with non-homogeneous Poisson processes, where each software fault-detection time obeys the phase-type distribution and the initial number of inherent faults is given by a Poisson distributed random variable. However, it is worth noting that the resulting software reliability models, called phase-type software reliability models, generalize the existing models but may involve a number of model parameters in the phase-type software reliability model, so that the usual maximum likelihood estimation based on the Newton's method or quasi-Newton's method does not often function well. In this paper, we develop EM (expectation-maximization) algorithms for the phase-type software reliability models with two types of fault data: fault-detection time data and grouped data with arbitrary time intervals. In numerical examples, we compare the EM algorithms with the quasi-Newton's method and illustrate the effectiveness on our unified model and parameter estimation method",2006,0, 1872,Softgoal Traceability Patterns,"Goal oriented methods help software engineers to model high-level systemic goals, propose and evaluate architectural solutions, and detect and resolve conflicts that occur. This paper describes a new technique, known as softgoal traceability patterns, for enabling reusable class mechanisms such as design patterns to be applied within a goal-oriented framework. 
Softgoal traceability patterns increase the reliability of a design in respect to its goals through the automated generation of design elements and the establishment of bidirectional traces between goals and design. These traces are used to monitor the integrity of the design in respect to architectural quality goals, and to support impact analysis when design changes are proposed. Softgoal traceability patterns are described using the well-known Observer pattern and then expanded with a more complex pattern that incorporates authentication",2006,0, 1873,Improved Pattern Matching to Find DNA Patterns,"The process of finding given patterns in DNA sequences is widely used in modern biological sciences. This paper shows an algorithmic improvement for exact pattern matching introducing a new heuristic. For implementation the author created an application that uses three well-known heuristics to ensure the O(n) worst case time, the O(n log sigma (m)/m) average case time and the O(n/m) best case time of searching for an m length pattern in an n length text that use a sigma letter alphabet. This application served as a testbed for the new H4 heuristic. The novelty is in optimization of the direction of text window movement in the preprocessing phase. The idea takes advantages of RAM based searching: usually all the text resides in today's gigabyte memory so the opposite direction of searching window moving requires the same time as the usual. A new function predicts the better moving direction in preprocessing time, based on the unsymmetrical property of pattern. Tests proved that this heuristic may result in fewer jumps and tested characters in the search phase of pattern matching",2006,0, 1874,"The Experimental Paradigm in Reverse Engineering: Role, Challenges, and Limitations","In many areas of software engineering, empirical studies are playing an increasingly important role. This stems from the fact that software technologies are often based on heuristics and are moreover expected to be used in processes where human intervention is paramount. As a result, not only it is important to assess their cost-effectiveness under conditions that are as realistic and representative as possible, but we must also understand the conditions under which they are more suitable and applicable. There exists a wealth of empirical methods aimed at maximizing the validity of results obtained through empirical studies. However, in the case of reverse engineering, as for other domains of investigation, researchers and practitioners are faced with specific constraints and challenges. This is the focus of this keynote address and what the current paper attempts to clarify",2006,0, 1875,How Programs Represent Reality (and how they don't),"Programming is modeling the reality. Most of the times, the mapping between source code and the real world concepts are captured implicitly in the names of identifiers. Making these mappings explicit enables us to regard programs from a conceptual perspective and thereby to detect semantic defects such as (logical) redundancies in the implementation of concepts and improper naming of program entities. We present real world examples of these problems found in the Java standard library and establish a formal framework that allows their concise classification. Based on this framework, we present our method for recovering the mappings between the code and the real world concepts expressed as ontologies. 
These explicit mappings enable semi-automatic identification of the discussed defect classes",2006,0, 1876,Refactoring Detection based on UMLDiff Change-Facts Queries,"Refactoring is an important activity in the evolutionary development of object-oriented software systems. Several IDEs today support the automated application of some refactorings; at the same time, there is substantial on-going research aimed at developing support for deciding when and how software should be refactored and for estimating the effect of the refactoring on the quality requirements of the software. On the other hand, understanding the refactorings in the evolutionary history of a software system is essential in understanding its design rationale. Yet, only very limited support exists for detecting refactorings. In this paper, we present our approach for detecting refactorings by analyzing the system evolution at the design level. We evaluate our method with case studies, examining two realistic examples of object-oriented software",2006,0, 1877,Quality Assessment of Enterprise Software Systems,"In the last years, as object-oriented software systems became more and more complex, the need of having tools that help us to understand and to assess the quality of their design has increased significantly. This applies also to enterprise applications, a novel category of software systems. Unfortunately, the existing techniques for design's understanding and quality assessment of object-oriented systems are not sufficient and sometimes not suitable when applied on enterprise applications. In the current Ph.D. we propose a new approach which increases the level of understanding and the accuracy assessment of the design of enterprise software systems",2006,0, 1878,An Approach for Evaluating Trust in IT Infrastructure,Trustworthiness of an IT infrastructure can be justified using the concept of trust case which denotes a complete and explicit structure encompassing all the evidence and argumentation supporting trust within a given context. A trust case is developed by making an explicit set of claims about the system of interest and showing how the claims are interrelated and supported by evidence. The approach uses Dempster-Shafer belief function framework to quantify the trust case. We demonstrate how recommendations issued by different stakeholders enable stakeholder-specific views of the trust case and reasoning about the level of trust in a given IT infrastructure,2006,0, 1879,Introduction to the Dependability Modeling of Computer Systems,"Computer systems and networks are considered as a union of all resources, i.e. hardware, software and people (users, administrators and managers), essential for the realization of predicted tasks. The system dependability is defined as a generalization of performability and reliability, combining the notions of both these terms. The functional-reliability models are based on the observation that only a subset of the system resources is involved in a task execution and only inefficiencies of these resources may influence the correctness of task realization. The systems are working in a real environment, which is often hostile: it may be a source of threats, such as security intrusions, faulty or modified software and human errors. 
The set of analyzed events is a sum of hardware failures and malfunctions, software faults, human errors, intrusions (intended and addressed threats), and viruses (unaddressed threats that are broadcast in the system and in its environment)",2006,0, 1880,An Empirical Study on a Specification-Based Program Review Approach,"Program review is an effective technique for detecting faults in software systems by reading and analyzing program code. However, challenges still remain in providing systematic and rigorous review techniques. We have recently developed a rigorous review approach and a software tool that provide reviewers with support in analyzing whether a program accurately implements the functions and properties defined in its specification. In this paper, we describe an empirical study of the application of our review approach and tool to a software system for automated teller machines (ATMs). We also discuss the effectiveness of the review approach, as well as some weaknesses, based on the results of our study, and suggest potential solutions to the problems encountered during the study",2006,0, 1881,Measurement Techniques in On-Demand Overlays for Reliable Enterprise IP Telephony,"Maintaining good quality of service for real-time applications like IP Telephony requires quick detection and reaction to network impairments. In this paper, we propose and study novel measurement techniques in ORBIT, which is a simple, easily deployable architecture that uses single-hop overlays implemented with intelligent endpoints and independent relays. The measurement techniques provide rapid detection and recovery of IP Telephony during periods of network trouble. We study our techniques via detailed simulations of several multi-site enterprise topologies of varying sizes and three typical fault models. We show that our proposed techniques can detect network impairments rapidly and rescue IP Telephony calls in sub-second intervals. We observed that all impacted calls were rescued with only a few relays in the network and the run-time overhead was low. Furthermore, the relay sites needed to be provisioned with minimal additional bandwidth to support the redirected calls.",2006,0, 1882,Modeling and Performance Analysis of Beyond 3G Integrated Wireless Networks,"Next-generation wireless networking is evolving towards a multi-service heterogeneous paradigm that converges different pervasive access technologies and provides a large set of novel revenue generating applications. Hence, system complexity increases due to its embedded heterogeneity, which can not be accounted by the existing modeling and performance evaluation techniques. Consequently, the development of new modeling approaches becomes as a crucial requirement for proper system design and performance evaluation. This paper presents a novel mobility model for a two-tier integrated wireless system using a new modeling approach that accommodates the aforementioned complexity. Additionally, a novel session model is developed as an adapted version of the proposed mobility model. These models use phase-type distributions that are known to approximate any generic probability laws. Using the proposed session model, a novel generic analytical framework is developed to obtain several salient performance metrics such as network utilization times and handoff rates. 
Simulation and analysis results prove the proposed model validity and demonstrate the accuracy of the novel modeling approach when compared with traditional modeling techniques.",2006,0, 1883,Performance Analysis and Enhancement for Priority Based IEEE 802.11 Network,"In this paper, a novel non-saturation analytical model for priority based IEEE 802.11 network is introduced. Unlike previous work that is focused on MAC backoff for saturation stations, this model uses Markov and M/ M/1/K theories to predict MAC and queuing service time and loss. Then a performance prediction based enhancement scheme is proposed. By dynamic tuning of protocol options, this proposed scheme limits end-to-end delay and loss rate of real-time traffic and maximizes throughput. Consequently, call admission control is taken to protect existing traffics when the channel is saturated. Simulations validate this model and the comparison with IEEE802.11e EDCA shows that our mechanism can guarantee quality of service more efficiently.",2006,0, 1884,"A Prognostic and Warning System for Power Electronic Modules in Electric, Hybrid, and Fuel Cell Vehicles","Reliability of power electronics modules is of paramount importance for the commercial success of various types of electric vehicles. In this paper, we study the technical feasibility of detecting early symptoms and warning signs of power module degradation due to thermomechanical stress and fatigue, and developing a prognostic system that monitors the state of health of the power modules in electric, hybrid, and fuel cell vehicles. A signature degradation trace of the on-voltage of IGBT modules was observed from accelerated power cycling test. This on-voltage """"anomaly"""" can be attributed to sequential events of solder joint degradation followed by wirebond lift-off mechanisms. A quasi real-time IGBT failure prognostic algorithm based on monitoring the abnormal VCEsat variation at specific currents and temperatures is developed. The algorithm was verified using extensive SIMULINK modeling. The prognostic system can be implemented cost-effectively in existing vehicle hardware/software architectures",2006,0, 1885,Research on Test-platform and Condition Monitoring Method for AUV,"To improve the reliability and intelligence of autonomous underwater vehicle, an AUV test-platform named """"Beaver"""" is developed. The hardware and software system structure are introduced in detail. By analyzing the performance and the fault mechanism of thruster, it establishes the condition monitoring system for thrusters and sensors based on the double closed-loop PID controller, which includes the performance model of thruster based on RBF neural network and forward model of AUV based on improved dynamic recursive Elman neural network, and it probes into the method of combine fault detection. The results of experiment indicate that the """"Beaver"""" can achieve the basic motion control and meet the requirement in test, and the combine fault detection method by parallel connected performance model and forward model can detect the typical fault of thrusters and sensors, which certificates the reliability and the effectiveness of condition monitoring system",2006,0, 1886,Fault Tolerant PID Control based on Software Redundancy for Nonlinear Uncertain Processes,"The fault diagnosis and close-loop tolerant PID control for nonlinear multi-variables system under multiple sensor failures are investigated in the paper. 
A complete FDT architecture based on software redundancy is proposed to efficiently handle the fault diagnosis and the accommodation for multiple sensor failures in online situations. The methods colligates the adaptive threshold technique with the envelope and weighting moving average residual to detect multi-type sensor fault, use fault propagation technique, variable structure analyzing technique and neural network techniques to online reconstruct sensor signal, and achieves the tolerant PID control through recombining feedback loop of PID controller. The three-tank with multiple sensor fault concurrence is simulated, the simulating result shows that the fault detection and tolerant control strategy has stronger robustness and tolerant fault ability",2006,0, 1887,Validating Requirements Engineering Process Improvements - A Case Study,"The quality of the Requirements Engineering (RE) process plays a critical role in successfully developing software systems. Often, in software organizations, RE processes are assessed and improvements are applied to overcome their deficiency. However, such improvements may not yield desired results for two reasons. First, the assessed deficiency may be inaccurate because of ambiguities in measurement. Second, the improvements are not validated to ascertain their correctness to overcome the process deficiency. Therefore, a Requirements Engineering Process Improvement (REPI) exercise may fail to establish its purpose. A major shortfall in validating RE processes is the difficulty in representing process parameters in some cognitive form. We address this issue with an REPI framework that has both measurement and visual validation properties. The REPI validation method presented is empirically tested based on a case study in a large software organization. The results are promising towards considering this REPI validation method in practice by organizations.",2006,0, 1888,Prioritizing Software Inspection Results using Static Profiling,"Static software checking tools are useful as an additional automated software inspection step that can easily be integrated in the development cycle and assist in creating secure, reliable and high quality code. However, an often quoted disadvantage of these tools is that they generate an overly large number of warnings, including many false positives due to the approximate analysis techniques. This information overload effectively limits their usefulness. In this paper we present ELAN, a technique that helps the user prioritize the information generated by a software inspection tool, based on a demand-driven computation of the likelihood that execution reaches the locations for which warnings are reported. This analysis is orthogonal to other prioritization techniques known from literature, such as severity levels and statistical analysis to reduce false positives. We evaluate feasibility of our technique using a number of case studies and assess the quality of our predictions by comparing them to actual values obtained by dynamic profiling.",2006,0, 1889,Modeling and Verifying Configuration in Service Deployment,"When deploying a service, software systems required by the service should be configured correctly. However, since software has tens or hundreds of configuration parameters and the parameters of different software may have simple or complex, explicit or implicit dependencies and constraints, to configure multiple software systems in service deployment becomes a difficult, error-prone and time-consuming task. 
In this paper, we propose a model based approach to automated configuration. Motivated by two real cases found in IBM software products and solutions, our approach has three contributions. Firstly, a meta-model of configuration, called software resource configuration model, is defined for integrating configuration parameters and experiences of different software into a global model. Secondly, a set of configuration rules are designed for verifying incorrect configuration that violates constraints specified by service deployment engineers. Thirdly, a supporting tool, called Comfort, is implemented and evaluated by the motivating cases",2006,0, 1890,Modeling Request Routing in Web Applications,"For Web applications, determining how requests from a Web page are routed through server components can be time-consuming and error-prone due to the complex set of rules and mechanisms used in a platform such as J2EE. We define request routing to be the possible sequences of server-side components that handle requests. Many maintenance tasks require the developer to understand the request routing, so this complexity increases maintenance costs. However, viewing this problem at the architecture level provides some insight. The request routing in these Web applications is an example of a pipeline architectural pattern: each request is processed by a sequence of components that form a pipeline. Communication between pipeline stages is event-based, which increases flexibility but obscures the pipeline structure because communication is indirect. Our approach for improving the maintainability of J2EE Web applications is to provide a model that exposes this architectural information. We use Z to formally specify request routing models and analysis operations that can be performed on them, then provide tools to extract request routing information from an application's source code, create the request routing model, and analyze it automatically. We have applied this approach to a number of existing applications up to 34K LOC, showing improvement via typical maintenance scenarios. Since this particular combination of patterns is not unique to Web applications, a model such as our request routing model could provide similar benefits for these systems",2006,0, 1891,Byzantine Fault Tolerance in MDS of Grid System,"Fault tolerance is a challenge problem in reliable distributed system. In grid, detecting and correcting fault techniques is used in fault tolerance of MDS system. These techniques are limited in dealing with the benign faults on servers and the Internet. But they will not work when malicious faults on servers or software errors occur. In this paper, a secure aware MDS system, which can tolerate malicious faults occurred on servers, is proposed. By using a new Byzantine-fault-tolerant algorithm, the proposed MDS system guarantees safety and liveness properties under the condition that no more than f replicas are faulty if it consists of 3f+1 tightly coupled servers, and it maintains the seamless interfaces to application programs as the usual formal MDS system does",2006,0, 1892,An Efficient Test Pattern Selection Method for Improving Defect Coverage with Reduced Test Data Volume and Test Application Time,"Testing using n-detection test sets, in which a fault is detected by n (n > 1) input patterns, is being increasingly advocated to increase defect coverage. However, the data volume for an n-detection test set is often too large, resulting in high testing time and tester memory requirements. 
Test set selection is necessary to ensure that the most effective patterns are chosen from large test sets in a high-volume production testing environment. Test selection is also useful in a time-constrained wafer-sort environment. The authors use a probabilistic fault model and the theory of output deviations for test set selection - the metric of output deviation is used to rank candidate test patterns without resorting to fault grading. To demonstrate the quality of the selected patterns, experimental results were presented for resistive bridging faults and non-feedback zero-resistance bridging faults in the ISCAS benchmark circuits. Our results show that for the same test length, patterns selected on the basis of output deviations are more effective than patterns selected using several other methods",2006,0, 1893,Design for Testability of Software-Based Self-Test for Processors,"In this paper, the authors propose a design for testability method for test programs of software-based self-test using test program templates. Software-based self-test using templates has a problem of error masking where some faults detected in a test generation for a module are not detected by the test program synthesized from the test. The proposed method achieves 100% template level fault efficiency in a sense that the proposed method completely resolves the problem of error masking. Moreover, the proposed method adds only observation points to the original design, and it enables at-speed testing and does not induce delay overhead",2006,0, 1894,Automated Caching of Behavioral Patterns for Efficient Run-Time Monitoring,"Run-time monitoring is a powerful approach for dynamically detecting faults or malicious activity of software systems. However, there are often two obstacles to the implementation of this approach in practice: (1) that developing correct and/or faulty behavioral patterns can be a difficult, labor-intensive process, and (2) that use of such pattern-monitoring must provide rapid turn-around or response time. We present a novel data structure, called extended action graph, and associated algorithms to overcome these drawbacks. At its core, our technique relies on effectively identifying and caching specifications from (correct/faulty) patterns learned via machine-learning algorithm. We describe the design and implementation of our technique and show its practical applicability in the domain of security monitoring of sendmail software",2006,0, 1895,Combined software and hardware techniques for the design of reliable IP processors,"In the recent years both software and hardware techniques have been adopted to carry out reliable designs, aimed at autonomously detecting the occurrence of faults, to allow discarding erroneous data and possibly performing the recovery of the system. The aim of this paper is the introduction of a combined use of software and hardware approaches to achieve complete fault coverage in generic IP processors, with respect to SEU faults. Software techniques are preferably adopted to reduce the necessity and costs of modifying the processor architecture; since a complete fault coverage cannot be achieved, partial hardware redundancy techniques are then introduced to deal with the remaining, not covered, faults. 
The paper presents the methodological approach adopted to achieve the complete fault coverage, the proposed resulting architecture, and the experimental results gathered from the fault injection analysis campaign",2006,0, 1896,Online hardening of programs against SEUs and SETs,"Processor cores embedded in systems-on-a-chip (SoCs) are often deployed in critical computations, and when affected by faults they may produce dramatic effects. When hardware hardening is not cost-effective, software implemented hardware fault tolerance (SIHFT) can be a solution to increase SoCs' dependability. However, SIHFT increases the time for running the hardened application, and the memory occupation. In this paper we propose a method that eliminates the memory overhead, using a new approach to instruction hardening and control flow checking during the execution of the application, without the need for introducing any change in its source code. The proposed method is also non-intrusive, since it does not require any modification in the main processor's architecture. The method is suitable for hardening SoCs against transient faults and also for detecting permanent faults",2006,0, 1897,A Software-Based Error Detection Technique Using Encoded Signatures,"In this paper, a software-based control flow checking technique called SWTES (software-based error detection technique using encoded signatures) is presented and evaluated. This technique is processor independent and can be applied to any kind of processors and microcontrollers. To implement this technique, the program is partitioned to a set of blocks and the encoded signatures are assigned during the compile time. In the run-time, the signatures are compared with the expected ones by a monitoring routine. The proposed technique is experimentally evaluated on an ATMEL MCS51 microcontroller using software implemented fault injection (SWIFI). The results show that this technique detects about 90% of the injected errors. The memory overhead is about 135% on average, and the performance overhead varies between 11% and 191% depending on the workload used",2006,0, 1898,A Fault Tolerant and Multi-Paradigm Grid Architecture for Time Constrained Problems. Application to Option Pricing in Finance.,"This paper introduces a Grid software architecture offering fault tolerance, dynamic and aggressive load balancing and two complementary parallel programming paradigms. Experiments with financial applications on a real multi-site Grid assess this solution. This architecture has been designed to run industrial and financial applications, that are frequently time constrained and CPU consuming, feature both tightly and loosely coupled parallelism requiring generic programming paradigm, and adopt client-server business architecture.",2006,0, 1899,Worqbench: An Integrated Framework for e-Science Application Development,"With the proliferation of Grid computing, potentially vast computational resources are available for solving complex problems in science and engineering. However, writing, deploying, and testing e-Science applications over highly heterogeneous and distributed infrastructure are complex and error prone. Further complicating matters, programmers may need to target a variety of different Grid middleware packages. This paper presents the design and implementation of Worqbench, an integrated, modular and middleware neutral framework for e- Science application development on the Grid. 
Worqbench can be incorporated into a number of existing Integrated Development Environments, further leveraging the advantages of such systems. We illustrate one such implementation in the Eclipse environment.",2006,0, 1900,A Model Driven Exception Management Framework for Developing Reliable Software Systems,"Programming languages provide exception handling mechanisms to structure fault tolerant activities into software systems. However, the use of exceptions at this low level of abstraction can be error-prone and complex leading to new programming errors. In this paper, we present a model-driven framework to support the iterative development of reliable software systems. This framework is comprised of UML-based modeling notations and a transformation engine that supports the automated generation of exception management features for a software system. It leverages domain specific exception modeling languages, patterns, modeling tools and framework libraries. The feasibility of this approach is demonstrated through the development of a case study business application, known as Project Tracker",2006,0, 1901,Requirements Traceability and Transformation Conformance in Model-Driven Development,"The variety of design artefacts (models) produced in a model-driven design process results in an intricate relationship between requirements and the various models. This paper proposes a methodological framework that simplifies management of this relationship. This framework is a basis for tracing requirements, assessing the quality of model transformation specifications, metamodels, models and realizations. We propose a notion of conformance between application models which reduces the effort needed for assessment activities. We discuss how this notion of conformance can be integrated with model transformations",2006,0, 1902,Formal proofs for QoS-oriented Transformations,"The methodology of Model Driven Architecture (MDA) has been a popular area of research in recent years. To cater for the increasing awareness of the importance in software Quality of Service (QoS), some have suggested MDA as a solution. However, unlike functional properties, QoS displayed in a development cycle is prone to changes after deployment due to a non-constant runtime environment and usage. A possible solution is to provide a monitoring framework to ensure that QoS violations are always detectable. However, as with any MDA based approach it is dangerous to simply assume that transformations will do exactly as specified. This paper describes an approach for producing formal proofs for our particular QoS-oriented transformational system[1], based on the proof-as-programs methodology.",2006,0, 1903,Extending an SSI Cluster for Resource Discovery in Grid Computing,"Grid technologies enable large-scale sharing of resources within formal or informal consortia of individuals and/or virtual organizations. In these settings, the discovery, characterization, and monitoring of resources, services, and computations can be challenging due to the considerable diversity, large numbers, dynamic behavior, and geographical distribution of the entities in which a user might be interested. Hence, information services are a vital part of any grid software infrastructure, providing fundamental mechanisms for discovery and monitoring, and thus for planning and adapting application behavior. This paper proposes a resource discovery system for grid computing with fault-tolerant capabilities starting from an SSI clustering operating system. 
The proposed system uses dynamic leader-determination and registration mechanisms to automatically recover from nodes and network failures. The system is centralized and uses dynamic (or soft-state) registration to detect and recover from failures. Provisional or backup leader determination provides tolerance and recovery in the event of the leader node failing. The system was tested against a control network modeled after existing grid computing resource discovery components, such as Globus monitoring and discovery system (MDS). In various failure scenarios, the proposed system showed better resilience and performance than the control system",2006,0, 1904,Data Processing Workflow Automation in Grid Architecture,"Because of the poor performance and the expensive license cost, traditional relational database management systems are no longer good choices for processing huge amount of data. Grid computing is replacing the place of RDBMS in data processing. Traditionally a workflow is generated by data experts, which is time consuming, labor intensive and error prone. More over, it becomes the bottleneck of the overall performance of data processing in the grid architecture. This paper proposes a multi-layer workflow automation strategy that can automatically generate a workflow from a business language. A prototype has been implemented and a simulation has been designed",2006,0, 1905,GPFlow: An Intuitive Environment for Web Based Scientific Workflow,"Increasingly scientists are using collections of software tools in their research. These tools are typically used in concert, often necessitating laborious and error prone manual data reformatting and transfer. We present an intuitive workflow environment to support scientists with their research. The workflow, GPFlow, wraps legacy tools, presenting a high level, interactive Web based frontend to scientists. The workflow backend is realized by a commercial grade workflow engine (BizTalk). The workflow model is inspired by spreadsheets and is novel in its support for an intuitive method of interaction as required by many scientists, for example, bioinformaticians. We apply GPFlow to two bioinformatics experiments and demonstrate its flexibility and simplicity",2006,0, 1906,Configuring Processes and Business Documents - An Integrated Approach to Enterprise Systems Collaboration,"Enterprise systems (ES) provide standardized, off-the-shelf support for operations and management within organizations. With the advent of ES based on a service-oriented architecture (SOA) and an increasing demand of IT-supported interorganizational collaboration, implementation projects face paradigmatically new challenges. The configuration of ES is costly and error-prone. Dependencies between business processes and business documents are hardly explicit and foster component proliferation instead of reuse. Configurative modeling can support the problem in two ways: First, conceptual modeling abstracts from technical details and provides more intuitive access and overview. Second, configuration allows the projection of variants from master models providing manageable variants with controlled flexibility. 
We aim at tackling the problem by proposing an integrated model-based framework for configuring both, processes and business documents, on an equal basis; as together, they constitute the core business components of an ES",2006,0, 1907,On the Assessment of the Mean Failure Frequency of Software in Late Testing,"We propose an approach for assessing the mean failure frequency of a program, based on the statistical test of hypotheses. The approach can be used to establish stopping rules and evaluate the quality of a program based on its mean failure frequency during the late testing phases. Our proposal shows how to set and satisfy conservative bounds for the minimum number of test executions that are needed to achieve a target mean failure frequency with a specified level of statistical significance, based on the quality goal of testing and the specific test execution profile chosen. We relax a few assumptions of the literature, so our approach can be used in a larger set of real-life cases.",2006,0, 1908,Estimating Software Quality with Advanced Data Mining Techniques,"Current software quality estimation models often involve the use of supervised learning methods for building a software fault prediction models. In such models, dependent variable usually represents a software quality measurement indicating the quality of a module by risk-basked class membership, or the number of faults. Independent variables include various software metrics as McCabe, Error Count, Halstead, Line of Code, etc... In this paper we present the use of advanced tool for data mining called Multimethod on the case of building software fault prediction model. Multimethod combines different aspects of supervised learning methods in dynamical environment and therefore can improve accuracy of generated prediction model. We demonstrate the use Multimethod tool on the real data from the Metrics Data Project Data (MDP) Repository. Our preliminary empirical results show promising potentials of this approach in predicting software quality in a software measurement and quality dataset.",2006,0, 1909,Application of Computational Redundancy in Dangling Pointers Detection,"Many programmers manipulate dynamic data improperly, which may produce dynamic memory problems, such as dangling pointer. Dangling pointers can occur when a function returns a pointer to an automatic variable, or when trying to access a deleted object. The existence of dangling pointers causes the programs to behave incorrectly. Dangling pointers are common defect that are easy to commit, but are difficult to discover. In this paper we propose a dynamic approach that detects dangling pointers in computer programs. Redundant computation is an execution of a program statement(s) that does not contribute to the program output. The notion of redundant computation is introduced as a potential indicator of defects in programs. We investigate the application of redundant computation in dangling pointers detection. The results of the initial experiment show that, the redundant computation detection can help the debuggers to localize the source(s) of the dangling pointers. During the experiment, we find that, our approach may be capable of detecting other types of dynamic memory problems such as memory leaks and inaccessible objects, which we plan to investigate in our future research.",2006,0, 1910,Using RDL to Facilitate Customization of Variability Points,"Reusable software assets have variability points, which are locations for customization. 
Reusable asset consumers, i.e. reusers, must become knowledgeable about the techniques used to implement variability, and the activities required to customize the variability points. Moreover reuse activities must be constrained to a specific sequence to avoid a lengthy error-prone reuse process and inconsistencies in the final application. Specifying reuse activities required to customize variability points using a programming language is a valuable contribution to the reuse process, given the way reuse is clearly exposed. Moreover reuse scripts described in such a language can be input to a reuse environment for automatic or semi-automatic assistance to the reuse process. In this work we discuss software reuse activities and illustrate how RDL (Reuse Description Language) can be used to facilitate variability point's customization.",2006,0, 1911,A Junction Tree Propagation Algorithm for Bayesian Networks with Second-Order Uncertainties,"Bayesian networks (BNs) have been widely used as a model for knowledge representation and probabilistic inferences. However, the single probability representation of conditional dependencies has been proven to be over-constrained in realistic applications. Many efforts have proposed to represent the dependencies using probability intervals instead of single probabilities. In this paper, we move one step further and adopt a probability distribution schema. This results in a higher order representation of uncertainties in a BN. We formulate probabilistic inferences in this context and then propose a mean/covariance propagation algorithm based on the well-known junction tree propagation for standard BNs. For algorithm validation, we develop a two-layered Markov likelihood weighting approach that handles high-order uncertainties and provides ""ground-truth"" solutions to inferences, albeit very slowly. Our experiments show that the mean/covariance propagation algorithm can efficiently produce high-quality solutions that compare favorably to results obtained through painstaking sampling",2006,0, 1912,A New Intelligent Traffic Shaper for High Speed Networks,"In this paper, a new intelligent traffic shaper is proposed to obtain a reasonable utilization of bandwidth while preventing traffic overload in other part of the network and as a result, reducing total number of packet dropping in the whole network. This approach trains an intelligent agent to learn an appropriate value for token generation rate of a Token Bucket at various states of the network. This method shows satisfactory results in simulations from the aspects of keeping dropping probability low while injecting as many packets as possible into the network by minimization of used buffer size at each router in order to keep the delay occurred by packets waiting in long buffers to be sent, as small as possible",2006,0, 1913,Using Boosting Techniques to Improve Software Reliability Models Based on Genetic Programming,"Software reliability models are used to estimate the probability of a software fails along the time. They are fundamental to plan test activities and to ensure the quality of the software being developed. Two kind of models are generally used: time or test coverage based models. In our previous work, we successfully explored genetic programming (GP) to derive reliability models. However, nowadays boosting techniques (BT) have been successfully applied with other machine learning techniques, including GP. BT merge several hypotheses of the training set to get better results.
With the goal of improving the GP software reliability models, this work explores the combination GP and BT. The results show advantages in the use of the proposed approach",2006,0, 1914,Bootstrapping Performance and Dependability Attributes ofWeb Services,"Web services gain momentum for developing flexible service-oriented architectures. Quality of service (QoS) issues are not part of the Web service standard stack, although non-functional attributes like performance, dependability or cost and payment play an important role for service discovery, selection, and composition. A lot of research is dedicated to different QoS models, at the same time omitting a way to specify how QoS parameters (esp. the performance related aspects) are assessed, evaluated and constantly monitored. Our contribution in this paper comprises: a) an evaluation approach for QoS attributes of Web services, which works completely service-and provider independent, b) a method to analyze Web service interactions by using our evaluation tool and extract important QoS information without any knowledge about the service implementation. Furthermore, our implementation allows assessing performance specific values (such as latency or service processing time) that usually require access to the server which hosts the service. The result of the evaluation process can be used to enrich existing Web service descriptions with a set of up-to-date QoS attributes, therefore, making it a valuable instrument for Web service selection",2006,0, 1915,QoS Explorer: A Tool for Exploring QoS in Composed Services,"This paper presents QoS explorer, an interactive tool we have developed which predicts quality of service (QoS) of a workflow from the QoS characteristics of its constituents, even when the relationships involved are complex. This facilitates design and instantiation of workflows to satisfy QoS constraints, as it enables the user to discover and focus effort on the aspects of a workflow which most affect their primary QoS concerns, thus improving efficiency of workflow development. Further, the underlying model we use is more sophisticated than those of similar recent work (Jaeger et al., 2005; Ardagna and Pernici, 2005; Menasce, 2004), and includes processing of entire statistical distributions and probabilistic states (instead of the simple numeric constants used elsewhere) to model such non-constant variables as execution time",2006,0, 1916,Prediction-Table Based Fault-Tolerant Real-Time Scheduling Algorithm,"In order to predict accurately whether primary versions of real-time tasks is executable in software fault-tolerant module, a new algorithm, PTBA, prediction-table based algorithm, is presented. PTBA uses prediction-table to predict whether a host primary can meet its pre-deadline. Prediction-table contains the pre-assignment information of tasks between the current time and the alternates' notification time. If the prediction result shows that host primary has not enough time to execute, it will be aborted. Otherwise, prediction-table is referenced to schedule tasks with low overhead. The novelty of PTBA is that it schedules primaries according to their corresponding alternates' notification time and has no extra scheduling overhead in prediction-table mode. Simulation results show that PTBA allows more execution time for primaries and wastes less processor time than the well-known similar algorithms. 
PTBA is appropriate to the situation where the periods of tasks are short and software fault probability is low",2006,0, 1917,Object-Relational Database Metrics Formalization,"In this paper the formalization of a set of metrics for assessing the complexity of ORBDs is presented. An ontology for the SQL:2003 standard was produced, as a framework for representing the SQL schema definitions. It was represented using UML. The metrics were defined with OCL, upon the SQL:2003 ontology",2006,0, 1918,Correctness-preserving synthesis for real-time control software,"Formal theories for real-time systems (such as timed process algebra, timed automata and timed Petri nets) have gained great success in the modelling of concurrent timing behavior and in the analysis of real-time properties. However, due to the ineliminable timing differences between a model and its realization, synthesising a software realization from a model in a correctness-preserving way is still a challenging research topic. In this paper, we tackle this problem by solving a set of sub-problems. First, we introduce property relations between real-time systems on the basis of their absolute and relative timing differences. Second, we bridge the timing differences between a model and its realization by a sequence of (absolute and relative) timing differences. Third, we propose two parameterised hypotheses to capture the timing differences between the model and its realization. The parameters of both hypotheses are used to predict the real-time properties of the realization from those of the model. Finally, we introduce a synthesis tool, which shows that the two hypotheses can be satisfied during software synthesis",2006,0, 1919,Improving Coverage in Functional Testing,"Input-predicate/output (IP/O)n-chains coverage criterion, originally proposed for black-box testing of telecommunications software, is adapted to white-box testing of programs written in block-structured languages. This criterion is based on the analysis of the effects of inputs on predicates and outputs in a program. It requires that each such effect in a program is examined at least once during testing and thus provides a means of capturing the implemented functionality and checking the consistency of the program with respect to its functional requirements. It is shown that its fault-detecting ability is higher than the all-uses criterion, and compares favorably with the required k-tuples+ criterion",2006,0, 1920,Quality Assessment of Mutation Operators Dedicated for C# Programs,"The mutation technique inserts faults in a program under test in order to assess or generate test cases, or evaluate the reliability of the program. Faults introduced into the source code are defined using mutation operators. They should be related to different, also object-oriented features of a program. The most research on OO mutations was devoted to Java programs. This paper describes analytical and empirical study performed to evaluate the quality of advanced mutation operators for C# programs. Experimental results demonstrate effectiveness of different mutation operators. Unit tests suites and functional tests were used in experiments. A detailed analysis was conducted on mutation operators dealing with delegates and exception handling",2006,0, 1921,Generating Optimal Test Set for Neighbor Factors Combinatorial Testing,"Combinatorial testing is a specification-based testing method, which can detect the faults triggered by interaction of factors. 
For one kind of software in which the interactions only exist between neighbor factors, this paper proposes the concept of neighbor factors combinatorial testing, presents the covering array generation algorithms for neighbor factors pair-wise (N=2) coverage, neighbor factors N-way (Nges2) coverage and variable strength neighbor factors coverage, and proves that the covering arrays generated by these three algorithms are optimal. Finally we analyze an application scenario, which shows that this approach is very practical",2006,0, 1922,Probabilistic Adaptive Random Testing,"Adaptive random testing (ART) methods are software testing methods which are based on random testing, but which use additional mechanisms to ensure more even and widespread distributions of test cases over an input domain. Restricted random testing (RRT) is a version of ART which uses exclusion regions and restricts test case generation to outside of these regions. RRT has been found to perform very well, but its use of strict exclusion regions (from within which test cases cannot be generated) has prompted an investigation into the possibility of modifying the RRT method such that all portions of the input domain remain available for test case generation throughout the duration of the algorithm. In this paper, we present a probabilistic approach, probabilistic ART (PART), and explain two different implementations. Preliminary empirical data supporting the methods is also examined",2006,0, 1923,A Semi-empirical Model of Test Quality in Symmetric Testing: Application to Testing Java Card APIs,"In the smart card quality assurance field, software testing is the privileged way of increasing the confidence level in the implementation correctness. When testing Java Card application programming interfaces (APIs), the tester has to deal with the classical oracle problem, i.e. to find a way to evaluate the correctness of the computed output. In this paper, we report on an experience in testing methods of the Oberthur Card Systems Cosmo 32 RSA Java Card APIs by using the Symmetric Testing paradigm. This paradigm exploits user-defined symmetry properties of Java methods as test oracles. We propose an experimental environment that combines random testing and symmetry checking for (on-card) cross testing of several API methods. We develop a semi-empirical model (a model fed by experimental data) to help deciding when to stop testing and to assess test quality",2006,0, 1924,Detecting Malicious Manipulation in Grid Environments,"Malicious manipulation of jobs results endangers the efficiency and performance of grid computing applications. The presence of nodes interested in depreciating jobs results may be detected and minimized with the usage of fault tolerance techniques. In order to detect this kind of nodes, this paper presents a distributed and hierarchical diagnosis model based on comparison and reputation, which can be applied to both public and private grids. This strategy defines the status of a node according to its level of confidence, measured through its behavior. The proposed model was submitted to simulations to evaluate its effectiveness under different quota of malicious nodes. The results reveals that 8 test rounds can detect practically all malicious nodes and even with less rounds, the correctness remains high without a significant overhead increase",2006,0, 1925,Solving Consensus Using Structural Failure Models,"Failure models characterise the expected component failures in fault-tolerant computing. 
In the context of distributed systems, a failure model usually consists of two parts: a functional part specifying in what way individual processing entities may fail and a structural part specifying the potential scope of failures within the system. Such models must be expressive enough to cover all relevant practical situations, but must also be simple enough to allow uncomplicated reasoning about fault-tolerant algorithms. Usually, an increase in expressiveness complicates formal reasoning, but enables more accurate models that allow to improve the assumption coverage and resilience of solutions. In this paper, we introduce the structural failure model class DiDep that allows to specify directed dependent failures, which, for example, occur in the area of intrusion tolerance and security. DiDep is a generalisation of previous classes for undirected dependent failures, namely the general adversary structures, the fail-prone systems, and the core and survivor sets, which we show to be equivalent. We show that the increase in expressiveness of DiDep does not significantly penalise the simplicity of corresponding models by giving an algorithm that transforms any consensus algorithm for undirected dependent failures into a consensus algorithm for a DiDep model. We characterise the improved resilience obtained with DiDep and show that certain models even allow to circumvent the famous FLP impossibility result",2006,0, 1926,Hidden Markov Models as a Support for Diagnosis: Formalization of the Problem and Synthesis of the Solution,"In modern information infrastructures, diagnosis must be able to assess the status or the extent of the damage of individual components. Traditional one-shot diagnosis is not adequate, but streams of data on component behavior need to be collected and filtered over time as done by some existing heuristics. This paper proposes instead a general framework and a formalism to model such over-time diagnosis scenarios, and to find appropriate solutions. As such, it is very beneficial to system designers to support design choices. Taking advantage of the characteristics of the hidden Markov models formalism, widely used in pattern recognition, the paper proposes a formalization of the diagnosis process, addressing the complete chain constituted by monitored component, deviation detection and state diagnosis. Hidden Markov models are well suited to represent problems where the internal state of a certain entity is not known and can only be inferred from external observations of what this entity emits. Such over-time diagnosis is a first class representative of this category of problems. The accuracy of diagnosis carried out through the proposed formalization is then discussed, as well as how to concretely use it to perform state diagnosis and allow direct comparison of alternative solutions",2006,0, 1927,Recovering from Distributable Thread Failures with Assured Timeliness in Real-Time Distributed Systems,"We consider the problem of recovering from failures of distributable threads with assured timeliness. When a node hosting a portion of a distributable thread fails, it causes orphans - i.e., thread segments that are disconnected from the thread's root. We consider a termination model for recovering from such failures, where the orphans must be detected and aborted, and failure-exception notification must be delivered to the farthest, contiguous surviving thread segment for resuming thread execution. 
We present a realtime scheduling algorithm called AUA, and a distributable thread integrity protocol called TP-TR. We show that AUA and TP-TR bound the orphan cleanup and recovery time, thereby bounding thread starvation durations, and maximize the total thread accrued timeliness utility. We implement AUA and TP-TR in a real-time middleware that supports distributable threads. Our experimental studies with the implementation validate the algorithm/protocol's time-bounded recovery property and confirm their effectiveness",2006,0, 1928,Software Release Time Management: How to Use Reliability Growth Models to Make Better Decisions,"In late years, due to the significance of software application, professional testing of software becomes an increasingly important task. Once all detected faults are removed, project managers can begin to determine when to stop testing. Software reliability has important relations with many aspects of software, including the structure, the operational environment, and the amount of testing. Actually, software reliability analysis is a key factor of software quality and can be used for planning and controlling the testing resources during development. Over the past three decades, many software reliability growth models (SRGMs) have been proposed. For most traditional SRGMs, one common assumption is that the fault detection rate is a constant over time. However, the fault detection process in the operational phase is different from that in the testing phase. Thus, in this paper, we use the testing compression factor (TCF) to reflect the fact and describe the possible phenomenon. In addition, sometimes the one-to-one mapping relationship between failures and faults may not be realistic. Therefore, we also incorporate the concept of quantified ratio, not equal to 1, of faults to failures into software reliability growth modeling. We estimate the parameters of the proposed model based on real software failure data set and give a fair comparison with other SRGMs. Finally, we show how to use the proposed model to conduct software release time management",2006,0, 1929,Comparison and Assessment of Improved Grey Relation Analysis for Software Development Effort Estimation,"The goal of software project planning is to provide a framework that allows project manager to make reasonable estimates of the resources. In fact, software development is highly unpredictable - only 10% of projects on time and budget. Thus, it is very important for software project managers to accurately and precisely estimate software development effort since the resources are limited. One of the most widely used approaches of software effort estimation is the analogy method. Since the method of analogy is constructed on the foundation of distance-based similarity, there are still some drawbacks and restrictions for application. For example, the anomalistic and outlying values will influence the function to determine similarity. Contrarily, grey relational analysis (GRA) is a distinct measurement from the traditional distance scale and can dig out the realistic law from small-sample data. In this paper, we show how to apply GRA to evaluate the effort estimation results for different data sequences and to compare its accuracy with that of Analogy method. Experimental result shows that the GRA provides a better predictive performance than other methods. 
We can see that the GRA is more suitable for predicting software development effort with unbalanced dataset",2006,0, 1930,Predicting the Viterbi Score Distribution for a Hidden Markov Model and Application to Speech Recognition,"Hidden Markov models are used in many important applications such as speech recognition and handwriting recognition. Finding an effective HMM to fit the data is important for successful operation. Typically, the Baum-Welch algorithm is used to train an HMM, and seeks to maximize the likelihood probability of the model, which is closely related to the Viterbi score. However, random initialization causes the final model quality to vary due to locally optimum solutions. Conventionally, in speech recognition systems, models are selected from a collection of already trained models using some performance criterion. In this paper, we investigate an alternative method of selecting models using the Viterbi score distribution. A method to determine the Viterbi score distribution is described based on Viterbi score variation with respect to the number of states (N) in the model. Our tests show that the distribution is approximately Gaussian when the number of states is greater than 3. The paper also investigates the relationship between performance and the Viterbi score's percentile value and discusses several interesting implications for Baum-Welch training",2006,0, 1931,Phoenix: Detecting and Recovering from Permanent Processor Design Bugs with Programmable Hardware,"Although processor design verification consumes ever-increasing resources, many design defects still slip into production silicon. In a few cases, such bugs have caused expensive chip recalls. To truly improve productivity, hardware bugs should be handled like system software ones, with vendors periodically releasing patches to fix hardware in the field. Based on an analysis of serious design defects in current AMD, Intel, IBM, and Motorola processors, this paper proposes and evaluates Phoenix - novel field-programmable on-chip hardware that detects and recovers from design defects. Phoenix taps key logic signals and, based on downloaded defect signatures, combines the signals into conditions that flag defects. On defect detection, Phoenix flushes the pipeline and either retries or invokes a customized recovery handler. Phoenix induces negligible slowdown, while adding only 0.05% area and 0.48% wire overheads. Phoenix detects all the serious defects that are triggered by concurrent control signals. Moreover, it recovers from most of them, and simplifies recovery for the rest. Finally, we present an algorithm to automatically size Phoenix for new processors",2006,0, 1932,Two-Dimensional Software Reliability Models and Their Application,"In general, the software-testing time may be measured by two kinds of time scales: calendar time and test-execution time. In this paper, we develop two-dimensional software reliability models with two-time measures and incorporate both of them to assess the software reliability with higher accuracy. Since the resulting software reliability models are based on the familiar non-homogeneous Poisson processes with two-time scales, which are the natural extensions of one-dimensional models, it is possible to treat both the time data simultaneously and effectively. We investigate the dependence of test-execution time as a testing effort on the software reliability assessment, and validate quantitatively the software reliability models with two-time scales. 
We also consider an optimization problem when to stop the software testing in terms of two-time measurements",2006,0, 1933,An Evaluation of Similarity Coefficients for Software Fault Localization,"Automated diagnosis of software faults can improve the efficiency of the debugging process, and is therefore an important technique for the development of dependable software. In this paper we study different similarity coefficients that are applied in the context of a program spectral approach to software fault localization (single programming mistakes). The coefficients studied are taken from the systems diagnosis/automated debugging tools Pinpoint, Tarantula, and AMPLE, and from the molecular biology domain (the Ochiai coefficient). We evaluate these coefficients on the Siemens Suite of benchmark faults, and assess their effectiveness in terms of the position of the actual fault in the probability ranking of fault candidates produced by the diagnosis technique. Our experiments indicate that the Ochiai coefficient consistently outperforms the coefficients currently used by the tools mentioned. In terms of the amount of code that needs to be inspected, this coefficient improves 5% on average over the next best technique, and up to 30% in specific cases",2006,0, 1934,A Best Practice Guide to Resources Forecasting for the Apache Webserver,"Recently, measurement based studies of software systems proliferated, reflecting an increasingly empirical focus on system availability, reliability, aging and fault tolerance. However, it is a non-trivial, error-prone, arduous, and time-consuming task even for experienced system administrators and statistical analysis to know what a reasonable set of steps should include to model and successfully predict performance variables or system failures of a complex software system. Reported results are fragmented and focus on applying statistical regression techniques to captured numerical system data. In this paper, we propose a best practice guide for building empirical models based on our experience with forecasting Apache Web server performance variables and forecasting call availability of a real world telecommunication system. To substantiate the presented guide and to demonstrate our approach step-by-step we model and predict the response time and the amount of free physical memory of an Apache Web server system. Additionally, we present concrete results for a) variable selection where we cross benchmark three procedures, b) empirical model building where we cross benchmark four techniques and c) sensitivity analysis. This best practice guide intends to assist in configuring modeling approaches systematically for best estimation and prediction results",2006,0, 1935,Verification of Intelligent Agents with ACTL for Epistemic Reasoning,"Verification of multi-agent systems (MAS) is a huge challenge, especially for those systems where security and safety are of major importance. Verification detects faults, defects and drawbacks in an early stage of software development. Here, we give a formal model for verification of MAS by means of model checking technique. We extend the existing action computation tree logic (ACTL) with epistemic operators in order to reason about knowledge properties of MAS. We introduce new operators for manipulation on agent's actions with data. 
We explain their syntax and semantics for our ACTL-er (ACTL for epistemic reasoning), and provide a case study for a MAS system of foraging bees.",2006,0, 1936,A Predictive Method for Providing Fault Tolerance in Multi-agent Systems,"The growing importance of multi-agent applications and the need for a higher quality of service in these systems justify the increasing interest in fault-tolerant multi-agent systems. In this article, we propose an original method for providing dependability in multi- agent systems through replication. Our method is different from other works because our research focuses on building an automatic, adaptive and predictive replication policy where critical agents are replicated to avoid failures. This policy is determined by taking into account the criticality of the plans of the agents, which contain the collective and individual behaviors of the agents in the application. The set of replication strategies applied at a given moment to an agent is then fine-tuned gradually by the replication system so as to reflect the dynamicity of the multi-agent system. We report on experiments assessing the efficiency of our approach.",2006,0, 1937,Integrating Processes of Logistics Outsourcing Risk Management in e-Business,"Logistics outsourcing has been recognized to have important potential benefits, including reduced costs, improved quality, the ability to focus on core competencies and access to new technologies. Most prior studies have articulated the advantages of logistics outsourcing and paid little attention to the risks in e-business environments. The main purpose of this study is to present how the current logistics outsourcing risk management process can be integrated and improved through the use of new e-business applications",2006,0, 1938,Software Defect Content Estimation: A Bayesian Approach,"Software inspection is a method to detect errors in software artefacts early in the development cycle. At the end of the inspection process the inspectors need to make a decision whether the inspected artefact is of sufficient quality or not. Several methods have been proposed to assist in making this decision like capture recapture methods and Bayesian approach. In this study these methods have been analyzed and compared and a new Bayesian approach for software inspection is proposed. All of the estimation models rely on an underlying assumption that the inspectors are independent. However, this assumption of independence is not necessarily true in practical sense, as most of the inspection teams interact with each other and share their findings. We, therefore, studied a new Bayesian model where the inspectors share their findings, for defect estimate and compared it with Bayesian models in the literature, where inspectors examine the artefact independently. The simulations were carried out under realistic software conditions with a small number of difficult defects and a few inspectors. The models were evaluated on the basis of decision accuracy and median relative error and our results suggest that the dependent inspector assumption improves the decision accuracy (DA) over the previous Bayesian model and CR models",2006,0, 1939,Security Design Patterns: Survey and Evaluation,"Security design patterns have been proposed recently as a tool for the improvement of software security during the architecture and design phases. 
Since the appearance of this research topic in 1997, several catalogs have emerged, and the security pattern community has produced significant contributions, with many related to design. In this paper, we survey major contributions in the state of the art in the field of security design patterns and assess their quality in the context of an established classification. From our results, we determined a classification of inappropriate pattern qualities. Using a six sigma approach, we propose a set of desirable properties that would prevent flaws in new design patterns, as well as a template for expressing them",2006,0, 1940,TTCN-3 Testing of Hoorn-Kersenboogerd Railway Interlocking,"Railway control systems are safety-critical, so we have to ensure that they are designed and implemented correctly. Testing these systems is a key issue. Prior to system testing, the software of a railway control system is tested separately from the hardware. The interlocking is a layer of railway control systems that guarantees safety. It allows to execute commands given by a user only if they are safe; unsafe commands are rejected. Railway interlockings are central to efficient and safe traffic management for railway infrastructure managers and operators. European integration requires new standards for specification and testing interlockings. Here we propose an approach to testing interlockings with TTCN-3 and give an example for its application. The code of interlockings is simulated during test execution. For assessing the quality of the tests, we propose an approach inspired by the classification tree method",2006,0, 1941,"Uniform Crime Report """"SuperClean"""" Data Cleaning Tool","The analysis of UCR data provides a basis for crime prevention in the United States as well as a sound decision making tool for policy makers. The decisions made with the use of UCR data range from major funding for resource allocation down to patrol distribution by local police departments. The FBI collects and maintains the database of the Uniform Crime Reports (UCR), from 18,000 reporting police agencies nationwide. However, many of these data sets have missing, incomplete, or incorrect data points that render crime analysis less effective. UCR experts have stated that in the current form UCR data is unreliable and sporadic. Efforts have previously been made to design a software application to correct these necessary problems, but the application was deemed insufficient due to limited portability and usability. Software requirements restricted potential users and the user interface was ineffective. However, this previous work describes the functions needed to effectively clean and assess UCR data. This paper describes the design of an application used to clean, process, and correct UCR data so that ideal policy decisions can be made. Erroneous portions of the data will be found using the outlier detection function that is based on a statistical model of anomalous behavior. These methods incorporate sponsor specifications and user requirements. This project builds upon the GRASP (geospatial repository for analysis and safety planning) project's goal of sharing information between law enforcement agencies. Eventually this application could be integrated with GRASP to form a single repository for UCR and spatial crime data. This paper describes how the new stand alone application will allow users to clean, correct, and process UCR data in an efficient, user-friendly manner. 
Formal testing provides a basis to assess the effectiveness of the application based on the metrics of time, cost, and quality. The results are the basis for improvements to the application",2006,0, 1942,Improvement in Reliability and Energy Yield Prediction of Thin-Film CdS/CdTe PV Modules,"In this work, we illustrate improvement in thin-film PV module durability via process optimization. Data are presented from large installations, allowing accurate estimation of failure rates and distributions of failure modes. Improvement in product quality is also described; we show that recent thin-film products can meet industry expectations for consistent power output. In addition, results of product characterization performed at First Solar are presented. Dependence of module output on irradiance and temperature is illustrated, and we show that common assumptions about such dependence (based on experience with conventional PV technology) may not hold for thin-film modules. Predicted behavior of module output (as computed with PV system modeling software) and real-world data are compared. It is shown that adjustment of the parametric description of the module can be used to successfully reduce discrepancy between predicted and actual module behaviors",2006,0, 1943,Semantic-Based Workflow Composition for Video Processing in the Grid,"We outline the problem of automatic video processing for the EcoGrid. This poses many challenges as there is a vast amount of raw data that need to be analysed effectively and efficiently. Furthermore, ecological data are subject to environmental changes and are exception-prone, hence their qualities vary. As manual processing by humans can be time and labour intensive, video and image processing tools can go some way to addressing such problems since they are computationally fast. However, most video analyses that utilise a combination of these tools are still done manually. We propose a semantic-based hybrid workflow composition method that strives to provide automation to speed up this process. The requirements for such a system are presented, whereby we aim for a solution that best satisfies these requirements and that overcomes the limitations of existing grid workflow composition systems",2006,0, 1944,Triple Modular Redundancy with Standby (TMRSB) Supporting Dynamic Resource Reconfiguration,"A fault tolerance model called triple modular redundancy with standby (TMRSB) is developed which combines the two popular fault tolerance techniques of triple modular redundancy (TMR) and standby (SB) fault tolerance. In TMRSB systems, each module of a TMR arrangement has access to several independent standby configurations. When a fault is detected in a module's active configuration, the physical resources within that module are re-mapped to restore the desired fault-free functionality by reconfiguring the resource pool to one of the standby configurations. A mathematic model for TMRSB systems is developed for field programmable gate array (FPGA) logic devices. Simulation of the model was also performed using the BlockSim reliability software tool which takes into account the reconfiguration time overheads and an imperfect switching mechanism. 
With component time-to-failure following an exponential distribution throughout long mission duration, the range of operation over which TMRSB is superior to a standby system and a TMR system is shown.",2006,0, 1945,Optimal QoS-aware Sleep/Wake Scheduling for Time-Synchronized Sensor Networks,"We study the sleep/wake scheduling problem in the context of clustered sensor networks. We conclude that the design of any sleep/wake scheduling algorithm must take into account the impact of the synchronization error. Our work includes two parts. In the first part, we show that there is an inherent tradeoff between energy consumption and message delivery performance (defined as the message capture probability in this work). We formulate an optimization problem to minimize the expected energy consumption, with the constraint that the message capture probability should be no less than a threshold. In the first part, we assume the threshold is already given. However, by investigating the unique structure of the problem, we transform the non-convex problem into a convex equivalent, and solve it using an efficient search method. In the second part, we remove the assumption that the capture probability threshold is already given, and study how to decide it to meet the quality of services (QoS) requirement of the application. We observe that in many sensor network applications, a group of sensors collaborate to perform common task(s). Therefore, the QoS is usually not decided by the performance of any individual node, but by the collective performance of all the related nodes. To achieve the collective performance with minimum energy consumption, intuitively we should provide differentiated services for the nodes and favor more important ones. We thus formulate an optimization problem, which aims to set the capture probability threshold for messages from each individual node such that the expected energy consumption is minimized, while the collective performance is guaranteed. The problem turns out to be non-convex and hard to solve exactly. Therefore, we use approximation techniques to obtain a suboptimal solution that approximates the optimum. Simulations show that our approximate solution significantly outperforms a scheme without differentiated treatment of the nodes.",2006,0, 1946,Analysis of Distributed Intelligent Agent Model for QoS Dynamic Scheme in GSM/GPRS Network,"In this paper we study dynamic quality of service scheme in GSM/GPRS wireless network. A load balancing architecture constructed by distributed intelligent agent has been presented to support real time or burst data services. Fuzzy neural network was employed to predict GPRS traffic by learning examples. Meanwhile, we have presented a traffic estimation algorithm and a simple decision mechanism to deal with special applications such as burst data transmission. The simulation shows that distributed intelligent agent architecture could significantly reduce packet delay, route cost and relieve GPRS bottleneck",2006,0, 1947,New Tools for Blackout Prevention,"Recent power system blackouts have heightened the concern for power system security and, therefore, reliability. However, potential security improvements which could be achieved through transmission system facility reinforcement and generation supply expansion are long-term, costly, and uncertain. 
As a result, more immediate and cost effective solutions to the security issue have been pursued including the development of specialized software tools which can assess power system security in near-real-time and assist operators in maintaining adequate security margins at all times thus lowering the risk of blackouts. Such software tools have been implemented in numerous power systems world-wide and are gaining popularity as a result of demonstrated benefits to system performance",2006,0, 1948,Symbolic Methods for VTB Model Development,"Summary form only given. Symbolic computation is not just helpful, it is very nearly a prerequisite for building models of complex objects that will be used in dynamic system simulators such as the virtual test bed (VTB). The reason that symbolic computation is so necessary is that, for reasons of computational speed, the code of any VTB model having natural coupling ports must directly express the Jacobian of the dynamic equations. Computing the often-large number of Jacobian terms by hand is extremely tedious, time consuming, and error-prone, and entirely unnecessary with today's symbolic math tools. VTB model developers currently rely on several different tools to accomplish model development. One of the tools, included directly in the VTB distribution package, comes in both runtime and development-time versions that allow a user to directly enter the math equations that describe the system dynamics. Then those equations are either interpreted and symbolically processed on-the-fly during system simulation, or they are interpreted and processed prior to generation of C code that will later be compiled into a model. The combination of these two approaches provides an incremental path for model development: the first approach supports rapid prototyping of a model, and the second supports final development of full-featured models that may require some additional hand-tailoring of the C code. Additional model development tools that are being developed based on commercial symbolic math packages will also be described",2006,0, 1949,A New Method for Detection and Identification of Power Quality Disturbance,"In this paper, a new method for detection and identification of power quality disturbance is proposed: first, the original signals are de-noised by the wavelet transform; second, the beginning and ending time of the disturbance can be detected in the time domain, and a difference signal is formed; third, the type of power quality disturbance signals can be identified by the peak value and period; finally, parameters of disturbance signals can be calculated by fast Fourier transform during the disturbance time period. Based on this, software for detection and identification of power quality disturbance is developed. The application to a case study shows that this method is fast, sensitive, and practical for detection and identification of power quality disturbance",2006,0, 1950,The Adaptive Detection And Application Of Weak Signal Based On Stochastic Resonance,"In this paper, we study how to use the stochastic resonance principle to detect the weak intermediate and low frequency signals under the condition of intensive noise in mechanical fault diagnosis. According to the relationship among system outputs, system parameters and input signals, we've studied the algorithm and the technology of the software self-adaptive control of the nonlinear bistable system. 
Using this method, we can obtain output with adequate Signal-to-Noise ratio (SNR) without solving complex differential equations. The simulation results show the affectivity of this method. This study shows a way to detect the weak signals under the condition of intensive noise",2006,0, 1951,Harmonization of usability measurements in ISO9126 software engineering standards,"The measurement of software usability is recommended in ISO 9126-2 to assess the external quality of software by allowing the user to test its usefulness before it is delivered to the client. Later, during the operation and maintenance phases, usability should be maintained, otherwise the software will have to be retired. This then raises harmonization issues about the proper positioning of the usability characteristic: does usability really belong to the external quality view of ISO 9126-2 and should the external quality characteristic of usability be harmonized with that of the quality in use model defined in ISO 9126-1 and ISO 9126-4? This paper analyzes these two questions: first, we identify and analyze the subset of ISO 9126-2 quality subcharacteristics and measures of usability that can be useful for quality in use, and then we recommend improvements to the harmonization of these ISO 9126 models",2006,0, 1952,Extending CSCM to support Interface Versioning,"Software component has been a main stream technology used to tackle issues such as software reuse, software quality and, software development complexity. In spite of the proliferation of component models (CORBA, .Net, JavaBeans), certain issues and limitations inherent to components are still not addressed adequately. For instance, composing software components especially those provided by different suppliers may result in faulty behavior. This behavior might be the result of incompatibilities between aging components and/or freshly released components and their respective interfaces. This paper, present an approach to tackle component interface incompatibilities via the use of a component and interface versioning scheme. This approach is designed as an extension to the compositional structured component model (CSCM), an ongoing research project. The implementation of this extension makes use of code annotations to provide interface versioning information useful in detecting interface incompatibilities",2006,0, 1953,Automated Discovery of Human Activities inside Pervasive Living Spaces,"The recognition and detection of human activities constitutes a very important step towards the fulfilment of the notion of pervasive environments. By detecting patterns on those behaviours, an environment can adapt and respond to the inhabitants' needs, thus improving the quality of life. This paper presents a framework in which those ideas can be applied and tested. It includes a system using a temporal neural-network driven embedded agent working with online, real-time data from a network of unobtrusive low-level sensors situated in either a simulated environment or a fully fitted real environment such as a whole flat",2006,0, 1954,Ant Agent-Based Multicast Routing with QoS Guarantees,"This paper designs a novel ant agent-based multicast routing algorithm with bandwidth and delay guarantees, called QMRA, which works for packet-switching networks where the state information is imprecise. 
In our scheme, an ant uses the probability that a link satisfies QoS requirements and the cost of a path instead of the ant's trip time or age to determine the amount of pheromone to deposit, so that it has a simpler migration process, less control parameters and can tolerate the imprecision of state information. In this paper, the proof of correctness and complexity analysis of QMRA are given. And that, experimental results show our algorithm can achieve low routing blocking ratio, low average packet delay and fast convergence when the network state information is imprecise",2006,0, 1955,StegoBreaker: Audio Steganalysis using Ensemble Autonomous Multi-Agent and Genetic Algorithm,"The goal of steganography is to avoid drawing suspicion to the transmission of a hidden message in multi-medium. This creates a potential problem when this technology is misused for planning criminal activities. Differentiating anomalous audio document (stego audio) from pure audio document (cover audio) is difficult and tedious. Steganalytic techniques strive to detect whether an audio contains a hidden message or not. This paper investigates the use of genetic algorithm (GA) to aid autonomous intelligent software agents capable of detecting any hidden information in audio files, automatically. This agent would make up the detection agent in an architecture comprising of several different agents that collaborate together to detect the hidden information. The basic idea is that, the various audio quality metrics (AQMs) calculated on cover audio signals and on stego-audio signals vis-a-vis their denoised versions, are statistically different. GA employs these AQMs to steganalyse the audio data. The overall agent architecture will operate as an automatic target detection (ATD) system. The architecture of ATD system is presented in this paper and it is shown how the detection agent fits into the overall system. The design of ATD based audio steganalyzer relies on the choice of these audio quality measures and the construction of a GA based rule generator, which spawns a set of rules that discriminates between the adulterated and the untouched audio samples. Experimental results show that the proposed technique provides promising detection rates",2006,0, 1956,Unsupervised Contextual Keyword Relevance Learning and Measurement using PLSA,"In this paper, we have developed a probabilistic approach using PLSA for the discovery and analysis of contextual keyword relevance based on the distribution of keywords across a training text corpus. We have shown experimentally, the flexibility of this approach in classifying keywords into different domains based on their context. We have developed a prototype system that allows us to project keyword queries on the loaded PLSA model and returns keywords that are closely correlated. The keyword query is vectorized using the PLSA model in the reduce aspect space and correlation is derived by calculating a dot product. We also discuss the parameters that control PLSA performance including a) number of aspects, b) number of EM iterations c) weighting functions on TDM (pre-weighting). We have estimated the quality through computation of precision-recall scores. We have presented our experiments on PLSA application towards document classification",2006,0, 1957,Fault Detection In PCB Using Homotopic Morphological Operator,"Homotopic morphological image processing software solution is developed for detection of hair cracks in PCB, which cannot be seen by naked eye. 
The proposed software solution is implemented using basic morphological operations like dilation, erosion, opening, closing, hit-miss transform, thinning, thickening skeletonizing and pruning. The software solution also performs image enhancement operations using mathematical morphology. The proposed software solution can be extended to detect thin bone fractures and real time automatic PCB fault detection",2006,0, 1958,An Intelligent Error Detection Model for Reliable QoS Constraints Running on Pervasive Computing,"We propose an intelligence predictive model for reliable QoS constraints running on pervasive computing. FTA is a system that is suitable for detecting and recovering software error based on pervasive computing environment as RCSM(Reconfigurable Context-Sensitive Middleware) by using software techniques. One of the methods to detect error for session's recovery inspects process database periodically. But this method has a weak point of inspecting all processes without regard to session. Therefore, we propose FTA. This method detects error by inspecting by hooking method. If an error is found, FTA informs GSM of the error. GSM informs Daemon or SA-SMA of the error. Daemon creates SA-SMA and so on. SA-SMA creates Video Service Provide Instance and so on.",2006,0, 1959,Blocking vs. Non-Blocking Coordinated Checkpointing for Large-Scale Fault Tolerant MPI,"A long-term trend in high-performance computing is the increasing number of nodes in parallel computing platforms, which entails a higher failure probability. Fault programming environments should be used to guarantee the safe execution of critical applications. Research in fault tolerant MPI has led to the development of several fault tolerant MPI environments. Different approaches are being proposed using a variety of fault tolerant message passing protocols based on coordinated checkpointing or message logging. The most popular approach is with coordinated checkpointing. In the literature, two different concepts of coordinated checkpointing have been proposed: blocking and non-blocking. However they have never been compared quantitatively and their respective scalability remains unknown. The contribution of this paper is to provide the first comparison between these two approaches and a study of their scalability. We have implemented the two approaches within the MPICH environments and evaluate their performance using the NAS parallel benchmarks",2006,0, 1960,"Parallel Genomic Sequence-Searching on an Ad-Hoc Grid: Experiences, Lessons Learned, and Implications","The Basic local alignment search tool (BLAST) allows bioinformaticists to characterize an unknown sequence by comparing it against a database of known sequences. The similarity between sequences enables biologists to detect evolutionary relationships and infer biological properties of the unknown sequence. mpiBLAST, our parallel BLAST, decreases the search time of a 300 KB query on the current NT database from over two full days to under 10 minutes on a 128-processor cluster and allows larger query files to be compared. Consequently, we propose to compare the largest query available, the entire NT database, against the largest database available, the entire NT database. The result of this comparison can provide critical information to the biology community, including insightful evolutionary, structural, and functional relationships between every sequence and family in the NT database. 
Preliminary projections indicated that to complete the task in a reasonable length of time required more processors than were available to us at a single site. Hence, we assembled GreenGene, an ad-hoc grid that was constructed ""on the fly"" from donated computational, network, and storage resources during last year's SC|05. GreenGene consisted of 3048 processors from machines that were distributed across the United States. This paper presents a case study of mpiBLAST on GreenGene - specifically, a pre-run characterization of the computation, the hardware and software architectural design, experimental results, and future directions",2006,0, 1961,Qualitative Modeling for Requirements Engineering,"Acquisition of ""quantitative"" models of sufficient accuracy to enable effective analysis of requirements tradeoffs is hampered by the slowness and difficulty of obtaining sufficient data. ""Qualitative"" models, based on expert opinion, can be built quickly and therefore used earlier. Such qualitative models are nondeterminate which makes them hard to use for making categorical policy decisions over the model. The nondeterminacy of qualitative models can be tamed using ""stochastic sampling"" and ""treatment learning"". These tools can quickly find and set the ""master variables"" that restrain qualitative simulations. Once tamed, qualitative modeling can be used in requirements engineering to assess more options, earlier in the life cycle",2006,0, 1962,SPDW: A Software Development Process Performance Data Warehousing Environment,"Metrics are essential in the assessment of the quality of software development processes (SDP). However, the adoption of a metrics program requires an information system for collecting, analyzing, and disseminating measures of software processes, products and services. This paper describes SPDW, an SDP data warehousing environment developed in the context of the metrics program of a leading software operation in Latin America, currently assessed as CMM Level 3. SPDW architecture encompasses: 1) automatic project data capturing, considering different types of heterogeneity present in the software development environment; 2) the representation of project metrics according to a standard organizational view; and 3) analytical functionality that supports process analysis. The paper also describes current implementations, and reports experiences on the use of SPDW by the organization",2006,0, 1963,Bio - Inspired & Traditional Approaches to Obtain Fault Tolerance,"Applying some observable phenomena from cells, focused on their organization, function, control and healing mechanisms, a simple fault tolerant implementation can be obtained. Traditionally, fault tolerance has been added explicitly to a system by including redundant hardware and/or software, which takes over when an error has been detected. These concepts and ideas have been applied before with the triple modular redundancy. Our approach is to design systems where redundancy was incorporated implicitly into the hardware and to mix bio-inspired and traditional approaches to deal with fault tolerance. 
These ideas are shown using a discrete cosine transform (application) as organ, its MAC (function) interconnected as cell and parity redundancy checker (error detector) as immune system to obtain a fault tolerance design",2006,0, 1964,Application of Static Transfer Switch for Feeder Reconfiguration to Improve Voltage at Critical Locations,"The main objective of this work was to assess and evaluate the performance of static transfer switch (STS) for feeder reconfiguration. Two particular network feeders namely preferred and alternate were selected for simulation studies. Both feeders belong to IESCO system (Islamabad Electric Supply Company, Pakistan). The sensitive loads are fed by preferred feeder but in case of disturbances, the loads are transferred to alternate feeder. Different simulation cases were performed for optimum installation of STS to obtain the required voltage quality. The simulations are performed using the PSCAD/EMTDC package",2006,0, 1965,A Software Factory for Air Traffic Data,"Modern enterprise architecture requires a flexible, scalable and upgradeable infrastructure that allows communication, and subsequently collaboration, between heterogeneous information processing and computing environments. Heterogeneous systems often use different data representations for the same data items, limiting collaboration. Although this problem is conceptually straightforward, the process of data conversion is error prone, often dramatically underestimated, and surprisingly complex. The complexity is often the result of the non-standard data representations that are used by computing systems in the aviation domain. This paper describes some of the work that is being done by Boeing Advanced Air Traffic Management to address this challenge. A prototype software factory for air traffic data management is being built and evaluated. The software factory provides the capability for a user such as a Systems Engineer or an Air Traffic Domain Expert to create an interface model. The model will allow the user to specify entities such as data items, scaling, units, headers and footers, representation, and coding. The factory automatically creates a machine usable interface. A prototype for a Domain Specific Language to assist in this task is being developed",2006,0,2213 1966,On the Use of Behavioral Models for the Integrated Performance and Reliability Evaluation of Fault-Tolerant Avionics Systems,"In this paper, the authors propose an integrated methodology for the reliability and performance analysis of fault-tolerant systems. This methodology uses a behavioral model of the system dynamics, similar to the ones used by control engineers when designing the control system, but incorporates additional artifacts to model the failure behavior of the system components. These artifacts include component failure modes (and associated failure rates) and how those failure modes affect the dynamic behavior of the component. The methodology bases the system evaluation on the analysis of the dynamics of the different configurations the system can reach after component failures occur. For each of the possible system configurations, a performance evaluation of its dynamic behavior is carried out to check whether its properties, e.g., accuracy, overshoot, or settling time, which are called performance metrics, meet system requirements. 
After all system configurations have been evaluated, the values of the performance metrics for each configuration and the probabilities of going from the nominal configuration (no component failures) to any other configuration are merged into a set of probabilistic measures of performance. To illustrate the methodology, and to introduce a tool that the authors developed in MATLAB/SIMULINK® that supports this methodology, the authors present a case-study of a lateral-directional flight control system for a fighter aircraft",2006,0, 1967,Multiple Description Scalar Quantization Based 3D Mesh Coding,"In this paper, we address the problem of 3D model transmission over error-prone channels using multiple description coding (MDC). The objective of MDC is to encode a source into multiple bitstreams, called descriptions, supporting multiple quality levels of decoding. Compared to layered coding techniques, each description can be decoded independently to approximate the model. In the proposed approach, the mesh geometry is compressed using multiresolution geometry compression. Then multiple descriptions are obtained by applying multiple description scalar quantization (MDSQ) to the obtained wavelet coefficients. Experimental results show that, the proposed approach achieves competitive compression performance compared with existing multiple description methods.",2006,0, 1968,2D Frequency Selective Extrapolation for Spatial Error Concealment in H.264/AVC Video Coding,"The frequency selective extrapolation extends an image signal beyond a limited number of known samples. This problem arises in image and video communication in error prone environments where transmission errors may lead to data losses. In order to estimate the lost image areas, the missing pixels are extrapolated from the available correctly received surrounding area which is approximated by a weighted linear combination of basis functions. In this contribution, we integrate the frequency selective extrapolation into the H.264/AVC coder as spatial concealment method. The decoder reference software uses spatial concealment only for I frames. Therefore, we investigate the performance of our concealment scheme for I frames and its impact on following P frames caused by error propagation due to predictive coding. Further, we compare the performance for coded video sequences in TV quality against the non-normative concealment feature of the decoder reference software. The investigations are done for slice patterns causing chequerboard and raster scan losses enabled by flexible macroblock ordering (FMO).",2006,0, 1969,Development of Defect Classification Algorithm for POSCO Rolling Strip Surface Inspection System,"Surface inspection system (SIS) is an integrated hardware-software system which automatically inspects the surface of the steel strip. It is equipped with several cameras and illumination over and under the steel strip roll and automatically detects and classifies defects on the surface. The performance of the inspection algorithm plays an important role in not only quality assurance of the rolled steel product, but also improvement of the strip production process control. Current implementation of POSCO SIS has good ability to detect defects, however, classification performance is not satisfactory. In this paper, we introduce POSCO SIS and suggest a new defect classification algorithm which is based on support vector machine technique. 
The suggested classification algorithm shows good classification ability and generalization performance",2006,0, 1970,A New Approach for Induction Motor Broken Bar Diagnosis by Using Vibration Spectrum,"Different methods for detecting broken bars in induction motors can be found in literature. Many of these methods are based on evaluating special frequency magnitudes in machine signals spectrums. Current, power, flux, etc are among these signals. Frequencies related to broken rotor fault are dependent on slip; therefore, correct diagnosis of fault depends on accurate determination of motor velocity and slip. The traditional methods typically require several sensors that should be pre-installed in some cases. A practical diagnosis method should be easily performed in site and does not take too much time. This paper presents a diagnosing method based on only a vibration sensor. Motor velocity oscillation due to broken rotor causes frequency components at twice slip frequency difference around speed frequency in vibration spectrum. Speed frequency and its harmonics as well as twice supply frequency, can easily and accurately be found in vibration spectrum, therefore the motor slip can be computed. Now components related to rotor fault can be found. According to this method, an apparatus consisting necessary hardware and software has been designed. Experimental tests have confirmed the efficiency of the method",2006,0, 1971,A New Topology of Fault-current Limiter and its control strategy,"In this paper a new type of fault current limiter based on DC reactor with using superconductor are presented. In normal operation condition the limiter has no obvious effect on loads. When fault happens, the bypass AC reactor and series resistor will insert the fault line automatically to limit the short circuit current, when the control circuit detects a short circuit fault, the solid state bridge in fault line works as an inverter and is closed as soon as possible. Subsequently the fault current is fully limited by the bypass AC reactor and series resistor. The magnitude of Lac and rac must be equal with protected load. By using the electro-magnetic transients in DC systems which are the simulator of electric networks (EMTDC) software we carried out analysis of the voltage and current waveforms for fault conditions. Waveforms are considered in calculating the voltage drop at substation during the fault. The analysis used in selecting an appropriate inductance value for designing",2006,0, 1972,A Delay Fault Model for At-Speed Fault Simulation and Test Generation,"We describe a transition fault model, which is easy to simulate under test sequences that are applied at-speed, and provides a target for the generation of at-speed test sequences. At-speed test application allows a circuit to be tested under its normal operation conditions. However, fault simulation and test generation for the existing fault models become significantly more complex due to the need to handle faulty signal-transitions that span multiple clock cycles. The proposed fault model alleviates this shortcoming by introducing unspecified values into the faulty circuit when fault effects may occur. Fault detection potentially occurs when an unspecified value reaches a primary output. 
Due to the uncertainty that an unspecified value propagated to a primary output will be different from the fault free value, an inherent requirement in this model is that a fault would be potentially detected multiple times in order to increase the likelihood of detection. Experimental results demonstrate that the model behaves as expected in terms of fault coverage and numbers of detections of target faults. A variation of an n-detection test generation procedure for stuck-at faults is used for generating test sequences under this model",2006,0, 1973,Applying Posterior Probability Support Vector Machine to Evaluate the Market Adaptability of the Product of Tourism Agency,The product of the tourism agency can be divided into two classes of the pushing product and the pulling product. It is the very pivotal and significative step of product designing to evaluate the market adaptability of pushing product of the tourism agency. The paper studies the corner of the tour product market and illuminates that the rootstock is insufficient pushing product market adaptability analysis of the tourism agency. The product market adaptability analysis is regarded as a pattern recognition problem of two categories that the product is well adaptability or not the first time. A method based on the posterior probability support vector machine (PPSVM) is applied to evaluate the market adaptability of the product of the tourism agency after comparing the existing analysis method. In the last the technique is proved valid using demonstration analysis and the PPSVM model has the better average prediction accuracy and generality than the other models discussed in literature,2006,0, 1974,Service Quality of Information Systems,"Information system development is an expensive process, and usually fall behind the expectations of user and implementation is achieved much later than expected if ever. The designers of information systems and programmers often begin designing and programming the system too early, before they actually understand the users' or stakeholders' requirements. Since designing and programming systems are very expensive, ill-defined requirements cause projects to fall behind schedule and over budget. Correctly assessing customer needs and requirements is very important for information systems development. Over 50% of development errors occur during the requirements analysis phase of the development cycle. In this paper, following the statement of the problem, possible causes are discussed. The conclusion is made by providing some guidelines for the design of information systems that meet or exceed user requirements",2006,0, 1975,Traffic and Network Engineering in Emerging Generation IP Networks: A Bandwidth on Demand Model,"This paper assesses the performance of a network management scheme where network engineering (NE) is used to complement traffic engineering (TE) in a multi-layer setting where a data network is layered above an optical network. We present a TE strategy which is based on a multi-constraint optimization model consisting of finding bandwidth-guaranteed IP tunnels subject to contention avoidance minimization and bandwidth usage maximization constraints. The TE model is complemented by a NE model which uses a bandwidth trading mechanism to rapidly re-size and re-optimize the established tunnels (LSPs/lambdaSPs) under quality of service (QoS) mismatches between the traffic carried by the tunnels and the resources available for carrying the traffic. 
The resulting TE+NE strategy can be used to achieve bandwidth on demand (BoD) in emerging generation IP networks using a (G)MPLS- like integrated architecture in a cost effective way. We evaluate the performance of this hybrid strategy when routing, re-routing and re-sizing the tunnels carrying the traffic offered to a 23-node test network.",2006,0, 1976,Software Quality in Ladder Programming,"This paper aims to measure the software quality for programmable logic controllers (PLCs) especially in ladder programming. The proposed quality metrics involve the criteria of simplicity, reconfigurability, reliability, and flexibility. A fuzzy inference algorithm is developed to select the best controller design among different ladder programs for the same application. A single tone membership function is used to represent the quality metric per each controller. The fitness of each controller is represented by the minimum value of all evaluated criteria. Thereafter, a min-max fuzzy inference is applied to take the decision (which controller is the best). The developed fuzzy assessment algorithm is applied to a conveyor belt module connected to a PLC to perform a repeated sequence. The decision making to select the best ladder program is obtained using the fuzzy assessment algorithm. The obtained results affirmed the potential of the proposed algorithm to assess the quality of the designed ladder programs",2006,0, 1977,A Transmission Line Unit Protection Technique Based Combination Modulus by Using Mathematical Morphology,This paper presents a concept of combination modulus (CM) to solve the problem that the transient-based protection techniques may fail to detect faults under certain conditions. The CM features are analyzed and discussed in detail. A novel transmission line unit protection scheme is proposed by comparing transient current CM polarity. The mathematical morphology (MM) technique is used to extract the polarity features from fault-generated current wave signals propagating along transmission lines during a post-fault period. The simulation results of the ATP/EMTP software show that the reliability of the protection scheme proposed has been considerably improved.,2006,0, 1978,Research of Transient Stability Margin Affected by Single-phase Reclosing,"The reliability can be enhanced by using single-phase reclosing, but when reclose to permanent faults, it is another surge on system. So there is necessity to study on the influence of reclosing sequence on transient stability margin. In the case that transient and permanent fault is not effectively identified, the influence of sequence and time of single-phase reclosing on transient stability margin in Southern China Power Grid of 2005 year is studied using software FASTEST when single-phase fault is occurred on 500 kV Anshun - Tianshengqiao transmission line in this paper. Simulation results indicate that when the terminal without fault uses non-voltage detecting and recloses first instead of the two terminals use non-voltage detecting in turn, the transient angle, voltage and frequency stability margin can all be enhanced and the working condition of breaker can be improved. A method combined with fault location algorithm to modify the sequence of single-phase reclosing online to enhance transient stability and lower the surge when reclose to permanent fault is proposed.",2006,0, 1979,Voltage Sag Study for a Practical Industrial Distribution Network,"Voltage sags have become one of the major power quality concerns in recent years. 
To decide the suitability of mitigation methods, knowledge is needed about the expected number of sags as a function of special characteristics; this knowledge can be obtained through voltage sag analysis software, which is developed in the paper based on the method of fault position. The software has many functions; a detailed voltage sag study for a practical industrial distribution network by using it has been done in this paper. In order to assess the degree of fault position influence on voltage sags of the power system, an index named fault position sag coefficient (FPSC) is defined in the paper. This coefficient can be calculated out based on the results of voltage sag analysis software. The dangerous fault positions which cause serious voltage sags can be determined.",2006,0, 1980,A Model-based Approach to the Security Testing of Network Protocol Implementations,"Software is inherently buggy and those defects can lead to security breaches in applications. For more than a decade, buffer overflows have been the most common bugs found """"in the wild"""" and they often lead to critical security issues. Several techniques have been developed to defend against these types of security flaws, all with different rates of success. In this paper, we present a systematic approach for the automated testing of network protocol server implementations. The technique is based on established black-box testing methods (such as finite-state model-based testing and fault-injection) enhanced by the generation of intelligent, semantic-aware test cases that provide a more complete coverage of the code space. We also demonstrate the use of a model-based testing tool that can reliably detect vulnerabilities in server applications",2006,0, 1981,Forming Groups for Collaborative Learning in Introductory Computer Programming Courses Based on Students' Programming Styles: An Empirical Study,"This paper describes and evaluates an approach for constructing groups for collaborative learning of computer programming. Groups are formed based on students' programming styles. The style of a program is characterized by simple well known metrics, including length of identifiers, size and number of modules (functions/procedures), and numbers of indented, commented and blank lines. A tool was implemented to automatically assess the style of programs submitted by students. For evaluating the tool and approach used to construct groups, some experiments were conducted involving information systems students enrolled in a course on algorithms and data structures. The experiments showed that collaborative learning was very effective for improving the programming style of students, particularly for students that worked in heterogeneous groups (formed by students with different levels of knowledge of programming style)",2006,0, 1982,Automated Error-Prevention and Error-Detection Tools for Assembly Language in the Educational Environment,"Automated tools for error prevention and error detection exist for many high-level languages, but have been nonexistent for assembly-language programs, embedded programs in particular. We present new tools that improve the quality and reliability of assembly-language programs by helping the educator automate the arduous tasks of exposing and correcting common errors and oversights. These tools give the educator a user-friendly, but powerful means of completely testing student programs. 
The new tools that we have developed are the result of years of research and experience by the authors in testing and debugging students' programming assignments. During this time, we created a few preliminary versions of these automated tools, allowing us to test our students' projects in one fell swoop. These tools gave us the ability to catch stack errors and memory-access errors that we would not have been able to detect with normal testing. These tools considerably shortened the amount of testing time and allowed us to detect a larger group of errors",2006,0, 1983,An experimental framework for comparative digital library evaluation: the logging scheme,"Evaluation of digital libraries assesses their effectiveness, quality and overall impact. In this paper we present a novel, multi-level logging framework that will provide complete coverage of the different aspects of DL usage for user-system interactions. Based on this framework, we can analyse for various DL stakeholders the logging data according to their specific interests. In addition, analysis tools and a freely accessible log data repository will yield synergies and sustainability in DL evaluation and encourage a community for DL evaluation by providing for discussion on a common ground",2006,0, 1984,Pilot testing the DigiQUALTM protocol: lessons learned,"The association of research libraries (ARL) is developing the DigiQUALtrade protocol to assess the service quality provided by digital libraries (DLs). In 2005, statements about DL service quality were put through a two-step validation process with DL developers and then with users in an online survey.",2006,0, 1985,Software Estimation- An Introdution,"In many cases a successful project is one that meets a time and cost target that has been estimated by someone somewhere. The quality of this estimate will set the underlying probability of successfully completing the project. This presentation will discuss the basic factors that effect software estimation, and aims to show the benefits of formalising the estimation process to increase the understanding of key decisions that form the basis of an estimate. It will introduce the critical factors that ensure that we base our projects on information and assumptions that are clearly defined; easily understood and credible. It will also introduce the basic laws that govern software development, and the effect on project schedule.",2006,0, 1986,Building Statistical Test-Cases for Smart Device Software-An Example,"Statistical testing (ST) of software or logic-based components can produce dependability information on such components by yielding an estimate for their probability of failure on demand. An example of software-based components that are increasingly used within safety-related systems e.g. in the nuclear industry, are smart devices. Smart devices are devices with intelligence, capable of more than merely representing correctly a sensed quantity but of functionality such as processing data, self-diagnosis and possibly exchange of data with other devices. Examples are smart transmitters or smart sensors. If such devices are used in a safety-related context, it is crucial to assess whether they fulfil the dependability requirements posed on them to ensure they are dependable enough to be used within the specific safety-related context. This involves making a case for the probability of systematic failure of the smart device. This failure probability is related to faults present in the logic or software-based part of the device. 
In this paper we look at a technique that can be used to establish a probability of failure for the software part of a smart monitoring unit. This technique is """"statistical testing"""" (ST). Our aim is to share our own experience with ST and to describe some of the issues we have encountered so far on the way to perform ST on this device software.",2006,0,1751 1987,Mobile Computing instead of paper based documentation in German Rheumatology,"Our objective was the integration of mobile computing in our web based documentation software DocuMed.rh to improve process organisation and quality of care. We focused on self-administered standardized patient questionnaires which are implemented in DocuMed.rh. Validity of online obtained data and the capability of disabled patients to handle a Tablet PC were assigned. On a regularly scheduled visit 117 patients completed prearranged sets of self-administered questionnaires as a paper-pencil and an electronic version using a Tablet PC in a cross-over design. Patients experiences with the Tablet PC and history of computer/internet use were assessed. Positive ethics approval and signed patients consents were obtained. Though only 65% of the patients reported computer experiences no major problems with the Tablet PC occurred. Scores obtained by direct data entry on the Tablet PC did not differ significantly from the scores obtained by the paper-pencil questionnaires in the complete group and in subgroups. Application of self-administered questionnaires on the new medium Tablet PC is efficient and capable in patients with inflammatory rheumatic disease. Mobile obtained data are rapidly available and can easily be merged with clinical data, thereby contributing intensely to improved patient care.",2006,0, 1988,A Proposal and Empirical Validation of Metrics to Evaluate the Maintainability of Software Process Models,"Software measurement is essential for understanding, defining, managing and controlling the software development and maintenance processes and it is not possible to characterize the various aspects of development in a quantitative way without having a deep understanding of software development activities and their relationships The current competitive marketplace calls for the continuous improvement of processes and as consequence companies have to change their processes in order to adapt to these new emerging needs. It implies the continuous change of their software process models and therefore, it is fundamental to facilitate the evolution of these models by evaluating its easiness of maintenance (maintainability). In this paper we introduce a set of metrics for software process models and discuss how these can be used as maintainability indicators. In particular, we report the results of a family of experiments that assess relationships between the structural properties, as measured by the metrics, of the process models and their maintainability. As a result a set of useful metrics to evaluate the software process models maintainability, have been obtained",2006,0, 1989,The quality of design team factors on software effort estimation,"Over the past ten couple of years, there is a variety of effort models proposed by academicians and practitioners at early stage of software development life cycle. Some addressed that efforts could be predicted using lines of codes (LOC) and COCOMO, others emphasized that it could be made using function point analysis (FPA) or others. 
The study seeks to develop a model that estimates software effort by studying and analyzing small and medium scale application software. To develop such a model, 50 completed software projects are collected from a software company. With the sample data, design team factors are identified and extracted. By applying them to simple regression analyses, a prediction of software of effort estimates with accuracy of MMRE=9% was constructed. The results give several benefits. First, the estimation problems are minimized due to the simple procedure used in identifying those factors. Second, the predicted software projects are only limited to a specific environment rather than being based upon industry environment. We believe the accuracy of effort estimates can be improved. According to the results analyzed, the work shows that it is possible to build up simple and useful prediction model based on data extracted at the early stage of software development life cycle. We hope this model can provide valuable ideas and suggestions for project designers for planning and controlling software projects in near future",2006,0, 1990,Fault Injection-based Test Case Generation for SOA-oriented Software,"The concept of service oriented architecture (SOA) implies a rapid construction of a software system with components as published Web services. How to effectively and efficiently test and assess available Web services with similar functionalities published by different service providers remains a challenge. In this paper, we present a step-by-step fault injection-based automatic test case generation approach. Preliminary test results are also reported",2006,0, 1991,Binary Alpha-plane Assisted Motion Estimation of MPEG-4 Arbitrarily Shaped Video Objects,"In this paper, we propose a fast motion estimation algorithm of arbitrarily shaped video object in MPEG-4. The proposed algorithm incorporates the binary alpha-plane and the extended contour to predict accurately the motion vectors of boundary macroblocks so that the conventional fast motion estimation algorithms can be employed to search the motion vectors of opaque macro-blocks using the motion vectors of the neighboring boundary macro-blocks as the initial center. Experimental results show that the proposed algorithm requires low computation complexity while provides good motion compensation quality",2006,0, 1992,Experience-driven selective scan for 802.11 networks,"The current use of the IEEE 802.11 protocol does not fully meet the full requirements of real-time applications. During the handoff period the STA cannot receive traffic and the quality of these applications is therefore reduced. A significant cause of this latency is that the STA normally scans all possible channels before synchronizing one of them. Considering that normally several channels are empty and that the 802.11 infrastructure architecture provides for fixed access points, it is possible to reduce the overall latency by reducing the number of channels to be scanned. In this paper we show an algorithm for selectively scanning just one probable channel. If such a channel is not available, the algorithm selects a second candidate until it scans all channels. These probabilities depend on the movements inside that LAN of previous STA. We show motivations and specifications of the algorithm",2006,0, 1993,Fault Tolerant Job Scheduling in Computational Grid,"In large-scale grids, the probability of a failure is much greater than in traditional parallel systems [1]. 
Therefore, fault tolerance has become a crucial area in grid computing. In this paper, we address the problem of fault tolerance in term of resource failure. We devise a strategy for fault tolerant job scheduling in computational grid. Proposed strategy maintains history of the fault occurrence of resource in grid information service (GIS). Whenever a resource broker has job to schedule it uses the resource fault occurrence history information from GIS and depending on this information use different intensity of check pointing and replication while scheduling the job on resources which have different tendency towards fault. Using check pointing proposed scheme can make grid scheduling more reliable and efficient. Further, it increases the percentage of jobs executed within specified deadline and allotted budget, hence helping in making grid trustworthy. Through simulation we have evaluated the performance of the proposed strategy. The experimental results demonstrate that proposed strategy effectively schedule the grid jobs in fault tolerant way in spite of highly dynamic nature of grid",2006,0, 1994,"Fault Tolerance using ""Parallel Shadow Image Servers (PSIS)"" in Grid Based Computing Environment","This paper presents a critical review of the existing fault tolerance mechanism in grid computing and the overhead involved in terms of reprocessing or rescheduling of jobs, if in case a fault arisen. For this purpose we suggested the parallel shadow image server (PSIS) copying techniques in parallel to the resource manager for having the check points for rescheduling of jobs from the nearest flag, if in case the fault is detected. The job process is to be scheduled from the resource manager node to the worker nodes and then its' submitted back by the worker nodes in serialized form to the parallel shadow image servers from the worker nodes after the pre-specified amount of time, which we call the recent spawn or the flag check point for rescheduling or reprocessing of job. If the fault is arisen then the rescheduling is done from the recent check point and submitted to the worker node from where the job was terminated. This will not only save time but will improve the performance up to major extent",2006,0, 1995,A Method for Detecting and Measuring Architectural Layering Violations in Source Code,"The layered architecture pattern has been widely adopted by the developer community in order to build large software systems. The layered organization of software modules offers a number of benefits such as reusability, changeability and portability to those who are involved in the development and maintenance of such software systems. But in reality as the system evolves over time, rarely does the actual source code of the system conform to the conceptual horizontal layering of modules. This in turn results in a significant degradation of system maintainability. In order to re-factor such a system to improve its maintainability, it is very important to discover, analyze and measure violations of layered architecture pattern. In this paper we propose a technique to discover such violations in the source code and quantitatively measure the amount of non-conformance to the conceptual layering. The proposed approach evaluates the extent to which the module dependencies across layers violate the layered architecture pattern. 
In order to evaluate the accuracy of our approach, we have applied this technique to discover and analyze such violations to a set of open source applications and a proprietary business application by taking the help of domain experts wherever possible.",2006,0, 1996,24/7 Software Development in Virtual Student Exchange Groups: Redefining the Work and Study Week,"A concept of time zone driven, 24/7-week software development in a Virtual Student Exchange (VSX) environment is being defined, developed and applied to explore reliable and efficient continuous modes of work/study processes. The overall goal is to assess the suitability and benefits of this innovative approach to teaching and learning in order to increase the efficiency and effectiveness of these processes. This new methodology aims to address industry needs for training in international teaming, to enrich students' experience, and to improve the quality of education in the participating institutions. The techniques and tools discussed here create an integrated framework for international collaboration among teaming groups of students in practice and team oriented engineering education. This paper also aims to justify the need, merits, and feasibility of the virtual collaboration student exchange teaching program between educational institutions separated by three 8- hour time zones: the Faculty of Electronic Engineering of the Wroclaw University of Technology in Poland (WUT), the Faculty of Electrical and Computer Engineering at the University of Arizona, Tucson, USA (UA) and the Faculty of Engineering, Software Engineering Group at University of Technology, Sydney, Australia (UTS). The paper defines the proposed methodology, reviews the tools and processes involved, and finally reports preliminary results.",2006,0, 1997,The Fault Diagnosis of a Class of Nonlinear Stochastic Time-delay systems,"This paper presents a new fault detection algorithm for a class of nonlinear stochastic time-delay systems. Different from the classical fault detection design, a fault detection filter with an output observer and a consensus filter is constructed for fault detecting. Simulations are provided to show the efficiency of the proposed approach.",2006,0, 1998,Development of Intelligent Visual Inspection System (IVIS) for Bottling Machine,"This paper presents a research on developing an intelligent visual inspection system (IVIS) for bottling machine, focusing on the development of image processing framework for defect detection. The objective of the research is to contribute a method on modeling, integrating and enhancing IVIS for the process of quality control in industrial area. IVIS application for quality control was studied using plastic bottles on a production line simulation. An experiment had done by using developed software and special equipments such as conveyor belt, lighting source, and a Web camera (Webcam) to capture the image. The experiment result shows that the system is accurate enough to detect moving object on the speed at 106 rpm with the accuracy of the image acquisition is 94.264%.",2006,0, 1999,SNP Data Consulting Program,"In the post genome era, considerable effort has been put into genetic association study with single nucleotide polymorphisms (SNPs) to investigate genes affecting traits, for example diseases and response to drugs. Although various software tools for SNP association study read plain text files as input data, their formats is not standardized. Manual data conversion may cause incorrect input. 
In addition, validity of analysis may be lost by experimental fault and by data not under assumption of analysis method. To detect various errors in input data, we implemented 19 rules as SNP data consulting program. The program can also infer first cause of error and then suggest how user should correct it. We demonstrate the program is effective not only for in-house data but also for published data where errors are expected to have been removed. With this program, biologist would be able to perform intended and valid analysis",2006,0, 2000,Considering Both Failure Detection and Fault Correction Activities in Software Reliability Modeling,"Software reliability is widely recognized as one of the most significant aspects of software quality and is often determined by the number of software uncorrected faults in the system. In practice, it is essential for fault correction prediction, because this correction process consumes a heavy amount of time and resources to predict whether reliability goals have been achieved. Therefore, in this paper we discuss a general framework of the modeling of the failure detection and fault correction process. Under this general framework, we not only verify the existing non-homogeneous poisson process (NHPP) models but also derive several new NHPP models. In addition, we show that these approaches cover a number of well-known models under different conditions. Finally, numerical examples are shown to illustrate the results of the integration of the detection and correction processes",2006,0, 2001,Efficient Mutant Generation for Mutation Testing of Pointcuts in Aspect-Oriented Programs,"Fault-based testing is an approach where the designed test data is used to demonstrate the absence of a set of prespecified faults, typically being frequently occurring faults. Mutation testing is a fault-based testing technique used to inject faults into an existing program, i.e., a variation of the original program and see if the test suite is sensitive enough to detect common faults. Aspect-oriented programming (AOP) provides new modularization of software systems by encapsulating crosscutting concerns. AspectJ, a language designed to support AOP uses abstractions like pointcuts, advice, and aspects to achieve AOP's primary functionality. Developers tend to write pointcut expressions with incorrect strength, thereby selecting additional events than intended to or leaving out necessary events. This incorrect strength causes aspects, the set of crosscutting concerns, to fail. Hence there is a need to test the pointcuts for their strength. Mutation testing of pointcuts includes two steps: creating effective mutants (variations) of a pointcut expression and testing these mutants using the designed test data. The number of mutants for a pointcut expression is usually large due to the usage of wildcards. It is tedious to manually identify effective mutants that are of appropriate strength and resemble closely the original pointcut expression. Our framework automatically generates mutants for a pointcut expression and identifies mutants that resemble closely the original expression. Then the developers could use the test data for the woven classes against these mutants to perform mutation testing.",2006,0, 2002,Assessment of Data Diversity Methods for Software Fault Tolerance Based on Mutation Analysis,"One of the main concerns in safety-critical software is to ensure sufficient reliability because proof of the absence of systematic failures has proved to be an unrealistic goal. 
Fault-tolerance (FT) is one method for improving reliability claims. It is reasonable to assume that some software FT techniques offer more protection than others, but the relative effectiveness of different software FT schemes remains unclear. We present the principles of a method to assess the effectiveness of FT using mutation analysis. The aim of this approach is to observe the power of FT directly and use this empirical process to evolve more powerful forms of FT. We also investigate an approach to FT that integrates data diversity (DD) assertions and TA. This work is part of a longer term goal to use FT in quantitative safety arguments for safety critical systems.",2006,0, 2003,Exploiting Reference Frame History in H.264/AVC Motion Estimation,"Motion estimation is the most crucial and time-consuming part of the H.264/AVC video compression standard. The introduction of motion search of variable block sizes in multiple reference frames has significantly increased the computational complexity. This paper proposes a fast motion estimation algorithm, most used reference first (MURF), based on the usage history of reference frames. The algorithm rearranges the search order of the reference frames based on the selection probability of the reference frames in coding the current frame. The experimental results show that the proposed algorithm, when compared to the best algorithm in H.264 reference software, achieves on average 60% reduction in search points and 52% reduction in motion estimation time with comparable video quality and negligible increase in bit-rate and memory",2006,0, 2004,Media Streaming via TFRC: An Analytical Study of the Impact of TFRC on User-Perceived Media Quality,"
",2006,0, 2005,Error and Rate Joint Control for Wireless Video Streaming,"In this paper, a precise error-tracking scheme for robust transmission of real-time video streaming over wireless IP network is presented. By utilizing negative acknowledgements from feedback channel, the encoder can precisely calculate and track the propagated errors by examining the backward motion dependency. With this precise tracking, the error-propagation effects can be terminated completely by INTRA refreshing the affected macroblocks. In addition, due to lots of INTRA macroblocks refresh will entail a large increase of the output bit rate of a video encoder, several bit rate reduction techniques are proposed. They can be jointly used with INTRA refresh scheme to obtain uniform video quality performance instead of only changing the quantization scale. The simulations show that both control strategies yield significant video quality improvements in error-prone environments",2006,0, 2006,Detection and Interpretation of Text Information in Noisy Video Sequences,"Text superimposed on the video frames provides supplemental but important information for video indexing and retrieval. The detection and recognition of text from video is thus an important issue in automated content-based indexing of visual information in video archives. Text of interest is not limited to static text. They could be scrolling in a linear motion where only part of the text information is available during different frames of the video. The problem is further complicated if the video is corrupted with noise. An algorithm is proposed to detect, classify and segment both static and simple linear moving text in complex noisy background. The extracted texts are further processed using averaging to attain a quality suitable for text recognition by commercial optical character recognition (OCR) software",2006,0, 2007,NGL03-4: An Interoperability Mechanism for Seamless Interworking between WLAN and UMTS-HSDPA Networks,"Future wireless communication systems are expected to provide seamless inter-working between existing and 3G radio networks providing the user with a wide variety of services, while maintaining a large area of coverage and minimum user QoS requirements. In this paper, a new mechanism that implements interoperability between HSDPA and WLAN systems is proposed. The proposed interoperability mechanism is activated via the optimization of a suitably defined cost function, which takes into account all the appropriate system level parameters that trigger the interoperability process. The performance evaluation of the proposed scheme is assessed by means of a software - based simulation platform. A number of simulations have been carried out in order to demonstrate the performance enhancements achieved by the proposed mechanism in terms of user throughput, handovers statistics, and system throughput.",2006,0, 2008,OPN09-03: GMPLS Signaling Feedback for Encompassing Physical Impairments in Transparent Optical Networks,"Next generation GMPLS networks will be characterized by domains of transparency, in which the end-to-end optical signal quality has to be guaranteed. Currently GMPLS does not take into account the evaluation of physical impairments. Thus just limited size domains of transparency, where physical impairments can be neglected, are practically achievable. This study utilizes GMPLS signaling protocol extensions to encompass the optical layer physical impairments. 
The proposed approach allows to detect during the signaling phase whether lightpaths cannot be set up because of unacceptable optical signal quality. In this case successive set up attempts are performed, selecting the alternative routes with three schemes which exploit the feedback information of the signaling messages. Numerical results show that the proposed extensions are able to significantly decrease the lightpath blocking probability due to physical impairments in both static and dynamic conditions.",2006,0, 2009,QRP01-6: Resource Optimization Subject to a Percentile Response Time SLA for Enterprise Computing,"We consider a set of computer resources used by a service provider to host enterprise applications subject to service level agreements. We present an approach for resource optimization in such an environment that minimizes the total cost of computer resources used by a service provider for an enterprise application while satisfying the QoS metric that the response time for executing service requests is statistically bounded. That is, gamma% of the time the response time is less than a pre-defined value. This QoS metric is more realistic than the mean response time typically used in the literature. Numerical results show the applicability of the approach and validate its accuracy.",2006,0, 2010,QRPp1-1: User-Level QoS Assessment of a Multipoint-to-Multipoint TV Conferencing Application over IP Networks,"This paper studies a multipoint-to-multipoint TV conferencing application over IP networks and assesses its user-level QoS with two types of QoS mapping. In utilizing the application, the user perceives quality of the communication with every other conferee; we refer to the quality as individual user-level QoS to a conferee. According to the individual user-level QoS, he/she totally judges the quality of the application, which is refer to as overall user-level QoS for the user. The overall user-level QoS of the application can be affected by the individual one to each conferee; therefore, it is difficult to clarify QoS parameters which affect the overall user-level QoS. This paper tackles the problem by utilizing two types of QoS mapping: mapping between the two kinds of user-level QoS and that between user-level QoS and application-level QoS. In this paper, an experiment with a simple task by three conferees was carried out. The user-level QoS is assessed by one of the psychometric methods. As a result of the two types of QoS mapping, we find two interesting results. First, when a user communicates with the other two conferees, the lower individual user-level QoS has more effect on the overall user-level QoS than the higher one. Second, the individual user-level QoS can depend on not only its application-level QoS but also that of the other conferees.",2006,0, 2011,WLC12-5: A TDMA-Based MAC Protocol for Industrial Wireless Sensor Network Applications using Link State Dependent Scheduling,"Existing TDMA-based MAC protocols for wireless sensor networks are not specifically built to consider the harsh conditions of industrial environments where the communication channel is prone to signal fading. We propose a TDMA-based MAC protocol for wireless sensor networks built for industrial applications that uses link state dependent scheduling. In our approach, nodes gather samples of the channel quality and generate prediction sets from the sample sets in independent slots. 
Using the prediction sets, nodes only wake up to transmit/receive during scheduled slots that are predicted to be clear and sleep during scheduled slots that may potentially cause a transmitted signal to fade. We simulate our proposed protocol and compare its performance with a general non-link state dependent TDMA protocol and a CSMA protocol. We found that our protocol significantly improves packet throughput as compared to both the general non-link state dependent TDMA protocol and CSMA protocol. We also found that in conditions which are not perfect under our assumptions, the performance of our protocol degrades gracefully.",2006,0, 2012,P2D-3 Objective Performance Testing and Quality Assurance of Medical Ultrasound Equipment,"The goal of this study was to develop a test protocol that contains the minimum set of performance measurements for predicting the clinical performance of ultrasound equipment and that is based on objective assessments by computerized image analysis. The post-processing look-up-table (LUT) is measured and linearized. The elevational focus (slice thickness) of the transducer is estimated and the in plane transmit focus is positioned at the same depth. The developed tests are: echo level dynamic range (dB), contrast resolution (i.e., """"gamma"""" of display, #gray levels/dB) and -sensitivity, overall system sensitivity, lateral sensitivity profile, dead zone, spatial resolution, and geometric conformity of display. The concept of a computational observer is used to define the lesion signal-to-noise ratio, SNRL (or Mahalanobis distance), as a measure for contrast sensitivity. The whole performance measurement protocol has been implemented in software. Reports are generated that contain all the information about the measurements and results, such as graphs, images and numbers. The software package may be viewed and a run-time version downloaded at the website: http://www.qa4us.eu",2006,0, 2013,An integrated wired-wireless testbed for distance learning on networking,"This paper addresses a remote testbed for distance learning designed to allow the investigation of various issues related to QoS management in wired/wireless networks used to support real-time applications. Several aspects, such as traffic handling in routers, congestion control and node mobility management, can be experimentally assessed. The testbed comprises various operating modes that the user can select to configure the traffic flows and modify the operational conditions of the network. A peculiarity of the testbed is node mobility support, which allows problems related to handoff and distance to be tackled. The testbed provides for both on-line measurements, through software modules which allow the user to monitor the network while it is operating, and off-line analysis of network behavior through log file inspection",2006,0, 2014,Workflow Quality of Service Management using Data Mining Techniques,"Organizations have been aware of the importance of quality of service (QoS) for competitiveness for some time. It has been widely recognized that workflow systems are a suitable solution for managing the QoS of processes and workflows. The correct management of the QoS of workflows allows for organizations to increase customer satisfaction, reduce internal costs, and increase added value services. In this paper we show a novel method, composed of several phases, describing how organizations can apply data mining algorithms to predict the QoS for their running workflow instances. 
Our method has been validated using experimentation by applying different data mining algorithms to predict the QoS of workflow",2006,0, 2015,Fast Mode Decision for H.264/AVG using Mode and RD Cost Prediction,"In an H.264/AVC encoder, each macroblock can be coded in one of a large number of coding modes, which requires a huge computational effort. In this paper, we present a new method to speed up the mode decision process using RD cost prediction in addition to mode prediction. In general, video coding exploits spatial and temporal redundancies between video blocks, in particular temporal redundancy is a crucial key to compress a video sequence with little loss of image quality. The proposed method determines the best coding mode of a given macroblock by predicting the mode and its rate-distortion (RD) cost from neighboring MBs in time and space. Compared to the H.264/AVC reference software, the simulation results show that the proposed method can save about 60% of the number of RD cost computations resulting in up to 57% total encoding time reduction with up to 3.5% bit rate increase at the same PSNR.",2006,0, 2016,Hiddenness Control of Hidden Markov Models and Application to Objective Speech Quality and Isolated-Word Speech Recognition,"Markov models are a special case of hidden Markov models (HMM). In Markov models the state sequence is visible, whereas in a hidden Markov model the underlying state sequence is hidden and the sequence of observations is visible. Previous research on objective techniques for output-based speech quality (OBQ) showed that the state transition probability matrix A of a Markov model is capable of capturing speech quality information. On the other hand similar experiments using HMMs showed that the observation symbol probability matrix B is more effective at capturing the speech quality information. This shows that the speech quality information in A matrix of a Markov model shifts to the B matrix of an HMM. An HMM can have varying degrees of hiddenness, which can be intuitively guessed from the entries of its observation probability matrix B for the discrete models. In this paper, we propose a visibility measure to assess the hiddenness of a given HMM, and also a method to control the hiddenness of a discrete HMM. We test the advantage of implementing hiddenness control in output-based objective speech quality (OBQ) and isolated-word speech recognition. Our test results suggest that hiddenness control improves the performance of HMM-based OBQ and might be useful for speech-recognition as well.",2006,0, 2017,Low Complexity Scalable Video Coding,"In this paper, we consider scalable video coding (SVC) which has higher complexity than H.264/AVC since it has spatial, temporal and quality scalability in addition to H.264/AVC functionality. Furthermore, inter-layer prediction and layered coding for spatial scalability make motion estimation and mode decision more complex. Therefore, we propose low complexity SVC schemes by using current developing SVC standard. It is archived by prediction method such as skip, direct, inter-layer MV prediction with fast mode and motion vector (MV) estimation at enhancement layer. In order to increase the performance of inter-layer MV prediction, combined MV interpolation is applied with adjustment of prediction direction. Additionally, fast mode and MV estimation are proposed from structural properties of motion-compensated temporal filtering (MCTF) to elaborate predicted macro block (MB) mode and MV. 
From the experimental results, proposed method has comparable performance to reference software model with significant lower complexity.",2006,0, 2018,A Distributed Fault-Tolerant Algorithm for Event Detection Using Heterogeneous Wireless Sensor Networks,"Distributed event detection using wireless sensor networks has received growing interest in recent years. In such applications, a large number of inexpensive and unreliable sensor nodes are distributed in a geographical region to make firm and accurate local decisions about the presence or absence of specific events based on their sensor readings. However, sensor readings can be unreliable, due to either noise in the sensor readings or hardware failures in the devices, and may cause nodes to make erroneous local decisions. We present a general fault-tolerant event detection scheme that allows nodes to detect erroneous local decisions based on the local decisions reported by their neighbors. This detection scheme does not assume homogeneity of sensor nodes and can handle cases where nodes have different accuracy levels. We prove analytically that the derived fault-tolerant estimator is optimal under the maximum a posteriori (MAP) criterion. An equivalent weighted voting scheme is also derived. Further, we describe two new error models that take into account the neighbor distance and the geographical distributions of the two decision quorums. These models are particularly suitable for detection applications where the event under consideration is highly localized. Our fault-tolerant estimator is simulated using a network of 1024 nodes deployed randomly in a square region and assigned random probability of failures",2006,0, 2019,Using Data Confluences in a Distributed Network with Social Monitoring to Identify Fault Conditions,This paper discusses the potential benefits of socially attentive monitoring in multi-agent systems. A multi-agent system with this feature is shown to detect and identify when an individual within the network fails to operate correctly. The system that has been developed is capable of detecting a range of common faults such as stuck at zero by allowing communication between peers within a software agent network. Further adaption to the model allows an improvement in system response without introduction of specific control design algorithms,2006,0, 2020,Web-based colorimetric sensing for food quality monitoring,"The work presented in this paper outlines a novel technique for remote food quality control over the Internet by using web based image processing. The colour change of a colorimetric sensor was captured with a wireless camera and the data was transmitted to a PC, which uploaded the information to the web. A software system for colour analysis was developed to process the data locally. Quantitative colour information which reflects the quality of the food product can be deduced remotely using this technique. This novel technique was applied to the monitoring of fish spoilage in packaged fish. The on-package sensors detected the release of spoilage products, typically the amines, from the fish and gave a visible colour change which was captured by the wireless camera. The colour information obtained through the web, when processed remotely using the in-house developed software, accurately reflected the state of the product. 
This technology reduces the labour requirements in food quality monitoring and can be applied to all colorimetric sensors.",2006,0, 2021,Sensor Validation within a Pervasive Medical Environment,"Pervasive patient sensing devices generate large quantities of wireless data. This data needs to be transmitted to central medical servers or mobile practitioners for real-time analysis. Various factors can affect the """"quality"""" of our patient data. These include: wireless interference (e.g. access point or radio failure) and/or Sensor failure. Vital patient data packets may be lost resulting in an incorrect diagnosis. Patient sensor failure is a reality. It is imperative that sensor failure is detected as soon as possible to ensure a higher QoS is provided. Presented is a Data Management System-Validation Model (DMS-VM). It is designed to manage wireless interference and sensor failure in a controlled and intelligent manner. The DMS-VM samples multiple patient vital sign readings and intelligently filters this data to verify its integrity based on an agent middleware platform. This novel approach provides higher QoS within context aware medical environments. The DMS-VM experimental prototype is presented.",2006,0, 2022,The Monitoring Data Archiving Service for ATLAS,"ATLAS is one the four experiments being assembled at the CERN Large Hadron Collider (LHC). The complexity of the ATLAS experiment and the high event rate make the monitoring system an essential tool to assess the status of the hardware and the quality of the data while they are being acquired. It is important that all the monitoring data, mainly ROOT histograms, are saved to a permanent storage system, so that they can be used later for studying the time evolution of the experimental conditions or as reference for the future runs. The presentation will show the solution proposed to this issue within the ATLAS Trigger and Data Acquisition (TDAQ) project. Many GB of monitoring data are expected per run. At the end of each run, the Monitoring Data Archiving service (MDA) retrieves all the available histograms from the Online Histogramming service (a temporary storage provided within the ATLAS TDAQ software framework) and writes them into ROOT files, which in turn will be stored on tapes. The Collection and Cache service (CoCa) manages a disk based cache in order to guarantee a fast access to the histograms produced during the last runs. Furthermore, it collects many small files and produces big archives to be stored on tape, thus enhancing the efficiency in the tape usage. Initially meant as a component of MDA, CoCa has evolved as an independent package that can be used by any ATLAS online application.",2006,0, 2023,Probabilistic ISOCS Uncertainty Estimator: Application to the Segmented Gamma Scanner,"Traditionally, high resolution gamma-ray spectroscopy (HRGS) has been used as a very powerful tool to determine the radioactivity of various items, such as samples in the laboratory, waste assay containers, or large items in-situ. However, in order to properly interpret the quality of the result, an uncertainty estimate must be made. This uncertainty estimate should include the uncertainty in the efficiency calibration of the instrument, as well as many other operational and geometrical parameters. Efficiency calibrations have traditionally been made using traceable radioactive sources. 
More recently, mathematical calibration techniques have become increasingly accurate and more convenient in terms of time and effort, especially for complex or unusual configurations. Whether mathematical or source-based calibrations are used, any deviations between the as-calibrated geometry and the as-measured geometry contribute to the total measurement uncertainty (TMU). Monte Carlo approaches require source, detector, and surrounding geometry inputs. For non-trivial setups, the Monte Carlo approach is time consuming both in terms of geometry input and CPU processing. Canberra Industries has developed a tool known as In-Situ Object Calibration Software (ISOCS) that utilizes templates for most common real life setups. With over 1000 detectors in use with this product, the ISOCS software has been well validated and proven to be much faster and acceptably accurate for many applications. A segmented gamma scanner (SGS) template is available within ISOCS and we use it here to model this assay instrument for the drummed radioactive waste. Recently, a technique has been developed which uses automated ISOCS mathematical calibrations to evaluate variations between reasonably expected calibration conditions and those that might exist during the actual measurement and to propagate them into an overall uncertainty on the final efficiency. This includes variations in container wall thickness - and diameter, sample height and density, sample non-uniformity, sample-detector geometry, and many other variables, which can be specified according to certain probability distributions. The software has a sensitivity analysis mode which varies one parameter at a time and allows the user to identify those variables that have the largest contribution to the uncertainty. There is an uncertainty mode which uses probabilistic techniques to combine all the variables and compute the average efficiency and the uncertainty in that efficiency, and then to propagate those values with the gamma spectroscopic analysis into the final result. In the areas of waste handling and environmental protection, nondestructive assay by gamma ray scanning can provide a fast, convenient, and reliable way of measuring many radionuclides in closed items. The SGS is designed to perform accurate quantitative assays on gamma emitting nuclides such as fission products, activation products, and transuranic nuclides. For the SGS, this technique has been applied to understand impacts of the geometry variations during calibration on the efficiency and to estimate the TMU.",2006,0, 2024,Design and Analyses on Permanent Magnet Actuator for Mining Vacuum Circuit Breaker,"A novel permanent magnet actuator (PMA) for mining vacuum circuit breaker (VCB) is presented in this paper. Which is monostable, has two coils and able to break the VCB when the fault of low-voltage happened. And can detect the voltage variation in main circuit at each instant. When the fault of low-voltage happened, it can automatically break without additional detection and control of apparatus. Moreover, the different states and parameters of breaking and closing courses have been numerical computed and analyzed by adopting ANSOFT software. Based on the simulation results, the prototype is manufactured and assembled in the mining VCB. 
The feasibility and validity of the proposed PMA have been proved by testing results",2006,0, 2025,Comparative Study of Various Artificial Intelligence Techniques to Predict Software Quality,"Software quality prediction models are used to identify software modules that may cause potential quality problems. These models are based on various metrics available during the early stages of software development life cycle like product size, software complexity, coupling and cohesion. In this survey paper, we have compared and discussed some software quality prediction approaches based on Bayesian belief network, neural networks, fuzzy logic, support vector machine, expectation maximum likelihood algorithm and case-based reasoning. This study gives better comparative insight about these approaches, and helps to select an approach based on available resources and desired level of quality.",2006,0, 2026,The 3LGM2-Tool to Support Information Management in Health Care,"In industrialized as well as in developing countries the driving force for healthcare has recently been the trend towards a better coordination of care. The focus has been changed from isolated procedures in a single healthcare institution (e.g. a hospital or a general practice) to the patient-oriented care process spreading over institutional boundaries. This should lead to a shift towards better integrated and shared care. Health care professionals in different departments of a hospital but moreover in a region - and in many cases even worldwide - have to cooperate in order to achieve health for the patient. [1] Cooperation needs an adequate system for communicating and processing of information, i.e., an information system, which is that socio-technical subsystem of a (set of) health care institution(s), which presents information at the right time, in the right place to the right people [2, 3]. Hospital Information Systems (HIS) as well as regional Health Information Systems (rHIS) (consisting of different institutional information systems) are constructed like a (complex of) building(s) out of different and probably heterogeneous bricks and components. Thus cooperation depends especially on the availability of adequate communication links between the institutional information systems and their components. Besides technical problems of communication links there are a lot of complex problems of connecting heterogeneous software components of different vendors and with different database schemata to be solved. Especially the proper application of communication standards like HL7 and DICOM [4-6] needs proper planning and supervision as part of a systematic information management. Like an architect the information manager needs a blueprint or model for the information system's architecture respectively the enterprise architecture [7-9]. In [10] we proposed the 3LGM2 as a meta model for modeling Information Systems (IS). 3LGM2 has been designed to describe IS by concepts on three layers. The domain layer consists of enterprise functions and entity types, the logical tool layer focuses on application components and the physical tool layer describes physical data processing components. In contrast to other approaches a lot of inter-layer-relationships exist. 3LGM2 is defined using the Unified Modeling Language (UML). The meta model has been supplemented by the 3LGM2 tool [12]. Using 3LGM2 as the ontological basis this tool enables information managers to graphically design even complex IS. 
It assists information managers similarly to Computer Aided Design tools (CAD) supporting architects. The tool provides means for analyzing a HIS model and thus for assessing the HIS quality. The talk will focus on the 3LGM2 tool and its most important features. It will be shown, how a model can be created by graphical user interaction as well as by importing data from other sources. It will be illustrated how the tool's analyzing features support information managers doing their job. Examples will be taken from 3LGM2 models of the information system of the Leipzig University Hospital and the regional health information system of Saxony, a federal state of Germany.",2006,0, 2027,Challenges in System on Chip Verification,"The challenges of system on a chip (SoC) verification is becoming increasingly complex as submicron process technology shrinks die size, enabling system architects to include more functionality in a single chip solution. A functional defect refers to the feature sets, protocols or performance parameters not conforming to the specifications of the SoC. Some of the functional defects can be solved by software workarounds but some require revisions of silicon. The revision of silicon not only costs millions of dollars but also impacts time to market, quality, customer commitments. Working silicon for the first revision of the SoC requires a robust module, chip and system verification strategy to uncover the logical and timing defects before tapeout. Different techniques are needed at each level (module, chip and system) to complete verification. In addition verification should quantify with a metric at every hierarchy to assess functional holes and address it. Verification metric can be a combination of code coverage, functional coverage, assertion coverage, protocol coverage, interface coverage and system coverage. A successful verification strategy also requires the test bench to be scalable, configurable, support reuse of functional tests, integration with tools and finally linkage to validation. The scope of this paper will discuss the verification strategy and pitfalls used in verification strategy and finally make recommendations for successful strategy.",2006,0, 2028,Debug Support for Scalable System-on-Chip,"On-chip debug is an important technique to detect and locate the faults in the practical software applications. Scalability and reusability are the essential features of system-on-chip (SoC). Therefore, the debug architecture should meet the requirement of those features. Furthermore, it is necessary for applications developers to communication with the SoC chip on-line. In this paper, we present the novel debug architecture to solve above problems. The debug architecture has been implemented in a typical SoC chip. The results of performance analysis show that the debug architecture has high performance at the cost of few resources and area.",2006,0, 2029,Transient Error Detection in Embedded Systems Using Reconfigurable Components,"In this paper, a hardware control flow checking technique is presented and evaluated. This technique uses re configurable of the shelf FPGA in order to concurrently check the execution flow of the target micro processor. The technique assigns signatures to the main program in the compile time and verifies the signatures using a FPGA as a watchdog processor to detect possible violation caused by the transient faults. 
The main characteristic of this technique is its ability to be applied to any kind of processor architecture and platforms. The low imposed hardware and performance overhead by this technique makes it suitable for those applications in which cost is a major concern, such as industrial applications. The proposed technique is experimentally evaluated on an 8051 microcontroller using software implemented fault injection (SWIFI). The results show that this technique detects about 90% of the injected control flow errors. The watchdog processor occupied 26% of an Altera Max-7000 FPGA chip logic cells. The performance overhead varies between 42% and 82% depending on the workload used.",2006,0, 2030,Preventing Cross Site Request Forgery Attacks,"The Web has become an indispensable part of our lives. Unfortunately, as our dependency on the Web increases, so does the interest of attackers in exploiting Web applications and Web-based information systems. Previous work in the field of Web application security has mainly focused on the mitigation of cross site scripting (XSS) and SQL injection attacks. In contrast, cross site request forgery (XSRF) attacks have not received much attention. In an XSRF attack, the trust of a Web application in its authenticated users is exploited by letting the attacker make arbitrary HTTP requests on behalf of a victim user. The problem is that Web applications typically act upon such requests without verifying that the performed actions are indeed intentional. Because XSRF is a relatively new security problem, it is largely unknown by Web application developers. As a result, there exist many Web applications that are vulnerable to XSRF. Unfortunately, existing mitigation approaches are time-consuming and error-prone, as they require manual effort to integrate defense techniques into existing systems. In this paper, we present a solution that provides a completely automatic protection from XSRF attacks. More precisely, our approach is based on a server-side proxy that detects and prevents XSRF attacks in a way that is transparent to users as well as to the Web application itself. We provide experimental results that demonstrate that we can use our prototype to secure a number of popular open-source Web applications, without negatively affecting their behavior",2006,0, 2031,A Heuristic Approach for Predicting Fault Locations in Distribution Power Systems,"The first step in restoring systems after a fault is detected, is determining the fault location. The large number of candidate locations for the fault makes this a complex process. Knowledge based methods have the capability to accomplish this quickly and reliably. In this paper, a heuristic approach has been used to predict potential fault locations. A software tool implements the heuristic rules and a genetic algorithm based search. The implementation and evaluation results of this tool have been presented.",2006,0, 2032,Distribution Network Restoration Using Sequential Monte Carlo Approach,"The paper describes a developed procedure for modeling restoration after a fault and calculating associated switching time using sequential Monte-Carlo approach. The procedure is based on the experience and knowledge of distribution system operators, thus represents an attempt in regarding the realistic issues influencing the duration of fault isolation and customer restoration, i.e. the actual switching and eventually repair time needed for restoration of each affected customer. 
In addition to expected duration of switching, the associated probability density functions can be computed. Further more, Visual Basic software using the proposed procedure has been developed and some examples of switching time expectations and probability distributions calculations are presented",2006,0, 2033,The Application of the System Parameter Fusion Principle to Assessing University Electronic Library Performance,"Modern technology provides a great amount of information. But for computer monitoring systems or computer control systems, in order to have the situation in hand, we need to reduce the number of variables to one or two parameters, which express the quality and/or security of the whole system. In this paper, the authors introduce the system parameter fusion principle put forward by the third author and present how to apply it to assessing university electronic library performance combining with the Delphi technique and AHP",2006,0, 2034,Power Quality And Harmonic Loads,This paper presents the results of a research done on the harmonic distortion of load current and voltage in low voltage distribution networks owing to the extensive use of peak-detecting type capacitor-filter rectifier systems (often the front-end of non linear electronic loads). A classical case to be studied is the compact fluorescent lamps (CFLs) without power factor correction being implemented. It has become evident that the use of CFLs alone is more or less accountable to the non-linear loads generated in rural electrification schemes in Sri Lanka. A case study was also done by the authors on a chosen low voltage distribution network to analyze the impact of CFL loads and the capabilities of the network with regard to IEEE 519 regulations. The methods of analysis and the results are presented in this paper and PSCAD software has been used to simulate the system. The commercial aspects of using CFLs and energy saving have been discussed. Harmonic effects on the distribution transformers are studied as applied to the line segment considered in the study,2006,0, 2035,Predicting Qualitative Assessments Using Fuzzy Aggregation,"Given the complexity and sophistication of many contemporary software systems, it is often difficult to gauge the effectiveness, maintainability, extensibility, and efficiency of their underlying software components. A strategy to evaluate the qualitative attributes of a system's components is to use software metrics as quantitative predictors. We present a fusion strategy that combines the predicted qualitative assessments from multiple classifiers with the anticipated outcome that the aggregated predictions are superior to any individual classifier prediction. Multiple linear classifiers are presented with different, randomly selected, subsets of software metrics. In this study, the software components are from a sophisticated biomedical data analysis system, while the external reference test is a thorough assessment of both complexity and maintainability, by a software architect, of each system component. The fuzzy integration results are compared against the best individual classifier operating on a software metric subset",2006,0, 2036,Improved Assertion Lifetime via Assertion-Based Testing Methodology,"Assertions-based verification (ABV) has been widely used in digital design validation. Assertions are HDL-syntaxed representation of design specification and used as a functional error detection mechanism. 
During the process of designing with HDLs, assertions are imported which could fire in case of violation during testbench run. Although these assertions are mostly used during simulation and for verifying the functional correctness of the design, but as they illustrate the specifications of a design, it is likely that their lifetime could be extended by embedding them in the chip to detect low level faults like stuck-at faults. In this paper, we introduce a new automatable assertion-based on-line testing methodology. Experimental results show that the synthesis of assertions into a chip, and then using them for online testing, can provide an acceptable coverage for stuck-at faults.",2006,0, 2037,Experimental Evaluation of Three Concurrent Error Detection Mechanisms,"This paper presents an experimental evaluation of the effectiveness of three hardware-based control flow checking mechanisms, using software-implemented fault injection (SWIFI) method. The fault detection technique uses reconfigurable of the shelf FPGAs to concurrently check the execution flow of the target program. The technique assigns signatures to the target program in the compile time and verifies the signatures using a FPGA as a watchdog processor to detect possible violation caused by the transient faults. A total of 3000 faults were injected in the experimental embedded system, which is based on an 8051 microcontroller, to measure the error detection coverage. The experimental results show that these mechanisms detect about 90% of transient errors, injected by software implemented method.",2006,0, 2038,A Failure/Fault Diagnoser Model for Autonomous Agents under Presence of Disturbances,"In this paper, we provide our preliminary results on failure/fault analysis for an autonomous agent whose behavior is influenced by disturbances from its environment. It is assumed that failures are unobservable and only detected through some observable symptoms. Furthermore, faults may be observable directly, but the conditions leading to them are unknown a priori. Both the agent and its environment may contribute to any given failure or fault. The main results of this paper relate to the development of a diagnoser model which is built by a parallel composition of Petri net models of the underlying components. This model includes both the normative and disruptive behavior of the agent and its perception of the environment. The uncontrollability of the environment leads to a non-deterministic diagnosis behavior. The likelihood of each possible state of the agent's diagnoser is updated every time that a new controllable action is taken or new sensory information is received. At each updating, if the likelihood of a failure or fault state is higher than a specified threshold, then an alarm is raised indicating a disruptive event.",2006,0, 2039,Task Graph Generation,"There are many intelligent design tools available, which are being used at the highest level of abstraction. These tools are very effective in solving the hardware/software co-synthesis problems. These tools require the input specification of the problem to be in the form of one or more task graph. Currently, one major problem is that many real time embedded system designs are specified in high level programming languages, not task graphs. The designer can manually transform the input specification from the used computer language to a task graph form, but this job has tedious and error prone problems. 
The task graph generation described in this paper reduces the potential for error and time required by automating the task graph process",2006,0, 2040,Multi-processor system design with ESPAM,"For modern embedded systems, the complexity of embedded applications has reached a point where the performance requirements of these applications can no longer be supported by embedded system architectures based on a single processor. Thus, the emerging embedded System-on-Chip platforms are increasingly becoming multiprocessor architectures. As a consequence, two major problems emerge, i.e., how to design and how to program such multiprocessor platforms in a systematic and automated way in order to reduce the design time and to satisfy the performance needs of applications executed on these platforms. Unfortunately, most of the current design methodologies and tools are based on Register Transfer Level (RTL) descriptions, mostly created by hand. Such methodologies are inadequate, because creating RTL descriptions of complex multiprocessor systems is error-prone and time consuming. As an efficient solution to these two problems, in this paper we propose a methodology and techniques implemented in a tool called ESPAM for automated multiprocessor system design and implementation. ESPAM moves the design specification from RTL to a higher, so called system level of abstraction. We explain how starting from system level platform, application, and mapping specifications, a multiprocessor platform is synthesized and programmed in a systematic and automated way. Furthermore, we present some results obtained by applying our methodology and ESPAM tool to automatically generate multiprocessor systems that execute a real-life application, namely a Motion-JPEG encoder.",2006,0, 2041,Evolving GA Classifiler for Audio Steganalysis based on Audio Quality Metrics,"Differentiating anomalous audio document (Stego audio) from pure audio document (cover audio) is difficult and tedious. Steganalytic techniques strive to detect whether an audio contains a hidden message or not. This paper presents a genetic algorithm (GA) based approach to audio steganalysis, and the software implementation of the approach. The basic idea is that, the various audio quality metrics calculated on cover audio signals and on stego-audio signals vis-a-vis their denoised versions, are statistically different. GA is employed to derive a set of classification rules from audio data using these audio quality metrics, and fitness function is used to judge the quality of each rule. The generated rules are then used to detect or classify the audio documents in a real-time environment. Unlike most existing GA-based approaches, because of the simple representation of rules and the effective fitness function, the proposed method is easier to implement while providing the flexibility to generally detect any new steganography technique. The implementation of the GA based audio steganalyzer relies on the choice of these audio quality metrics and the construction of a two-class classifier, which will discriminate between the adulterated and the untouched audio samples. Experimental results show that the proposed technique provides promising detection rates.",2006,0, 2042,Ensuring Numerical Quality in Grid Computing,"Certain numerically intensive applications executed within a grid computing environment crucially depend on the properties of floating-point arithmetic implemented on the respective platform. 
Differences in these properties may have drastic effects. This paper identifies the central problems related to this situation. We propose an approach which gives the user valuable information on the various platforms available in a grid computing environment in order to assess the numerical quality of an algorithm run on each of these platforms. In this manner, the user will at least have very strong hints whether a program will perform reliably in a grid before actually executing it. Our approach extends the existing IeeeCC754 test suite by two """"grid-enabled"""" modes: The first mode calculates a """"numerical checksum"""" on a specific grid host and executes the job only if the checksum is identical to a locally generated one. The second mode provides the user with information on the reliability and IEEE 754-conformity of the underlying floating-point implementation of various platforms. Furthermore, it can help to find a set of compiler options to optimize the application's performance while retaining numerical stability.",2006,0,1739 2043,Computerized Detection of Lung Tumors in PET/CT Images,"More and more hybrid PET/CT machines are being installed in medical centers across the country as combining computer tomography (CT) and positron emission tomography (PET) provides powerful and unique means in tumor diagnosis. Visual inspection of the images is a tedious and error-prone task and in many clinics the attenuation-uncorrected PET images are not examined by the physician, potentially missing an important source of information, especially for subtle tumors. We are developing a computer aided diagnosis software prototype that simultaneously processes the CT, attenuation-corrected PET, and attenuation-uncorrected PET volumes to detect tumors in the lungs. The system applies optimal thresholding and multiple gray-level thresholding with volume criterion to extract the lungs and to detect tumor candidates, respectively. A fuzzy logic based approach is used to reduce false-positive tumors. The remaining set of tumor candidates are ranked according to their likelihood of being actual tumors. We show the preliminary results of a retrospective evaluation of clinical PET/CT images",2006,0, 2044,Clinical Evaluation of Watermarked Medical Images,"Digital watermarking medical images provides security to the images. The purpose of this study was to see whether digitally watermarked images changed clinical diagnoses when assessed by radiologists. We embedded 256 bits watermark to various medical images in the region of non-interest (RONI) and 480K bits in both region of interest (ROI) and RONI. Our results showed that watermarking medical images did not alter clinical diagnoses. In addition, there was no difference in image quality when visually assessed by the medical radiologists. We therefore concluded that digital watermarking medical images were safe in terms of preserving image quality for clinical purposes",2006,0, 2045,A Systematic Review of Technical Evaluation in Telemedicine Systems,"We conducted a systematic review of the literature to critically analyse the evaluation and assessment frameworks that have been applied to telemedicine systems. Subjective methods were predominantly used for technical evaluation (59 %), e.g. Likert scale. Those including objective measurements (41%) were restricted to simple metrics such as network time delays. Only three papers included a rigorous standards based objective approach. 
Our investigation has been unable to determine a definitive standards-based telemedicine evaluation framework that exists in the literature that may be applied systematically to assess and compare telemedicine systems. We conclude that work needs to be done to address this deficiency. We have therefore developed a framework that has been used to evaluate videoconferencing systems telemedicine applications. Our method seeks to be simple to allow relatively inexperienced users to make measurements, is objective and repeatable, is standards based, is inexpensive and requires little specialist equipment. We use the EIA 1956 broadcast test card to assess resolution, grey scale and for astigmatism. Colour discrimination is assessed with the TE 106 and Ishihara 24 colour scale chart. Network protocol analysis software is used to assess network performance (throughput, delay, jitter, packet loss)",2006,0, 2046,"Formal Security Analysis in Industry, at the Example of Electronic Distribution of Aircraft Software (EDS)","Summary form only given. When developing products or solutions in industry and assessing their quality, formal methods provide the most rigorous tools for checking for safety and security flaws. In this talk we share our first-hand general experience in this area, and furthermore provide some details of a project specifying and modeling electronic distribution software (EDS). We comment on the motivation, practice, and impact of applying formal methods in industry, including the role of evaluation and certification according to the common criteria. Second, we give an overview of which modeling and verification techniques we have found useful so far, for which reasons. Third, we present some ongoing work on specifying and modeling EDS. The aim of EDS is to alleviate the burden of distributing initial and update versions of software in modern airplanes. By now this is done physically using disks, which is becoming unbearable with the amount of software steadily increasing. EDS is currently under standardization in the ARINC 666 committee, which includes the main players Boeing and Airbus, as well as their maintenance partners. Obviously, electronic shipment via cable-based and wireless connections faces severe security threats, such that one should better check with maximal scrutiny whether the mechanisms actually fulfill the security goals required, in particular integrity and authenticity.",2006,0, 2047,Noise Makers Need to Know Where to be Silent Producing Schedules That Find Bugs,"A noise maker is a tool that seeds a concurrent program with conditional synchronization primitives, such as yield(), for the purpose of increasing the likelihood that a bug manifest itself. We introduce a novel fault model that classifies locations as """"good"""", """"neutral"""", or """"bad,"""" based on the effect of a thread switch at the location. Using the model, we explore the terms under which an efficient search for real-life concurrent bugs can be conducted. We accordingly justify the use of probabilistic algorithms for this search and gain a deeper insight of the work done so far on noise- making. We validate our approach by experimenting with a set of programs taken from publicly available multi-threaded benchmarks. Our empirical evidence demonstrates that real-life behavior is similar to one derived from the model.",2006,0, 2048,Inline VPD-TXRF for Contamination Control: Reality or Myth? 
Experience in a 300mm R&D MirrorBit Flash Memory Fab,Recent advances in TXRF hardware and software and highly successful integration of automated VPD modules to TXRF have made inline vapor phase decomposition-total reflection X-ray fluorescence (VPD-TXRF) a reality. In this paper we describe the use of and application of a fully integrated VPD-TXRF system capable of analyzing 200 mm and 300 mm diameter silicon wafers during the critical conversion period of an R&D MirrorBit Flash Fabrication facility to 300 mm. Rapid deployment of such an automated system during the conversion period has enabled us to 1) efficiently bench mark and qualify newly installed processing tools 2) assess the validity of existing protocols and procedures 3) respond very quickly to incidences where protocols have been breached and 4) identify process tools that heavily contaminate the backside of wafers.,2006,0, 2049,Improved automated quantification of left ventricular size and function from cardiac magnetic resonance images,"Assessment of left ventricular (LV) size and function from cardiac magnetic resonance (CMR) images requires manual tracing of LV borders on multiple 2D slices, which is subjective, experience dependent, tedious and time-consuming. We tested a new method for automated dynamic segmentation of CMR images based on a modified region-based model, in which a level set function minimizes a functional containing information regarding the probability density distribution of the gray levels. Images (GE 1.5T FIESTA) obtained in 9 patients were analyzed to automatically detect LV endocardial boundaries and calculate LV volumes and ejection fraction (EF). These measurements were validated against manual tracing. The automated calculation ofLV volumes and EF was completed in each patient in <3 min and resulted in high level of agreement with no significant bias and narrow limits of agreement with the reference technique. The proposed technique allows fast automated detection of endocardial boundaries as a basis for accurate quantification of LV size and function from CMR images.",2006,0, 2050,Development of Growth Model-Based Decision Support System for Crop Management,"Growth model-based decision support system for crop management (GMDSSCM) was developed including process based models of 4 different crops, i.e. wheat, rice, rapeseed and cotton. This system aims in facilitating simulation and application of crop models for different purposes. Individual models each include six submodels for simulating phasic development, organ formation, biomass production, yield and quality formation, soil-crop water relations, and nutrient (N, P, K) balance. The implemented system can be used for evaluating individual and comprehensive management strategies based on the results of crop growth simulation under various environments and different genotypes. A Stand-alone version (GMDSSCMA) was established under the platforms VC++ and VB by adopting the characteristics of object-oriented and component-based software and with the effective integration and coupling of the growth-prediction and decision-making functions. A web-based system (GMDSSCMW) was further developed on a .net platform using C# language. These GMDSSCM systems have been used to predict dynamically crop growth and to make decisions regarding to management systems. 
This tool should be helpful for construction and application of informational and digital farming systems.",2006,0, 2051,An Ontology-Based Approach for Domain Requirements Elicitation and Analysis,"Domain requirements are fundamental for software reuse and are the product of domain analysis. This paper presents an ontology based approach to elicit and analyze domain requirements. An ontology definition is given out. Problem domain is decomposed into several sub problem domains by using subjective decomposition method. The top-down refinement method is used to refine each sub problem domain into primitive requirements. Abstract stakeholders are used instead of real ones when decomposing problem domain and domain primitive requirements are represented by ontology. Not only domain commonality, variability and qualities are presented, but also reasoning logic is used to detect and handle incompleteness and inconsistency of domain requirements. In addition, a case of 'spot and futures transaction' domain is used to illustrate the approach",2006,0, 2052,Software Planned Learning and Recognition Based on the Sequence Learning and NARX Memory Model of Neural Network,"In traditional way, software plans are represented explicitly by some semantic schemas. However, semantic contents, constrains and relations of plans are hard for explicit presentation. Besides, it is a heavy and error-prone work to build such a library of plans. Algorithms of recognition of such plans demand exact matching by which semantic denotation is obvious itself. We thus present a novel approach of applying neural network in the presentation and recognition of plans via asymmetric Hebbian plasticity and non-linear auto-regressive with exogenous inputs (NARX) to learn and recognize plans. Semantics of plans are represented implicitly and error-tolerant. The recognition procedure is also error-tolerant because it tends to match fuzzily like human. Models and relevant limitations are illustrated and analyzed in this article",2006,0, 2053,Reverse Engineering XML,"A great number of existing XML documents in various domain such as electrical business have to be maintained in order to constantly adapt to a dynamically changing environment to keep pace with business needs. A DTD or XML schema in its current textual form commonly lacks clarity and readability, which makes the maintenance process tedious and error-prone. This paper presents an approach to reverse engineering the XML documents to conceptual model, which makes the XML documents more close to real world and business needs, let the designers quickly gain a picture of the overall structure of XML documents in order to improve its quality, increase the maintainability and reusability. In this paper, the conceptual model is described by UML class diagram, a three-level model is defined, and a novel approach for extracting various structure and semantic information from existing DTD is given, especially the inheritance structure can be inferred from the DTD structure",2006,0, 2054,Operational Fault Detection in cellular wireless base-stations,"The goal of this work is to improve availability of operational base-stations in a wireless mobile network through non-intrusive fault detection methods. Since revenue is generated only when actual customer calls are processed, we develop a scheme to minimize revenue loss by monitoring real-time mobile user call processing activity. 
The mobile user call load profile experienced by a base-station displays a highly non-stationary temporal behavior with time-of-day, day-of-the-week and time-of-year variations. In addition, the geographic location also impacts the traffic profile, making each base-station have its own unique traffic patterns. A hierarchical base-station fault monitoring and detection scheme has been implemented in an IS-95 CDMA Cellular network that can detect faults at - base station level, sector level, carrier level, and channel level. A statistical hypothesis test framework, based on a combination of parametric, semi-parametric and non-parametric test statistics are defined for determining faults. The fault or alarm thresholds are determined by learning expected deviations during a training phase. Additionally, fault thresholds have to adapt to spatial and temporal mobile traffic patterns that slowly changes with seasonal traffic drifts over time and increasing penetration of mobile user density. Feedback mechanisms are provided for threshold adaptation and self-management, which includes automatic recovery actions and software reconfiguration. We call this method, Operational Fault Detection (OFD). We describe the operation of a few select features from a large family of OFD features in Base Stations; summarize the algorithms, their performance and comment on future work.",2006,0, 2055,Interface faults injection for component-based integration testing,"This paper presents a simple and improved technique of interface fault insertion for conducting component integration testing through the use of aspect-oriented software development (AOSD). Taking the advantage of aspect's cross-cutting features, this technique only requires additional codes written in AspectJ rather than having a separate tool to perform this operation. These aspect codes act as wrappers around interface services and perform operations such as disabling the implementation of the interface services, raising exceptions or corrupting the inputs and outputs of interface services. Interface faults are inserted into the system under test to evaluate the quality of the test cases by ensuring not only that they detect errors due to the interactions between components, but they are also able to handle exceptions raised when interface faults are triggered.",2006,0, 2056,Adaptive protection setting and coordination for power distribution systems,"In this paper, a protection system using a Multi-Agent concept for power distribution networks is proposed. Every digital over current relay (OCR) is developed as an agent by adding its own intelligence, self-tuning and communication ability. In order to cope with frequent changes in the network operation condition and faults, an OCR agent, suggested in this paper, is able to detect a fault or a change in the network and find its optimal parameters of the protection relays in an autonomous manner considering information of the whole network obtained by communication between other agents. 
Simulations in a simple distribution network show the effectiveness of the suggested scheme.",2006,0, 2057,Semantic reliability of multi-agent intelligent systems,"Generally the concept of reliability has been interpreted as applied to hardware and software and has been based upon the assumption that a system can be decomposed into subsystems or components to which success or failure probabilities can be assigned assuming perfect semantic transactions between them, i.e., with no consideration to variation in the interpretation of the meanings of messages between various components. In multi-agent intelligent systems, where the agents interact with each other in capacities other than merely sending and receiving messages, cooperative decisions are made based upon beliefs, desires, intentions, and the autonomy of individual agents. In such cases, even if the components as well as the interconnections are error-free in the classical sense, there can be serious failures due to semantic variability and consequently the concept of reliability needs to be extended to semantics as well. This paper attempts to establish this new concept of semantic reliability and explore its relationship to the system reliability and information extraction processes. Here we examine the communication between agents and semantic error modes in multi-agent systems using Rao and Georgeff's belief-desire-intention (BDI) model of intelligent agents to decompose the semantic variation into its contributing parts from various subsystems comprising the agents. From this, the impact and the risk management strategies including fault tolerance are evolved. World representation, domain ontologies, and knowledge representation are brought out as important determinants of error control. A fault tolerance design based on goal hierarchy is suggested.",2006,0, 2058,Decade experience of the ecological risk assessment for water ecosystem and human population in Russia,Generalization of environmental risk applications as an integral safety criterion for water ecosystems and human population shows good promise of this approach to local and regional level environmental quality assessment. The major advantage of the environmental risk assessment approach is the timely detection of negative environmental quality trends prior to the toxic substance concentration reaching the maximum permissible concentration in water and air. The existing models of spatial and temporal water poisonous material concentration variation do not provide for authentic results due to their erroneous axiomatic assumptions. A new concept has been suggested for environmental risk assessment for marine ecosystems and human population due to dumped chemical weapons. Implementation of this new concept requires: Actual data on the effect of short-term biogeochemical processes on the behavior of poisonous materials and their decay products both in the water and at its boundaries with seabed deposits and the air; Software package for assessing the effect of nonlinear processes on poisonous material decay and transport rates in seabed water systems.,2006,0, 2059,Software Reliability Measurement and Prediction,This chapter contains sections titled:
Why Study and Measure Software Reliability?
What Is Reliability?
Faults and Failures
Failure Severity Classes
Failure Intensity
The Cost of Reliability
Software Reliability Theory
Reliability Models
Failure Arrival Rates
But When Do I Ship?
System Configurations: Probability and Reliability
Answers to Initial Question
Summary
Problems
Project
References,2006,0, 2060,References,"All organizations today confront data quality problems, both systemic and structural. Neither ad hoc approaches nor fixes at the systems level--installing the latest software or developing an expensive data warehouse--solve the basic problem of bad data quality practices. Journey to Data Quality offers a roadmap that can be used by practitioners, executives, and students for planning and implementing a viable data and information quality management program. This practical guide, based on rigorous research and informed by real-world examples, describes the challenges of data management and provides the principles, strategies, tools, and techniques necessary to meet them.The authors, all leaders in the data quality field for many years, discuss how to make the economic case for data quality and the importance of getting an organization's leaders on board. They outline different approaches for assessing data, both subjectively (by users) and objectively (using sampling and other techniques). They describe real problems and solutions, including efforts to find the root causes of data quality problems at a healthcare organization and data quality initiatives taken by a large teaching hospital. They address setting company policy on data quality and, finally, they consider future challenges on the journey to data quality.",2006,0, 2061,Index,"All organizations today confront data quality problems, both systemic and structural. Neither ad hoc approaches nor fixes at the systems level--installing the latest software or developing an expensive data warehouse--solve the basic problem of bad data quality practices. Journey to Data Quality offers a roadmap that can be used by practitioners, executives, and students for planning and implementing a viable data and information quality management program. This practical guide, based on rigorous research and informed by real-world examples, describes the challenges of data management and provides the principles, strategies, tools, and techniques necessary to meet them.The authors, all leaders in the data quality field for many years, discuss how to make the economic case for data quality and the importance of getting an organization's leaders on board. They outline different approaches for assessing data, both subjectively (by users) and objectively (using sampling and other techniques). They describe real problems and solutions, including efforts to find the root causes of data quality problems at a healthcare organization and data quality initiatives taken by a large teaching hospital. They address setting company policy on data quality and, finally, they consider future challenges on the journey to data quality.",2006,0, 2062,A Unified Framework for Defect Data Analysis Using the MBR Technique,"Failures of mission-critical software systems can have catastrophic consequences and, hence, there is strong need for scientifically rigorous methods for assuring high system reliability. To reduce the V&V cost for achieving high confidence levels, quantitatively based software defect prediction techniques can be used to effectively estimate defects from prior data. Better prediction models facilitate better project planning and risk/cost estimation. Memory based reasoning (MBR) is one such classifier that quantitatively solves new cases by reusing knowledge gained from past experiences. However, it can have different configurations by varying its input parameters, giving potentially different predictions. 
To overcome this problem, we develop a framework that derives the optimal configuration of an MBR classifier for software defect data, by logical variation of its configuration parameters. We observe that this adaptive MBR technique provides a flexible and effective environment for accurate prediction of mission-critical software defect data.",2006,1, 2063,An empirical study of predicting software faults with case-based reasoning,"The resources allocated for software quality assurance and improvement have not increased with the ever-increasing need for better software quality. A targeted software quality inspection can detect faulty modules and reduce the number of faults occurring during operations. We present a software fault prediction modeling approach with case-based reasoning (CBR), a part of the computational intelligence field focusing on automated reasoning processes. A CBR system functions as a software fault prediction model by quantifying, for a module under development, the expected number of faults based on similar modules that were previously developed. Such a system is composed of a similarity function, the number of nearest neighbor cases used for fault prediction, and a solution algorithm. The selection of a particular similarity function and solution algorithm may affect the performance accuracy of a CBR-based software fault prediction system. This paper presents an empirical study investigating the effects of using three different similarity functions and two different solution algorithms on the prediction accuracy of our CBR system. The influence of varying the number of nearest neighbor cases on the performance accuracy is also explored. Moreover, the benefits of using metric-selection procedures for our CBR system is also evaluated. Case studies of a large legacy telecommunications system are used for our analysis. It is observed that the CBR system using the Mahalanobis distance similarity function and the inverse distance weighted solution algorithm yielded the best fault prediction. In addition, the CBR models have better performance than models based on multiple linear regression.",2006,1, 2064,Data Mining Static Code Attributes to Learn Defect Predictors,"The value of using static code attributes to learn defect predictors has been widely debated. Prior work has explored issues like the merits of """"McCabes versus Halstead versus lines of code counts"""" for generating defect predictors. We show here that such debates are irrelevant since how the attributes are used to build predictors is much more important than which particular attributes are used. Also, contrary to prior pessimism, we show that such defect predictors are demonstrably useful and, on the data studied here, yield predictors with a mean probability of detection of 71 percent and mean false alarms rates of 25 percent. These predictors would be useful for prioritizing a resource-bound exploration of code that has yet to be inspected",2007,1, 2065,Scene Parsing Using Region-Based Generative Models,"Semantic scene classification is a challenging problem in computer vision. In contrast to the common approach of using low-level features computed from the whole scene, we propose """"scene parsing"""" utilizing semantic object detectors (e.g., sky, foliage, and pavement) and region-based scene-configuration models. 
Because semantic detectors are faulty in practice, it is critical to develop a region-based generative model of outdoor scenes based on characteristic objects in the scene and spatial relationships between them. Since a fully connected scene configuration model is intractable, we chose to model pairwise relationships between regions and estimate scene probabilities using loopy belief propagation on a factor graph. We demonstrate the promise of this approach on a set of over 2000 outdoor photographs, comparing it with existing discriminative approaches and those using low-level features",2007,0, 2066,Detecting Ventricular Fibrillation by Time-Delay Methods,"A pivotal component in automated external defibrillators (AEDs) is the detection of ventricular fibrillation (VF) by means of appropriate detection algorithms. In scientific literature there exists a wide variety of methods and ideas for handling this task. These algorithms should have a high detection quality, be easily implementable, and work in realtime in an AED. Testing of these algorithms should be done by using a large amount of annotated data under equal conditions. For our investigation we simulated a continuous analysis by selecting the data in steps of 1 s without any preselection. We used the complete BIH-MIT arrhythmia database, the CU database, and files 7001-8210 of the AHA database. For a new VF detection algorithm we calculated the sensitivity, specificity, and the area under its receiver operating characteristic curve and compared these values with the results from an earlier investigation of several VF detection algorithms. This new algorithm is based on time-delay methods and outperforms all other investigated algorithms",2007,0, 2067,Reliable Effects Screening: A Distributed Continuous Quality Assurance Process for Monitoring Performance Degradation in Evolving Software Systems,"Developers of highly configurable performance-intensive software systems often use in-house performance-oriented """"regression testing"""" to ensure that their modifications do not adversely affect their software's performance across its large configuration space. Unfortunately, time and resource constraints can limit in-house testing to a relatively small number of possible configurations, followed by unreliable extrapolation from these results to the entire configuration space. As a result, many performance bottlenecks escape detection until systems are fielded. In our earlier work, we improved the situation outlined above by developing an initial quality assurance process called """"main effects screening"""". This process 1) executes formally designed experiments to identify an appropriate subset of configurations on which to base the performance-oriented regression testing, 2) executes benchmarks on this subset whenever the software changes, and 3) provides tool support for executing these actions on in-the-field and in-house computing resources. Our initial process had several limitations, however, since it was manually configured (which was tedious and error-prone) and relied on strong and untested assumptions for its accuracy (which made its use unacceptably risky in practice). This paper presents a new quality assurance process called """"reliable effects screening"""" that provides three significant improvements to our earlier work. First, it allows developers to economically verify key assumptions during process execution. 
Second, it integrates several model-driven engineering tools to make process configuration and execution much easier and less error prone. Third, we evaluate this process via several feasibility studies of three large, widely used performance-intensive software frameworks. Our results indicate that reliable effects screening can detect performance degradation in large-scale systems more reliably and with significantly less resources than conventional techniques",2006,0, 2068,PocketPad: Using Handhelds and Digital Pens to Manage Data in Mobile Contexts,"PocketPad is an information management system geared toward university students. The system is designed to support the capture, storage, browsing, editing and organization of handwritten notes via the complementary use of digital pens to capture information, handheld computers to browse and store prior information, and digital pens and handheld computers in combination to edit and organize information. Desktop computer software synchronizes the information from the digital pen and Pocket PC during editing and reorganization. Through the combined use of different hardware components supported by our software, we describe a system that bridges the paper-electronic information divide in mobile contexts. The system relies on human-in-the-loop techniques coupled with stroke timing to simplify coordination of content from different sources, rather than resorting to complex and often error-prone computer document recognition algorithms.",2007,0, 2069,An Effective PSO-Based Memetic Algorithm for Flow Shop Scheduling,"This paper proposes an effective particle swarm optimization (PSO)-based memetic algorithm (MA) for the permutation flow shop scheduling problem (PFSSP) with the objective to minimize the maximum completion time, which is a typical non-deterministic polynomial-time (NP) hard combinatorial optimization problem. In the proposed PSO-based MA (PSOMA), both PSO-based searching operators and some special local searching operators are designed to balance the exploration and exploitation abilities. In particular, the PSOMA applies the evolutionary searching mechanism of PSO, which is characterized by individual improvement, population cooperation, and competition to effectively perform exploration. On the other hand, the PSOMA utilizes several adaptive local searches to perform exploitation. First, to make PSO suitable for solving PFSSP, a ranked-order value rule based on random key representation is presented to convert the continuous position values of particles to job permutations. Second, to generate an initial swarm with certain quality and diversity, the famous Nawaz-Enscore-Ham (NEH) heuristic is incorporated into the initialization of population. Third, to balance the exploration and exploitation abilities, after the standard PSO-based searching operation, a new local search technique named NEH_1 insertion is probabilistically applied to some good particles selected by using a roulette wheel mechanism with a specified probability. Fourth, to enrich the searching behaviors and to avoid premature convergence, a simulated annealing (SA)-based local search with multiple different neighborhoods is designed and incorporated into the PSOMA. Meanwhile, an effective adaptive meta-Lamarckian learning strategy is employed to decide which neighborhood to be used in SA-based local search. Finally, to further enhance the exploitation ability, a pairwise-based local search is applied after the SA-based search.
Simulation results based on benchmarks demonstrate the effectiveness of the PSOMA. Additionally, the effects of some parameters on optimization performances are also discussed",2007,0, 2070,TopoLayout: Multilevel Graph Layout by Topological Features,"We describe TopoLayout, a feature-based, multilevel algorithm that draws undirected graphs based on the topological features they contain. Topological features are detected recursively inside the graph, and their subgraphs are collapsed into single nodes, forming a graph hierarchy. Each feature is drawn with an algorithm tuned for its topology. As would be expected from a feature-based approach, the runtime and visual quality of TopoLayout depends on the number and types of topological features present in the graph. We show experimental results comparing speed and visual quality for TopoLayout against four other multilevel algorithms on a variety of data sets with a range of connectivities and sizes. TopoLayout frequently improves the results in terms of speed and visual quality on these data sets",2007,0, 2071,The Impact of National Culture and Social Presence on Trust and Communication Quality within Collaborative Groups,"In this empirical study we examine the impact of national culture and social presence on interpersonal trust in both culturally homogeneous and heterogeneous groups. Results demonstrate that interpersonal trust is higher in homogeneous, low-individualism groups (represented by Chinese participants) than that in homogeneous, high-individualism groups (represented by U.S. participants); however, interpersonal trust in heterogeneous groups is lower for low-individualism than high-individualism group members. It is also found that social presence has a positive impact on interpersonal trust; however, a difference in social presence between groups supported by two collaborative technologies is not detected. In addition, perceived communication quality is reported highest in face-to-face (FtF) groups without the support of collaborative software (CS), followed by FtF, CS-supported groups, and then virtual, CS groups. These findings have important implications for trust building in global groups as well as for the design of collaborative technologies in support of virtual groups",2007,0, 2072,Agent-based Human-computer-interaction for Real-time Monitoring Systems in the Trucking Industry,"Auto ID systems can replace time-consuming, costly and error-prone processes of human data entry and produce detailed real time information. However, they would add value only to the extent that data is presented in a user-friendly manner. As model-based decision support is not always adequate, an agent-based approach is often chosen. Real life entities such as orders and trucks are represented by agents, which negotiate in order to solve planning problems. For the respective data representation at least two forms can be distinguished, focusing either on (1) resources (account-based) or (2) orders (order-centric). Applying cognitive fit theory we describe how the different interfaces affect decision making. The hypotheses would be tested in a laboratory experiment. The intended contribution should support that order-centric interfaces have higher user-friendliness and are especially beneficial to low-analytics and planners working under time pressure",2007,0, 2073,Ontology Driven Requirements Query,"Use cases are commonly used to represent customers' requirements during systems development.
In a large software development environment, finding a relevant use case from a large use case library created in the past or related projects is a complex, error-prone and expensive task. Based on the semantic Web approach, we propose an ontological methodology to support this task. We use ontology to augment use cases with semantic information. This ontology is derived from ResearchCyc ontology. We also propose the augmentation of queries used to retrieve use cases with this ontology. We present this approach to better capture, reuse and query use cases",2007,0, 2074,Using Online Competitor's Inventory Information for Pricing,"Information displayed on an e-commerce site can be used not just by the intended customers but also by competitors. In the paper, we examine the effect of such proactive information use in the setting of e-commerce retailing where duopoly e-tailers set their prices of a commodity that is in short supply. While e-tailers enhance their service quality by making stockout information available online, that inventory information could also be used by competitors to determine their prices. Each e-tailer can launch software agents to detect its competitor's inventory position and make its price decision contingent on that position. We show that when customer reservation value is relatively high, and e-tailers do not resemble each other in terms of fill rate, both e-tailers choose to adopt the software agent technology and price dynamically at equilibrium. The high availability e-tailer can charge higher prices and enjoy a higher profit level than the low availability e-tailer. More customers prefer to visit the high fill rate e-tailer first under the dynamic pricing scheme than under the static pricing scheme. Because total search costs are reduced, social welfare is improved under the new dynamic pricing scheme",2007,0, 2075,Reconciling Manual and Automated Testing: The AutoTest Experience,"Software can be tested either manually or automatically. The two approaches are complementary: automated testing can perform a large number of tests in little time, whereas manual testing uses the knowledge of the testing engineer to target testing to the parts of the system that are assumed to be more error-prone. Despite this complementarity, tools for manual and automatic testing are usually different, leading to decreased productivity and reliability of the testing process. AutoTest is a testing tool that provides a """"best of both worlds"""" strategy: it integrates developers' test cases into an automated process of systematic contract-driven testing. This allows it to combine the benefits of both approaches while keeping a simple interface, and to treat the two types of tests in a unified fashion: evaluation of results is the same, coverage measures are added up, and both types of tests can be saved in the same format",2007,0, 2076,An Innovative Approach to Tackling the Boundary Effect in Adaptive Random Testing,"Adaptive random testing (ART) is an effective improvement of random testing (RT) in the sense that fewer test cases are needed to detect the first failure. It is based on the observation that failure-causing inputs are normally clustered in one or more contiguous regions in the input domain. Hence, it has been proposed that test case generation should refer to the locations of successful test cases (those that do not reveal failures) to ensure that all test cases are far apart and evenly spread in the input domain. 
Distance-based ART and restricted random testing are the first two previous attempts. However, test cases generated by these attempts are far apart but not necessarily evenly spread, since more test cases are generated near the boundary of the input domain. This paper analyzes the cause of this phenomenon and proposes an enhanced implementation based on the concept of virtual images of the successful test cases. The results of simulations show that the test cases generated by our enhanced implementation are not only far apart but also evenly spread in the input domain. Furthermore, the fault detection capability of ART for high-dimensional input domains is also enhanced",2007,0, 2077,POSAML: A Visual Modeling Framework for Middleware Provisioning,"Effective provisioning of next generation distributed applications hosted on diverse middleware platforms incurs significant challenges due to the applications' growing complexity and quality of service (QoS) requirements. An effective provisioning of the middleware platform includes a composition and configuration of the middleware services that meets the application QoS requirements under expected workloads. Traditional techniques for middleware provisioning tend to use non-intuitive, low-level and technology-specific approaches, which are tedious, error prone, non-reusable and not amenable to ease of QoS validation. Additionally, most often the configuration activities of the middleware platform tend to be decoupled from the QoS validation stages resulting in an iterative trial-and-error process between the two phases. This paper describes the design of a visual modeling language called POSAML (patterns oriented software architecture modeling language) and associated tools that provide an intuitive, higher level and unified framework for provisioning middleware platforms. POSAML provides visual modeling capabilities for middleware-independent provisioning while allowing automated middleware-specific QoS validation",2007,0, 2078,Inside Architecture Evaluation: Analysis and Representation of Optimization Potential,"The share of software in embedded systems has been growing permanently in the recent years. Thus, software architecture as well as its evaluation has become an important part of embedded systems design to define, assess, and assure architecture and system quality. Furthermore, design space exploration can be based on architecture evaluation. To achieve an efficient exploration process, architectural decisions need to be well considered. In this paper, analysis of architecture evaluation is performed to uncover dependencies of the quality attributes which are the first class citizens of architecture evaluation. With an explicit representation of such dependencies, valuable changes of an architecture can be calculated. Next to the exploration support, the analysis results help to document architecture knowledge and make architectural decisions explicit and traceable. The development process can now be based on dependable and well documented architectural decisions. Effects of changes become more predictable. Time and costs can be saved by avoiding suboptimal changes.",2007,0, 2079,Constructing a Reading Guide for Software Product Audits,"Architectural knowledge is reflected in various artifacts of a software product. In the case of a software product audit this architectural knowledge needs to be uncovered and its effects assessed, in order to evaluate the quality of the software product. 
A particular problem is to find and comprehend the architectural knowledge that resides in the software product documentation. The amount of documents, and the differences in for instance target audience and level of abstraction, make it a difficult job for the auditors to find their way through the documentation. This paper discusses how the use of a technique called latent semantic analysis can guide the auditors through the documentation to the architectural knowledge they need. Using latent semantic analysis, we effectively construct a reading guide for software product audits.",2007,0, 2080,A Comparison of Static Architecture Compliance Checking Approaches,"The software architecture is one of the most important artifacts created in the lifecycle of a software system. It enables, facilitates, hampers, or interferes directly the achievement of business goals, functional and quality requirements. One instrument to determine how adequate the architecture is for its intended usage is architecture compliance checking. This paper compares three static architecture compliance checking approaches (reflexion models, relation conformance rules, and component access rules) by assessing their applicability in 13 distinct dimensions. The results give guidance on when to use which approach.",2007,0, 2081,"Software Effort, Quality, and Cycle Time: A Study of CMM Level 5 Projects","The Capability Maturity Model (CMM) has become a popular methodology for improving software development processes with the goal of developing high-quality software within budget and planned cycle time. Prior research literature, while not exclusively focusing on CMM level 5 projects, has identified a host of factors as determinants of software development effort, quality, and cycle time. In this study, we focus exclusively on CMM level 5 projects from multiple organizations to study the impacts of highly mature processes on effort, quality, and cycle time. Using a linear regression model based on data collected from 37 CMM level 5 projects of four organizations, we find that high levels of process maturity, as indicated by CMM level 5 rating, reduce the effects of most factors that were previously believed to impact software development effort, quality, and cycle time. The only factor found to be significant in determining effort, cycle time, and quality was software size. On the average, the developed models predicted effort and cycle time around 12 percent and defects to about 49 percent of the actuals, across organizations. Overall, the results in this paper indicate that some of the biggest rewards from high levels of process maturity come from the reduction in variance of software development outcomes that were caused by factors other than software size",2007,0, 2082,"Fighting bugs: remove, retry, replicate, and rejuvenate","Even if software developers don't fully understand the faults or know their location in the code, software rejuvenation can help avoid failures in the presence of aging-related bugs. This is good news because reproducing and isolating an aging-related bug can be quite involved, similar to other Mandelbugs. Moreover, monitoring for signs of software aging can even help detect software faults that were missed during the development and testing phases. If, on the other hand, a developer can detect a specific aging-related bug in the code, fixing it and distributing a software update might be worthwhile. 
In the case of the Patriot missile-defense system, a modified version of the software was indeed prepared and deployed to users. It arrived at Dhahran on 26 February 1991 - a day after the fatal incident.",2007,0, 2083,Generalized Discrete Software Reliability Modeling With Effect of Program Size,"Generalized methods for software reliability growth modeling have been proposed so far. But, most of them are on continuous-time software reliability growth modeling. Many discrete software reliability growth models (SRGM) have been proposed to describe a software reliability growth process depending on discrete testing time such as the number of days (or weeks); the number of executed test cases. In this paper, we discuss generalized discrete software reliability growth modeling in which the software failure-occurrence times follow a discrete probability distribution. Our generalized discrete SRGMs enable us to assess software reliability in consideration of the effect of the program size, which is one of the influential factors related to the software reliability growth process. Specifically, we develop discrete SRGMs in which the software failure-occurrence times follow geometric and discrete Rayleigh distributions, respectively. Moreover, we derive software reliability assessment measures based on a unified framework for discrete software reliability growth modeling. Additionally, we also discuss optimal software release problems based on our generalized discrete software reliability growth modeling. Finally, we show numerical examples of software reliability assessment by using actual fault-counting data",2007,0, 2084,Software Quality Analysis of Unlabeled Program Modules With Semisupervised Clustering,"Software quality assurance is a vital component of software project development. A software quality estimation model is trained using software measurement and defect (software quality) data of a previously developed release or similar project. Such an approach assumes that the development organization has experience with systems similar to the current project and that defect data are available for all modules in the training data. In software engineering practice, however, various practical issues limit the availability of defect data for modules in the training data. In addition, the organization may not have experience developing a similar system. In such cases, the task of software quality estimation or labeling modules as fault prone or not fault prone falls on the expert. We propose a semisupervised clustering scheme for software quality analysis of program modules with no defect data or quality-based class labels. It is a constraint-based semisupervised clustering scheme that uses k-means as the underlying clustering algorithm. Software measurement data sets obtained from multiple National Aeronautics and Space Administration software projects are used in our empirical investigation. The proposed technique is shown to aid the expert in making better estimations as compared to predictions made when the expert labels the clusters formed by an unsupervised learning algorithm. In addition, the software quality knowledge learnt during the semisupervised process provided good generalization performance for multiple test data sets. 
An analysis of program modules that remain unlabeled subsequent to our semisupervised clustering scheme provided useful insight into the characteristics of their software attributes",2007,1, 2085,Systematic t-Unidirectional Error-Detecting Codes over Zm,"Some new classes of systematic t-unidirectional error-detecting codes over Zm are designed. It is shown that the constructed codes can detect two errors using two check digits. Furthermore, the constructed codes can detect up to mr-2 + r-2 errors using r ≥ 3 check bits. A bound on the maximum number of detectable errors using r check digits is also given.",2007,0, 2086,An Operation-Centered Approach to Fault Detection in Symmetric Cryptography Ciphers,"One of the most effective ways of attacking a cryptographic device is by deliberate fault injection during computation, which allows retrieving the secret key with a small number of attempts. Several attacks on symmetric and public-key cryptosystems have been described in the literature and some dedicated error-detection techniques have been proposed to foil them. The proposed techniques are ad hoc ones and exploit specific properties of the cryptographic algorithms. In this paper, we propose a general framework for error detection in symmetric ciphers based on an operation-centered approach. We first enumerate the arithmetic and logic operations included in the cipher and analyze the efficacy and hardware complexity of several error-detecting codes for each such operation. We then recommend an error-detecting code for the cipher as a whole based on the operations it employs. We also deal with the trade-off between the frequency of checking for errors and the error coverage. We demonstrate our framework on a representative group of 11 symmetric ciphers. Our conclusions are supported by both analytical proofs and extensive simulation experiments",2007,0, 2087,Interactive Image Repair with Assisted Structure and Texture Completion,"Removing image defects in an undetectable manner has been studied for its many useful and varied applications. In many cases the desired result may be ambiguous from the image data alone and needs to be guided by a user's knowledge of the intended result. This paper presents a framework for interactively incorporating user guidance into the filling-in process, more effectively using user input to fill in damaged regions in an image. This framework contains five main steps: first, the scratch or defect is detected; second, the edges outside the defect are detected; third, curves are fit to the detected edges; fourth, the structure is completed across the damaged region; and finally, texture synthesis constrained by the previously computed curves is used to fill in the intensities in the damaged region. Scratch detection, structure completion, and texture synthesis are influenced or guided by user input when given. Results include removal of defects from images that contain structure, texture, or both structure and texture. Users can complete images with ambiguous structure in multiple ways by gesturing the cursor in the direction of the desired structure completion",2007,0, 2088,Adaptive runtime fault management for service instances in component-based software applications,"The Trust4All project aims to define an open, component-based framework for the middleware layer in high-volume embedded appliances that enables robust and reliable operation, upgrading and extension.
To improve the availability of each individual application in a Trust4All system, a runtime configurable fault management mechanism (FMM) is proposed, which detects deviations from given service specifications by intercepting interface calls. When repair is necessary, FMM picks a repair action that incurs the best tradeoff between the success rate and the cost of repair. Considering that it is rather difficult to obtain sufficient information about third party components during their early stage of usage, FMM is designed to be able to accumulate knowledge and adapts its capability accordingly",2007,0, 2089,Early Software Reliability Prediction Using Cause-effect Graphing Analysis,"Early prediction of software reliability can help organizations make informed decisions about corrective actions. To provide such early prediction, we propose practical methods to: 1) systematically identify defects in a software requirements specification document using a technique derived from cause-effect graphing analysis (CEGA); 2) assess the impact of these defects on software reliability using a recursive algorithm based on binary decision diagram (BDD) technique. Using a numerical example, we show how predicting software reliability at the requirement analysis stage could be greatly facilitated by the use of the method presented in this paper. The acronyms used throughout this paper are alphabetically listed as follows: ACEG-actually implemented cause effect graph; BCEG-benchmark cause effect graph; BDD-binary decision diagram; CEGA-cause effect graphing analysis; PACS-personal access control system; SRS-software requirements specification document",2007,0, 2090,On The Development Of Fault Injection Profiles,"The impact of hardware failures on software has attracted substantial attention in the study of dependable systems. Fault injection techniques have emerged as a major means to evaluate software behavior in the presence of hardware failures. However, due to the lack of knowledge of the fault distribution information, the fault location and time are randomly selected. One major drawback of this approach is that the injected faults do not represent the system's operational situation, thus software reliability cannot be credibly assessed. This paper aims at extending the use of fault injection to the reliability prediction of hardware faults. To do so, we have developed a set of analytical and simulation based methods capable of statistically reproducing the underlying physics and phenomena leading to hardware failures in a given system operational context. Such distributions are referred to as fault injection profiles, and are the basis to extend the fault injection technique with fault models that represent the actual conditions under which hardware faults occur",2007,0, 2091,Assessing Diagnostic Techniques for Fault Tolerance in Software,"One of the main concerns in software safety critical applications is to ensure sufficient reliability if one cannot prove the absence of faults. Fault tolerance (FT) provides a plausible method for improving reliability claims in the presence of systematic failures in software. It is plausible that some software FT techniques offer increased protection than others. However, the extent of claims that can be made for different FT software architectures remains unclear. We investigate an approach to FT that integrates data diversity (DD) assertions and traditional assertions (TA). We also present the principles of a method to assess the effectiveness of the approach. 
The aim of this approach is to make it possible to evolve more powerful FT and thereby improve reliability. This is a step towards the aim of understanding the effectiveness of FT safety-critical applications and thus making it easier to use FT in safety arguments",2007,0, 2092,Failure Time Based Reliability Growth in Product Development and Manufacturing,"The failure-in-time (FIT) rate is widely used to quantify the reliability of a electronic component. It fails to indicate the portion of the failures due to either environmental or electrical stresses or issues that are related to process/handing, manufacturing and applications. To meet this end, FIT-based corrective action driven metrics are proposed to link the failure mode (FM) with components and non-component faults. First the conventional failure mode pareto is reviewed and its deficiency is discussed. Then a new index called the failure mode rate (FMR) is introduced to monitor the FM trend and evaluate the effectiveness of corrective actions (C/As). Based on the FMR, the FIT rate is extended to non-component failure mode and further to individual failure mode in predicting the reliability of electronic products. The extended FIT rate enables product designers to narrow down the root-cause of the failure, identify the C/A ownership, and estimate the MTBF improvement. The new metrics provide a guideline for prioritizing resources to attack the critical failures with the minimum cost.",2007,0, 2093,Workshop: Assessing the Quality of a Business Process Implemented across Systems of Systems,"A team at the SEI has developed a framework that has shown promise in assessing the survivability of a business process in a system of systems environment. Assessment outputs in pilot use included survivability requirements and gaps among interoperable systems. It is anticipated that the framework can be of use in the evaluation of other quality attributes. Researchers and practitioners in software design, architecture, quality assurance, and requirements validation are asked to participate in this workshop to review the work done for survivability and assist in the determination of its broader applicability",2007,0, 2094,Predicting Emergent Properties of Component Based Systems,"Software product lines (SPL), component based software engineering (CBSE) and commercial off the shelf (COTS) components provide a rich supporting base for creating software architectures. Further, they promise significant improvements in the quality of software configurations that can be composed from pre-built components. Software architectural styles provide a way for achieving a desired coherence for such component-based architectures. This is because the different architectural styles enforce different quality attributes for a system. If the architectural style of an emergent system could be predicted in advance, the system architect could make necessary changes to ensure that the quality attributes dictated by the system requirements were satisfied before the actual system was deployed. In this paper we propose a model for predicting architectural styles, and hence the quality attributes, based on use cases that need to be satisfied by a system configuration. 
Our technique can be used to determine stylistic conformance and hence indicate the presence or absence of architectural drift",2007,0, 2095,Uncertainty Explicit Assessment of Off-the-Shelf Software: Selection of an Optimal Diverse Pair,"Assessment of software COTS components is an essential part of component-based software development. Sub-optimal selection of components may lead to solutions with low quality. The assessment is based on incomplete knowledge about the COTS components themselves and other aspects, which may affect the choice such as the vendor's credentials, etc. We argue in favor of assessment methods in which uncertainty is explicitly represented (`uncertainty explicit' methods) using probability distributions. We have adapted a model (developed elsewhere by Littlewood, B. et al. (2000)) for assessment of a pair of COTS components to take account of the fault (bug) logs that might be available for the COTS components being assessed. We also provide empirical data from a study we have conducted with off-the-shelf database servers, which illustrate the use of the method",2007,0, 2096,Verification and Validation of (Real Time) COTS Products using Fault Injection Techniques,"With the goal of reducing time to market and project costs, the current trend of real time business and mission critical systems is evolving from the development of custom made applications to the use of commercial off the shelf (COTS) products. Obviously, the same confidence and quality of the custom made software components is expected from the commercial applications. In most cases, such products (COTS) are not designed with stringent timing and/or safety requirements as priorities. Thus, to decrease the gap between the use of custom made components and COTS components, this paper presents a methodology for evaluating COTS products in the scope of dependable, real time systems, through the application of fault injection techniques at key points of the software engineering process. By combining the use of robustness testing (fault injection at interface level) with software fault injection (using educated fault injection operators), a COTS component can be assessed in the context of the system it will belong to, with special emphasis given to timing and safety constraints that are usually imposed by the target real time dependable environment. In the course of this work, three case studies have been performed to assess the methodology using realistic scenarios that used common COTS products. Results for one case study are presented",2007,0, 2097,A Probabilistic Approach to Predict Changes in Object-Oriented Software Systems,"Predicting the changes in the next release of a software system has become a quest during its maintenance phase. Such a prediction can help managers to allocate resources more appropriately which results in reducing costs associated with software maintenance activities. A measure of change-proneness of a software system also provides a good understanding of its architectural stability. This research work proposes a novel approach to predict changes in an object oriented software system. The rationale behind this approach is that in a well-designed software system, feature enhancement or corrective maintenance should affect a limited amount of existing code. The goal is to quantify this aspect of quality by assessing the probability that each class will change in a future generation. 
Our proposed probabilistic approach uses the dependencies obtained from the UML diagrams, as well as other data extracted from source code of several releases of a software system using reverse engineering techniques. The proposed systematic approach has been evaluated on a multi-version medium size open source project, namely JFlex, the fast scanner generator for Java. The obtained results indicate the simplicity and accuracy of our approach in comparison with existing methods in the literature",2007,0, 2098,Application of Bayesian Networks to Architectural Optimisation,"The field of optimisation covers a great multitude of principles, methods and frameworks aimed at maximisation of an objective under constraints. However, the classical optimisation can not be easily applied in the context of computer-based systems architecture as there is not enough knowledge concerning the dependencies between non-functional qualities of the system. Our approach is based on the simulation optimisation methodology where the system simulation is first created to assess the current state of the design with respect to the objectives. The results of the simulation are used to construct a Bayesian belief network which effectively becomes a base for an objective function and serves as the main source of the decision support pertaining to the guidance of the optimisation process. The potential effects of each proposed change or combination of changes are then examined by updating and re-evaluating the system simulation",2007,0, 2099,Diagnosis of Embedded Software Using Program Spectra,"Automated diagnosis of errors detected during software testing can improve the efficiency of the debugging process, and can thus help to make software more reliable. In this paper we discuss the application of a specific automated debugging technique, namely software fault localization through the analysis of program spectra, in the area of embedded software in high-volume consumer electronics products. We discuss why the technique is particularly well suited for this application domain, and through experiments on an industrial test case we demonstrate that it can lead to highly accurate diagnoses of realistic errors",2007,0, 2100,System Level Performance Assessment of SOC Processors with SystemC,"This paper presents a system level methodology for modeling, and analyzing the performance of system-on-chip (SOC) processors. The solution adopted focuses on minimizing assessment time by modeling processors behavior only in terms of the performance metrics of interest. Formally, the desired behavior is captured through a C/C++ executable model, which uses finite state machines (FSM) as the underlying model of computation (MOC). To illustrate and validate our methodology we applied it to the design of a 16-bit reduced instruction set (RISC) processor. The performance metrics used to assess the quality of the design considered are power consumption and execution time. However, the methodology can be extended to any performance metric. The results obtained demonstrate the robustness of the proposed method both in terms of assessment time and accuracy",2007,0, 2101,IPOG: A General Strategy for T-Way Software Testing,"Most existing work on t-way testing has focused on 2-way (or pairwise) testing, which aims to detect faults caused by interactions between any two parameters. However, faults can also be caused by interactions involving more than two parameters.
In this paper, we generalize an existing strategy, called in-parameter-order (IPO), from pairwise testing to t-way testing. A major challenge of our generalization effort is dealing with the combinatorial growth in the number of combinations of parameter values. We describe a t-way testing tool, called FireEye, and discuss design decisions that are made to enable an efficient implementation of the generalized IPO strategy. We also report several experiments that are designed to evaluate the effectiveness of FireEye",2007,0, 2102,Transistor-Level Synthesis for Low-Power Applications,"An important factor which greatly affects the power consumption and the delay of a circuit is the input capacitance of its gates. High input capacitances increase the power consumption as well as the time for charging and discharging the inputs. Current approaches address this problem either through gate-level only resynthesis and optimization, or indirectly through transistor-level synthesis aimed for transistor count reduction. In this paper a method is presented to synthesize complex gates at the transistor level with explicit consideration of the switching activity profile for the gate. The method finds a power efficient implementation by giving priority to transistor inputs with higher switching activity, while keeping the overall number of required transistors low. Experimental results demonstrate the benefit of the approach",2007,0, 2103,A Model-Based Approach for Testing GUI Using Hierarchical Predicate Transition Nets,"Testing graphical user interface (GUI) has shown to be costly and difficult. Existing approaches for testing GUI are event-driven. In this paper, we propose a model based testing method to test the structural representation of GUIs specified in high class of Petri nets known as hierarchical predicate transitions nets (HPrTNs). In order to detect early design faults and fully benefit from HPrTNmodels, we have extended the original coverage criteria proposed for HPrTNs by event-based criteria defined for GUI testing",2007,0, 2104,Using Stealth Mixins to Achieve Modularity,"Organising a complex, interactive application into separate modules is a significant challenge. We would like to be able to evolve modules independently, and to add new modules into the system (or remove optional ones) without requiring major revisions to existing code. Solutions that rely on pre-planning when writing core modules are clumsy and error-prone, since programmers may omit to include all the required """"hooks,"""" unused hooks incur a runtime overhead, and any unanticipated extensions may still require significant code changes. Unfortunately, most languages do not provide adequate mechanisms for supporting the separation of modules. In this paper, we review partial solutions for typical object-oriented languages such as Java, and then present """"stealth mixins"""", a much more satisfactory solution that can be built on top of Common Lisp. The Common Lisp solution was developed for Gsharp, a sophisticated graphical music score editor, and the technique has been used extensively throughout the program. We use Gsharp as a running example throughout this paper. However, the ideas are applicable to a wide range of applications.",2007,0, 2105,Obstacles to Comprehension in Usage Based Reading,"Usage based reading (UBR) is a recent approach to object oriented software inspections. Like other scenario based reading (SBR) techniques it proposes a prescriptive reading procedure. 
However, the impact of such procedures upon comprehension is not well known, and consideration has not been given to established software cognition theories. This paper describes a study examining software comprehension in UBR inspections. Participants traced the events of a UML sequence diagram through Java source code while thinking aloud. An electronic interface collected real-time data, allowing the identification of """"points of interest"""", which were categorised according to issues affecting participants' performance. Together with indicators of participants' cognitive processes, this suggests that adherence to UBR scenarios is non-trivial. While UBR can detect more critical defects, we argue that a re-think of its prescriptive nature, including the use of cognition support, is required before it can become a practical reading technique.",2007,0, 2106,Improving Usability of Software Refactoring Tools,"Post-deployment maintenance and evolution can account for up to 75% of the cost of developing a software system. Software refactoring can reduce the costs associated with evolution by improving system quality. Although refactoring can yield benefits, the process includes potentially complex, error-prone, tedious and time-consuming tasks. It is these tasks that automated refactoring tools seek to address. However, although the refactoring process is well-defined, current refactoring tools do not support the full process. To develop better automated refactoring support, we have completed a usability study of software refactoring tools. In the study, we analysed the task of software refactoring using the ISO 9241-11 usability standard and Fitts' List of task allocation. Expanding on this analysis, we reviewed 11 collections of usability guidelines and combined these into a single list of 38 guidelines. From this list, we developed 81 usability requirements for refactoring tools. Using these requirements, the software refactoring tools Eclipse 3.2, Condenser 1.05, RefactorIT 2.5.1, and Eclipse 3.2 with the Simian UI 2.2.12 plugin were studied. Based on the analysis, we have selected a subset of the requirements that can be incorporated into a prototype refactoring tool intended to address the full refactoring process.",2007,0, 2107,Measuring the Strength of Indirect Coupling,"It is widely accepted that coupling plays an important role in software quality, particularly in the areas of software maintenance, so effort should be made to keep coupling levels to a minimum in order to reduce the complexity of the system. We have previously introduced the concept of """"indirect"""" coupling - coupling formed by relationships/dependencies that are not directly evident - with the belief that high levels of indirect coupling can constitute greater costs to maintenance as it is harder to detect. In this paper we extend our previous studies by proposing metrics that can advance our understanding of the exact relationship between indirect coupling and maintainability. In particulars the metrics focus on the reflection of """"strength"""" as it is a fundamental component of coupling. We present our observations on the results of applying the metrics to existing Java applications.",2007,0, 2108,Coupling Metrics for Predicting Maintainability in Service-Oriented Designs,"Service-oriented computing (SOC) is emerging as a promising paradigm for developing distributed enterprise applications. 
Although some initial concepts of SOC have been investigated in the research literature, and related technologies are in the process of adoption by an increasing number of enterprises, the ability to measure the structural attributes of service-oriented designs thus predicting the quality of the final software product does not currently exist. Therefore, this paper proposes a set of metrics for quantifying the structural coupling of design artefacts in service-oriented systems. The metrics, which are validated against previously established properties of coupling, are intended to predict the quality characteristic of maintainability of service-oriented software. This is expected to benefit both research and industrial communities as existing object-oriented and procedural metrics are not readily applicable to the implementation of service-oriented systems.",2007,0, 2109,Fault Detection Using Differential Flatness in Flight Guidance Systems,"In this paper, flight guidance dynamics are shown to be implicit differentially flat with respect to the inertial position of an aircraft. This proves the existence of a set of relations between these flat outputs and the state variables representative of flight guidance dynamics and between these flat outputs and the basic inputs to flight guidance dynamics. A neural network is introduced to obtain, from the actual trajectory, nominal flight parameters which can be compared with actual values to detect abnormal behaviour",2007,0, 2110,Is That a Fish in Your Ear? A Universal Metalanguage for Multimedia,"Developing the code to parse and generate multimedia bitstreams has traditionally been a repetitive and error-prone task. It has also been an area of application development that defied the goal of software reuse. In contrast, BSDL abstracts the minutiae of bitstream parsing out of software code, into an interoperable data file (the BSDL schema), allowing developers to concentrate on the functionality of their particular application. BSDL's approach has demonstrated applications at numerous points in the multimedia delivery chain. In the future, this approach may be extended to still other processing tasks, such as transcoding and transmoding, or to types of binary data other than multimedia",2007,0, 2111,Fault Injection Campaign for a Fault Tolerant Duplex Framework,"Software based fault tolerance may allow the use of COTS digital electronics in building a highly reliable computing system for spacecraft. In this work we present the results of a fault injection campaign we conducted on the Duplex Framework (DF). The DF is a software developed by the UCLA group [1], [2] that allows to run two copies (or replicas) of the same program on two different nodes of a commercial off-the-shelf (COTS) computer cluster. By the means of a third process (comparator) running on a different node that constantly monitors the results computed by the two replicas, the DF is able to restart the two replica processes if an inconsistency in their computation is detected. In order to test the reliability of the DF we wrote a simple fault injector that injects faults in the virtual memory of one of the replica process to simulate the effects of radiation in space. These faults occasionally cause the process to crash or produce erroneous outputs. 
For this study we used two different applications, one that computes an encryption of a input file using the RSA algorithm, and another that optimizes the trade-off between time spent and the fuel consumption for a low-thrust orbit transfer. But the DF is generic enough that any application written in C or Fortran could be used with little or no modification of the original source code. Our results show the potential of such approach in detecting and recovering from radiation induced random errors. This approach is very cost efficient compared to hardware implemented duplex operations and can be adopted to control processes on spacecrafts where the fault rate produced by cosmic rays is not very high.",2007,0, 2112,"Using Parallel Processing Tools to Predict Rotorcraft Performance, Stability, and Control","This paper discusses the development of the High Performance Computing (HPC) Collaborative Simulation and Test (CST) portfolio CST-03 program, one of the projects in the Common HPC Software Support Initiative (CHSSI) portfolio. The objective of this development was to provide computationally scalable tools to predict rotorcraft performance, stability, and control. The ability to efficiently predict and optimize vehicle performance, stability, and control from high fidelity computer models would greatly enhance the design and testing process and improve the quality of systems acquisition. Through this CHSSI development, the US Navy Test Pilot School performance, stability, and control test procedures were fully implemented in a high performance parallel computing environment. These Navy flight test support options were parallelized, implemented, and validated in the FLIGHTLAB comprehensive, multidisciplinary modeling environment. These tools were designed to interface with other CST compatible models and a standalone version of the tools (FLIGHTLAB-ASPECT) was delivered for use independent of the FLIGHTLAB development system. Tests on the MAUI Linux cluster indicated that there was over 25 times speedup using 32 CPUs. The tests also met the accuracy criteria as defined for the Beta trial.",2007,0, 2113,Applying a Formal Requirements Method to Three NASA Systems: Lessons Learned,"Recently, a formal requirements method called SCR (software cost reduction) was used to specify software requirements of mission-critical components of three NASA systems. The components included a fault protection engine, which determines how a spacecraft should respond to a detected fault; a fault detection, isolation and recovery component, which, in response to an undesirable event, outputs a failure notification and raises one or more alarms; and a display system, which allows a space crew to monitor and control on-orbit scientific experiments. This paper demonstrates how significant and complex requirements of one of the components can be translated into an SCR specification and describes the errors detected when the authors formulated the requirements in SCR. It also discusses lessons learned in using formal methods to document the software requirements of the three components. 
Based on the authors' experiences, the paper presents several recommendations for improving the quality of requirements specifications of safety-critical aerospace software.",2007,0, 2114,Program Model Checking Using Design-for-Verification: NASA Flight Software Case Study,"Model checking is a verification technique developed in the 1980s that has a history of industrial application in hardware verification and verification of communications protocol specifications. Program model checking is a technique for model checking software in which the program itself is the model to be checked. Program model checking has shown potential for detecting software defects that are extremely difficult to detect through traditional testing. The technique has been the subject of research and relatively small-scale applications but faces several barriers to wider deployment. This paper is a report on continuing work applying Java PathFinder (JPF), a program model checker developed at NASA Ames Research Center, to the shuttle abort flight management system, a situational awareness application originally developed for the space shuttle. The paper provides background on the model checking tools that were used and the target application, and then focuses on the application of a """"design for verification"""" (D4V) principle and its effect on model checking. The case study helps validate the applicability of program model checking technology to real NASA flight software. A related conclusion is that application of D4V principles can increase the efficiency of model checking in detecting subtle software defects. The paper is oriented toward software engineering technology transfer personnel and software practitioners considering introducing program model checking technology into their organizations.",2007,0, 2115,Envisioning the Next-Generation of Functional Testing Tools,"The functional test-driven development (FTDD) cycle moves functional test specification to the earliest part of the software development life cycle. Functional tests no longer merely assess quality; their purpose now is to drive quality. For some agile processes such as extreme programming, functional tests are the primary requirements specification artifact. When functional tests serve as both the system specification and the automated regression test safety net, they must remain viable for the production code's lifetime. A successful functional test-driven development strategy relies on effective tools across the application life cycle. This article reflects on FTDD teams' core tasks performed over the full application life cycle. It then envisions a conceptual functional testing framework and a concrete list of capabilities that satisfy these needs",2007,0, 2116,Determining Criteria for Selecting Software Components: Lessons Learned,"Software component selection is growing in importance. Its success relies on correctly assessing the candidate components' quality. For a particular project, you can assess quality by identifying and analyzing the criteria that affect it. Component selection depends on the suitability and completeness of the criteria used for evaluation. Experiences from determining criteria for several industrial projects provide important lessons. For a particular selection process, you can organize selection criteria into a criteria catalog.
A CC is built for a scope, which can be either a domain (workflow systems, mail servers, antivirus tools, and so on) or a category of domains (communication infrastructure, collaboration software, and so on). Structurally, a CC arranges selection criteria in a hierarchical tree-like structure. The higher-level selection criteria serve to classify more concrete selection criteria, usually allowing some overlap. They also serve to leverage the CC",2007,0, 2117,Empirical Validation of Three Software Metrics Suites to Predict Fault-Proneness of Object-Oriented Classes Developed Using Highly Iterative or Agile Software Development Processes,"Empirical validation of software metrics suites to predict fault proneness in object-oriented (OO) components is essential to ensure their practical use in industrial settings. In this paper, we empirically validate three OO metrics suites for their ability to predict software quality in terms of fault-proneness: the Chidamber and Kemerer (CK) metrics, Abreu's Metrics for Object-Oriented Design (MOOD), and Bansiya and Davis' Quality Metrics for Object-Oriented Design (QMOOD). Some CK class metrics have previously been shown to be good predictors of initial OO software quality. However, the other two suites have not been heavily validated except by their original proposers. Here, we explore the ability of these three metrics suites to predict fault-prone classes using defect data for six versions of Rhino, an open-source implementation of JavaScript written in Java. We conclude that the CK and QMOOD suites contain similar components and produce statistical models that are effective in detecting error-prone classes. We also conclude that the class components in the MOOD metrics suite are not good class fault-proneness predictors. Analyzing multivariate binary logistic regression models across six Rhino versions indicates these models may be useful in assessing quality in OO classes produced using modern highly iterative or agile software development processes.",2007,0, 2118,Estimation and Evaluation of Common Cause Failures,"Success of many modern applications is highly dependent on the correct functioning of complex computer based systems. In some cases, failures in these systems may cause serious consequences in terms of loss of human life. Systems in which failure could endanger human life are termed safety-critical. The SIS (safety instrumented system) should be designed to meet the required safety integrity level as defined in the safety requirement specification (safety requirement allocation). Moreover, the SIS design should be performed in a way that minimizes the potential for common mode or common cause failures (CCF). A CCF occurs when a single fault result in the corresponding failure of multiple components. Thus, CCFs can result in the SIS failing to function when there is a process demand. Consequently, CCFs have to be identified during the design process and the potential impact on the SIS functionality have to be understood. This paper gives details about the estimation and evaluation of common failures and assesses a loo2 system. It is a survey paper that presents the newest developments in common cause failure analysis.",2007,0, 2119,Scientific programming with Java classes supported with a scripting interpreter,"jLab environment provides a Matlab/Scilab like scripting language that is executed by an interpreter, implemented in the Java language. 
This language supports all the basic programming constructs and an extensive set of built in mathematical routines that cover all the basic numerical analysis tasks. Moreover, the toolboxes of jLab can be easily implemented in Java and the corresponding classes can be dynamically integrated to the system. The efficiency of the Java compiled code can be directly utilised for any computationally intensive operations. Since jLab is coded in pure Java, the build from source process is much cleaner, faster, platform independent and less error prone than the similar C/C++/Fortran-based open source environments (e.g. Scilab and Octave). Neuro-Fuzzy algorithms can require enormous computation resources and at the same time an expressive programming environment. The potentiality of jLab is demonstrated by describing the implementation of a Support Vector Machine toolkit and by comparing its performance with a C/C++ and a Matlab version and across different computing platforms (i.e. Linux, Sun/Solaris and Windows XP)",2007,0, 2120,Composite Event Detection in Wireless Sensor Networks,"Sensor networks can be used for event alarming applications. To date, in most of the proposed schemes, the raw or aggregated sensed data is periodically sent to a data consuming center. However, with this scheme, the occurrence of an emergency event such as a fire is hardly reported in a timely manner which is a strict requirement for event alarming applications. In sensor networks, it is also highly desired to conserve energy so that the network lifetime can be maximized. Furthermore, to ensure the quality of surveillance, some applications require that if an event occurs, it needs to be detected by at least k sensors where k is a user-defined parameter. In this work, we examine the timely energy-efficient k-watching event detection problem (TEKWEO). A topology-and-routing-supported algorithm is proposed which constructs a set of detection sets that satisfy the short notification time, energy conservation, and tunable quality of surveillance requirements for event alarming applications. Simulation results are shown to validate the proposed algorithm.",2007,0, 2121,Accurate Software-Related Average Current Drain Measurements in Embedded Systems,"Performing accurate average current drain measurements of digital programmable components (e.g., microcontrollers, digital signal processors, System-on-Chip, or wireless modules) is a critical and error-prone measurement problem for embedded system manufacturers due to the impulsive time-varying behavior of the current waveforms drawn from a battery in real operating conditions. In this paper, the uncertainty contributions affecting the average current measurements when using a simple and inexpensive digital multimeter are analyzed in depth. Also, a criterion to keep the standard measurement uncertainty below a given threshold is provided. The theoretical analysis is validated by means of meaningful experimental results",2007,0, 2122,Algorithmic Differentiation: Application to Variational Problems in Computer Vision,"Many vision problems can be formulated as minimization of appropriate energy functionals. These energy functionals are usually minimized, based on the calculus of variations (Euler-Lagrange equation). Once the Euler-Lagrange equation has been determined, it needs to be discretized in order to implement it on a digital computer. This is not a trivial task and, is moreover, error- prone. In this paper, we propose a flexible alternative. 
We discretize the energy functional and, subsequently, apply the mathematical concept of algorithmic differentiation to directly derive algorithms that implement the energy functional's derivatives. This approach has several advantages: First, the computed derivatives are exact with respect to the implementation of the energy functional. Second, it is basically straightforward to compute second-order derivatives and, thus, the Hessian matrix of the energy functional. Third, algorithmic differentiation is a process which can be automated. We demonstrate this novel approach on three representative vision problems (namely, denoising, segmentation, and stereo) and show that state-of-the-art results are obtained with little effort.",2007,0, 2123,Patching Processor Design Errors with Programmable Hardware,"Equipping processors with programmable hardware to patch design errors lets manufacturers release regular hardware patches, avoiding costly chip recalls and potentially speeding time to market. For each error detected, the manufacturer creates a fingerprint, which the customer uses to program the hardware. The hardware watches for error conditions; when they arise, it takes action to avoid the error. Overall, our scheme enables an exciting new environment where hardware design errors can be handled as easily as system software bugs, by applying a patch to the hardware",2007,0, 2124,Automatic Instruction-Level Software-Only Recovery,"Software-only reliability techniques protect against transient faults without the overhead of hardware techniques. Although existing low-level software-only fault-tolerance techniques detect faults, they offer no recovery assistance. This article describes three automatic, instruction-level, software-only recovery techniques representing different trade-offs between reliability and performance",2007,0,1708 2125,QoS Management of Real-Time Data Stream Queries in Distributed Environments,"Many emerging applications operate on continuous unbounded data streams and need real-time data services. Providing deadline guarantees for queries over dynamic data streams is a challenging problem due to bursty stream rates and time-varying contents. This paper presents a prediction-based QoS management scheme for real-time data stream query processing in distributed environments. The prediction-based QoS management scheme features query workload estimators, which predict the query workload using execution time profiling and input data sampling. In this paper, we apply the prediction-based technique to select the proper propagation schemes for data streams and intermediate query results in distributed environments. The performance study demonstrates that the proposed solution tolerates dramatic workload fluctuations and saves significant amounts of CPU time and network bandwidth with little overhead",2007,0, 2126,An Approach to Automated Agent Deployment in Service-Based Systems,"In service-based systems, services from various providers can be integrated following specific workflows to achieve users' goals. These workflows are often executed and coordinated by software agents, which invoke appropriate services based on situation changes. These agents need to be deployed on underlying platforms with respect to various requirements, such as access permission of agents, real-time requirements of workflows, and reliability of the overall system. Deploying these agents manually is often error-prone and time-consuming. 
Furthermore, agents need to migrate from host to host at runtime to satisfy deployment requirements. Hence, an automated agent deployment mechanism is needed. In this paper, an approach to automated agent deployment in service-based systems is presented. In this approach, the deployment requirements are represented as deployment policies, and techniques are developed for generating agent deployment plans by solving the constraints specified in deployment policies, and for generating executable code for runtime agent deployment and migration.",2007,0, 2127,Independent Model-Driven Software Performance Assessments of UML Designs,"In many software development projects, performance requirements are not addressed until after the application is developed or deployed, resulting in costly changes to the software or the acquisition of expensive high-performance hardware. To remedy this, researchers have developed model-driven performance analysis techniques for assessing how well performance requirements are being satisfied early in the software lifecycle. In some cases, companies may not have the expertise to perform such analysis on their software; therefore they have an independent assessor perform the analysis. This paper describes an approach for conducting independent model-driven software performance assessments of UML 2.0 designs and illustrates this approach using a real-time signal generator as a case study",2007,0, 2128,"OOPS for Motion Planning: An Online, Open-source, Programming System","The success of sampling-based motion planners has resulted in a plethora of methods for improving planning components, such as sampling and connection strategies, local planners and collision checking primitives. Although this rapid progress indicates the importance of the motion planning problem and the maturity of the field, it also makes the evaluation of new methods time consuming. We propose that a systems approach is needed for the development and the experimental validation of new motion planners and/or components in existing motion planners. In this paper, we present the online, open-source, programming system for motion planning (OOPSMP), a programming infrastructure that provides implementations of various existing algorithms in a modular, object-oriented fashion that is easily extendible. The system is open-source, since a community-based effort better facilitates the development of a common infrastructure and is less prone to errors. We hope that researchers will contribute their optimized implementations of their methods and thus improve the quality of the code available for use. A dynamic Web interface and a dynamic linking architecture at the programming level allows users to easily add new planning components, algorithms, benchmarks, and experiment with different parameters. The system allows the direct comparison of new contributions with existing approaches on the same hardware and programming infrastructure",2007,0, 2129,An Analysis of Performance Interference Effects in Virtual Environments,"Virtualization is an essential technology in modern datacenters. Despite advantages such as security isolation, fault isolation, and environment isolation, current virtualization techniques do not provide effective performance isolation between virtual machines (VMs). Specifically, hidden contention for physical resources impacts performance differently in different workload configurations, causing significant variance in observed system throughput.
To this end, characterizing workloads that generate performance interference is important in order to maximize overall utility. In this paper, we study the effects of performance interference by looking at system-level workload characteristics. In a physical host, we allocate two VMs, each of which runs a sample application chosen from a wide range of benchmark and real-world workloads. For each combination, we collect performance metrics and runtime characteristics using an instrumented Xen hypervisor. Through subsequent analysis of collected data, we identify clusters of applications that generate certain types of performance interference. Furthermore, we develop mathematical models to predict the performance of a new application from its workload characteristics. Our evaluation shows our techniques were able to predict performance with average error of approximately 5%",2007,0, 2130,A Smooth Refinement Flow for Co-designing HW and SW Threads,"Separation of HW and SW design flows represents a critical aspect in the development of embedded systems. Co-verification becomes necessary, thus implying the development of complex co-simulation strategies. This paper presents a refinement flow that delays as much as possible the separation between HW and SW concurrent entities (threads), allowing their differentiation, but preserving an homogeneous simulation environment. The approach relies on SystemC as the unique reference language. However, SystemC threads, corresponding to the SW application, are simulated outside the control of the SystemC simulation kernel to exploit the typical features of multi-threading real-time operating systems running on embedded systems. On the contrary HW threads maintain the original simulation semantics of SystemC. This allows designers to effectively tune the SW application before HW/SW partitioning, leaving to an automatic procedure the SW generation, thus avoiding error-prone and time-consuming manual conversions",2007,0, 2131,Automatic Application Specific Floating-point Unit Generation,"This paper describes the creation of custom floating point units (FPUs) for application specific instruction set processors (ASIPs). ASIPs allow the customization of processors for use in embedded systems by extending the instruction set, which enhances the performance of an application or a class of applications. These extended instructions are manifested as separate hardware blocks, making the creation of any necessary floating point instructions quite unwieldy. On the other hand, using a predefined FPU includes a large monolithic hardware block with considerable number of unused instructions. A customized FPU will overcome these drawbacks, yet the manual creation of one is a time consuming, error prone process. This paper presents a methodology for automatically generating floating-point units (FPUs) that are customized for specific applications at the instruction level. Generated FPUs comply with the IEEE754 standard, which is an advantage over FP format customization. Custom FPUs were generated for several Mediabench applications. Area savings over a fully-featured FPU of 26%-80% without resource sharing and 33%-87% with resource sharing were obtained. Clock period increased in some cases by up to 9.5% due to resource sharing",2007,0, 2132,Design Fault Directed Test Generation for Microprocessor Validation,"Functional validation of modern microprocessors is an important and complex problem.
One of the problems in functional validation is the generation of test cases that has higher potential to find faults in the design. We propose a model based test generation framework that generates tests for design fault classes inspired from software validation. There are two main contributions in this paper. Firstly, we propose a microprocessor modeling and test generation framework that generates test suites to satisfy modified condition decision coverage (MCDC), a structural coverage metric that detects most of the classified design faults as well as the remaining faults not covered by MCDC. Secondly, we show that there exists good correlation between types of design faults proposed by software validation and the errors/bugs reported in case studies on microprocessor validation. We demonstrate the framework by modeling and generating tests for the microarchitecture of VESPA, a 32-bit microprocessor. In the results section, we show that the tests generated using our framework's coverage directed approach detects the fault classes with 100% coverage, when compared to model-random test generation",2007,0, 2133,Microarchitectural Support for Program Code Integrity Monitoring in Application-specific Instruction Set Processors,"Program code in a computer system can be altered either by malicious security attacks or by various faults in microprocessors. At the instruction level, all code modifications are manifested as bit flips. In this work, we present a generalized methodology for monitoring code integrity at run-time in application-specific instruction set processors (ASIPs), where both the instruction set architecture (ISA) and the underlying micro architecture can be customized for a particular application domain. We embed monitoring microoperations in machine instructions, thus the processor is augmented with a hardware monitor automatically. The monitor observes the processor's execution trace of basic blocks at run-time, checks whether the execution trace aligns with the expected program behavior, and signals any mismatches. Since microoperations are at a lower software architecture level than processor instructions, the microarchitectural support for program code integrity monitoring is transparent to upper software levels and no recompilation or modification is needed for the program. Experimental results show that our microarchitectural support can detect program code integrity compromises with small area overhead and little performance degradation",2007,0, 2134,Optimizing jobs timeouts on clusters and production grids,"This paper presents a method to optimize the timeout value of computing jobs. It relies on a model of the job execution time that considers the job management system latency through a random variable. It also takes into account a proportion of outliers to model either reliable clusters or production grids characterized by faults causing jobs loss. Job management systems are first studied considering classical distributions. Different behaviors are exhibited, depending on the weight of the tail of the distribution and on the amount of outliers. Experimental results are then shown based on the latency distribution and outlier ratios measured on the EGEE grid infrastructure1. 
Those results show that using the optimal timeout value provided by our method reduces the impact of outliers and leads to a 1.36 speed-up even for reliable systems without outliers.",2007,0, 2135,Reliability Analysis of Self-Healing Network using Discrete-Event Simulation,"The number of processors embedded on high performance computing platforms is continuously increasing to accommodate user desire to solve larger and more complex problems. However, as the number of components increases, so does the probability of failure. Thus, both scalable and fault-tolerance of software are important issues in this field. To ensure reliability of the software especially under the failure circumstance, the reliability analysis is needed. The discrete-event simulation technique offers an attractive a ternative to traditional Markovian-based analytical models, which often have an intractably large state space. In this paper, we analyze reliability of a self-healing network developed for parallel runtime environments using discrete-event simulation. The network is designed to support transmission of messages across multiple nodes and at the same time, to protect against node and process failures. Results demonstrate the flexibility of a discrete-event simulation approach for studying the network behavior under failure conditions and various protocol parameters, message types, and routing algorithms.",2007,0, 2136,Using Stochastic AI Techniques to Achieve Unbounded Resolution in Finite Player Goore Games and its Applications,"The Goore Game (GG) introduced by M. L. Tsetlin in 1973 has the fascinating property that it can be resolved in a completely distributed manner with no intercommunication between the players. The game has recently found applications in many domains, including the field of sensor networks and quality-of-service (QoS) routing. In actual implementations of the solution, the players are typically replaced by learning automata (LA). The problem with the existing reported approaches is that the accuracy of the solution achieved is intricately related to the number of players participating in the game -which, in turn, determines the resolution. In other words, an arbitrary accuracy can be obtained only if the game has an infinite number of players. In this paper, we show how we can attain an unbounded accuracy for the GG by utilizing no more than three stochastic learning machines, and by recursively pruning the solution space to guarantee that the retained domain contains the solution to the game with a probability as close to unity as desired. The paper also conjectures on how the solution can be applied to some of the application domains",2007,0, 2137,A Comprehensive Empirical Study of Count Models for Software Fault Prediction,"Count models, such as the Poisson regression model, and the negative binomial regression model, can be used to obtain software fault predictions. With the aid of such predictions, the development team can improve the quality of operational software. The zero-inflated, and hurdle count models may be more appropriate when, for a given software system, the number of modules with faults are very few. Related literature lacks quantitative guidance regarding the application of count models for software quality prediction. This study presents a comprehensive empirical investigation of eight count models in the context of software fault prediction. 
It includes comparative hypothesis testing, model selection, and performance evaluation for the count models with respect to different criteria. The case study presented is that of a full-scale industrial software system. It is observed that the information obtained from hypothesis testing, and model selection techniques was not consistent with the predictive performances of the count models. Moreover, the comparative analysis based on one criterion did not match that of another criterion. However, with respect to a given criterion, the performance of a count model is consistent for both the fit, and test data sets. This ensures that, if a fitted model is considered good based on a given criterion, then the model will yield a good prediction based on the same criterion. The relative performances of the eight models are evaluated based on a one-way anova model, and Tukey's multiple comparison technique. The comparative study is useful in selecting the best count model for estimating the quality of a given software system",2007,0, 2138,Count Models for Software Quality Estimation,"Identifying which software modules, during the software development process, are likely to be faulty is an effective technique for improving software quality. Such an approach allows a more focused software quality & reliability enhancement endeavor. The development team may also like to know the number of faults that are likely to exist in a given program module, i.e., a quantitative quality prediction. However, classification techniques such as the logistic regression model (lrm) cannot be used to predict the number of faults. In contrast, count models such as the Poisson regression model (prm), and the zero-inflated Poisson (zip) regression model can be used to obtain both a qualitative classification, and a quantitative prediction for software quality. In the case of the classification models, a classification rule based on our previously developed generalized classification rule is used. In the context of count models, this study is the first to propose a generalized classification rule. Case studies of two industrial software systems are examined, and for each we developed two count models, (prm, and zip), and a classification model (lrm). Evaluating the predictive capabilities of the models, we concluded that the prm, and the zip models have similar classification accuracies as the lrm. The count models are also used to predict the number of faults for the two case studies. The zip model yielded better fault prediction accuracy than the prm. As compared to other quantitative prediction models for software quality, such as multiple linear regression (mlr), the prm, and zip models have a unique property of yielding the probability that a given number of faults will occur in any module",2007,0, 2139,A Multi-Objective Software Quality Classification Model Using Genetic Programming,"A key factor in the success of a software project is achieving the best-possible software reliability within the allotted time & budget. Classification models which provide a risk-based software quality prediction, such as fault-prone & not fault-prone, are effective in providing a focused software quality assurance endeavor. However, their usefulness largely depends on whether all the predicted fault-prone modules can be inspected or improved by the allocated software quality-improvement resources, and on the project-specific costs of misclassifications. 
Therefore, a practical goal of calibrating classification models is to lower the expected cost of misclassification while providing a cost-effective use of the available software quality-improvement resources. This paper presents a genetic programming-based decision tree model which facilitates a multi-objective optimization in the context of the software quality classification problem. The first objective is to minimize the """"Modified Expected Cost of Misclassification"""", which is our recently proposed goal-oriented measure for selecting & evaluating classification models. The second objective is to optimize the number of predicted fault-prone modules such that it is equal to the number of modules which can be inspected by the allocated resources. Some commonly used classification techniques, such as logistic regression, decision trees, and analogy-based reasoning, are not suited for directly optimizing multi-objective criteria. In contrast, genetic programming is particularly suited for the multi-objective optimization problem. An empirical case study of a real-world industrial software system demonstrates the promising results, and the usefulness of the proposed model",2007,0, 2140,Fault Detection and Recovery in a Transactional Agent Model,"Servers can be fault-tolerant through replication and checkpointing technologies in the client server model. However, application programs cannot be performed and servers might block in the two-phase commitment protocol due to the client fault. In this paper, we discuss the transactional agent model to make application programs fault-tolerant by taking advantage of mobile agent technologies where a program can move from a computer to another computer in networks. Here, an application program on a faulty computer can be performed on another operational computer by moving the program. A transactional agent moves to computers where objects are locally manipulated. Objects manipulated have to be held until a transactional agent terminates. Some sibling computers which the transactional agent has visited might be faulty before the transactional agent terminates. The transactional agent has to detect faulty sibling computers and make a decision on whether it commits/aborts or continues the computation by skipping the faulty computers depending on the commitment condition. For example, a transactional agent has to abort in the atomic commitment if a sibling computer is faulty. A transactional agent can just drop a faulty sibling computer in the at-least-one commitment. We evaluate the transactional agent model in terms of how long it takes for the transactional agent to treat faulty sibling computers.",2007,0, 2141,Efficient Analysis of Systems with Multiple States,A multistate system is a system in which both the system and its components may exhibit multiple performance levels (or states) varying from perfect operation to complete failure. Examples abound in real applications such as communication networks and computer systems. Analyzing the probability of the system being in each state is essential to the design and tuning of dependable multistate systems. The difficulty in analysis arises from the non-binary state property of the system and its components as well as dependence among those multiple states. This paper proposes a new model called multistate multivalued decision diagrams (MMDD) for the analysis of multistate systems with multistate components.
The computational complexity of the MMDD-based approach is low due to the nature of the decision diagrams. An example is analyzed to illustrate the application and advantages of the approach.,2007,0, 2142,Sentient Networks: A New Dimension in Network Capability,"As computer networks evolve to the next generation, they offer flexible infrastructures for communication as well as high vulnerability. In order to sustain the reliability, to prevent misuse, and to provide radically new services, these networks need new capabilities in non-traditional domains of operation such as perception and consciousness. Therefore, we propose Sentient Networks, as an early approach towards realizing them. There are several applications for such networking capability. For one, sentient networks can enable packet detection in a blackbox manner, detect encryption of data transferred, detecting and classifying data types such as voice, video, and data. Some of the applications of such capabilities, for example, include the use of emotion content of the voice packets through a network and classifying these packets into normal, moderately panicked, and extremely panicked. Such classifications can be used to provide better QoS schemes for panicked callers. We obtained a packet detection accuracy of about 65-70% for data packets and about 98% accuracy for detecting encrypted TCP packets. The emotion-aware QoS provision mechanism provided approximately 60% improvement in delay performance for voice packets generated by panicked sources.",2007,0, 2143,Assessment of Code Quality through Classification of Unit Tests in VeriNeC,"Unit testing is a tool for assessing code quality. Unit tests check the correctness of code fragments like methods, loops and conditional statements. Usually, every code fragment is involved in different tests. We propose a classification of tests, depending on the tested features, which delivers a higher detailed feedback than unclassified tests. Unclassified tests only deliver a feedback whether they failed or succeeded. The detailed feedback from the classified tests help to do a better code quality assessment and can be incorporated in tools helping to improve code quality. We demonstrate the power of this approach doing unit tests on network configuration.",2007,0, 2144,Detecting VLIW Hard Errors Cost-Effectively through a Software-Based Approach,"Research indicates that as technology scales, hard errors such as wear-out errors are increasingly becoming a critical challenge for microprocessor design. While hard errors in memory structures can be efficiently detected by error correction code, detecting hard errors for functional units cost-effectively is a challenging problem. In this paper, we propose to exploit the idle cycles of the under-utilized VLIW functional units to run test instructions for detecting wear-out errors without increasing the hardware cost or significantly impacting performance. We also explore the design space of this software-based approach to balance the error detection latency and the performance for VLIW architectures. Our experimental results indicate that such a software-based approach can effectively detect hard errors with minimum impact on performance for VLIW processors, which is particularly useful for reliable embedded applications with cost constraints.",2007,0, 2145,Making Embedded Software Development More Efficient with SOA,"SOA has been one of the most fascinating design paradigms for enterprise-level application software during the recent years. 
Key to its success has been the inherent support of reusability and scalability. This has brought forward significant advancements in the efficiency of SOA-based software during development, deployment and runtime. As a result of the ongoing increase of computational power on embedded devices, and the ever-increasing connectivity of these, SOA has become relevant also for devices with medium computational capabilities like WiFi routers. Extrapolation suggests that SOA will soon be seen on typical embedded systems like sensors and actuators. In this paper we make a survey to outline the potential of SOA to become a key factor in embedded software development. We believe that by embracing this paradigm, current obstacles in the embedded development process can be addressed more effectively, leading to an efficient and less error-prone design flow. Although some efforts in this direction have already been made, there are still areas open for research in order to optimize the development process for embedded SOA.",2007,0, 2146,Efficient Top-k Query Evaluation on Probabilistic Data,"Modern enterprise applications are forced to deal with unreliable, inconsistent and imprecise information. Probabilistic databases can model such data naturally, but SQL query evaluation on probabilistic databases is difficult: previous approaches have either restricted the SQL queries, or computed approximate probabilities, or did not scale, and it was shown recently that precise query evaluation is theoretically hard. In this paper we describe a novel approach, which computes and ranks efficiently the top-k answers to a SQL query on a probabilistic database. The restriction to top-k answers is natural, since imprecisions in the data often lead to a large number of answers of low quality, and users are interested only in the answers with the highest probabilities. The idea in our algorithm is to run in parallel several Monte-Carlo simulations, one for each candidate answer, and approximate each probability only to the extent needed to compute correctly the top-k answers.",2007,0, 2147,Assessment of Package Cohesion and Coupling Principles for Predicting the Quality of Object Oriented Design,"In determining the quality of design two factors are important, namely coupling and cohesion. This paper highlights the principles of package architecture from the cohesion and coupling point of view and discusses the method for extracting metrics associated with them. The method is supported with the help of a case study. The results arrived at from the case study are further discussed for use in predicting the quality of software.",2007,0, 2148,A Technique for Enabling and Supporting Debugging of Field Failures,"It is difficult to fully assess the quality of software in-house, outside the actual time and context in which it will execute after deployment. As a result, it is common for software to manifest field failures, failures that occur on user machines due to untested behavior. Field failures are typically difficult to recreate and investigate on developer platforms, and existing techniques based on crash reporting provide only limited support for this task. In this paper, we present a technique for recording, reproducing, and minimizing failing executions that enables and supports in-house debugging of field failures.
We also present a tool that implements our technique and an empirical study that evaluates the technique on a widely used e-mail client.",2007,0, 2149,Refactoring for Parameterizing Java Classes,"Type safety and expressiveness of many existing Java libraries and their client applications would improve if the libraries were upgraded to define generic classes. Efficient and accurate tools exist to assist client applications to use generic libraries, but so far the libraries themselves must be parameterized manually, which is a tedious, time-consuming, and error-prone task. We present a type-constraint-based algorithm for converting non-generic libraries to add type parameters. The algorithm handles the full Java language and preserves backward compatibility, thus making it safe for existing clients. Among other features, it is capable of inferring wildcard types and introducing type parameters for mutually-dependent classes. We have implemented the algorithm as a fully automatic refactoring in Eclipse. We evaluated our work in two ways. First, our tool parameterized code that was lacking type parameters. We contacted the developers of several of these applications, and in all cases they confirmed that the resulting parameterizations were correct and useful. Second, to better quantify its effectiveness, our tool parameterized classes from already-generic libraries, and we compared the results to those that were created by the libraries' authors. Our tool performed the refactoring accurately: in 87% of cases the results were as good as those created manually by a human expert, in 9% of cases the tool results were better, and in 4% of cases the tool results were worse.",2007,0, 2150,Predicting Faults from Cached History,"We analyze the version history of 7 software systems to predict the most fault-prone entities and files. The basic assumption is that faults do not occur in isolation, but rather in bursts of several related faults. Therefore, we cache locations that are likely to have faults: starting from the location of a known (fixed) fault, we cache the location itself, any locations changed together with the fault, recently added locations, and recently changed locations. By consulting the cache at the moment a fault is fixed, a developer can detect likely fault-prone locations. This is useful for prioritizing verification and validation resources on the most fault-prone files or entities. In our evaluation of seven open source projects with more than 200,000 revisions, the cache selects 10% of the source code files; these files account for 73%-95% of faults - a significant advance beyond the state of the art.",2007,0, 2151,Company-Wide Implementation of Metrics for Early Software Fault Detection,"To shorten time-to-market and improve customer satisfaction, software development companies commonly want to use metrics for assessing and improving the performance of their development projects. This paper describes a measurement concept for assessing how good an organization is at finding faults when most cost-effective, i.e., in most cases early. The paper provides results and lessons learned from applying the measurement concept widely at a large software development company. A major finding was that on average, 64 percent of all faults found would have been more cost-effective to find during unit tests.
An in-depth study of a few projects at a development unit also demonstrated how to use the measurement concept for identifying which parts of the fault detection process need to be improved to become more efficient (e.g., reducing the amount of time spent on rework).",2007,0, 2152,Enhancing Software Testing by Judicious Use of Code Coverage Information,"Recently, tools for the analysis and visualization of code coverage have become widely available. At first glance, their value in assessing and improving the quality of automated test suites seems to be obvious. Yet, experimental studies as well as experience from projects in industry indicate that their use is not without pitfalls. We found these tools quite beneficial in a number of recent projects. Therefore, we set out to gather code coverage information from one of these projects. In this experience report, first the system under scrutiny as well as our methodology is described. Then, four major questions concerning the impact and benefits of using these tools are discussed. Furthermore, a list of ten lessons learned is derived. The list may help developers judiciously use code coverage tools, in order to reap a maximum of benefits.",2007,0, 2153,STRADA: A Tool for Scenario-Based Feature-to-Code Trace Detection and Analysis,"Software engineers frequently struggle with understanding the relationships between the source code of a system and its requirements or high-level features. These relationships are commonly referred to as trace links. The creation and maintenance of trace links is a largely manual, time-consuming, and error-prone process. This paper presents STRADA (Scenario-based TRAce Detection and Analysis) - a tool that helps software engineers explore trace links to source code through testing. While testing is predominantly done to ensure the correctness of a software system, STRADA demonstrates a vital secondary benefit: by executing source code during testing it can be linked to requirements and features, thus establishing traceability automatically.",2007,0, 2154,DECIMAL and PLFaultCAT: From Product-Line Requirements to Product-Line Member Software Fault Trees,"PLFaultCAT is a tool for software fault tree analysis (SFTA) during product-line engineering. When linked with DECIMAL, a product-line requirements verification tool, the enhanced version of PLFaultCAT provides traceability between product-line requirements and SFTA hazards as well as semi-automated derivation of the SFTA for each new product-line system previously verified by DECIMAL. The combined tool reduces the effort needed to safely reuse requirements and customize the product-line SFTA as each new system is constructed.",2007,0, 2155,On Sufficiency of Mutants,"Mutation is the practice of automatically generating possibly faulty variants of a program, for the purpose of assessing the adequacy of a test suite or comparing testing techniques. The cost of mutation often makes its application infeasible. The cost of mutation is usually assessed in terms of the number of mutants, and consequently the number of ""mutation operators"" that produce them. We address this problem by finding a smaller subset of mutation operators, called ""sufficient"", that can model the behaviour of the full set. To do this, we provide an experimental procedure and adapt statistical techniques proposed for variable reduction, model selection and nonlinear regression.
Our preliminary results reveal interesting information about mutation operators.",2007,0, 2156,Adaptive Probabilistic Model for Ranking Code-Based Static Analysis Alerts,"Software engineers tend to repeat mistakes when developing software. Automated static analysis tools can detect some of these mistakes early in the software process. However, these tools tend to generate a significant number of false positive alerts. Due to the need for manual inspection of alerts, the high number of false positives may make an automated static analysis tool too costly to use. In this research, we propose to rank alerts generated from automated static analysis tools via an adaptive model that predicts the probability that an alert is a true fault in a system. The model adapts based upon a history of the actions the software engineer has taken to either filter false positive alerts or fix true faults. We hypothesize that by providing this adaptive ranking, software engineers will be more likely to act upon highly ranked alerts until the probability that remaining alerts are true positives falls below a subjective threshold.",2007,0, 2157,Mining Object Usage Models,"Programs usually follow many implicit programming rules or patterns, violations of which frequently lead to failures. This thesis proposes a novel approach to statically mine object usage models representing such patterns for objects used in a program. Additionally, we will describe how object usage models can be used to automatically detect defects, increase program understanding and support programmers by providing code templates. In preliminary experiments the proposed method detected two previously unknown bugs in open source software.",2007,0, 2158,A Quality-Driven Approach to Enable Decision-Making in Self-Adaptive Software,"Self-adaptive software is a closed-loop system that aims at altering itself in response to changes at runtime. Such a system, normally, requires monitoring, detecting (analyzing), deciding (planning), and acting (effecting) processes to fulfill adaptation requirements. This research mainly focuses on developing a quality-driven framework to facilitate realizing the deciding process. The framework is required to capture goals of adaptation, utility information, and domain characteristics in a knowledge-base.",2007,0, 2159,Languages for Safety-Critical Software: Issues and Assessment,"Safety-critical systems (whose anomalous behavior could have catastrophic consequences such as loss of human life) are becoming increasingly prevalent; standards such as DO-178B, originally developed for the certification of commercial avionics, are attracting attention in other communities. The requirement to comply with such standards imposes constraints (on quality assurance, traceability, etc.) far beyond what is typical for Commercial-Off-The-Shelf Software. One of the major decisions that affects the development of safety-critical software is the choice of programming language(s). Specific language features, either by their presence or absence, may make certification easier or harder. Indeed, full general-purpose languages are almost always too complex, and restricted subsets are required. This tutorial compares several languages currently in use or under consideration for safety-critical systems -- C (and also C++), Ada, and Java -- and assesses them with respect to their suitability to be constrained for use for such purposes.
It specifically examines the MISRA C subset, SPARK, and the in-progress effort to develop a safety-critical profile of the Real-Time Specification for Java. The tutorial also identifies the challenges that Object Oriented Programming imposes on safety certification and indicates possible future directions.",2007,0, 2160,A Bayesian network based trust model for improving collaboration in mobile ad hoc networks,"Functioning as fully decentralised distributed systems, without the need of predefined infrastructures, mobile ad hoc networks provide interesting solutions when setting up dynamic and flexible applications. However, these systems also bring up some problems. In such open environments, it is difficult to discover which nodes are malicious and which are not, in order to be able to choose good partners for cooperation. One solution for this to be possible is for the entities to be able to evaluate the trust they have in each other and, based on this trust, determine which entities they can cooperate with. In this paper, we present a trust model adapted to ad hoc networks and, more generally, to distributed systems. This model is based on Bayesian networks, a probabilistic tool which provides a flexible means of dealing with probabilistic problems involving causality. The model evaluates the trust in a server according both to direct experiences with the server and to recommendations concerning its service. We show, through a simulation, that the proposed model can determine the best server out of a set of eligible servers offering a given service. Such a trust model, when applied to ad hoc networks, tends to increase the QoS of the various services used by a host. This, when applied to security-related services, thus increases the overall security of the hosts.",2007,0, 2161,Interaction Analysis of Heterogeneous Monitoring Data for Autonomic Problem Determination,"Autonomic systems require continuous self-monitoring to ensure correct operation. Available monitoring data exists in a variety of formats, including log files, performance counters, traces, and state and configuration parameters. Such heterogeneity, together with the extremely large volume of data that could be collected, makes analysis very complex. To allow for more-effective problem determination, there is a need for a comprehensive integration of management data. In addition, monitoring should be adaptive to the current perceived operation of the system. In this paper we present an architecture to meet the above goals. We leverage an open-source XML-based format for data integration and describe an approach to automatically adjust monitoring for diagnosis when anomalies are detected. We have implemented a partial prototype using an Eclipse-based open-source platform. We show the effectiveness of our prototype based on fault-injection experiments. We also study issues of disparity of data formats, information overload, scalability, and automated problem determination.",2007,0, 2162,Determining Configuration Probabilities of Safety-Critical Adaptive Systems,"This article presents a novel technique to calculate the probability that an adaptive system assumes a configuration. An important application area of dynamic adaptation is the cost-efficient development of dependable embedded systems. Dynamic adaptation exploits implicitly available redundancy, reducing the need for hardware redundancy, to make systems more available, reliable, survivable and, ultimately, more safe.
Knowledge of configuration probabilities of a system is an essential requirement for the optimization of safety efforts in development. In perspective, it is also a prerequisite for dependability assessment. Our approach is based on a modeling language for complex reconfiguration behavior. We transform the adaptation model into a probabilistic target model that combines a compositional fault tree with Markov chains. This hybrid model can be evaluated efficiently using a modified BDD-based algorithm. The approach is currently being implemented in an existing reliability modeling tool.",2007,0, 2163,An Integrated Framework for Assessing and Mitigating Risks to Maritime Critical Infrastructure,"Maritime security poses daunting challenges of protecting thousands of potential targets, fixed and mobile, dispersed across US coasts and waterways. We describe an integrated approach to systematically assessing risks and prospective strategies for mitigating those risks. The maritime security risk analysis model (MSRAM) quantifies maritime risk in terms of threats, vulnerabilities, and consequences. We are extending MSRAM with novel ""what-if"" behavioral simulation software that projects the likely reduction of risks over time from adopting strategies for allocating existing security assets and investing in new security capabilities. Projected outcomes (and costs) can then be compared to identify the most robust strategies for mitigating risk. This decision support method can be re-applied over time as strategies are executed, to re-validate and adjust them in response to changing conditions and terrorist behaviors. This dynamic portfolio-based approach improves confidence, consistency, and quality of risk management decisions. It is extensible beyond the maritime domain to address other critical risk areas in homeland security.",2007,0, 2164,RF2ID: A Reliable Middleware Framework for RFID Deployment,"The reliability of RFID systems depends on a number of factors including: RF interference, deployment environment, configuration of the readers, and placement of readers and tags. While RFID technology is improving rapidly, a reliable deployment of this technology is still a significant challenge impeding widespread adoption. This paper investigates system software solutions for achieving a highly reliable deployment that mitigates inherent unreliability in RFID technology. We have developed (1) a virtual reader abstraction to improve the potentially error-prone nature of reader-generated data and (2) a novel path abstraction to capture the logical flow of information among virtual readers. We have designed and implemented an RFID middleware: RF2ID (reliable framework for radio frequency identification) to organize and support queries over data streams in an efficient manner. A prototype implementation using both RFID readers and simulated readers based on an empirical model of RFID readers shows that RF2ID is able to provide high reliability and support path-based object detection.",2007,0, 2165,Fast Failure Detection in a Process Group,"Failure detectors represent a very important building block in distributed applications. The speed and the accuracy of the failure detectors are critical to the performance of the applications built on them. In a common implementation of failure detectors based on heartbeats, there is a tradeoff between speed and accuracy, so it is difficult to be both fast and accurate.
Based on the observation that in many distributed applications, one process takes a special role as the leader, we propose a fast failure detection (FFD) algorithm that detects the failure of the leader both fast and accurately. Taking advantage of spatial multiple timeouts, FFD detects the failure of the leader within a time period of just a little more than one heartbeat interval, making it almost the fastest detection algorithm possible based on heartbeat messages. FFD could be used stand-alone in a static configuration where the leader process is fixed at one site. In a dynamic setting, where the role of leader has to be assumed by another site if the current leader fails, FFD could be used in collaboration with a leader election algorithm to speed up the process of electing a new leader.",2007,0, 2166,Identifying and Addressing Uncertainty in Architecture-Level Software Reliability Modeling,"Assessing reliability at early stages of software development, such as at the level of software architecture, is desirable and can provide a cost-effective way of improving a software system's quality. However, predicting a component's reliability at the architectural level is challenging because of uncertainties associated with the system and its individual components due to the lack of information. This paper discusses representative uncertainties which we have identified at the level of a system's components, and illustrates how to represent them in our reliability modeling framework. Our preliminary evaluation indicates promising results in our framework's ability to handle such uncertainties.",2007,0, 2167,Management of Virtual Machines on Globus Grids Using GridWay,"Virtual machines are a promising technology to overcome some of the problems found in current grid infrastructures, like heterogeneity, performance partitioning or application isolation. In this work, we present straightforward deployment of virtual machines in Globus grids. This solution is based on standard services and does not require additional middleware to be installed. Also, we assess the suitability of this deployment in the execution of a high-throughput scientific application, the XMM-Newton scientific analysis system.",2007,0, 2168,A Probabilistic Approach to Measuring Robustness in Computing Systems,"System builders are becoming increasingly interested in robust design. We believe that a methodology for generating robustness metrics helps the robust design research efforts and, in general, is an important step in the efforts to create robust computing systems. The purpose of the research in this paper is to quantify the robustness of a resource allocation, with the eventual objective of setting a standard that could easily be instantiated for a particular computing system to generate a robustness metric. We present our theoretical foundation for the robustness metric and give its instantiation for a particular system.",2007,0, 2169,Implementing Adaptive Performance Management in Server Applications,"Performance and scalability are critical quality attributes for server applications in Internet-facing business systems. These applications operate in dynamic environments with rapidly fluctuating user loads and resource levels, and unpredictable system faults. Adaptive (autonomic) systems research aims to augment such server applications with intelligent control logic that can detect and react to sudden environmental changes. However, developing this adaptive logic is complex in itself.
In addition, executing the adaptive logic consumes processing resources, and hence may (paradoxically) adversely affect application performance. In this paper we describe an approach for developing high-performance adaptive server applications and the supporting technology. The Adaptive Server Framework (ASF) is built on standard middleware services, and can be used to augment legacy systems with adaptive behavior without needing to change the application business logic. Crucially, ASF provides built-in control loop components to optimize the overall application performance, which comprises both the business and adaptive logic. The control loop is based on performance models and allows systems designers to tune the performance levels simply by modifying high-level declarative policies. We demonstrate the use of ASF in a case study.",2007,0, 2170,Towards Assessing Modularity,"It's noted in this workshop's call for papers that despite the emergence of a large number of ""modularisation techniques"" (e.g., aspects, design patterns, and so on), there are no standard approaches or ""rules of thumb"" for assessing the benefits and drawbacks of using these techniques in the construction of real software systems. In this paper we argue that the first step in assessing such techniques should be to determine their effect on modularity. Only then can we be sure that they have even been correctly classified as ""modularisation techniques"".",2007,0, 2171,An Evolutionary Approach to Software Modularity Analysis,"Modularity determines software quality in terms of evolvability, changeability, maintainability, etc., and a module could be a vertical slice through the source code directory structure or a class boundary. Given a modularized design, we need to determine whether its implementation realizes the designed modularity. Manually comparing source code modular structure with abstracted design modular structure is tedious and error-prone. In this paper, we present an automated approach to check the conformance of source code modularity to the designed modularity. Our approach uses design structure matrices (DSMs) as a uniform representation; it uses existing tools to automatically derive DSMs from the source code and design, and uses a genetic algorithm to automatically cluster DSMs and check the conformance. We applied our approach to a small canonical software system as a proof-of-concept experiment. The results supported our hypothesis that it is possible to check the conformance between source code structure and design structure automatically, and this approach has the potential to be scaled for use in large software systems.",2007,0, 2172,Assessing Module Reusability,"We propose a conceptual framework for assessing the reusability of modules. To do so, we define reusability of a module as the product of its functionality and its applicability. We then generalize the framework to the assessment of modularization techniques.",2007,0, 2173,Semantic Dependencies and Modularity of Aspect-Oriented Software,"Modularization of crosscutting concerns is the main benefit provided by aspect-oriented constructs. In order to rigorously assess the overall impact of this kind of modularization, we use design structure matrixes (DSMs) to analyze different versions (OO and AO) of a system. This is supported by the concept of semantic dependencies between classes and aspects, leading to a more faithful notion of coupling for AO systems.
We also show how design rules can make those dependencies explicit and, consequently, yield a more modular design.",2007,0, 2174,Defect Data Analysis Based on Extended Association Rule Mining,"This paper describes an empirical study to reveal rules associated with defect correction effort. We defined defect correction effort as a quantitative (ratio scale) variable, and extended conventional (nominal scale based) association rule mining to directly handle such quantitative variables. An extended rule describes the statistical characteristic of a ratio or interval scale variable in the consequent part of the rule by its mean value and standard deviation so that conditions producing distinctive statistics can be discovered. As an analysis target, we collected various attributes of about 1,200 defects found in a typical medium-scale, multi-vendor (distance development) information system development project in Japan. Our findings based on extracted rules include: (1) Defects detected in coding/unit testing were easily corrected (less than 7% of mean effort) when they were related to data output or validation of input data. (2) Nevertheless, they sometimes required much more effort (lift of standard deviation was 5.845) in case of low reproducibility. (3) Defects introduced in coding/unit testing often required large correction effort (mean was 12.596 staff-hours and standard deviation was 25.716) when they were related to data handling. From these findings, we confirmed that we need to pay attention to types of defects having large mean effort as well as those having large standard deviation of effort since such defects sometimes cause excess effort.",2007,0, 2175,Spam Filter Based Approach for Finding Fault-Prone Software Modules,"Because of the increase of needs for spam e-mail detection, the spam filtering technique has been improved as a convenient and effective technique for text mining. We propose a novel approach to detect fault-prone modules in a way that the source code modules are considered as text files and are applied to the spam filter directly. In order to show the applicability of our approach, we conducted experimental applications using source code repositories of Java-based open source developments. The result of experiments shows that our approach can classify more than 75% of software modules correctly.",2007,0, 2176,Identifying Changed Source Code Lines from Version Repositories,"Observing the evolution of software systems at different levels of granularity has been a key issue for a number of studies, aiming at predicting defects or at studying certain phenomena, such as the presence of clones or of crosscutting concerns. Versioning systems such as CVS and SVN, however, only provide information about lines added or deleted by a contributor: any change is shown as a sequence of additions and deletions. This provides an erroneous estimate of the amount of code changed. This paper shows how the evolution of changes at source code line level can be inferred from CVS repositories, by combining information retrieval techniques and the Levenshtein edit distance. The application of the proposed approach to the ArgoUML case study indicates a high precision and recall.",2007,0, 2177,Predicting Defects and Changes with Import Relations,"Lowering the number of defects and estimating the development time of a software project are two important goals of software engineering. To predict the number of defects and changes we train models with import relations.
This enables us to decrease the number of defects by more efficient testing and to assess the effort needed with respect to the number of changes.",2007,0, 2178,Local and Global Recency Weighting Approach to Bug Prediction,"Finding and fixing software bugs is a challenging maintenance task, and a significant amount of effort is invested by software development companies on this issue. In this paper, we use the Eclipse project's recorded software bug history to predict occurrence of future bugs. The history contains information on when bugs have been reported and subsequently fixed.",2007,0, 2179,An Approach for Specification-based Test Case Generation for Web Services,"Web services applications are built by the integration of many loosely coupled and reusable services using open standards. Testing Web services is important in detecting faults and assessing quality attributes. A difficulty in testing Web services applications is the unavailability of the source code for both the application builder and the broker. This paper proposes a solution to this problem by providing a formal, specification-based approach for automatically generating test cases for web services based on the XML Schema datatypes of the WSDL input message parts. Examples of using this approach are then given in order to give evidence of its usefulness. The role of the application builders and the brokers in using this approach to test Web services is also described.",2007,0, 2180,Construct Metadata Model based on Coupling Information to Increase the Testability of Component-based Software,"A software component must be tested every time it is reused, to guarantee the quality of both the component itself and the system in which it is to be integrated. So how to increase the testability of components has become a key technology in the software engineering community. This paper introduces a method to increase component testability. Firstly we analyze the meanings of component testability and the effective ways to increase testability. Then we give some definitions on the component coupling testing criterion. We further give the definitions of DU-I (definition-use information) and OP-Vs (observation-point values). Based on these, we introduce a definition-use table, which includes DU-I and OP-Vs items, to help component testers understand and observe the component better. Then a framework of testable components based on the above DU-table is given. These facilities provide ways to detect errors and to observe state variables by an observation-point-based monitor mechanism. We adopt coupling-based testing using the information the DU-table provides. Lastly, we applied the method to application software we had developed before and generated some test cases. Our method is compared with the Orso method and the Kan method using the same example, presenting the comparison results. The relevant results illustrate the validity of our method, effectively generating test cases and killing more mutants.",2007,0, 2181,Using Maintainability Based Risk Assessment and Severity Analysis in Prioritizing Corrective Maintenance Tasks,"A software product spends more than 65% of its lifecycle in maintenance. Software systems with good maintainability can be easily modified to fix faults. We define maintainability-based risk as a product of two factors: the probability of performing maintenance tasks and the impact of performing these tasks. In this paper, we present a methodology for assessing maintainability-based risk in the context of corrective maintenance.
The proposed methodology depends on the architectural artifacts and their evolution through the life cycle of the system. In order to prioritize corrective maintenance tasks, we combine components' maintainability-based risk with the severity of a failure that may happen as a result of an unfixed fault. We illustrate the methodology on a case study using UML models.",2007,0, 2182,Abnormal Process Condition Prediction (Fault Diagnosis) Using G2 Expert System,"Abnormal operating conditions (faults) in industrial processes have the potential to cause loss of production, loss of life and/or damage to the environment. The accidents, which could cost industry billions of dollars per year, can be prevented if abnormal process conditions are predicted and controlled in advance. Due to the increased process complexity and instability in operating conditions, the existing control system may have a limited ability to provide practical assistance to both operators and engineers. Advanced software applications, based on expert systems, have the potential to assist engineers in monitoring, detecting and diagnosing abnormal conditions and thus providing safeguards against unexpected process conditions.",2007,0, 2183,Simulation of Multi-Speed Vehicular Communication with Code Division Multiple Access,"In this paper, a mathematical model of vehicular communication UMTS (Universal Mobile Telecommunications System) is implemented in the software package MATLAB 6. The mathematical modeling allows the quality factor of the used code to be predicted. The accuracy of implementation is demonstrated by performing a sample simulation with a generator of narrow-band noises. The model remains sufficiently simple and efficient.",2007,0, 2184,The Reduction of Simulation Software Execution Time for Models of Integrated Electric Propulsion Systems through Partitioning and Distribution,"Software time-domain simulation models are useful to the naval engineering community both for the system design of future vessels and for the in-service support of existing vessels. For future platforms, the existence of a model of the vessel's electrical power system provides a means of assessing the performance of the system against defined requirements. This could be at the stage of requirements definition, bid assessment or any subsequent stage in the design process. For in-service support of existing platforms, the existence of a model of the vessel's electrical power system provides a means of assessing the possible cause and effect of operational defects reported by ship's staff, or of assessing the possible future implications of some change in the equipment line-up or operating conditions for the vessel. Detailed high fidelity time-domain simulation of systems, however, can be problematic due to extended execution time. This arises from the model's mathematically stiff nature: models of Integrated Electric Propulsion systems can also require significant computational resource. A conventional time-domain software simulation model is only able to utilize a single computer processor at any one time. The duration of time required to obtain results from a software model could be significantly reduced if more computer processors were utilized simultaneously. This paper details the development of a distributed simulation environment. This environment provides a mechanism for partitioning a time-domain software simulation model and running it on a cluster of computer processors.
The number of processors utilized in the cluster ranges between four and sixteen nodes. The benefit of this approach is that reductions in simulation duration are achievable by an appropriate choice of model partitioning. From an engineering perspective, any net timing reduction translates to an increase in the availability of data, from which more efficient analysis and design follows.",2007,0, 2185,Standard Tools for Hardware-in-the-Loop (HIL) Modeling and Simulation,"This paper presents a cost-effective hardware-in-the-loop (HIL) environment integrated into a Simulink real-time simulation. Using an inexpensive National Instruments data acquisition card in conjunction with the Simulink real-time simulator, a Schweitzer Engineering Laboratories (SEL) relay was integrated into a Simulink power system fault transient simulation. The sequence of events, from the fault initialization to the breaker opening, was captured through the Simulink model. During the simulation, the relay was fed generator terminal voltages and currents from the Simulink simulation. When the relay detects a fault condition, it sends an open-breaker signal back to the simulation. The simulation opens the virtual breaker until the fault is cleared and the open-breaker signal from the relay ceases.",2007,0, 2186,A Divergence-measure Based Classification Method for Detecting Anomalies in Network Traffic,"We present 'D-CAD,' a novel divergence-measure based classification method for anomaly detection in network traffic. The D-CAD method identifies anomalies by performing classification on features drawn from software sensors that monitor network traffic. We compare the performance of the D-CAD method with two classifier based anomaly detection methods implemented using supervised Bayesian estimation and supervised maximum-likelihood estimation. Results show that the area under receiver operating characteristic curve (AUC) of the D-CAD method is as high as 0.9524, compared to an AUC value of 0.9102 of the supervised maximum-likelihood estimation based anomaly detection method and to an AUC value of 0.8887 of the supervised Bayesian estimation based anomaly detection method.",2007,0, 2187,Fault Tolerant Signal Processing for Masking Transient Errors in VLSI Signal Processors,"This paper proposes fault tolerant signal processing strategies for achieving reliable performance in VLSI signal processors that are prone to transient errors due to increasingly smaller feature dimensions and supply voltages. The proposed methods are based on residue number system (RNS) coding, involving either hardware redundancy or multiple execution redundancy (MER) strategies designed to identify and overcome transient errors. RNS techniques provide powerful low-redundancy fault tolerance properties that must be introduced at VLSI design levels, whereas MER strategies generally require higher degrees of redundancy that can be introduced at software programming levels.",2007,0, 2188,Ground Penetrating Radar: A Smart Sensor for the Evaluation of the Railway Trackbed,"Ground Penetrating Radar (GPR) has become an increasingly attractive method for the engineering community, in particular for shallow high-resolution applications such as railway trackbed evaluation. It is a non-destructive smart sensing technique, which can be applied dynamically to achieve a continuous profile of the trackbed structure. Due to recent hardware and software improvements, real-time cursory analysis can be performed in the field.
Based on collected field data, the present paper investigates the applicability of the GPR smart sensor system in terms of railway trackbed assessment and concludes on the capability of the GPR sensing technique to adequately assess the ballast quality and the trackbed formation.",2007,0, 2189,A digital signal processing approach for modulation quality assessment in WiMAX systems,"Modulation quality in WiMAX systems that rely on OFDM modulation is dealt with. A digital signal processing approach is, in particular, proposed, aimed at assessing the performance of transmitters in terms of standard parameters. At present, WiMAX technology deployment is at the very beginning. Also, measurement instrumentation mandated to assist WiMAX apparatuses production and installation is not completely mature, and is expected to be significantly improved in terms of functionality and performance. In particular, no dedicated instrument is yet present on the market, but the available solutions are arranged by complementing existing hardware, such as real-time spectrum analyzers and vector signal analyzers, with proper analysis software. Differently from the aforementioned solutions, the proposed approach is independent of the specific hardware platform mandated to the demodulation of the incoming WiMAX signal. It can operate, in fact, with any hardware capable of achieving and delivering the baseband I and Q components of the signal under analysis. Moreover, being open-source it can be improved or upgraded according to future needs.",2007,0, 2190,Business-Driven Optimization of Policy-Based Management solutions,"We consider whether the off-line compilation of a set of Service Level Agreements (SLAs) into low-level management policies can lead to the runtime maximization of the overall business profit for a service provider. Using a simple Web application hosting SLA template for a utility service provider, we derive low-level QoS management policies and validate their consistency. We show how the default first come first served (FCFS) mechanism for the runtime scheduling of triggered policies fails to deliver maximum business profit for the service provider at all times. To achieve a better business profit, first a penalty/reward model that is derived from the SLA Service Level Objectives (SLOs) is used to assign runtime utility tags to triggered policies. Then three policy scheduling algorithms, which are based on the prediction of the future state of the running SLAs, are used to drive the runtime actions of the Policy Decision Point (PDP). The prediction function per se involved the unsolved problem of predicting in real time the evolution of the transient state of a variant of an M/M/Ct/Ct queue. A simple approximate solution to the latter problem is provided. Finally, using the VS policy simulator tool, comparative simulation results for the business profit generated by each of the proposed policy scheduling algorithms are presented. VS is a novel tool which we have developed to respond to the increasing need of benchmarking SLA and policy-based management solutions.",2007,0, 2191,Reducing Complexity of Software Deployment with Delta Configuration,"Deploying a modern software service usually involves installing several software components, and configuring these components properly to realize the complex interdependencies between them. This process, which accounts for a significant portion of information technology (IT) cost, is complex and error-prone.
In this paper, we propose delta configuration - an approach that reduces the cost of software deployment by eliminating a large number of choices on parameter values that administrators have to make during deployment. In delta configuration, the complex software stack of a distributed service is first installed and tested in a test environment. The resulting software images are then captured and used for deployment in production environments. To deploy a software service, we only need to copy these pre-configured software images into a production environment and modify them to account for the difference between the test environment and a production environment. We have implemented a prototype system that achieves software deployment using delta configuration of the configuration state captured inside virtual machines. We perform a case study to demonstrate that our scheme leads to substantial reduction in complexity for the customer, over the traditional software deployment method.",2007,0, 2192,A Straightforward Approach to Introduce FDC-Methods for Wet-Process-Equipment,"In this paper the benefits from introducing FDC-methods for the monitoring of wet-chemistry batch-processors will be outlined. As an example, typical spray-process equipment will be used to demonstrate the benefits of using fault detection and classification. These include monitoring of process parameters and the frequency of equipment alarms and, as a result, indicating the current state of the equipment. Batch processors usually have a high throughput and in the case of wet-chemistry equipment the process result may be monitored only indirectly. This makes the knowledge about the state of the equipment obligatory for minimizing scrap. The aim of FDC-methods is to detect tool degradation by recording and interpreting the process parameters during every run and, if necessary, triggering preventive maintenance. Process engineers, maintenance engineers and the equipment operator need to have all the information they need for their decisions delivered in a way they can easily access. To achieve this, Infineon Technologies has implemented various software tools which all share the same database.",2007,0, 2193,Probabilistic Field Coverage using a Hybrid Network of Static and Mobile Sensors,"Providing field coverage is a key issue in many sensor network applications. For a field with unevenly distributed static sensors, a quality coverage with acceptable network lifetime is often difficult to achieve. We propose a hybrid network that consists of both static and mobile sensors, and we suggest that it can be a cost-effective solution for field coverage. The main challenges of designing such a hybrid network are, first, determining the necessary coverage contributions from each type of sensor; and second, scheduling the sensors to achieve the desired coverage contributions, which includes activation scheduling for static sensors and movement scheduling for mobile sensors. In this paper, we offer an analytical study on the above problems, and the results also lead to a practical system design. Specifically, we present an optimal algorithm for calculating the contributions from different types of sensors, which fully exploits the potentials of the mobile sensors and maximizes the network lifetime. We then present a random walk model for the mobile sensors. The model is distributed with very low control overhead.
Its parameters can be fine-tuned to match the moving capability of different mobile sensors and the demands from a broad spectrum of applications. A node collaboration scheme is then introduced to further enhance the system performance. We demonstrate through analysis and simulation that, in our hybrid design, a small set of mobile sensors can effectively address the uneven distribution of the static sensors and significantly improve the coverage quality.",2007,0, 2194,Distance Relay With Out-of-Step Blocking Function Using Wavelet Transform,"The out-of-step blocking function in distance relays is required to distinguish between a power swing and a fault. Speedy and reliable detection of symmetrical faults during power swings presents a challenge. This paper introduces the wavelet transform to reliably and quickly detect power swings as well as detect any fault during a power swing. The total number of dyadic wavelet levels of voltage/current waveforms and the choice of particular levels for such detection are carefully studied. A logic block based on the wavelet transform is developed. The output of this block is combined with the output of the conventional digital distance relay to achieve desired performance during power swings. This integrated relay is extensively tested on a simulated system using PSCAD/EMTDC software.",2007,0, 2195,Classification of Electrical Disturbances in Real Time Using Neural Networks,"Power-quality (PQ) monitoring is an essential service that many utilities perform for their industrial and larger commercial customers. Detecting and classifying the different electrical disturbances which can cause PQ problems is a difficult task that requires a high level of engineering knowledge. This paper presents a novel system based on neural networks for the classification of electrical disturbances in real time. In addition, an electrical pattern generator has been developed in order to generate common disturbances which can be found in the electrical grid. The classifier obtained excellent results (for both test patterns and field tests) thanks in part to the use of this generator as a training tool for the neural networks. The neural system is integrated on a software tool for a PC with hardware connected for signal acquisition. The tool makes it possible to monitor the acquired signal and the disturbances detected by the system.",2007,0, 2196,Expert System for Power Quality Disturbance Classifier,"Identification and classification of voltage and current disturbances in power systems are important tasks in the monitoring and protection of power systems. Most power quality disturbances are non-stationary and transitory and the detection and classification have proved to be very demanding. The concept of the discrete wavelet transform for feature extraction of power disturbance signals, combined with artificial neural networks and fuzzy logic, is incorporated as a powerful tool for detecting and classifying power quality problems. This paper employs a different type of univariate randomly optimized neural network combined with the discrete wavelet transform and fuzzy logic to achieve better power quality disturbance classification accuracy. The disturbances of interest include sag, swell, transient, fluctuation, and interruption. The system is modeled using VHSIC hardware description language (VHDL), a hardware description language, followed by extensive testing and simulation to verify the functionality of the system that allows efficient hardware implementation of the same.
The proposed method achieves 98.19% classification accuracy when applied to software-generated signals and utility-sampled disturbance events.",2007,0, 2197,An Efficient Network Anomaly Detection Scheme Based on TCM-KNN Algorithm and Data Reduction Mechanism,"Network anomaly detection plays a vital role in securing networks and infrastructures. Current research concentrates on how to effectively reduce the high false alarm rate and usually ignores the fact that the poor quality of data for the modeling of normal patterns as well as the high computational cost make current anomaly detection methods not perform as well as expected. Based on these, we first propose a novel data mining scheme for network anomaly detection in this paper. Moreover, we adopt data reduction mechanisms (including genetic algorithm (GA) based instance selection and filter based feature selection methods) to boost the detection performance, meanwhile reducing the computational cost of TCM-KNN. Experimental results on the well-known KDD Cup 1999 dataset demonstrate the proposed method can effectively detect anomalies with high detection rates and low false positives, as well as with higher confidence than the state-of-the-art anomaly detection methods. Furthermore, the data reduction mechanisms greatly improve the performance of TCM-KNN and make it a good candidate for anomaly detection in practice.",2007,0, 2198,Quality-Based Fusion of Multiple Video Sensors for Video Surveillance,"In this correspondence, we address the problem of fusing data for object tracking for video surveillance. The fusion process is dynamically regulated to take into account the performance of the sensors in detecting and tracking the targets. This is performed through a function that adjusts the measurement error covariance associated with the position information of each target according to the quality of its segmentation. In this manner, localization errors due to incorrect segmentation of the blobs are reduced, thus improving tracking accuracy. Experimental results on video sequences of outdoor environments show the effectiveness of the proposed approach.",2007,0, 2199,On Modeling and Developing Self-Healing Web Services Using Aspects,"Like any computing application, Web services are subject to failure and unavailability due to multiple reasons like faulty Web service code and unreliable communication infrastructure. Manual correction of Web service failures is error-prone and time-consuming. An effective Web services environment should be able to monitor its state, diagnose faults, and automatically recover from failures. This process is known as self-healing. In this paper, we address self-healing issues of Web services using Aspect-Oriented Programming (AOP). AOP supports separation of self-healing concerns from Web services code and promotes maintenance and reusability.",2007,0, 2200,A Middleware Architecture for Replica Voting on Fuzzy Data in Dependable Real-time Systems,"Majority voting among replicated data collection devices enhances the trustworthiness of data flowing from a hostile external environment. It allows correct data fusion and dissemination by the end-users, in the presence of content corruptions and/or timing failures that may possibly occur during data collection. In addition, a device may operate on fuzzy inputs, thereby generating data that occasionally deviate from the reference datum in the physical world.
In this paper, we provide a QoS-oriented approach to manage the data flow through various system elements. The application-level QoS parameters we consider are timeliness and accuracy of data. The underlying protocol-level parameters that influence data delivery performance are the data sizes, network bandwidth, device asynchrony, and data fuzziness. A replica voting protocol takes into account the interplay between these parameters as the faulty behavior of malicious devices unfolds in various forms during data collection. Our QoS-oriented approach casts the well-known fault-tolerance technique, namely, 2-phase voting, with control mechanisms that adapt the data delivery to meet the end-to-end constraints - such as latency, data integrity, and resource cost. The paper describes a middleware architecture to realize our QoS-oriented approach to the management of replicated data flows.",2007,0, 2201,Impact of Retransmission Delays on Multilayer Video Streaming over IEEE 802.11e Wireless Networks,"In this paper, we seek to establish probabilistic bounds of retransmission delays for transporting multilayer video frames over IEEE 802.11e QAP/QSTA with the enhanced MAC distributed coordination function (EDCF). We consider end-to-end multilayer video streaming that uses hybrid FEC/ARQ error detection and control. Under multiple priority levels of IEEE 802.11e MAC EDCF, we first establish steady-state collision probabilities and contention resolution delays, given the number of nodes. We introduce a time-varying Rayleigh slow-fading channel error model and study its effect on MAC EDCF transmissions. For video transmissions, we model the expected waiting time of the EDCF MAC video queue using a head-of-line (HOL) priority queueing discipline, with the MAC delay distribution derived earlier as the service distribution. The total MAC EDCF video (base layer) queueing delay is the sum of the expected waiting time of high-priority voice frames, the service residual of best-effort data and the expected waiting time of video frames at the HOL queue. Next, we model video retransmission events at the receiver as a renewal-reward process of frame(s) identified for retransmission to establish the ""spread""-time between successful renewal events. The ""spread""-time is indeed the probabilistic retransmission bound that we seek for a single video frame identified for retransmission. We verify our model and analytical bounds using an in-house multimedia mobile communication platform (MMCP), written entirely in software to study the cross-layer interworking between MAC and transport for IEEE 802.11 and 802.11e MAC. MMCP currently supports MPEG4 single-layer and FGS two-layer with concurrent voice and video streaming capabilities. Our model, when combined with receiver-based channel feedback, can yield jitter-free, rate-adaptive and guaranteed ""base"" video quality.",2007,0, 2202,Differentiation of Wireless and Congestion Losses in TCP,"TCP is the most commonly used data transfer protocol. It assumes every packet loss to be congestion loss and reduces the sending rate. This will decrease the sender's throughput when there is an appreciable rate of packet loss due to link error and not due to congestion. This issue is significant for wireless links. We present an extension of TCP-Casablanca, which improves TCP performance over wireless links.
A new discriminator is proposed that not only differentiates congestion and wireless losses, but also identifies the congestion level in the network, i.e., whether the network is lightly congested or heavily congested and throttles the sender's rate according to the congestion level in the network.",2007,0, 2203,Recovering Workflows from Multi Tiered E-commerce Systems,"A workflow is a computerized specification of a business process. A workflow describes how tasks are executed and ordered following business policies. E-commerce systems implement the workflows of the daily operations of an organization. Organizations must continuously modify their e-commerce systems in order to accommodate workflow changes. However, e-commerce systems are often designed and developed without referring to the workflows. Modifying e-commerce systems is a time consuming and error prone task. In order to correctly perform this task, developers require an in-depth understanding of multi tiered e-commerce systems and the workflows that they implement. In this paper, we present an approach which automatically recovers workflows from three tier e-commerce systems. Given the starting UI page of a particular workflow, the approach traces the flow of control throughout the different tiers of the e-commerce system in order to recover that workflow. We demonstrate the effectiveness of our approach through experiments on an open source e-commerce system.",2007,0, 2204,Efficient Belief Propagation for Vision Using Linear Constraint Nodes,"Belief propagation over pairwise connected Markov random fields has become a widely used approach, and has been successfully applied to several important computer vision problems. However, pairwise interactions are often insufficient to capture the full statistics of the problem. Higher-order interactions are sometimes required. Unfortunately, the complexity of belief propagation is exponential in the size of the largest clique. In this paper, we introduce a new technique to compute belief propagation messages in time linear with respect to clique size for a large class of potential functions over real-valued variables. We demonstrate this technique in two applications. First, we perform efficient inference in graphical models where the spatial prior of natural images is captured by 2 times 2 cliques. This approach shows significant improvement over the commonly used pairwise-connected models, and may benefit a variety of applications using belief propagation to infer images or range images. Finally, we apply these techniques to shape-from-shading and demonstrate significant improvement over previous methods, both in quality and in flexibility.",2007,0, 2205,A Classification of Architectural Reliability Models,"With the widespread use of software systems in the modern society, reliability of these systems have become as important as the functionality they provide. Building reliability into the software development process thus becomes critical for cost effective development and quality assurance. Existing reliability models (applied in post-implementation phases) may not be suitable to address reliability analysis at the software architecture level, as they often rely on implementation-level artifacts. In this paper, we present a framework for classifying reliability models based on their applicability to architectural artifacts, and assess several representative approaches based on the proposed classification. 
This study highlights several areas for future research.",2007,0, 2206,Real-Time Model-Based Fault Detection and Diagnosis for Alternators and Induction Motors,"This paper describes a real-time model-based fault detection and diagnosis software. The electric machines diagnosis system (EMDS) covers field winding shorted-turns fault in alternators and stator windings shorted-turns fault in induction motors. The EMDS has a modular architecture. The modules include: acquisition and data treatment; well-known parameters estimation algorithms, such as recursive least squares (RLS) and extended Kalman filter (EKF); dynamic models for faults simulation; faults detection and identification tools, such as M.L.P. and S.O.M. neural networks and fuzzy C-means (FCM) technique. The modules working together detect possible faulty conditions of various machines working in parallel through routing. A fast, safe and efficient data manipulation requires a great DataBase managing system (DBMS) performance. In our experiment, the EMDS real-time operation demonstrated that the proposed system could efficiently and effectively detect abnormal conditions resulting in lower-cost maintenance for the company.",2007,0, 2207,Applying Novel Resampling Strategies To Software Defect Prediction,"Due to the tremendous complexity and sophistication of software, improving software reliability is an enormously difficult task. We study the software defect prediction problem, which focuses on predicting which modules will experience a failure during operation. Numerous studies have applied machine learning to software defect prediction; however, skewness in defect-prediction datasets usually undermines the learning algorithms. The resulting classifiers will often never predict the faulty minority class. This problem is well known in machine learning and is often referred to as learning from unbalanced datasets. We examine stratification, a widely used technique for learning unbalanced data that has received little attention in software defect prediction. Our experiments are focused on the SMOTE technique, which is a method of over-sampling minority-class examples. Our goal is to determine if SMOTE can improve recognition of defect-prone modules, and at what cost. Our experiments demonstrate that after SMOTE resampling, we have a more balanced classification. We found an improvement of at least 23% in the average geometric mean classification accuracy on four benchmark datasets.",2007,0, 2208,Using Model-Driven Development in Time-Constrained Course Projects,"Educational software development processes, used in course projects, must exercise practices and artifacts comparable to similar industry-level processes, while achieving acceptable productivity and quality, and, at the same time, complying with constraints on available student time. Here, we discuss our experience with a specific model-driven development process, applied in a time-constrained software engineering course. The course projects are developed in iterations, each delivering a subset of the product functions. These, specified as use cases, undergo a sequence of model transformations, until they become tested code. Transformation steps are verified using standardized quality gates (inspections, tests, and audits), which serve three purposes: teaching verification, validation and quality assurance; helping to assess and grade projects; and providing feedback for process improvement. Size, effort and defect data is recorded in standardized reports. 
Collected data show that the quality gates proved effective to ensure compliance with the prescribed process, and that using a balanced reusable framework is necessary to achieve satisfactory productivity and quality.",2007,0, 2209,Statistical QoS Guarantee and Energy-Efficiency in Web Server Clusters,"In this paper we study the soft real-time web cluster architecture needed to support e-commerce and related applications. Our testbed is based on an industry standard, which defines a set of Web interactions and database transactions with their deadlines, for generating real workload and bench-marking e-commerce applications. In these soft real-time systems, the quality of service (QoS) is usually defined as the fraction of requests that meet the deadlines. When this QoS is measured directly, regardless of whether the request missed the deadline by an epsilon amount of time or by a large difference, the result is always the same. For this reason, only counting the number of missed requests in a period avoids the observation of the real state of the system. Our contributions are theoretical propositions of how to control the QoS, not measuring the QoS directly, but based on the probability distribution of the tardiness in the completion time of the requests. We call this new QoS metric tardiness quantile metric (TQM). The proposed method provides fine-grained control over the QoS so that we can make a closer examination of the relation between QoS and energy efficiency. We validate the theoretical results showing experiments in a multi-tiered e-commerce web cluster implemented using only open-source software solutions.",2007,0, 2210,Toward the Use of Automated Static Analysis Alerts for Early Identification of Vulnerability- and Attack-prone Components,"Extensive research has shown that software metrics can be used to identify fault- and failure-prone components. These metrics can also give early indications of overall software quality. We seek to parallel the identification and prediction of fault- and failure-prone components in the reliability context with vulnerability- and attack-prone components in the security context. Our research will correlate the quantity and severity of alerts generated by source code static analyzers to vulnerabilities discovered by manual analyses and testing. A strong correlation may indicate that automated static analyzers (ASA), a potentially early technique for vulnerability identification in the development phase, can identify high risk areas in the software system. Based on the alerts, we may be able to predict the presence of more complex and abstract vulnerabilities involved with the design and operation of the software system. An early knowledge of vulnerability can allow software engineers to make informed risk management decisions and prioritize redesign, inspection, and testing efforts. This paper presents our research objective and methodology.",2007,0, 2211,Distributed algorithm for change detection in satellite images for Grid Environments,"This paper presents a solution for real-time satellite image processing. The focus is on the detection of changes in MODIS images. We present a distributed algorithm for change detection which is based on extracting relevant parameters from MODIS spectral bands. The algorithm detects the changes between two images of the same geographical area at different time moments. The algorithm, able to run in a Grid system, is scalable, fault-tolerant. 
We present the experimental results of this algorithm considering three spectral bands and different input images. We also propose a method to integrate applications based on this algorithm into the MedioGRID architecture.",2007,0, 2212,Visualization of Growth Curve Data from Phenotype Microarray Experiments,"Phenotype microarrays provide a technology to simultaneously survey the response of an organism to nearly 2,000 substrates, including carbon, nitrogen and potassium sources; varying pH; varying salt concentrations; and antibiotics. In order to more quickly and easily view and compare the large number of growth curves produced by phenotype microarray experiments, we have developed software to produce and display color images, each of which corresponds to a set of 96 growth curves. Using color images to represent growth curves data has proven to be a valuable way to assess experiment quality, compare replicates, facilitate comparison of the responses of different organisms, and identify significant phenotypes. The color images are linked to traditional plots of growth versus time, as well as to information about the experiment, organism, and substrate. In order to share and view information and data project-wide, all information, plots, and data are accessible using only a Web browser.",2007,0, 2213,A Software Factory for Air Traffic Data,"Modern information systems require a flexible, scalable, and upgradable infrastructure that allows communication, and subsequently collaboration, between heterogeneous information processing and computing environments. Heterogeneous systems often use different data representations for the same data items, limiting collaboration and increasing the cost and complexity of system integration. Although this problem is conceptually straightforward, the process of data conversion is error prone, often dramatically underestimated, and surprisingly complex. The complexity is often the result of the non-standard data representations that are used by computing systems in the aviation domain. This paper describes work that is being done to address this challenge. A prototype software factory for air traffic data is being built and evaluated. The software factory provides the capability to create data and interface models for use in the air traffic domain. The model will allow the user to specify entities such as data items, scaling, units, headers and footers, representation, and coding. The factory automatically creates a machine usable data representation. A prototype for a Domain Specific Language to assist in this task is being developed. This paper describes the scope of the work and the overall approach.",2007,0, 2214,Evaluating the Combined Effect of Vulnerabilities and Faults on Large Distributed Systems,"On large and complex distributed systems hardware and software faults, as well as vulnerabilities, exhibit significant dependencies and interrelationships. Being able to assess their actual impact on the overall system dependability is especially important. The goal of this paper is to propose a unifying way of describing a complex hardware and software system, in order to assess the impact of both vulnerabilities and faults by means of the same underlying reasoning mechanism, built on a standard Prolog inference engine. 
Some preliminary experimental results show that a prototype tool based on these techniques is both feasible and able to achieve encouraging performance levels on several synthetic test cases.",2007,0, 2215,Evaluation of MDA/PSM database model quality in the context of selected non-functional requirements,"Conceptual, logical, and physical database models can be regarded as PIM, PSM1, and PSM2 data models within MDA architecture, respectively. Many different logical database models can be derived from a given conceptual database model by applying a set of transformations rules. To choose a logical database model for further transformation (to physical database model at PSM2 level) some selection criteria based on quality demands, e.g. database efficiency, easy maintainability or portability, should be established. To evaluate quality of the database models some metrics should be provided. We present metrics for measuring two selected, conflicted quality characteristics (efficiency and maintainability), next we analyse correlation between the metrics, and finally propose how to assess quality of logical database models.",2007,0, 2216,Error Recovery Problems,"The paper deals with the problem of handling detected faults in computer systems. We present software procedures targeted at fault detection, fault masking and error recovery. They are discussed in the context of standard PC Windows and Linux environments. Various aspects of checkpointing and recovery policies are studied. The presented considerations are illustrated with some experimental results obtained in our fault injection testbench.",2007,0, 2217,Analysis of Timing Requirements for Intrusion Detection System,"An intrusion detection system (IDS) is a collection of sensors (often in the form of mobile agents) that collect data (security related events), classify them and trigger an alarm when unwanted manipulations to regular network behaviour is detected. Activities of attackers and network are time dependent. In the paper, fault trees with time dependencies (FTTD) are used to describe intrusions with emphasis put on timing properties. In FTTD, events and gates are characterized by time parameters. FTTD are used in verification whether the IDS reacts sufficiently quick on the intrusions. As an example, """"the victim trusts the intruder"""" attack is analysed.",2007,0, 2218,Dependability Assessment of Grid Middleware,"Dependability is a key factor in any software system due to the potential costs in both time and money a failure may cause. Given the complexity of grid applications that rely on dependable grid middleware, tools for the assessment of grid middleware are highly desirable. Our past research, based around our fault injection technology (FIT) framework and its implementation, WS-FIT, has demonstrated that network level fault injection can be a valuable tool in assessing the dependability of traditional Web services. Here we apply our FIT framework to globus grid middleware using grid-FIT, our new implementation of the FIT framework, to obtain middleware dependability assessment data. We conclude by demonstrating that grid-FIT can be applied to globus grid systems to assess dependability as part of a fault removal mechanism and thus allow middleware dependability to be increased.",2007,0, 2219,A Tunable Add-On Diagnostic Protocol for Time-Triggered Systems,"We present a tunable diagnostic protocol for generic time-triggered (TT) systems to detect crash and send/receive omission faults. 
Compared to existing diagnostic and membership protocols for TT systems, it does not rely on the single-fault assumption and tolerates malicious faults. It runs at the application level and can be added on top of any TT system (possibly as a middleware component) without requiring modifications at the system level. The information on detected faults is accumulated using a penalty/reward algorithm to handle transient faults. After a fault is detected, the likelihood of node isolation can be adapted to different system configurations, including those where functions with different criticality levels are integrated. Using actual automotive and aerospace parameters, we experimentally demonstrate the transient fault handling capabilities of the protocol.",2007,0, 2220,Experimental Risk Assessment and Comparison Using Software Fault Injection,"One important question in component-based software development is how to estimate the risk of using COTS components, as the components may have hidden faults and no source code available. This question is particularly relevant in scenarios where it is necessary to choose the most reliable COTS when several alternative components of equivalent functionality are available. This paper proposes a practical approach to assess the risk of using a given software component (COTS or non-COTS). Although we focus on comparing components, the methodology can be useful to assess the risk in individual modules. The proposed approach uses the injection of realistic software faults to assess the impact of possible component failures and uses software complexity metrics to estimate the probability of residual defects in software components. The proposed approach is demonstrated and evaluated in a comparison scenario using two real off-the-shelf components (the RTEMS and the RTLinux real-time operating system) in a realistic application of a satellite data handling application used by the European Space Agency.",2007,0, 2221,Uniformity by Construction in the Analysis of Nondeterministic Stochastic Systems,"Continuous-time Markov decision processes (CTMDPs) are behavioral models with continuous-time, nondeterminism and memoryless stochastics. Recently, an efficient timed reachability algorithm for CTMDPs has been presented, allowing one to quantify, e. g., the worst-case probability to hit an unsafe system state within a safety critical mission time. This algorithm works only for uniform CTMDPs -- CTMDPs in which the sojourn time distribution is unique across all states. In this paper we develop a compositional theory for generating CTMDPs which are uniform by construction. To analyze the scalability of the method, this theory is applied to the construction of a fault-tolerant workstation cluster example, and experimentally evaluated using an innovative implementation of the timed reachability algorithm. All previous attempts to model-check this seemingly well-studied example needed to ignore the presence of nondeterminism, because of lacking support for modelling and analysis.",2007,0, 2222,A Reinforcement Learning Approach to Automatic Error Recovery,"The increasing complexity of modern computer systems makes fault detection and localization prohibitively expensive, and therefore fast recovery from failures is becoming more and more important. A significant fraction of failures can be cured by executing specific repair actions, e.g. rebooting, even when the exact root causes are unknown. 
However, designing reasonable recovery policies to effectively schedule potential repair actions could be difficult and error prone. In this paper, we present a novel approach to automate recovery policy generation with reinforcement learning techniques. Based on the recovery history of the original user-defined policy, our method can learn a new, locally optimal policy that outperforms the original one. In our experimental work on data from a real cluster environment, we found that the automatically generated policy can save 10% of machine downtime.",2007,0, 2223,On the Quality of Service of Crash-Recovery Failure Detectors,"In this paper, we study and model a crash-recovery target and its failure detector's probabilistic behavior. We extend quality of service (QoS) metrics to measure the recovery detection speed and the proportion of the detected failures of a crash-recovery failure detector. Then the impact of the dependability of the crash-recovery target on the QoS bounds for such a crash-recovery failure detector is analysed by adopting general dependability metrics such as MTTF and MTTR. In addition, we analyse how to estimate the failure detector's parameters to achieve the QoS from a requirement based on Chen's NFD-S algorithm. We also demonstrate how to execute the configuration procedure of this crash-recovery failure detector. The simulations are based on the revised NFD-S algorithm with various MTTF and MTTR. The simulation results show that the dependability of a recoverable monitored target could have significant impact on the QoS of such a failure detector and match our analysis results.",2007,0,3980 2224,Performability Models for Multi-Server Systems with High-Variance Repair Durations,"We consider cluster systems with multiple nodes where each server is prone to run tasks at a degraded level of service due to some software or hardware fault. The cluster serves tasks generated by remote clients, which are potentially queued at a dispatcher. We present an analytic queueing model of such systems, represented as an M/MMPP/1 queue, and derive and analyze exact numerical solutions for the mean and tail-probabilities of the queue-length distribution. The analysis shows that the distribution of the repair time is critical for these performability metrics. Additionally, in the case of high-variance repair times, the model reveals so-called blow-up points, at which the performance characteristics change dramatically. Since this blowup behavior is sensitive to a change in model parameters, it is critical for system designers to be aware of the conditions under which it occurs. Finally, we present simulation results that demonstrate the robustness of this qualitative blow-up behavior towards several model variations.",2007,0, 2225,Finding Errors in Interoperating Components,"Two or more components (e.g., objects, modules, or programs) interoperate when they exchange data, such as XML data. Currently, there is no approach that can detect a situation at compile time when one component modifies XML data so that it becomes incompatible for use by other components, delaying discovery of errors to runtime. Our solution, a verifier for interoperating components for finding logic faults (Viola) builds abstract programs from the source code of components that exchange XML data. Viola symbolically executes these abstract programs thereby obtaining approximate specifications of the data that would be output by these components. 
The computed and expected specifications are compared to find errors in XML data exchanges between components. We describe our approach, implementation, and give our error checking algorithm. We used Viola on open source and commercial systems and discovered errors that were not detected during their design and testing.",2007,0, 2226,Code Generation on Steroids: Enhancing COTS Code Generators via Generative Aspects,"Commercial of-the-shelf (COTS) code generators have become an integral part of modern commercial software development. Programmers use code generators to facilitate many tedious and error-prone software development tasks including language processing, XML data binding, graphical component creation, and middleware deployment. Despite the convenience offered by code generators, the generated code is not always adequate for the task at hand. This position paper proposes an approach to address this problem. We utilize the power of aspect oriented programming (AOP) to enhance the functionality of generated code. Furthermore, our approach enables the programmer to specify these enhancements through an intuitive graphical interface. Our proof of concept software tool provides event-handling aspect/aspects that enhance the functionality of the XML processing classes automatically generated by a commercial of- the-shelf code generator, Castor.",2007,0, 2227,Adequate and Precise Evaluation of Quality Models in Software Engineering Studies,"Many statistical techniques have been proposed and introduced to predict fault-proneness of program modules in software engineering. Choosing the """"best"""" candidate among many available models involves performance assessment and detailed comparison. But these comparisons are not simple due to varying performance measures and the related verification and validation cost implications. Therefore, a methodology for precise definition and evaluation of the predictive models is still needed. We believe the procedure we outline here, if followed, has a potential to enhance the statistical validity of future experiments.",2007,0, 2228,Using Developer Information as a Factor for Fault Prediction,"We have been investigating different prediction models to identify which files of a large multi-release industrial software system are most likely to contain the largest numbers of faults in the next release. To make predictions we considered a number of different file characteristics and change information about the files, and have built fully- automatable models that do not require that the user have any statistical expertise. We now consider the effect of adding developer information as a prediction factor and assess the extent to which this affects the quality of the predictions.",2007,0, 2229,Predicting Defects for Eclipse,"We have mapped defects from the bug database of eclipse (one of the largest open-source projects) to source code locations. The resulting data set lists the number of pre- and post-release defects for every package and file in the eclipse releases 2.0, 2.1, and 3.0. We additionally annotated the data with common complexity metrics. All data is publicly available and can serve as a benchmark for defect prediction models.",2007,0, 2230,A Workflow-Based Non-intrusive Approach for Enhancing the Survivability of Critical Infrastructures in Cyber Environment,"The focus of this paper is on vulnerabilities which exist in supervisory control and data acquisition (SCADA) systems. 
Cyber attacks targeting weaknesses in these systems can seriously degrade the survivability of a critical system. Detailed here is a non-intrusive approach for improving the survivability of these systems without interruption of their normal process flow. In a typical SCADA system, unsafe conditions are avoided by including interlocking logic code on the base system. This prevents conflicting operations from starting at inappropriate times, and provides corrective action or graceful shut-down of the system when a potentially unsafe condition is detected. If this code or these physical devices are manipulated remotely, the system can fail with unpredictable results. In the proposed approach, a workflow is constructed on a system outside of the attack path and separate from the process under control. The workflow is a combination of the functional behavior of a SCADA system and a model generated by cyber attack scenarios in that system. A cause and effect relationship of commands processed by the SCADA system is simulated in the workflow to help detect malicious operations. The workflow then contain functional and survivability knowledge of the underlying system. Failures induced by the introduction of malicious logic will be predicted by simulating the fault in the workflow. Modeling these modes of failure will be valuable in implementing damage control. This model is event driven and conducts simulation externally, hence does not interfere with normal functionality of the underlying systems.",2007,0, 2231,Experiences from Representing Software Architecture in a Large Industrial Project Using Model Driven Development,"A basic idea of model driven development (MDD) is to capture all important design information in a set of formal or semi formal models that are automatically kept consistent by tools. This paper reports on industrial experience from use of MDD and shows that the approach needs improvements regarding the architecture since there are no suggested ways to formalize design rules which are an important part of the architecture. Instead, one has to rely on time consuming and error prone manual interpretations, reviews and reworkings to keep the system consistent with the architecture. To reap the full benefits of MDD it is therefore important to find ways of formalizing design rules to make it possible to allow automatic enforcement of the architecture on the system model.",2007,0, 2232,An Evolution Model for Software Modularity Assessment,"The value of software design modularity largely lies in the ability to accommodate potential changes. Each modularization technique, such as aspect-oriented programming and object-oriented design patterns, provides one way to let some part of a system change independently of all other parts. A modularization technique benefits a design if the potential changes to the design can be well encapsulated by the technique. In general, questions in software evolution, such as which modularization technique is better and whether it is worthwhile to refactor, should be evaluated against potential changes. In this paper, we present a decision-tree-based framework to generally assess design modularization in terms of its changeability. In this framework, we formalize design evolution questions as decision problems, model software designs and potential changes using augmented constraint networks (ACNs), and represent design modular structure before and after envisioned changes using design structure matrices (DSMs) derived from ACNs. 
We formalize change impacts using an evolution vector to precisely capture well-known informal design principles. As a preliminary evaluation, we use this model to compare the aspect-oriented and object-oriented observer pattern in terms of their ability to accommodate envisioned changes. The results confirm previous published results, but in formal and quantitative ways.",2007,0, 2233,Integrated Management of Company Processes and Standard Processes: A Platform to Prepare and Perform Quality Management Appraisals,"Business processes have been introduced in many companies during the last years. But it was not clear how to measure the quality of these processes. ISO/IEC 15504 and CMMI have filled this gap and provide measurement frameworks to assess the maturity of processes. However, introducing and adapting processes to comply with these standards is difficult and error-prone. Especially the integration of the requirements of the standards into the company processes is hard to accomplish. In this paper we propose an integrated process modeling approach that is able to bridge the gap between business processes and requirements of the standards. Building on this integrated model we are able to produce reports that systematically uncover and display weaknesses in the process.",2007,0, 2234,Refactoring--Does It Improve Software Quality?,"Software systems undergo modifications, improvements and enhancements to cope with evolving requirements. This maintenance can cause their quality to decrease. Various metrics can be used to evaluate the way the quality is affected. Refactoring is one of the most important and commonly used techniques of transforming a piece of software in order to improve its quality. However, although it would be expected that the increase in quality achieved via refactoring is reflected in the various metrics, measurements on real life systems indicate the opposite. We analyzed source code version control system logs of popular open source software systems to detect changes marked as refactorings and examine how the software metrics are affected by this process, in order to evaluate whether refactoring is effectively used as a means to improve software quality within the open source community.",2007,0, 2235,A Rapid Fault Injection Approach for Measuring SEU Sensitivity in Complex Processors,"Processors are very common components in current digital systems and to assess their reliability is an essential task during the design process. In this paper a new fault injection solution to measure SEU sensitivity in processors is presented. It consists in a hardware-implemented module that performs fault injection through the available JTAG-based On-Chip Debugger (OCD). It can be widely applicable to different processors since JTAG standard is an extended interface and OCDs are usually available in current processors. The hardware implementation avoids the communication between the target system and the software debugging tool. The method has been applied to a complex processor, the ARM7TDMI. Results illustrate the approach is a fast, efficient and cost-effective solution.",2007,0, 2236,Analysis of System-Failure Rate Caused by Soft-Errors using a UML-Based Systematic Methodology in an SoC,"This paper proposes an analytical method to assess the soft-error rate (SER) in the early stages of a System-on-Chip (SoC) platform-based design methodology. 
The proposed method gets an executable UML (Unified Modeling Language) model of the SoC and the raw soft- error rate of different parts of the platform as its inputs. Soft-errors on the design are modeled by disturbances on the value of attributes in the classes of the UML model and disturbances on opcodes of software cores. The Dynamic behavior of each core is used to determine the propagation probability of each variable disturbance to the core outputs. Furthermore, the SER and the execution time of each core in the SoC and a Failure Modes and Effects Analysis (FMEA) that determines the severity of each failure mode in the SoC are used to compute the System-Failure Rate (SFR) of the SoC.",2007,0, 2237,Self-Adaptive Systems for Information Survivability: PMOP and AWDRAT,"Information systems form the backbones of the critical infrastructures of modern societies. Unfortunately, these systems are highly vulnerable to attacks that can result in enormous damage. This paper describes two related systems PMOP and AWDRAT that were developed during the DARPA Self Regenerative Systems program. PMOP defends against insider attacks while AWDRAT is intended to detect compromises to software systems. Both rely on self-monitoring, diagnosis and self-adaptation. We describe both systems and show the results of experiments with each.",2007,0, 2238,Online Applications of Wavelet Transforms to Power System Relaying - Part II,"Recent wavelet developments in power engineering applications, include detection, localization, classification, identification, storage, compression, and network/system analysis of the power quality disturbance signals, to very recently, power system relaying [1,2,3,4]. This paper assesses the online use of wavelet analysis to power system relaying. The paper presents a novel technique for transmission-line fault detection and classification using the DWT for which an optimal selection of mother wavelet and data window size based on the minimum entropy criterion has been performed. The paper starts with the review of recent work within the field of wavelet analysis and its applications to power systems engineering. Then, the theoretical background of the technique is presented and the proposed method is described in detail. Finally, the effect of different parameters on the algorithm are examined in order to highlight its performance. Typical fault conditions on a practical 220 kV power system as generated by ATP/EMTP is analyzed with Daubechies wavelets. The performance of the fault classifier is tested using MATLAB software. The feasibility of using wavelet analysis to detect and classify faults is investigated. Finally it discusses the results, limitations and possible improvement. It is found that the use of wavelet transforms together with an effective classification procedure is considered to be straightforward, fast, computationally efficient and allow for real-time accurate applications in monitoring and classifying techniques in power engineering.",2007,0, 2239,Implementation and Applications of Wide-area monitoring systems,"This paper discusses the design and applications of wide area monitoring and control systems, which can complement classical protection systems and SCADA/EMS applications. System wide installed phasor measurement units send their measured data to a central computer, where snapshots of the dynamic system behavior are made available online. 
This new quality of system information opens up a wide range of new applications to assess and actively maintain the system's stability in case of voltage, angle or frequency instability, thermal overload and oscillations. Recently developed algorithms and their design for these application areas are introduced. With practical examples, the benefits in terms of system security are shown.",2007,0, 2240,A Web-Based Fuel Management Software System for a Typical Indian Coal based Power Plant,"The fuel management system forms an integral part of the management process in a power plant and hence is one of the most critical areas. It deals with the management of commercial, operational and administrative functions pertaining to estimating fuel requirements, selection of fuel suppliers, fuel quality check, transportation and fuel handling, payment for fuel received, consumption and calculation of fuel efficiency. The results are then used for cost benefit analysis to suggest further plant improvement. At various levels, management information reports need to be extracted to communicate the required information across various levels of management. The core processes of fuel management involve a huge amount of paperwork and manual labour, which makes them tedious, time-consuming and prone to human errors. Moreover, the time taken at each stage as well as the transparency of the relevant information has a direct bearing on the economics and efficient operation of the power plant. Both system performance and information transparency can be enhanced by the introduction of Information Technology in managing this area. This paper reports on the development of Web-based Fuel Management System Software, based on 3-tiered J2EE architecture, which aims at systematic functioning of the Core Business Processes of Fuel Management of a typical coal-fired thermal power plant in the Indian power scenario.",2007,0, 2241,Modeling Distribution Overcurrent Protective Devices for Time-Domain Simulations,"Poor overcurrent protective device coordination can cause prolonged and unnecessary voltage variation problems. The coordination of many protective devices can be a difficult task and, unfortunately, device performance and accuracy are not evaluated once the device settings are chosen and deployed. Ideally, an automated system would interrogate system data at the substation and estimate voltage variations and assess protective device performance. To test the accuracy of the automated system, a simulation model is developed to generate test data. The time-domain overcurrent protective device models can be used to estimate the duration of voltage sag during utility fault clearing operation as well. This paper presents overcurrent protective device models created in a time-domain power system simulator. The radial distribution simulation, also made in the same time-domain software, allows testing of different overcurrent protection device settings and placement.",2007,0, 2242,Bad-Smell Metrics for Aspect-Oriented Software,"Aspect-oriented programming (AOP) is a new programming paradigm that improves separation of concerns by decomposing the crosscutting concerns into aspect modules. Bad smells are metaphors to describe software patterns that are generally associated with bad design and bad programming of object-oriented programming (OOP).
New notions and different ways of thinking for developing aspect-oriented (AO) software inevitably introduce bad smells which are specific bad design and bad programming in AO software called AO bad smells. Software metrics have been used to measure software artifact for a better understanding of its attributes and to assess its quality. Bad-smell metrics should be used as indicators for determining whether a particular fraction of AO code contains bad smells or not. Therefore, this paper proposes definition of metrics corresponding to the characteristic of each AO bad smell as a means to detecting them. The proposed bad-smell metrics are validated and the results show that the proposed bad- smell metrics can preliminarily indicate bad smells hidden in AO software.",2007,0, 2243,Advanced Verification of Distributed WS-BPEL Business Processes Incorporating CSSA-based Data Flow Analysis,"The Business Process Execution Language for Web Services WS-BPEL provides an technology to aggregate encapsulated functionalities for defining high-value Web services. For a distributed application in a B2B interaction, the partners simply need to expose their provided functionality as BPEL processes and compose them. Verifying such distributed web service based systems has been a huge topic in the research community lately - cf. [4] for a good overview. However, in most of the work on analyzing properties of interacting Web Services, especially when backed by stateful implementations like WS-BPEL, the data flow present in the implementation is widely neglected, and the analysis focusses on control flow only. This might lead to false-positive analysis results when searching for design weaknesses and errors, e. g. analyzing the controllability [14] of a given BPEL process. In this paper, we present a method to extract dataflow information by constructing a CSSA representation and detecting data dependencies that effect communication behavior. Those discovered dependencies are used to construct a more precise formal model of the given BPEL process and hence to improve the quality of analysis results.",2007,0, 2244,A Bayesian network based Qos assessment model for web services,"Quality of service (QoS) plays a key role in Web services. In an open and volatile environment, a provider may not deliver the QoS it declared. Hence, it's necessary to provide a QoS assessment model to determine the likely behavior of a provider. Although many researches have been done to develop models and techniques to assist users in QoS assessment, most of them ignore various QoS requirements of users, which are great important to evaluate a provider adopting the policy based on service differentiation. In this paper, we propose an approach, called Bayesian network based QoS assessment model, to QoS assessment. Through online learning, it supports to update the corresponding Bayesian network dynamically. The salient feature of this model is that it can correctly predict the provider's capability in various combinations of users' QoS requirements, especially to the provider with different service levels. Experimental results show that the proposed QoS assessment model is effective.",2007,0, 2245,Resource Allocation Based On Workflow For Enhancing the Performance of Composite Service,"Under SOA, multiple services can be aggregated to create a new composite service based on some predefined workflow. The QoS of this composite service is determined by the cooperation of all these Web services. 
With workflow pipelining, it is unreasonable to improve the overall service performance by only considering individual services without considering the relationship among them. In this paper, we propose to allocate resources by tracing and predicting workloads dynamically with the pipelining of service requests in workflow graph. At any moment, there are a number of service requests being handled by different services. Firstly, we predict future workloads for any requests as soon as they arrive at any service in the workflow. Secondly, we allocate resources for the predicted workloads to enhance the performance by replicating more services to resources. Our target is to maximize the number of successful requests with the constraints of limited resources. Experiment shows that our dynamic resource allocation mechanism is more efficient for enhancing the global performance of composite service than static resource allocation mechanism in general.",2007,0, 2246,Web Service decision-making model based on uncertain-but-bounded attributes,"Web services have become one of the most popular technologies of Web application. But the quality of Web services is not as stable as traditional software components' because of the uncertainty of network. Based on non-probability-set, the convex method is used to judge the range of performance affected by uncertain-but-bounded attributes. This method only requires the highest and lowest value of the uncertain attribute values and need not to know their probability distribution. This paper proposes the metric algorithm of the change of web service quality, and proposes the Web service decisionmaking algorithm based on the theory of multiple attributes decision by TOPSIS (technique for order preference by similarity to idea solution) in operations research.",2007,0, 2247,Continuous SPA: Continuous Assessing and Monitoring Software Process,"In the past ten years many assessment approaches have been proposed to help manage software process quality. However, few of them are configurable and real-time in practice. Hence, it is advantageous to find ways to monitor the current status of software processes and detect the improvement opportunities. In this paper, we introduce a web- based prototype system (Continuous SPA) on continuous assessing and monitoring software process, and perform a practical study in one process area: project management. Our study results are positive and show that features such as global management, well-defined responsibility and visualization can be integrated in process assessment to help improve the software process management.",2007,0, 2248,On the Contributions of an End-to-End AOSD Testbed,"Aspect-Oriented Software Development (AOSD) techniques are gaining increased attention from both academic and industrial organisations. In order to promote a smooth adoption of such techniques it is of paramount importance to perform empirical analysis of AOSD to gather a better understanding of its benefits and limitations. In addition, the effects of aspect-oriented (AO) mechanisms on the entire development process need to be better assessed rather than just analysing each development phase in isolation. As such, this paper outlines our initial effort on the design of a testbed that will provide end-to-end systematic comparison of AOSD techniques with other mainstream modularisation techniques. This will allow the proponents of AO and non- AO techniques to compare their approaches in a consistent manner. 
The testbed is currently composed of: (i) a benchmark application, (ii) an initial set of metrics suite to assess certain internal and external software attributes, and (iii) a """"repository"""" of artifacts derived from AOSD approaches that are assessed based on the application of (i) and (ii). This paper mainly documents a selection of techniques that will be initially applied to the benchmark. We also discuss the expected initial outcomes such a testbed will feed back to the compared techniques. The applications of these techniques are contributions from different research groups working on AOSD.",2007,0, 2249,Probabilistic QoS and soft contracts for transaction based Web services,"Web services orchestrations and choreographies require establishing quality of service (QoS) contracts with the user. This is achieved by performing QoS composition, based on contracts established between the orchestration and the called Web services. These contracts are typically stated in the form of hard guarantees (e.g., response time always less than 5 msec). In this paper we propose using soft contracts instead. Soft contracts are characterized by means of probability distributions for QoS parameters. We show how to compose such contracts, to yield a global contract (probabilistic) for the orchestration. Our approach is implemented by the TOrQuE tool. Experiments on TOrQuE show that overly pessimistic contracts can be avoided and significant room for safe overbooking exists.",2007,0, 2250,A Declarative Approach to Enhancing the Reliability of BPEL Processes,"Currently, BPEL is the de-facto standard for the Web service composition. Because Web services are autonomous and loosely coupled, BPEL processes are susceptible to a wide variety of faults. However, BPEL only provides limited constructs for handling faults, which makes fault handling a time-consuming and error-prone task. In this paper, we propose a declarative approach to enhancing the reliability of BPEL processes. Our solution specifies fault handling logic through a set of event-condition-action (ECA) rules which build on an extensible set of fault-tolerant patterns. These ECA rules are integrated with normal business logic before deployment to generate a fault-tolerant BPEL process. We also develop a GUI tool to assist designers to specify ECA rules. Experiments show our approach is feasible.",2007,0, 2251,Utility-based QoS Brokering in Service Oriented Architectures,"Quality of service (QoS) is an important consideration in the dynamic service selection in the context of service oriented architectures. This paper extends previous work on QoS brokering for SOAs by designing, implementing, and experimentally evaluating a service selection QoS broker that maximizes a utility function for service consumers. Utility functions allow stakeholders to ascribe a value to the usefulness of a system as a function of several attributes such as response time, throughput, and availability. This work assumes that consumers of services provide to a QoS broker their utility functions and their cost constraints on the requested services. Service providers register with the broker by providing service demands for each of the resources used by the services provided and cost functions for each of the services. Consumers request services from the QoS broker, which selects a service provider that maximizes the consumer's utility function subject to its cost constraint.
The QoS broker uses analytic queuing models to predict the QoS values of the various services that could be selected under varying workload conditions. The broker and services were implemented using a J2EE/Weblogic platform and experiments were conducted to evaluate the broker's efficacy. Results showed that the broker adequately adapts its selection of service providers according to cost constraints.",2007,0, 2252,Data Flow between Tools: Towards a Composition-Based Solution for Learning Design,"Data flow between tools cannot be specified using the current IMS Learning Design specification (LD). Nevertheless by specifying this data flow between tools, several degrees of activity automation may augment the system intervention opportunities for data flow management. Service automation, data flow automation and data flow validation may enhance the continuity of the learning design realization, reduce the student's cognitive load and obtain system-support for error prone situations. In this paper a novel approach based on the composition of LD and a standard workflow technology is proposed. Unlike other current approaches, our approach maintains interoperability with both LD and workflow standards. Then an architectural solution based on the composition approach is presented.",2007,0, 2253,A Formal Model for Quality of Service Measurement in e-Government,"Quality is emerging as a promising approach to promote the development of services in e-Government. A proper Quality of Services is mandatory in order to satisfy citizens and firms' needs and to accept the use of ICT in our life. This paper describes our ongoing research on QoS run-time measurement and preliminary ideas on the appliance of run-time monitoring to guarantee assurance of service applications. For this purpose, we also define and describe a formal model based on a set of quality parameters for e-Government services. These parameters can be useful both as a basis for understanding and assessing competing services, and as a way to determine what improvements are needed to assure citizens and firms satisfaction.",2007,0, 2254,Local Reference with Early Termination in H.264 Motion Estimation,"Multiple reference frames and variable block sizes improve compression efficiency of H.264, however, they also increase the encoder complexity and motion estimation time. This paper proposes a new algorithm, called local reference with early termination (LRET) to reduce the H.264 motion estimation time without adding to the encoder complexity. The LERT algorithm rearranges the search order of the reference frames based on the selection probability of the reference frames in the current frame. The experimental results show that the LERT achieves up to 59% reduction in motion estimation time with comparable video quality and negligible increase in bit-rate, as compared to the best algorithm in H.264 reference software.",2007,0, 2255,Distortion-Based Partial Distortion Search for Fast Motion Estimation,"Block motion estimation with full search is computationally complex. To reduce this complexity, different methods have been proposed, including partial distortion, which can reduce the computational complexity with no loss of image quality. We propose a distortion-based partial distortion search (DPDS) based on the magnitude of distortion and adaptive update of the matching order. We calculate absolute differences for all pixels in the predicted block point. 
Pixels are then sorted by the amount of distortion in a descending order for the matching process, which produces a scanning map. The sum of the absolute differences (SAD) of other candidate positions is then computed from this matching order. We also use an update of the scanning map by checking the increase in the number of absolute differences for the SAD value. The proposed DPDS algorithm improves the computational efficiency, compared with the original PDS scheme, because the accumulated value of the absolute pixel differences can rapidly reach the current minimum SAD value. The proposed algorithm is 4-13 times faster than the full search method with the same visual quality.",2007,0, 2256,VLSI Oriented Fast Multiple Reference Frame Motion Estimation Algorithm for H.264/AVC,"In H.264/AVC standard, motion estimation can be processed on multiple reference frames (MRF) to improve the video coding performance. For the VLSI real-time encoder, the heavy computation of fractional motion estimation (FME) makes the integer motion estimation (IME) and FME must be scheduled in two macro block (MB) pipeline stages, which makes many fast MRF algorithms inefficient for the computation reduction. In this paper, two algorithms are provided to reduce the computation of FME and IME. First, through analyzing the block's Hadamard transform coefficients, all-zero case after quantization can be accurately detected. The FME processing in the remaining frames for the block, detected as all-zero one, can be eliminated. Second, because the fast motion object blurs its edges in image, the effect of MRF to aliasing is weakened. The first reference frame is enough for fast motion MBs and MRF is just processed on those slow motion MBs with a small search range. The computation of IME is also highly reduced with this algorithm. Experimental results show that 61.4%-76.7% computation can be saved with the similar coding quality as the reference software. Moreover, the provided fast algorithms can be combined with fast block matching algorithms to further improve the performance.",2007,0, 2257,A Queueing-Theory-Based Fault Detection Mechanism for SOA-Based Applications,"SOA has become more and more popular, but fault tolerance is not supported in most SOA-based applications yet. Although fault tolerance is a grand challenge for enterprise computing, we can partially resolve this problem by focusing on its some aspect. This paper focuses on fault detection and puts forward a queueing-theory-based fault detection mechanism to detect the services that fail to satisfy performance requirements. This paper also gives a reference service model and reference architecture of fault-tolerance control center of Enterprise Services Bus for SOA- based applications.",2007,0,2258 2258,A Queueing-Theory-Based Fault Detection Mechanism for SOA-Based Applications,"SOA has become more and more popular, but fault tolerance is not supported in most SOA-based applications yet. Although fault tolerance is a grand challenge for enterprise computing, we can partially resolve this problem by focusing on its some aspect. This paper focuses on fault detection and puts forward a queueing-theory-based fault detection mechanism to detect the services that fail to satisfy performance requirements. 
This paper also gives a reference service model and reference architecture of the fault-tolerance control center of the Enterprise Services Bus for SOA-based applications.",2007,0, 2259,Deployment of Accountability Monitoring Agents in Service-Oriented Architectures,"Service-oriented architecture (SOA) provides a flexible paradigm to compose dynamic service processes using individual services. However, service processes can be vastly complex, involving many service partners, thereby giving rise to difficulties in terms of pinpointing the service(s) responsible for problematic outcomes. In this research, we study efficient and effective mechanisms to deploy agents to monitor and detect undesirable services in a service process. We model the agent deployment problem as the classic weighted set covering (WSC) problem and present agent selection solutions at different stages of service process deployment. We propose the MASS (merit based agent and service selection) algorithm that considers agent cost during QoS-based service composition by using a meritagent_cost heuristic metric. We also propose the IGA (incremental greedy algorithm) to achieve fast agent selection when a service process is reconfigured after service failures. The performance study shows that our proposed algorithms are effective on saving agent cost and efficient on execution time.",2007,0, 2260,Challenges and Opportunities in Information Quality,"Summary form only given. Each year companies are spending hundreds of thousands of dollars in data cleansing and other activities to improve the quality of information they use to conduct business. The hidden cost of bad data - lost opportunities, low productivity, waste, and myriads of other consequences - is believed to be much higher than these direct costs. One study estimates this combined cost due to bad data to be over U$30 billion in year 2006 alone. As business operations rely more and more on computerized systems, this cost is bound to increase at an alarming rate. Information quality (or data quality) has been an integral part of various enterprise systems such as master data management, customer data integration, and ETL (extraction, transform, and load). We are witnessing trends of renewed awareness and efforts, both in research and practice, to address information quality collectively as an independent value in enterprise computing. International organizations such as EPC Global and the International Standardization Organization (ISO) have launched working groups to study and possibly introduce standards that can be used to define, assess, and enhance information quality throughout the supply chain. Issues in information quality range over multiple disciplines including software engineering, databases, statistics, organizational operations, and accounting. The scope and goal of information quality management would depend on the organization's objectives and business models. Assessing the impact of data quality is a complex task involving key business performance indexes such as sales, profitability, and customer satisfaction. Methods of assuring data quality must address operational processes as well as supporting technologies.
This panel, with input from experts from both academia and industry, explores the challenges and opportunities in information quality in the dynamic environment of today's enterprise computing.",2007,0, 2261,The Influence of Defect Distribution Function Parameters on Test Patterns Generation,This paper describes the analysis of influence of yield loss model parameters on the test patterns generation. The probability of shorts between conducting paths as well as the estimations of yield loss are presented on the example gates from industrial standard cell library in 0.8 mum CMOS technology.,2007,0, 2262,A Novel Algorithm for Detecting Air Holes in Steel Pipe Welding Based on Hopfield Neural Network,"The paper segment x-ray images of steel pipe welding to assess the quality of welding. Image segmentation is posed as an optimization problem, and is correlated with the energy function of the multistage Hopfield neural network. The algorithm for optimization and the principle of selecting coefficient are also given. The algorithm is easy to be programmed. As an application, we successfully segment some real industrial welding x-ray images.",2007,0, 2263,Recognizing Humans Based on Gait Moment Image,"This paper utilizes the periodicity of swing distances to estimate gait period. It shows good adaptability to low quality silhouette images. Gait moment image (GMI) is implemented based on the estimated gait period. GMI is the gait probability image at each key moment in gait period. It reduces the noise of the silhouettes extracted from low quality videos by gait probability distribution at each key moment. Moment deviation image (MDI) is generated by using silhouette images and GMIs. As a good complement of gait energy image (GEI), MDI provides more motion features than the basic GEI. MDI is utilized together with GEI to represent a subject. The nearest neighbor classifier is adopted to recognize subjects. The proposed algorithm is evaluated on the USF gait database, and the performance is compared with the baseline algorithm and two other algorithms. Experimental results show that this algorithm achieves a higher total recognition rate than the other algorithms.",2007,0, 2264,Quality Assessment of Beef Based of Computer Vision and Electronic Nose,"Current techniques for beef quality evaluations rely on sensory methods. These procedures are subjective, prone to error, and difficult to quantify. Automated evaluation of color and odor is desirable to reduce subjectivity and discrepancies and assist with the creation of standards for inspectors worldwide. The objectives of this study were to develop color machine vision techniques for visual evaluation and to test electronic nose sensors for odor raw and beef. A color machine vision system was developed to analyze the color of beef samples. The system was able to analyze the color of samples with non-uniform color surfaces. An electronic nose sensors was used to measure odors of beef and beef stored at different temperatures, with different levels of spoilage. Discriminant function analysis was used as the pattern recognition technique to differentiate samples based on odors. Results showed that the electronic nose could discriminate differences in odor due to storage time and spoilage levels for beef. Results also showed good correlation of sensor reading with sensory scores overall, the electronic nose showed good sensitivity and accuracy. 
Results from this work could lead to methodologies that will assist in the objective and repeatable quality evaluation of beef. These methods have potential in industrial and regulatory application where rapid response, no sample preparation, and no need for chemicals are required.",2007,0, 2265,An Efficient K-means Clustering Algorithm Based on Influence Factors,"Clustering has been one of the most widely studied topics in data mining and pattern recognition, and k-means clustering has been one of the most popular, simple and fast clustering algorithms, but the right value of k is unknown and effectively selecting initial points is also difficult. In view of this, a lot of work has been done on various versions of k-means, which refine initial points and detect the number of clusters. In this paper, we present a new algorithm, called an efficient k-means clustering based on influence factors, which is divided into two stages and can automatically achieve the actual value of k and select the right initial points based on the datasets' characteristics. We propose an influence factor to measure the similarity of two clusters, using it to determine whether the two clusters should be merged into one. In order to obtain a faster algorithm, a theorem is proposed and proved, using it to accelerate the algorithm. Experimental results on Gaussian datasets generated as in Pelleg and Moore (2000) show that the algorithm has high quality and obtains a satisfying result.",2007,0, 2266,An Ant Colony System Hybridized with Randomized Algorithm for TSP,"Ant algorithms are a recently developed, population-based approach which has been successfully applied to several NP-hard combinatorial optimization problems. In this paper, through an analysis of the constructive procedure of the solution in the ant colony system (ACS), we present an ant colony system hybridized with a randomized algorithm (RAACS). In RAACS, only partial cities are randomly chosen to compute the state transition probability. Experimental results for solving the traveling salesman problems (TSP) with both ACS and RAACS demonstrate that, on average, the proposed method is better in both the quality of solutions and the speed of convergence compared with the ACS.",2007,0, 2267,Protocol Engineering Principles for Cryptographic Protocols Design,"Design of cryptographic protocols, especially authentication protocols, remains error-prone, even for experts in this area. Protocol engineering is a new notion introduced in this paper for cryptographic protocol design, which is derived from software engineering ideas. We present and illustrate protocol engineering principles in three groups: cryptographic protocol security requirements analysis principles, detailed protocol design principles and provable security principles. Furthermore, we illustrate that some of the well-known Abadi and Needham's principles are ambiguous. This paper is useful in that it regards cryptographic protocol design as system engineering, hence it can efficiently indicate implicit assumptions behind cryptographic protocol design, and present operational principles on uncovering these subtleties. Although our principles are informal, they are practical, and we believe that they will benefit other researchers.",2007,0, 2268,A Business Modeled Approach for Trust Management in P2P,"P2P communities is a method for arranging large numbers of peers in a self-configuring peer relationship based on declared attributes (or interests) of the participating peers.
This method is expected to have an impact in sharing of resources and pruning of search spaces based on the interests of the clients. Current peer-to-peer systems are targeted for information sharing, file storage, searching and indexing, often using an overlay network. In this paper we expand the scope of peer-to-peer systems to include the concept of a business environment analogous to a ""stock market"". Our work focuses on efficient methods to discover trustworthy peers in the P2P network. We investigate the behavior of randomly created relationships formed during transactions between Vendors and Emptors. Discovering services on the fly is essential to being able to identify profitable oriented transactions. In addition, efficient Vendor/Emptor based algorithms allow us to manage quickly changing market trends. Moreover, the inclusion of the concept of trading policies among business communities enhances the probability of mutual gain.",2007,0, 2269,A Traceability Link Model for the Unified Process,"Traceability links are widely accepted as efficient means to support an evolutionary software development. However, their usage in analysis and design is effort consuming and error prone due to lacking or missing methods and tools for their creation, update and verification. In this paper we analyse and classify Unified Process artefacts to establish a traceability link model for this process. This model defines all required links between the artefacts. Furthermore, it provides a basis for the (semi)-automatic establishment and the verification of links in Unified Process development projects. We also define a first set of rules as a step towards an efficient management of the links. In the ongoing project the rule set is extended to establish a whole framework of methods and rules.",2007,0, 2270,On the Customization of Components: A Rule-Based Approach,"Realizing the quality-of-service (QoS) requirements for a software system continues to be an important and challenging issue in software engineering. A software system may need to be updated or reconfigured to provide modified QoS capabilities. These changes can occur at development time or at runtime. In component-based software engineering, software systems are built by composing components. When the QoS requirements change, there is a need to reconfigure the components. Unfortunately, many components are not designed to be reconfigurable, especially in terms of QoS capabilities. It is often labor-intensive and error-prone work to reconfigure the components, as developers need to manually check and modify the source code. Furthermore, the work requires experienced senior developers, which makes it costly. The limitations motivate the development of a new rule-based semiautomated component parameterization technique that performs code analysis to identify and adapt parameters and changes components into reconfigurable ones. Compared with a number of alternative QoS adaptation approaches, the proposed rule-based technique has advantages in terms of flexibility, extensibility, and efficiency. The adapted components support the reconfiguration of potential QoS trade-offs among time, space, quality, and so forth. The proposed rule-based technique has been successfully applied to two substantial libraries of components. The F-measure or balanced F-score results for the validation are excellent, that is, 94 percent.
Index Terms-Performance measures, rule-based processing, representations.",2007,0, 2271,"Comments on ""Data Mining Static Code Attributes to Learn Defect Predictors""","In this correspondence, we point out a discrepancy in a recent paper, ""data mining static code attributes to learn defect predictors,"" that was published in this journal. Because of the small percentage of defective modules, using probability of detection (pd) and probability of false alarm (pf) as accuracy measures may lead to impractical prediction models.",2007,0, 2272,A Measurement Based Dynamic Policy for Switched Processing Systems,"Switched processing systems (SPS) represent a canonical model for many areas of applications of communication, computer and manufacturing systems. They are characterized by flexible, interdependent service capabilities and multiple classes of job traffic flows. Recently, increased attention has been paid to the issue of improving quality of service (QoS) performance in terms of delays and backlogs of the associated scheduling policies, rather than simply maximizing the system's throughput. In this study, we investigate a measurement based dynamic service allocation policy that significantly improves performance with respect to delay metrics. The proposed policy solves a linear program at selected points in time that are in turn determined by a monitoring strategy that detects 'significant' changes in the intensities of the input processes. The proposed strategy is illustrated on a small SPS subject to different types of input traffic.",2007,0, 2273,Application of Extreme Value Theory to the Analysis of Wireless Network Traffic,"It is important to study the traffic in wireless network control and management. This paper proposes the use of the EVT (extreme value theory) for the analysis of wireless network traffic. The role of EVT is to allow the development of procedures that are scientifically and statistically rational to estimate the extreme behavior of random processes. We have performed extensive simulation experiments by taking traffic data that is greater than a given threshold value. The results of our experiments and analysis show the wireless network traffic model obtained through the EVT fits well with the empirical distribution of traffic. Meanwhile, we find that the EVT model has the lowest ""average deviation"" compared with other popular distribution models such as exponential, lognormal, gamma and Weibull. This illustrates that EVT is more suitable than other distributions to model the traffic and it has a good application foreground in the analysis of wireless network traffic.",2007,0, 2274,Automatic Conflict Analysis and Resolution of Traffic Filtering Policy for Firewall and Security Gateway,"Firewalls and Security Gateways are core elements in network security infrastructure. As networks and services become more complex, managing access-list rules becomes an error-prone task. Conflicts in a policy can cause holes in security, and can often be hard to find while performing only visual or manual inspection. First, we have defined a methodology to systematically classify the severity of rule conflicts; secondly, we have proposed two different solutions to automatically resolve conflicts in a firewall.
For one of them we found an algebraic proof of the existence of the solution and the convergence of the algorithm, and then we have made a software implementation to test it.",2007,0, 2275,Petrifying Worm Cultures: Scalable Detection and Immunization in Untrusted Environments,"We present and evaluate the design of a new and comprehensive solution for automated worm detection and immunization. The system engages a peer-to-peer network of untrusted machines on the Internet to detect new worms and facilitate rapid preventative response. We evaluate the efficacy and scalability of the proposed system through large-scale simulations and assessments of a functional real-world prototype. We find that the system enjoys scalability in terms of network coverage, fault tolerance, security, and maintainability. It proves effective against new worms, and supports collaboration among mutually mistrusting parties.",2007,0, 2276,The Study of Noise-rejection by Using Pulse Time-delay Identification Method and the Analysis of PD Data Obtained in the Field,"Based on the mechanism of partial discharges occurring in the stator winding insulations of generators, a set of on-line PD measurement with double sensors is developed in this paper. After comparing the different types of measurement, the noise suppressing method is focused on the technique of ""pulse time-delay identification"" which is based on installing double sensors for every phase. The principle of the method is introduced in this paper. The software program designed by Labview provides the power frequency graph, N-Q and N-A two dimension diagram, three dimension diagram and maximum quantity of PD and NQN tendency diagram to make a further analysis of the PD data. In addition, the data obtained in the field is analyzed here, including an insulation fault detected in a plant successfully.",2007,0, 2277,Defining and Detecting Bad Smells of Aspect-Oriented Software,"Bad smells are software patterns that are generally associated with bad design and bad programming. They can be removed by using the refactoring technique which improves the quality of software. Aspect-oriented (AO) software development, which involves new notions and the different ways of thinking for developing software and solving the crosscutting problem, possibly introduces different kinds of design flaws. Defining bad smells hidden in AO software in order to point out bad design and bad programming is then necessary. This paper proposes the definition of new AO bad smells. Moreover, appropriate existing AO refactoring methods for eliminating each bad smell are presented. The proposed bad smells are validated. The results show that after removing the bad smells by using appropriate refactoring methods, the software quality is increased.",2007,0, 2278,A Model-based Object-oriented Approach to Requirement Engineering (MORE),"Most requirement documents were written in ambiguous natural languages which are less formal and imprecise. Without modeling the requirement documents, the knowledge of the requirement is hard to be kept in a way which can be analyzed and integrated with artifacts in other phases of the software life cycle, e.g. UML diagrams in analysis and design phases. Therefore, maintaining the traceability and consistency of requirement documents and software artifacts in other phases is costly and error prone. In this paper, we propose a model-based object-oriented approach to requirement engineering (MORE).
Applying modeling and OO technologies to requirement phases, the domain knowledge can be captured in a well-defined model, so the completeness, consistency, traceability and reusability of requirements and their integration with the artifacts of other phases can be cost-effectively improved. A case study has shown the promise of our approach.",2007,0, 2279,Analysis of Conflicts among Non-Functional Requirements Using Integrated Analysis of Functional and Non-Functional Requirements,"Conflicts among non-functional requirements are often identified subjectively and there is a lack of conflict analysis in practice. Current approaches fail to capture the nature of conflicts among non-functional requirements, which makes the task of conflict resolution difficult. In this paper, a framework has been provided for the analysis of conflicts among non-functional requirements using the integrated analysis of functional and non-functional requirements. The framework identifies and analyzes conflicts based on relationships among quality attributes, functionalities and constraints. Since poorly structured requirement statements usually result in confusing specifications, we also developed canonical forms for representing non-functional requirements. The output of our framework is a conflict hierarchy that refines conflicts among non-functional requirements level by level. Finally, a case study is provided in which the proposed framework was applied to analyze and detect conflicts among the non-functional requirements of a search engine.",2007,0, 2280,An Architectural Framework for the Design and Analysis of Autonomous Adaptive Systems,"Autonomous adaptive systems (AAS) have been proposed as a solution to effectively (re)design software so that it can respond to changes in execution environments, without human intervention. In the software engineering community, alternative approaches to the design of AAS have been proposed including solutions based on component technology, design patterns, and resource allocation techniques. A key limitation of the currently available approaches is that they detect constraint violations, but they do not support the prediction of constraint violations. In this work we propose an architectural framework for the design and analysis of autonomous adaptive systems, hereafter referred to as KAROO, which provides a key, new contribution: the capability to predict when a system needs to adapt itself. The results of extensive experimental evaluation of a KAROO-based system are excellent: 100% of the violations are predicted; the system is able to avoid the violations by adapting itself almost 98% of the time. The framework is a novel integration of control-theory-based adaptation, multi-criteria decision making and component-based software engineering techniques.",2007,0, 2281,Measuring and Assessing Software Reliability Growth through Simulation-Based Approaches,"In the past decade, several rate-based simulation approaches were proposed to predict the software failure process. But most of them did not take the number of available debuggers into consideration and this may not be reasonable. In practice, the number of debuggers is always limited and controlled. If all debuggers or developers are busy, the newly detected faults have to wait (for a long time to be corrected and removed). Besides, practical experiences also show that the fault removal time is non-negligible and the number of removed faults generally lags behind the total number of detected faults.
Based on these facts, in this paper, we will apply queueing theory to describe and explain the possible debugging behavior during software development. Two simulation procedures are developed based on G/G/infin and G/G/m queueing models. The proposed methods will be illustrated with real software failure data. Experimental results will be analyzed and discussed in detail. The results we obtained will greatly help to understand the influence of the size of debugger teams on the software failure correction activities and other related reliability assessments.",2007,0, 2282,Bivariate Software Fault-Detection Models,"In this paper, we develop bivariate software fault-detection models with two time measures: calendar time (day) and test-execution time (CPU time), and incorporate both of them to assess the quantitative software reliability with higher accuracy. The resulting stochastic models are characterized by a simple binomial process and the bivariate order statistics of software fault-detection times with different time scales.",2007,0, 2283,Challenges in Selecting COTS Component Guidelines,"Reuse of COTS (Commercial off the shelf) components is a new development approach in software engineering. Developers benefit from a COTS-based development (CBD) environment to select suitable components and adopt and integrate them into the system, to achieve better software, more quickly and at lower cost. But the current environments do not support the typical CBD process in all processes [1]. Some environments provide support only in some steps, or focus on how to develop components and on the interoperability of components within the same environment. In this paper we propose a new environment tool called ""CS-COTS"" which consists of (1) sophisticated machine learning tools for generating rules for selection and integration, (2) a guideline for selecting the methods category, and (3) prediction of a success rate for integrating methods that fit into the system.",2007,0, 2284,"Quality Metrics for Internet Applications: Developing ""New"" from ""Old""","This discussion concerns 'metrics'. More specifically, we discuss quantitative metrics for evaluating Internet applications: what should we quantify, monitor and analyse in order to characterise, evaluate and develop Internet applications based on reusing existing Internet applications, which are widely available on the Internet. Due to the distinctive evolutionary nature of Internet applications, assessing the quality of software will provide ease and higher accuracy for Web developers. However, there is a great gap between the rapid development of Internet applications and the slow speed of developing corresponding metric measures. To tackle this issue, we look into measuring the quality of Internet applications and enable Web developers to enhance the quality of their programs and identify reusable components from Internet-based resources.",2007,0, 2285,"Requirements, Plato's Cave, and Perceptions of Reality","Software developers build systems in response to agreed-upon requirements as if those requirements were absolutely perfect. Those of us in the requirements field know that the process of creating and documenting requirements is extremely error-prone. In fact, so error prone that we wonder why developers (a) accept them as truth, and then to make matters worse, (b) make it so difficult to change them when problems are eventually discovered. This paper draws an analogy between the process of requirements determination and Plato's allegory of the cave.
Specifically, it describes requirements problems that arise when our perceptions of reality differ from actual reality.",2007,0, 2286,Effect of the Delay Time in Fixing a Fault on Software Error Models,"In this paper, we propose a new model that incorporates both the fault-detection process and the fault-correction process. In addition, the fault- correction process is modeled as a delayed fault- detection process. Significant improvements on the conventional software reliability growth models (SRGMs) to better describe the actual software development have been achieved by eliminating an unrealistic assumption that detected errors are immediately corrected. This can especially be seen when some latent software errors are hard to detect and they even exist in the software product for a long time after they are detected. Therefore, the time delayed by the correction process is not negligible. The objective here is to remove this assumption in order to make the SRGMs more realistic and accurate. Finally, two real data sets have been performed, and the results show that the proposed new model performs much better in estimating the number of initial faults.",2007,0, 2287,Automated Testing EJB Components Based on Algebraic Specifications,Algebraic testing is an automated software testing method based on algebraic formal specifications. It has the advantages of highly automated testing process and independence of the software's implementation details. This paper applies the method to software components. An automated testing tool called CASCAT for Java components is presented. A case study of the tool shows the high fault detecting ability.,2007,0, 2288,Exposing Digital Forgeries in Interlaced and Deinterlaced Video,"With the advent of high-quality digital video cameras and sophisticated video editing software, it is becoming increasingly easier to tamper with digital video. A growing number of video surveillance cameras are also giving rise to an enormous amount of video data. The ability to ensure the integrity and authenticity of these data poses considerable challenges. We describe two techniques for detecting traces of tampering in deinterlaced and interlaced video. For deinterlaced video, we quantify the correlations introduced by the camera or software deinterlacing algorithms and show how tampering can disturb these correlations. For interlaced video, we show that the motion between fields of a single frame and across fields of neighboring frames should be equal. We propose an efficient way to measure these motions and show how tampering can disturb this relationship.",2007,0, 2289,Failure and Coverage Factors Based Markoff Models: A New Approach for Improving the Dependability Estimation in Complex Fault Tolerant Systems Exposed to SEUs,"Dependability estimation of a fault tolerant computer system (FTCS) perturbed by single event upsets (SEUs) requires obtaining first the probability distribution functions for the time to recovery (TTR) and the time to failure (TTF) random variables. The application cross section (sigmaAP) approach does not give directly all the required information. This problem can be solved by means of the construction of suitable Markoff models. In this paper, a new method for constructing such models based on the system's failure and coverage factors is presented. 
Analytical dependability estimation is consistent with fault injection experiments performed in a fault tolerant operating system developed for a complex, real time data processing system.",2007,0, 2290,New Protection Techniques Against SEUs for Moving Average Filters in a Radiation Environment,"Single event effects (SEEs) caused by radiation are a major concern when working with circuits that need to operate in certain environments, like for example in space applications. In this paper, new techniques for the implementation of moving average filters that provide protection against SEEs are presented, which have a lower circuit complexity and cost than traditional techniques like triple modular redundancy (TMR). The effectiveness of these techniques has been evaluated using a software fault injection platform and the circuits have been synthesized for a commercial library in order to assess their complexity. The main idea behind the presented approach is to exploit the structure of moving average filter implementations to deal with SEEs at a higher level of abstraction.",2007,0, 2291,How Business Goals Drive Architectural Design,"This paper illustrates how business goals can significantly impact a software management system's architecture without necessarily affecting its functionality. These goals include 1) supporting hardware devices from different manufacturers, 2) considering language, culture, and regulations of different markets, 3) assessing tradeoffs and risks to determine how the product should support these goals, 4) refining goals such as scaling back on intended markets, depending on the company's comfort level with the tradeoffs and risks. More importantly, these business goals correspond to quality attributes the end system must exhibit. The system must be modifiable to support a multitude of hardware devices and consider different languages and cultures. Supporting different regulations in different geographic markets requires the system to respond to life-threatening events in a timely manner, a performance requirement.",2007,0, 2292,High-Level Application Development is Realistic for Wireless Sensor Networks,"Programming wireless sensor network (WSN) applications is known to be a difficult task. Part of the problem is that the resource limitations of typical WSN nodes force programmers to use relatively low-level techniques to deal with the logical concurrency and asynchronous event handling inherent in these applications. In addition, existing general-purpose, node-level programming tools only support the networked nature of WSN applications in a limited way and result in application code that is hardly portable across different software platforms. All of this makes programming a single device a tedious and error-prone task. To address these issues we propose a high-level programming model that allows programmers to express applications as hierarchical state machines and to handle events and application concurrency in a way similar to imperative synchronous languages. Our program execution model is based on static scheduling, which allows for standalone application analysis and testing. For deployment, the resulting programs are translated into efficient sequential C code.
A prototype compiler for TinyOS has been implemented and its evaluation is described in this paper.",2007,0, 2293,Detecting and Reducing Partition Nodes in Limited-routing-hop Overlay Networks,"Many Internet applications use overlay networks as their basic facilities, like resource sharing, collaborative computing, and so on. Considering the communication cost, most overlay networks set limited hops for routing messages, so as to restrain routing within a certain scope. In this paper we describe partition nodes in such limited-routing-hop overlay networks, whose failure may potentially lead the overlay topology to be partitioned and thus seriously affect its performance. We propose a proactive, distributed method to detect partition nodes and then reduce them by changing them into normal nodes. The results of simulations on both real-trace and generated topologies, scaling from 500 to 10000 nodes, show that our method can effectively detect and reduce partition nodes and improve the connectivity and fault tolerance of overlay networks.",2007,0, 2294,March DSS: A New Diagnostic March Test for All Memory Simple Static Faults,"Diagnostic march tests are powerful tests that are capable of detecting and identifying faults in memories. Although march SS was published for detecting simple static faults, no test has been published for identifying all faults possibly present in memory cells. In this paper, we target all published simple static faults. We identify faults that cannot be distinguished due to their analog behavior. We present a new methodology for generating irredundant diagnostic march tests for any desired subset of the simple static faults using the necessary and sufficient conditions for fault detection. Using that methodology, along with a verification tool, and trial and error, we were able to build a new diagnostic test for all distinguishable faults named march DSS. March DSS is the first test that is capable of identifying all distinguishable memory static faults. Compared to the latest most comprehensive published diagnostic march test, march DSS provides significant improvement in terms of fault coverage, time complexity, and power consumption. By targeting the same faults, we were able to provide a new test equivalent to the latest published test with 46% improvement in time complexity.",2007,0, 2295,Layout to Logic Defect Analysis for Hierarchical Test Generation,"As shown by previous studies, shorts between the interconnect wires should be considered as the predominant cause of failures in CMOS circuits. Fault models and tools for targeting these defects, such as the bridging fault test pattern generators, have been available for a long time. However, this paper proposes a new hierarchical approach based on critical area extraction for identifying the possible shorted pairs of nets on the basis of the chip layout information, combined with logic-level test pattern generation for bridging faults. Experiments on real design layouts will show that only a fraction of all the possible pairs of nets have non-zero shorting probabilities. Furthermore, it will also be proven at the logic-level that nearly all such bridging faults can be tested by a simple and robust one-pattern logic test.
The methods proposed in this paper are supported by a design flow implementing existing commercial and academic CAD software.",2007,0, 2296,Extended Fault Detection Techniques for Systems-on-Chip,"The adoption of systems-on-chip (SoCs) in different types of applications represents an attracting solution. However, the high integration level of SoCs increases the sensitivity to transient faults and consequently introduces some reliability concerns. Several solutions have been proposed to attack this issue, mainly intended to face faults in the processor or in the memory. In this paper, we propose a solution to detect transient faults affecting data transmitted between the microprocessor and the communication peripherals embedded in a SoC. This solution combines some modifications of the source code at high level with the introduction of an Infrastructure IP (I-IP) to increase the dependability of the SoC.",2007,0, 2297,Reuse Strategy based on Quality Certification of Reusable Components,"There are some barriers that prevent effective and systematic reuse. These barriers are produced by the need of introducing new methods for reuse development and especially by the distrust of developers in the components to be reused. One form of promoting reuse and reducing risks is guaranteeing the quality of these components. This can be achieved by assessing quality attributes and characteristics for each type of component. In this paper we present a reuse strategy based on quality certification. The strategy advantages are: the introduction of reuse throughout the software development process; incentive to reuse within the development team and the achievement of level three and four of the software reuse maturity model. The main result from this work is a strategy that encompasses the best practices of reuse and quality certification, which was validated through a survey, submitted to experts in the reuse and software engineering areas.",2007,0, 2298,ModelML: a Markup Language for Automatic Model Synthesis,"Domain-specific modeling has become a popular way of designing and developing systems. It generally involves a systematic use of a set of object-oriented models to represent various facets of a domain. However, manually creating instances of these models is time-consuming and error-prone when a system in the domain is complex. Automatic model synthesis tools are thus usually developed to free users from the model creation process. In practice, most of these tools would hard code knowledge about the domain specific models in the program. A biggest problem with these tools is that their source code needs to be changed whenever the knowledge changes. In this paper, we define a model markup language (ModelML) to facilitate the development of automatic model synthesis tools. The language provides a complete self-describing representation of object-oriented models to be synthesized. Unlike other XML-based representations of models, ModelML reflects the structure of the models directly in the nesting of elements in the XML-based syntax. This feature allows the knowledge about the domain specific models to be decoupled from model synthesis tools. 
To demonstrate the usefulness of the markup language, we have developed a generic automatic model synthesis tool which is based on ModelML inputs.",2007,0, 2299,An Empirical Study of the Classification Performance of Learners on Imbalanced and Noisy Software Quality Data,"In the domain of software quality classification, data mining techniques are used to construct models (learners) for identifying software modules that are most likely to be fault-prone. The performance of these models, however, can be negatively affected by class imbalance and noise. Data sampling techniques have been proposed to alleviate the problem of class imbalance, but the impact of data quality on these techniques has not been adequately addressed. We examine the combined effects of noise and imbalance on classification performance when seven commonly-used sampling techniques are applied to software quality measurement data. Our results show that some sampling techniques are more robust in the presence of noise than others. Further, sampling techniques are affected by noise differently given different levels of imbalance.",2007,0, 2300,Detecting Fault Modules Applying Feature Selection to Classifiers,"At present, automated data collection tools allow us to collect large amounts of information, not without associated problems. In this paper, we apply feature selection to several software engineering databases, selecting attributes with the final aim that project managers can have a better global vision of the data they manage. We make use of attribute selection techniques on different publicly available datasets (PROMISE repository), and different data mining algorithms for classification to detect faulty modules. The results show that, in general, smaller datasets with fewer attributes maintain or improve the prediction capability compared with the original datasets.",2007,0, 2301,Software Defects Prediction using Operating Characteristic Curves,We present a software defect prediction model using operating characteristic curves. The main idea behind our proposed technique is to use geometric insight in helping construct an efficient and fast prediction method to accurately predict the cumulative number of failures at any given stage during the software development process. Our predictive approach uses the number of detected faults instead of the software failure-occurrence time in the testing phase. Experimental results illustrate the effectiveness and the much improved performance of the proposed method in comparison with the Bayesian prediction approaches.,2007,0, 2302,Automated Test Data Generation using Search Based Software Engineering,"Generating test data is a demanding process. Without automation, the process is slow, expensive and error-prone. However, techniques to automate test data generation must cater for a bewildering variety of functional and non-functional test adequacy criteria and must either implicitly or explicitly solve problems involving state propagation and constraint satisfaction. This talk will show how optimisation techniques associated with search based software engineering (SBSE) have been used to automate test data generation.
The talk will survey the area and present the results of recent work on characterising, transforming and eliding test data search landscapes.",2007,0, 2303,Automating Embedded Software Testing on an Emulated Target Board,"An embedded system consists of heterogeneous layers including hardware, HAL (hardware abstraction layer), OS kernel and application layer. Interactions between these layers are the software interfaces to be tested in an embedded system. The identified interfaces are an important criterion that selects test cases and monitors the test results in order to detect faults and trace their causes. In this paper, we propose an automated scheme of embedded software interface testing based on an emulated target board. The automated scheme enables us to identify the location of the interface in the source code to be tested, to generate test cases, and to determine 'pass' or 'fail' on the interface. We implemented the test tool called 'Justitia' based on the proposed scheme. As a case study, we applied 'Justitia' to mobile embedded software on the S3C2440 microprocessor and Linux kernel v2.4.20.",2007,0, 2304,Towards an Automated Test Generation with Delayed Transitions for Timed Systems,"In this paper we analyze the influence of the urgency in the timed transitions, and as a consequence, in the test suite generation. As a result, we formalize rules to generate sequences where the messages exchanged may be instantaneous or delayed. In addition, the generated scenarios are able to detect timing faults. For test generation, we use a prototype tool called HJ2IF. It is based on a test purpose algorithm, called hit-or-jump, and it is applied for systems specified using the Intermediate Format language (IF).",2007,0, 2305,Fast Recovery and QoS Assurance in the Presence of Network Faults for Mission-Critical Applications in Hostile Environments,"In a hostile military environment, systems must be able to detect and react to catastrophes in a timely manner in order to provide assurance that critical tasks will continue to meet their timeliness requirements. Our research focuses on achieving network quality of service (QoS) assurance using a Bandwidth Broker in the presence of network faults in layer-3 networks. Passive discovery techniques using the link-state information from routers provide for rapid path discovery which, in turn, leads to fast failure impact analysis and QoS restoration. In addition to network fault tolerance, the Bandwidth Broker must be fault tolerant and must be able to recover quickly. This is accomplished using a modified commercially available and open-source in-memory database cluster technology.",2007,0, 2306,Evaluation and Application of MVFs in Coverage for Coverage-Based NHPP SRGM Frameworks,Many non-homogeneous Poisson process software reliability growth models are characterized by their mean value functions. Mean value functions of coverage-based models are usually obtained as composite functions of the coverage growth function and the function relating the number of detected faults to the coverage. This paper performs empirical evaluation of the relationships between the number of detected faults and the coverage embedded in the coverage-based software reliability growth models.
It is also illustrated that integration of well-performing coverage growth functions and relationships between the number of detected faults and the coverage produces well-performing mean value functions.,2007,0, 2307,Anomaly-based Fault Detection System in Distributed System,"One of the important design criteria for distributed systems and their applications is their reliability and robustness to hardware and software failures. The increase in complexity, interconnectedness, dependency and the asynchronous interactions between the components that include hardware resources (computers, servers, network devices), and software (application services, middleware, web services, etc.) makes fault detection and tolerance a challenging research problem. In this paper, we present an innovative approach based on statistical and data mining techniques to detect faults (hardware or software) and also identify the source of the fault. In our approach, we monitor and analyze in real time all the interactions between all the components of a distributed system. We used data mining and supervised learning techniques to obtain the rules that can accurately model the normal interactions among these components. Our anomaly analysis engine will immediately produce an alert whenever one or more of the interaction rules that capture normal operations is violated due to a software or hardware failure. We evaluate the effectiveness of our approach and its performance in detecting software faults that we inject asynchronously, and compare the results for different noise levels.",2007,0, 2308,Shared Data Analysis for Multi-Tasking Real-Time System Testing,"Memory corruption due to program faults is one of the most common failures in computer software. For software running in a sequential manner and for multi-tasking software with synchronized data accesses, it has been shown that program faults causing memory corruption can be detected by analyzing the relations between defines and uses of variables (DU coverage-based testing). However, using such methods in testing for memory corruption where globally shared data is accessed through asynchronous events will not be sufficient since they lack the possibility to analyse the cases where preemption of tasks may lead to interleaving failures. In this paper, we propose the use of a system level shared variable DU analysis of multi-tasking real-time software. By analyzing the temporal attributes of each access to globally shared data, our method handles asynchronous data accesses. When used in system-level testing, the result from the analysis can discover failures such as ordering, synchronization and interleaving failures. The result can also serve as a measure for coverage and complexity in data dependency at system level.",2007,0, 2309,Automated Wireless Sensor Network Testing,"The design of distributed, wireless, and embedded systems is a tedious and error-prone process. Experiences from previous real-world wireless sensor network (WSN) deployments strongly indicate that it is vital to follow a systematic design approach to satisfy all design requirements including robustness and reliability. Such a design methodology needs to include an end-to-end testing methodology. The proposed framework for WSN testing allows distributed unit testing concepts to be applied in the development process.
The tool flow decreases test time and allows for monitoring the correctness of the implementation throughout the development process.",2007,0, 2310,Exploring Genetic Programming and Boosting Techniques to Model Software Reliability,"Software reliability models are used to estimate the probability that a software fails at a given time. They are fundamental to plan test activities, and to ensure the quality of the software being developed. Each project has a different reliability growth behavior, and although several different models have been proposed to estimate the reliability growth, none has proven to perform well considering different project characteristics. Because of this, some authors have introduced the use of Machine Learning techniques, such as neural networks, to obtain software reliability models. Neural network-based models, however, are not easily interpreted, and other techniques could be explored. In this paper, we explore an approach based on genetic programming, and also propose the use of boosting techniques to improve performance. We conduct experiments with reliability models based on time, and on test coverage. The obtained results show some advantages of the introduced approach. The models adapt better to the reliability curve, and can be used in projects with different characteristics.",2007,0, 2311,Detecting Primary Transmitters via Cooperation and Memory in Cognitive Radio,"Effective detection of the activity of primary transmitters is known to be one of the major challenges to the implementation of cognitive radio systems. In this paper, we investigate the use of cooperation and memory (using tools from change detection) as means to enhance primary detection. We focus on the simple case of two secondary users and one primary source. Numerical results show the relevant performance benefits of both cooperation and memory.",2007,0, 2312,On the Evaluation of Header Compression for VoIP Traffic over DVB-RCS,"The transition towards the 'All IP' environment was long ago foreseen, but the actual shift towards this goal did not happen until recently. This inevitable convergence is naturally coupled with a number of significant advantages, but also introduces some problems that were not present before. One of these, which is the focus of this paper, is the efficient transport of voice traffic over IP (VoIP). Since voice streams consist of numerous but small packets, the overhead that is caused by the RTP/UDP/IP headers is comparable - if not higher than the capacity required for the actual payload. This handicap becomes even more severe in the case of radio communications, where the scarcity of bandwidth demands an as efficient as possible utilization. The above facts make the introduction of header compression algorithms necessary, in order to mitigate the problem of overwhelming overhead. This paper describes the design and implementation of a software platform that aims to evaluate the performance of a well known header compression scheme within the context of DVB-RCS. More specifically, the focus of this work is to assess quantitatively the gains in capacity and the degradation of quality of service, when the Compressed RTP header compression scheme is employed in this satellite environment. 
The presented testbed models the impairments of the satellite channel and applies the header compression mechanism on real VoIP traffic.",2007,0, 2313,Managing Behaviour Trust in Grids Using Statistical Methods of Quality Assurance,"In this paper, an approach for managing behaviour trust of participants in Grid computing environments is presented. By considering the interaction process among participants in Grid environments similar to an industrial production process, we argue that through the use of statistical methods of quality assurance it is possible to monitor the behaviour of Grid participants and discover deviations in order to assess the behaviour trust of the participants.",2007,0, 2314,A New Data Hiding Scheme with Quality Control for Binary Images Using Block Parity,"Data hiding is usually achieved by altering some nonessential information in the host message. A more challenging problem is to hide data in a two-color binary image. Hiding is difficult for the binary image since each of its black or white pixels requires only one bit of representation. Thus, changing a pixel can be easily detected. In this paper, we propose a new data hiding scheme using the parity of blocks. The original image is partitioned into m x n blocks. The new scheme ensures that for any bit that is modified in the host image, the bit must be adjacent to another bit that has the same value as the former's new value. Thus, the existence of secret information in the host image is difficult to detect. The invisible effect will be achieved by sacrificing some data hiding space, but the new scheme still offers a good data hiding ratio. Specifically, for each m x n block of the host image, we will hide one bit of secret data by changing either one bit or no bits in the block.",2007,0, 2315,Assessing the Effectiveness of a Distributed Method for Code Inspection: A Controlled Experiment,"We propose a distributed inspection method that tries to minimise the synchronous collaboration among team members to identify defects in software artefacts. The approach consists of identifying conflicts on the potential defects and then resolving them using an asynchronous discussion before performing a traditional synchronous meeting. This approach has been implemented in a Web based tool and assessed through a controlled experiment with master students in Computer Science at the University of Salerno. The tool presented provides automatic merge and conflict highlighting functionalities to support the inspectors during the pre-meeting refinement phase and provides the moderator with information about the inspection progress as a decision support. The tool also supports a synchronous inspection meeting to discuss unsolved conflicts. However, by analysing the data collected during a controlled experiment we found that this phase can often be skipped due to the fact that asynchronous discussion resolved most of the conflicts.",2007,0, 2316,Identification Of Software Performance Bottleneck Components In Reuse based Software Products With The Application Of Acquaintanceship Graphs,Component-based software engineering provides an opportunity for better quality and increased productivity in software development by using reusable software components [9]. Also performance is a make-or-break quality for software.
The systematic application of software performance engineering techniques throughout the development process can help to identify design alternatives that preserve desirable qualities such as extensibility and reusability while meeting performance objectives [1]. Implementing the effective performance-based management of software intensive projects has proven to be a challenging task nowadays. This paper aims at identifying the major reasons of software performance failures in terms of the component communication path. My work focused on one of the applications of discrete mathematics namely graph theory. This study makes an attempt to predict the most used components to the least used with the help of acquaintanceship graphs and also the shortest communication path between any two components with the help of adjacency matrix. Experiments are conducted with four components and the result shows a promising approach towards component utilization and bottleneck determination that describe the major areas in the component communication to concentrate in achieving success in cost-effective development of high-performance software.,2007,0, 2317,A Novel Framework for Test Domain Reduction using Extended Finite State Machine,"Test case generation is an expensive, tedious, and error-prone process in software testing. In this paper, test case generation is accomplished using an Extended Finite State Machine (EFSM). The proper domain representative along the specified path is selected based on fundamental calculus approximation. The pre/post-conditions of class behavior are derived from a continuous or piece-wise continuous function whose values are chosen from partitioned subdomains. Subsequent test data for the designated class can be generated from the selected test frames. In so doing, the domain is partitioned wherein reduced test cases are generated, yet insuring complete test coverage of the designated test plan. The proposed modeling technique will be conducive toward a new realm of test domain analysis. Its validity can also be procedurally proved by straightforward mathematical principles.",2007,0, 2318,Redundant Coupling Detection Using Dynamic Dependence Analysis,"Most of the software engineers realize the importance of avoiding coupling between programs modules in order to achieve the advantages of modules reusability. However, most of them practice modules' coupling in many cases without necessities. Many techniques have been proposed to detect module's coupling in computer programs. However, developers desire to automatically identify unnecessary module's coupling that can be eliminated with minimum effort, which we refer to as redundant coupling. Redundant coupling is the module coupling that does not contribute to the output of the program. In this paper we introduce an automated approach that uses the dynamic dependence analysis to detect the redundant coupling between program's modules. Such technique guides the developers to avoid redundant coupling when testing their programs.",2007,0, 2319,Beyond Total Cost of Ownership: Applying Balanced Scorecards to Open-Source Software,"Potential users of Open Source Software (OSS) face the problem of evaluating OSS, in order to assess the convenience of adopting OSS instead of commercial software, or to choose among different OSS proposals. 
Different metrics were defined, addressing different OSS properties: the Total Cost of Ownership (TCO) addresses the cost of acquiring, adapting and operating OSS; the Total Account Ownership (TAO) represents the degree of freedom of the user with respect to the technology provider; indexes like the Open Business Quality Rating (Open BQR) assess the quality of the software with respect to the user's needs. However, none of the proposed methods and models addresses all the aspects of OSS in a balanced and complete way. For this purpose, the paper explores the possibility of adapting the Balanced Scorecard (BSC) technique to OSS. A preliminary definition of the BSC for OSS is given and discussed.",2007,0, 2320,Programming Approaches and Challenges for Wireless Sensor Networks,"Wireless sensor networks (WSNs) constitute a new pervasive and ubiquitous technology. They have been successfully used in various application areas and in future computing environments, WSNs will play an increasingly important role. However, programming sensor networks and applications to be deployed in them is extremely challenging. It has traditionally been an error-prone task since it requires programming individual nodes, using low-level programming issues and interfacing with the hardware and the network. This aspect is currently changing as different high-level programming abstractions and middleware solutions are coming into the arena. Nevertheless, many research challenges are still open. This paper presents a survey of the current state-of-the-art in the field, establishing a classification and highlighting some likely research challenges and future directions.",2007,0, 2321,Using a Configurator for Predictable Component Composition,"Predicting the properties of a component composition has been studied in component-based engineering. However, most studies do not address how to find a component composition that satisfies given functional and nonfunctional requirements. This paper proposes that configurable products and configurators can be used for solving this task. The proposed solution is based on the research on traditional, mechanical configurable products. Due to the computational complexity, the solution should utilise existing techniques from the field of artificial intelligence. The applicability of the approach is demonstrated with KumbangSec, which is a conceptualisation, a language and a configurator tool for deriving component compositions with given functional and security requirements. KumbangSec configurator utilises existing inference engine models to ensure efficiency.",2007,0, 2322,Quality-of-Service Management in ASEMA System for Unicast MPEG4 Video Transmissions,"The Active Service Environment Management system (ASEMA) provides the best possible multimedia service experience to the end user. One of the main aspects of the ASEMA is to provide a variable live streaming service to the end users of the ASEMA. The ASEMA implements this through dynamic change in the properties of the video stream delivered to the end user's end device. The live video stream used in the ASEMA and delivered to the end users is in MPEG 4 video format. The video itself is streamed on top of the real time protocol (RTP) and the parameters are negotiated with the real time streaming protocol (RTSP) before the streaming commences. The research problem of this work is to provide an easy solution for QoS measurements in networks that are closed in nature. 
The research is based on the constructive method of the related publications and the results are deduced from the constructed quality of service management of the ASEMA system. The management is founded on the measuring of the receiving and sending bit rate values at both ends, the sending end and the receiving end. If a fluctuation in the values is detected, the video stream's properties are changed dynamically.",2007,0, 2323,Early Software Product Improvement with Sequential Inspection Sessions: An Empirical Investigation of Inspector Capability and Learning Effects,"Software inspection facilitates product improvement in early phases of software development by detecting defects in various types of documents, e.g., requirements and design specifications. Empirical study reports show that usage-based reading (UBR) techniques can focus inspectors on the most important use cases. However, the impact of inspector qualification and learning effects in the context of inspecting a set of documents in several sessions is still not well understood. This paper contributes a model for investigating the impact of inspector capability and learning effects on inspection effectiveness and efficiency in a large-scale empirical study in an academic context. Main findings of the study are (a) the inspection technique UBR better supported the performance of inspectors with lower experience in sequential inspection cycles (learning effect) and (b) when inspecting objects of similar complexity significant improvements of defect detection performance could be measured.",2007,0, 2324,A Two-Step Model for Defect Density Estimation,"Identifying and locating defects in software projects is a difficult task. Further, estimating the density of defects is more difficult. Measuring software in a continuous and disciplined manner brings many advantages such as accurate estimation of project costs and schedules, and improving product and process qualities. Detailed analysis of software metric data gives significant clues about the locations and magnitude of possible defects in a program. The aim of this research is to establish an improved method for predicting software quality via identifying the defect density of fault prone modules using machine-learning techniques. We constructed a two-step model that predicts defect density by taking module metric data into consideration. Our proposed model utilizes classification and regression type learning methods consecutively. The results of the experiments on public data sets show that the two-step model enhances the overall performance measures as compared to applying only regression methods.",2007,0, 2325,Attribute Selection in Software Engineering Datasets for Detecting Fault Modules,"Decision making has been traditionally based on managers' experience. At present, there is a number of software engineering (SE) repositories, and furthermore, automated data collection tools allow managers to collect large amounts of information, not without associated problems. On the one hand, such a large amount of information can overload project managers. On the other hand, a problem found in generic project databases, where the data is collected from different organizations, is the large disparity of their instances. In this paper, we characterize several software engineering databases selecting attributes with the final aim that project managers can have a better global vision of the data they manage. 
In this paper, we make use of different data mining algorithms to select attributes from the different datasets publicly available (PROMISE repository), and then, use different classifiers to detect faulty modules. The results show that in general, the smaller datasets maintain the prediction capability with a lower number of attributes than the original datasets.",2007,0, 2326,Dynamic Detection of COTS Component Incompatibility,"The development of COTS-based systems shifts the focus of testing and verification from single components to component integration. Independent teams and organizations develop COTS components without referring to specific systems or interaction patterns. Developing systems that reuse COTS components (even high-quality ones) therefore presents new compatibility problems. David Garlan, Robert Allen, and John Ockerbloom (1995) reported that in their experience, integrating four COTS components took 10 person-years (rather than the one planned person-year), mainly because of integration problems. According to Barry Boehm and Chris Abts (1999), three of the four main problems with reusing COTS products are absence of control over their functionality, absence of control over their evolution, and lack of design for interoperability. Our proposed technique, called behavior capture and test, detects COTS component incompatibilities by dynamically analyzing component behavior. BCT incrementally builds behavioral models of components and compares them with the behavior the components display when reused in new contexts. This lets us identify incompatibilities, unexpected interactions, untested behaviors, and dangerous side effects.",2007,0, 2327,Enlarging Instruction Streams,"Web applications are widely adopted and their correct functioning is mission critical for many businesses. At the same time, Web applications tend to be error prone and implementation vulnerabilities are readily and commonly exploited by attackers. The design of countermeasures that detect or prevent such vulnerabilities or protect against their exploitation is an important research challenge for the fields of software engineering and security engineering. In this paper, we focus on one specific type of implementation vulnerability, namely, broken dependencies on session data. This vulnerability can lead to a variety of erroneous behavior at runtime and can easily be triggered by a malicious user by applying attack techniques such as forceful browsing. This paper shows how to guarantee the absence of runtime errors due to broken dependencies on session data in Web applications. The proposed solution combines development-time program annotation, static verification, and runtime checking to provably protect against broken data dependencies. We have developed a prototype implementation of our approach, building on the JML annotation language and the existing static verification tool ESC/Java2, and we successfully applied our approach to a representative J2EE-based e-commerce application. We show that the annotation overhead is very small, that the performance of the fully automatic static verification is acceptable, and that the performance overhead of the runtime checking is limited.",2007,0, 2328,Feature Extraction System for Contextual Classification within Security Imaging Applications,"Throughout security imaging applications, there is a persistent need for accurate contextual classification of objects within the scene so proper subsequent decisions can be made. 
To generate a set of scene attributes necessary for this analysis, this paper presents a novel feature extraction system composed of three divisions: an edge detection system, a segmentation system, and a recognition system. System inputs are considered to be low resolution, low quality images, often collected from inexpensive security imaging cameras. This work concentrates on enhancing the accuracy of the detected boundaries and edge pixel locations within the edge detection system as a pre-processing step for the segmentation and recognition systems. The edge detection described here is based on Boolean derivatives, calculated using partial derivatives of Boolean functions in combination with fusion and binarization steps. This edge detection system allows overall subsequent improvements in the segmentation and recognition systems, producing a stronger overall feature extraction system for processing data within security imaging applications.",2007,0, 2329,Node-Replacement Policies to Maintain Threshold-Coverage in Wireless Sensor Networks,"With the rapid deployment of wireless sensor networks, there are several new sensing applications with specific requirements. Specifically, target tracking applications are fundamentally concerned with the area of coverage across a sensing site in order to accurately track the target. We consider the problem of maintaining a minimum threshold-coverage in a wireless sensor network, while maximizing network lifetime and minimizing additional resources. We assume that the network has failed when the sensing coverage falls below the minimum threshold-coverage. We develop three node-replacement policies to maintain threshold-coverage in wireless sensor networks. These policies assess the candidature of each failed sensor node for replacement. Based on different performance criteria, every time a sensor node fails in the network, our replacement policies either replace with a new sensor or ignore the failure event. The node-replacement policies replace a failed node according to a node weight. The node weight is assigned based on one of the following parameters: cumulative reduction of sensing coverage, amount of energy increase per node, and local reduction of sensing coverage. We also implement a first-fail-first-replace policy and a no-replacement policy to compare the performance results. We evaluate the different node-replacement policies through extensive simulations. Our results show that given a fixed number of replacement sensor nodes, the node-replacement policies significantly increase the network lifetime and the quality of coverage, while keeping the sensing-coverage above a pre-set threshold.",2007,0, 2330,An Applied Study of Destructive Measurement System Analysis,"Measurement system analysis (MSA) is used to assess the ability of a measurement system to detect meaningful differences in process variables. For a destructive measurement system, there are two methods to design and analyze it based on the homogenous batch size, including nested MSA and crossed MSA. At the same time, the P/T ratio (""tolerance method"") is modified to measure the suitability of a destructive measurement system to make pass/fail decisions to a specification. Finally, the rip-off force testing system of chargers is assessed by crossed MSA and modified P/T ratio. 
Some suggestions for destructive MSA are also presented.",2007,0, 2331,Montgomery Multiplication with Redundancy Check,"This paper presents a method of adding redundant code to the Montgomery multiplication algorithm, to ensure that a fault attack during its calculation can be detected. This involves having checksums on the input variables that are then used to calculate a valid checksum for the output variable, in a similar manner to that proposed by Walter. However, it is shown that the proposed method is more secure than the previous work, as all the variables required to calculate Montgomery multiplication are protected.",2007,0, 2332,Fault Detection Structures for the Montgomery Multiplication over Binary Extension Fields,"Finite field arithmetic is used in applications like cryptography, where it is crucial to detect the errors. Therefore, concurrent error detection is very beneficial to increase the reliability in such applications. Multiplication is one of the most important operations and is widely used in different applications. In this paper, we target concurrent error detection in the Montgomery multiplication over binary extension fields. We propose error detection schemes for two Montgomery multiplication architectures. First, we present a new concurrent error detection scheme using the time redundancy and apply it on semi-systolic array Montgomery multipliers. Then, we propose a parity based error detection scheme for the bit-serial Montgomery multiplier over binary extension Fields.",2007,0, 2333,A comparative study of SPI Approaches with ProPAM,"Software process improvement (SPI) is one of the main software development challenges. Unfortunately, process descriptions generally do not correspond to the processes actually performed during software development projects. They just represent high-level plans and do not contain the information necessary for the concrete software projects. This deficient alignment between the process and project is caused by processes that are unrelated to project activities and failure in detecting project changes to improve the process. Process and project alignment is essential to really find out how process management is important to achieve an organization's strategic objectives. Considering this approach, this paper presents a comparative study of some of the most recognized SPI approaches and a new software process improvement methodology proposed, designed by Process and Project Alignment Methodology (ProPAM). Our intention is to show the problems observed in existing SPI approach and recognize that further research in process and project alignment based on actor oriented approaches is required.",2007,0, 2334,A Study of a Transactional Parallel Routing Algorithm,"Transactional memory proposes an alternative synchronization primitive to traditional locks. Its promise is to simplify the software development of multi-threaded applications while at the same time delivering the performance of parallel applications using (complex and error prone) fine grain locking. This study reports our experience implementing a realistic application using transactional memory (TM). The application is Lee's routing algorithm and was selected for its abundance of parallelism but difficulty of expressing it with locks. Each route between a source and a destination point in a grid can be considered a unit of parallelism. 
Starting from this simple approach, we evaluate the exploitable parallelism of a transactional parallel implementation and explore how it can be adapted to deliver better performance. The adaptations do not introduce locks nor alter the essence of the implemented algorithm, but deliver up to 20 times more parallelism. The adaptations are derived from understanding the application itself and TM. The evaluation simulates an abstracted TM system and, thus, the results are independent of specific software or hardware TM implemented, and describe properties of the application.",2007,0, 2335,Subjective Evaluation of Techniques for Proper Name Pronunciation,"Automatic pronunciation of unknown words of English is a hard problem of great importance in speech technology. Proper names constitute an especially difficult class of words to pronounce because of their variable origin and uncertain degree of assimilation of foreign names to the conventions of the local speech community. In this paper, we compare four different methods of proper name pronunciation for English text-to-speech (TTS) synthesis. The first (intended to be used as the primary strategy in a practical TTS system) uses a set of manually supplied pronunciations, referred to as the 'dictionary' pronunciations. The remainder are pronunciations obtained from three different data-driven approaches (intended as candidates for the back-up strategy in a real system) which use the dictionary of 'known' proper names to infer pronunciations for unknown names. These are: pronunciation by analogy (PbA), a decision tree method (CART), and a table look-up method (TLU). To assess the acceptability of the pronunciations to potential users of a TTS system, subjective evaluation was carried out, in which 24 listeners rated 1200 synthesized pronunciations of 600 names by the four methods using a five-point (opinion score) scale. From over 50 000 proper names and their pronunciations, 150 so-called one-of-a-kind pronunciations were selected for each of the four methods (600 in total). A one-of-a-kind pronunciation is one for which one of the four methods disagrees with the other three methods, which agree among themselves. Listener opinions on one-of-a-kind pronunciations are argued to be a good measure of the overall quality of a particular method. For each one-of-a-kind pronunciation, there is a corresponding so-called rest pronunciation (another 600 in total), on which the remaining three competitor methods agree, for which listener opinions are taken to be indicative of the general quality of the competition. Nonparametric tests of significance of mean opinion scores show that the dictionary pronunciations are rated superior to the automatically inferred pronunciations with little difference between the data-driven methods for the one-of-a-kind pronunciations, but for the rest pronunciations there is suggestive evidence that PbA is superior to both CART and TLU, which perform at approximately the same level.",2007,0, 2336,Traffic Object Tracking Based on Increased-step Motion History Image,"A new image-based method for tracking moving vehicles is proposed using the layered step-down grey value silhouette of increased-step motion history image. Real-time tracking of moving vehicles appearing in the traffic scene can be achieved by segmentation and marking the motion silhouette regions in the increased-step motion history images. High quality segmentation is realized by improving the basic motion history image. 
The background of the traffic scene is subtracted from the traffic video frames. At last, experiments are done with crossroad traffic videos. Moving objects are segmented effectively. The results show that the method is robust against the disturbance of a changeful environment, with a high detection rate and fast real-time processing speed.",2007,0, 2337,A Compound PRM Method for Path Planning of the Tractor-Trailer Mobile Robot,This paper researches the path planning problem for the tractor-trailer mobile robot and presents a novel environment modelling method called Compound PRM which builds the global compound roadmap by combining the local regular roadmap with the universal probabilistic roadmap. Path planning based on the Compound PRM roadmap can improve the quality of the local routes near obstacles and lower the complexity of the planning computation. Also the loss of feasible space is avoided during path planning. The simulation experiment shows that this method is very efficient in the use of tractor-trailer mobile robot path planning.,2007,0, 2338,Software-Based Failure Detection and Recovery in Programmable Network Interfaces,"Emerging network technologies have complex network interfaces that have renewed concerns about network reliability. In this paper, we present an effective low-overhead fault tolerance technique to recover from network interface failures. Failure detection is based on a software watchdog timer that detects network processor hangs and a self-testing scheme that detects interface failures other than processor hangs. The proposed self-testing scheme achieves failure detection by periodically directing the control flow to go through only active software modules in order to detect errors that affect instructions in the local memory of the network interface. Our failure recovery is achieved by restoring the state of the network interface using a small backup copy containing just the right amount of information required for complete recovery. The paper shows how this technique can be made to minimize the performance impact to the host system and be completely transparent to the user.",2007,0, 2339,A Low Pass Filter Traffic Shaping Mechanism for the Arrival of Traffic,"To decrease the traffic burstiness, the general method of traditional shaping mechanisms is to smooth the traffic rate or the packet interarrival time of existing flows. However, traffic burstiness is also induced by the gusty arrivals of new flows. According to our knowledge, there is no research which attempts to decrease the burstiness caused by the arrivals of new flows. To cope with this, a shaping mechanism to the interarrival of flow named interarrival low pass filter (ILPF) fore-shaping mechanism is proposed in this paper. By the low pass filter, the ILPF fore-shaping mechanism smoothes the flow interarrival time so as to decrease the traffic burstiness. The ILPF fore-shaping mechanism is well suited for the real-time applications which can endure some delay for accessing. According to theoretic analysis and computer simulation, it is proved that the ILPF fore-shaping mechanism significantly improves the utilization of the network as well as providing QoS guarantees in terms of probability.",2007,0, 2340,Prediction of Self-Similar Traffic and its Application in Network Bandwidth Allocation,"In this paper, traffic prediction models based on chaos theory are studied and compared with FARIMA (fractional autoregressive integrated moving average) predictors by means of the adopted measurements of predictability. 
The traffic prediction results are applied in the bandwidth allocation of a mesh network, and the OPNET simulation platform is developed in order to compare their effects. The adopted predictability measurements are inadequate because although the chaotic predictor based on the Lyapunov exponent with worse values of the measurements can timely predict the burstiness of self-similar traffic, the FARIMA predictor forecasts the burstiness with a time-delay. The DAMA (dynamic assignment multiaccess) bandwidth allocation strategy combined with the chaotic predictor can provide better QoS performance.",2007,0, 2341,A Rate Control Scheme Based on MAD Weighted Model for H.264/AVC,"Under the network bandwidth and the delay constraints, the rate control has become a key technique for video coding in order to obtain consecutive and high quality reconstructive video picture. Rate control scheme of the basic unit in H.264/AVC mainly adopted the linear MAD predict model and the quadratic rate distortion model, in the process of implementing, after coding a macroblock, the parameters of the models will be updated, and then computes the quantization parameters of the current macroblock. So, its computation cost is very high, and its complexity is also very high. Through analyzing the rate control scheme of the H.264/AVC JVT G012rl, the paper proposed an improved low complexity MAD weighted predict model, performed accurate rate control in the macroblock layer, and carried it out in the JM98 platform of JVT reference software in H.264/AVC. Extensive experiment results show the complexity of this scheme is lower than the JVT-G012rl of H.264/AVC, and the average PSNR of the usually standard test sequences increased 0.11 dB, at the same time, its accuracy of the rate control of the QCIF sequences averagely improved 0.498 kbps.",2007,0, 2342,Study on Collaborative Arithmetic of Technology Support and Customer Visit of E-Business Website,"While traditional e-business Website revenue research emphasizes customer marketing such as price adjustment and individuation service, it fails to consider the collaborative dimensions of technology support capability and customer visit quantity, their interactions with a system, Website revenue is a function of customer visit quantity, and technology support capability is the base of customer visit quantity, at the same time, it will also influence the Website's cost and the QoS, and influences the revenue in the end. Our purpose is to find a collaborative arithmetic to maximize the revenue under a certain condition. Our research begins with a CBMG of a typical small-scale e-business Website, via the probability statistics of visit data; we get the relation of revenue and customer visit quantity, then we establish a QN model of the hardware structure by the queuing network theorem, via the analysis of QN model, we get the relation of technology support capability and customer visit quantity. Based on the two relations, we propose an intelligent optimization arithmetic to calculate the revenue, the calculated result can be shown as a curve, and we can easily get the information of potential capability of revenue, customer quantity and the QoS from the curve. And all these information will help the Website's manager to confirm the optimal customer quantity and the only hardware investment at a certain stage.",2007,0, 2343,Usability Evaluation of B2C Web Site,"Web site usability is a critical metric for assessing the quality of the B2C Web site. 
A measure of usability must not only provide a rating for a specific Web site, but should also illuminate the specific strengths and weaknesses about site design. In this paper, the usability and usability evaluation of B2C Web site are described. A comprehensive set of usability guidelines developed by Microsoft (MUG) is revised and utilized. The indexes and sub indexes comprising these guidelines are presented first. The weights of the indexes and sub indexes are decided by AHP (Analytical Hierarchy Process). Based on the investigation data, a mathematic arithmetic is proposed to calculate the grade of each B2C Web site. The illustrated example shows that the evaluation approach of this paper is very effective.",2007,0, 2344,Functional Test-Case Generation by a Control Transaction Graph for TLM Verification,"Transaction level modeling allows exploring several SoC design architectures leading to better performance and easier verification of the final product. Test cases play an important role in determining the quality of a design. Inadequate test-cases may cause bugs to remain after verification. Although TLM expedites the verification of a hardware design, the problem of having high coverage test cases remains unsettled at this level of abstraction. In this paper, first, in order to generate test-cases for a TL model we present a Control-Transaction Graph (CTG) describing the behavior of a TL Model. A Control Graph is a control flow graph of a module in the design and Transactions represent the interactions such as synchronization between the modules. Second, we define dependent paths (DePaths) on the CTG as test-cases for a transaction level model. The generated DePaths can find some communication errors in simulation and detect unreachable statements concerning interactions. We also give coverage metrics for a TL model to measure the quality of the generated test-cases. Finally, we apply our method on the SystemC model of AMBA-AHB bus as a case study and generate test-cases based on the CTG of this model.",2007,0, 2345,Functional Verification of RTL Designs driven by Mutation Testing metrics,"The level of confidence in a VHDL description directly depends on the quality of its verification. This quality can be evaluated by mutation-based testing, but the improvement of this quality requires tremendous efforts. In this paper, we propose a new approach that both qualifies and improves the functional verification process. First, we qualify test cases thanks to the mutation testing metrics: faults are injected in the design under verification (DUV) (making DUV's mutants) to check the capacity of test cases to detect these mutants. Then, a heuristic is used to automatically improve IPs validation data. Experimental results obtained on RTL descriptions from ITC'99 benchmark show how efficient our approach is.",2007,0, 2346,Execution-time Prediction for Dynamic Streaming Applications with Task-level Parallelism,"Programmable multiprocessor systems-on-chip are becoming the preferred implementation platform for embedded streaming applications. This enables using more software components, which leads to large and frequent dynamic variations of data-dependent execution times. In this context, accurate and conservative prediction of execution times helps in maintaining good audio/video quality and reducing energy consumption by dynamic evaluation of the amount of on-chip resources needed by applications. To be effective, multiprocessor systems have to employ the available parallelism. 
The combination of task-level parallelism and task delay variations makes predicting execution times a very hard problem. So far, under these conditions, no appropriate techniques exist for the conservative prediction of execution times with the required accuracy. In this paper, we present a novel technique for this problem, exploiting the concept of scenario-based prediction, and taking into account the transient and periodic behavior of scenarios and the effect of scenario transitions. In our MPEG-4 shape-decoder case study, we observe no more than 11% average overestimation.",2007,0, 2347,A Sliced Coprocessor for Native Clifford Algebra Operations,"Computer graphics applications require efficient tools to model geometric objects. The traditional approach based on compute-intensive matrix calculations is error-prone due to a lack of integration between geometric reasoning and matrix-based algorithms. Clifford algebra offers a solution to these issues since it permits specification of geometry at a coordinate-free level. The best way to exploit the symbolic computing power of geometric (Clifford) algebra is supporting its data types and operators directly in hardware. This paper outlines the architecture of S-CliffoSor (Sliced Clifford coprocessor), a parallelizable embedded coprocessor that executes native Clifford algebra operations. S-CliffoSor is a sliced coprocessor that can be replicated for parallel execution of concurrent Clifford operations. A single slice has been designed, implemented and tested on the Celoxica Inc. RC1000 board. The experimental results show the potential to achieve a 3x speedup for Clifford sums and a 4x speedup for Clifford products compared against the analogous operations in the software library generator GAIGEN.",2007,0, 2348,Real-Time Frame-Layer H.264 Rate Control for Scene-Transition Video at Low Bit Rate,"An abrupt scene-transition frame is one that is hardly correlated with the previous frames. In that case, because an intra-coded frame has less distortion than an inter-coded one, almost all macroblocks are encoded in intra mode. This breaks up the rate control flow and increases the number of bits used. Since the reference software for H.264 takes no special action for a scene-transition frame, several studies have been conducted to solve the problem using the quadratic R-D model. However, since this model is more suitable for inter frames, it is unsuitable for computing the QP of the scene-transition intra frame. In this paper, a modified algorithm for detecting scene transitions is presented, and a real-time rate control scheme accounting for the characteristics of intra coding is proposed for scene-transition frames. The proposed scheme was validated using 16 test sequences. The results showed that the proposed scheme performed better than the existing H.264 rate control schemes. The PSNR was improved by an average of 0.4-0.6 dB and a maximum of 1.1-1.6 dB. The PSNR fluctuation was also improved by an average of 18.6%.",2007,0, 2349,An Object Oriented Complexity Metric Based on Cognitive Weights,"Complexity in general is defined as ""the degree to which a system or component has a design or implementation that is difficult to understand and verify"". Complexity metrics are used to predict critical information about reliability and maintainability of software systems. Object oriented software development requires a different approach to software metrics. 
In this paper, an attempt has been made to propose a metric for object oriented code, which calculates the complexity of a class at method level. The proposed measure considers the internal architecture of the class, subclass, and member functions, while other proposed metrics for object oriented programming do not. An attempt has also been made to evaluate and validate the proposed measure in terms of Weyuker's properties and against the principles of measurement theory. It has been found that seven of Weyuker's nine properties have been satisfied by the proposed measure. It also satisfies most of the parameters required by the measurement theory perspective, hence it is established as a well-structured one.",2007,0, 2350,A Critical Analysis of Empirical Research in Software Testing,"In the foreseeable future, software testing will remain one of the best tools we have at our disposal to ensure software dependability. Empirical studies are crucial to software testing research in order to compare and improve software testing techniques and practices. In fact, there is no other way to assess the cost-effectiveness of testing techniques, since all of them are, to various extents, based on heuristics and simplifying assumptions. However, when empirically studying the cost and fault-detection rates of a testing technique, a number of validity issues arise. Further, there are many ways in which empirical studies can be performed, ranging from simulations to controlled experiments with human subjects. What are the strengths and drawbacks of the various approaches? What is the best option under which circumstances? This paper presents a critical analysis of empirical research in software testing and will attempt to highlight and clarify the issues above in a structured and practical manner.",2007,0, 2351,"Assessing, Comparing, and Combining Statechart-based testing and Structural testing: An Experiment","Although models have been proven to be helpful in a number of software engineering activities, there is still significant resistance to model-driven development. This paper investigates one specific aspect of this larger problem. It addresses the impact of using statecharts for testing class clusters that exhibit a state-dependent behavior. More precisely, it reports on a controlled experiment that investigates their impact on testing fault-detection effectiveness. Code-based, structural testing is compared to statechart-based testing and their combination is investigated to determine whether they are complementary. Results show that there is no significant difference between the fault detection effectiveness of the two test strategies but that they are significantly more effective when combined. This implies that a cost-effective strategy would specify statechart-based test cases early on, execute them once the source code is available, and then complete them with test cases based on code coverage analysis.",2007,0, 2352,Test Inspected Unit or Inspect Unit Tested Code?,"Code inspection and unit testing are two popular fault-detecting techniques at the unit level. Organizations where inspections are done generally supplement them with unit testing, as both are complementary. A natural question is the order in which the two techniques should be exercised as this may impact the overall effectiveness and efficiency of the verification process. 
In this paper, we present a controlled experiment comparing the two execution-orders, namely, code inspection followed by unit testing (CI-UT) and unit testing followed by code inspection (UT-CI), performed by a group of fresh software engineers in a company. The subjects inspected program-units by traversing a set of usage scenarios and applied unit testing by writing JUnit tests for the same. Our results showed that unit testing can be more effective, as well as more efficient, if applied after code inspection, whereas the latter is unaffected by the execution-order. Overall results suggest that the sequence CI-UT performs better than UT-CI in time-constrained situations.",2007,0, 2353,Defect Detection Efficiency: Test Case Based vs. Exploratory Testing,"This paper presents a controlled experiment comparing the defect detection efficiency of exploratory testing (ET) and test case based testing (TCT). While traditional testing literature emphasizes test cases, ET stresses the individual tester's skills during test execution and does not rely upon predesigned test cases. In the experiment, 79 advanced software engineering students performed manual functional testing on an open-source application with actual and seeded defects. Each student participated in two 90-minute controlled sessions, using ET in one and TCT in the other. We found no significant differences in defect detection efficiency between TCT and ET. The distributions of detected defects did not differ significantly regarding technical type, detection difficulty, or severity. However, TCT produced significantly more false defect reports than ET. Surprisingly, our results show no benefit of using predesigned test cases in terms of defect detection efficiency, emphasizing the need for further studies of manual testing.",2007,0, 2354,Comparing Model Generated with Expert Generated IV&V Activity Plans,"An IV&V activity plan describes what assurance activities to perform, where to do them, when, and to what extent. Meaningful justification for an IV&V budget and evidence that activities performed actually provide high assurance has been difficult to provide from plans created (generally ad hoc) by experts. JAXA now uses the ""strategic IV&V planning and cost model"" to address these issues and complement expert planning activities. This research presents a grounded empirical study that compares plans generated by the strategic model to those created by experts on several past IV&V projects. Through this research, we found that the model generated plan typically is a superset of the experts' plan. We found that experts tended to follow the most cost-effective route but had a bias in their particular activity selections. Ultimately we found increased confidence in both expert and model based planning and now have new tools for assessing and improving them.",2007,0, 2355,"Filtering, Robust Filtering, Polishing: Techniques for Addressing Quality in Software Data","Data quality is an important aspect of empirical analysis. This paper compares three noise handling methods to assess the benefit of identifying and either filtering or editing problematic instances. We compare a 'do nothing' strategy with (i) filtering, (ii) robust filtering and (iii) filtering followed by polishing. A problem is that it is not possible to determine whether an instance contains noise unless it has implausible values. Since we cannot determine the true overall noise level we use implausible values as a proxy measure. 
In addition to the ability to identify implausible values, we use another proxy measure, the ability to fit a classification tree to the data. The interpretation is that low misclassification rates imply low noise levels. We found that all three of our data quality techniques improve upon the 'do nothing' strategy, and also that filtering and polishing was the most effective technique for dealing with noise since we eliminated the fewest data and had the lowest misclassification rates. Unfortunately the polishing process introduces new implausible values. We believe consideration of data quality is an important aspect of empirical software engineering. We have shown that for one large and complex real world data set automated techniques can help isolate noisy instances and potentially polish the values to produce better quality data for the analyst. However this work is at a preliminary stage and it assumes that the proxy measures of quality are appropriate.",2007,0, 2356,Usability Evaluation Based on Web Design Perspectives,"Given the growth in the number and size of Web Applications worldwide, Web quality assurance, and more specifically Web usability, have become key success factors. Therefore, this work proposes a usability evaluation technique based on the combination of Web design perspectives adapted from existing literature, and heuristics. This new technique is assessed using a controlled experiment aimed at measuring the efficiency and effectiveness of our technique, in comparison to Nielsen's heuristic evaluation. Results indicated that our technique was significantly more effective than and as efficient as Nielsen's heuristic evaluation.",2007,0, 2357,Evaluating the Impact of Adaptive Maintenance Process on Open Source Software Quality,"The paper focuses on measuring and assessing the relation of adaptive maintenance process and quality of open source software (OSS). A framework for assessing adaptive maintenance process is proposed and applied. The framework consists of six sub-processes. Five OSSs with a considerable number of releases have been studied empirically. Their main evolutionary and quality characteristics have been measured. The main results of the study are the following: 1) Software maintainability is affected mostly by the activities of the 'analysis' maintenance sub-process. 2) Software testability is affected by the activities of all maintenance sub-processes. 3) Software reliability is affected mostly by the activities of the 'design' and 'delivery' maintenance sub-processes. 4) Software complexity is affected mostly by the activities of the 'problem identification', 'design', 'implementation' and 'test' sub-processes. 5) Software flexibility is affected mostly by the activities of the 'delivery' sub-process.",2007,0, 2358,The Effects of Over and Under Sampling on Fault-prone Module Detection,"The goal of this paper is to improve the prediction performance of fault-prone module prediction models (fault-proneness models) by employing over/under sampling methods, which are preprocessing procedures for a fit dataset. The sampling methods are expected to improve prediction performance when the fit dataset is unbalanced, i.e. there exists a large difference between the number of fault-prone modules and not-fault-prone modules. So far, there has been no research reporting the effects of applying sampling methods to fault-proneness models. 
In this paper, we experimentally evaluated the effects of four sampling methods (random over sampling, synthetic minority over sampling, random under sampling and one-sided selection) applied to four fault-proneness models (linear discriminant analysis, logistic regression analysis, neural network and classification tree) by using two module sets of industry legacy software. All four sampling methods improved the prediction performance of the linear and logistic models, while neural network and classification tree models did not benefit from the sampling methods. The improvements of F1-values in linear and logistic models were 0.078 at minimum, 0.224 at maximum and 0.121 at the mean.",2007,0, 2359,Generalizing fault contents from a few classes,"The challenges in fault prediction today are to get a prediction as early as possible, at as low a cost as possible, needing as little data as possible and preferably in such a language that your average developer can understand where it came from. This paper presents a fault sampling method where a summary of a few, easily available metrics is used together with the results of a few sampled classes to generalize the fault content to an entire system. The method is tested on a large software system written in Java, that currently consists of around 2000 classes and 300,000 lines of code. The evaluation shows that the fault generalization method is good at predicting fault-prone clusters and that it is possible to generalize the values of a few representative classes.",2007,0, 2360,Fine-Grained Software Metrics in Practice,"Modularity is one of the key features of the Object-Oriented (OO) paradigm. Low coupling and high cohesion help to achieve good modularity. Inheritance is one of the core concepts of the OO paradigm which facilitates modularity. Previous research has shown that the use of the friend construct as a coupling mechanism in C++ software is extensive. However, measures of the friend construct are scarce in comparison with measures of inheritance. In addition, these existing measures are coarse-grained, in spite of the widespread use of the friend mechanism. In this paper, a set of software metrics are proposed that measure the actual use of the friend construct, inheritance and other forms of coupling. These metrics are based on the interactions for which each coupling mechanism is necessary and sufficient. Previous work only considered the declaration of a relationship between classes. The software metrics introduced are empirically assessed using the LEDA software system. Our results indicate that the friend mechanism is used to a very limited extent to access hidden methods in classes. However, access to hidden attributes is more common.",2007,0, 2361,Evaluating Software Project Control Centers in Industrial Environments,"Many software development organizations still lack support for detecting and reacting to critical project states in order to achieve planned goals. One means to institutionalize project control, systematic quality assurance, and management support on the basis of measurement and explicit models is the establishment of so-called software project control centers. However, there is only little experience reported in the literature with respect to setting up and applying such control centers in industrial environments. One possible reason is the lack of appropriate evaluation instruments (such as validated questionnaires and appropriate analysis procedures). 
Therefore, we developed an initial measurement instrument to systematically collect experience with respect to the deployment and use of control centers. Our main research goal was to develop and evaluate the measurement instrument. The instrument is based on the technology acceptance model (TAM) and customized to project controlling. This article illustrates the application and evaluation of this measurement instrument in the context of industrial case studies and provides lessons learned for further improvement. In addition, related work and conclusions for future work are given.",2007,0, 2362,Using Software Dependencies and Churn Metrics to Predict Field Failures: An Empirical Case Study,"Commercial software development is a complex task that requires a thorough understanding of the architecture of the software system. We analyze the Windows Server 2003 operating system in order to assess the relationship between its software dependencies, churn measures and post-release failures. Our analysis indicates the ability of software dependencies and churn measures to be efficient predictors of post-release failures. Further, we investigate the relationship between the software dependencies and churn measures and their ability to assess failure-proneness probabilities at statistically significant levels.",2007,0, 2363,Fault-Prone Filtering: Detection of Fault-Prone Modules Using Spam Filtering Technique,"The fault-prone module detection in source code is of importance for assurance of software quality. Most of previous conventional fault-prone detection approaches have been based on using software metrics. Such approaches, however, have difficulties in collecting the metrics and constructing mathematical models based on the metrics. In order to mitigate such difficulties, we propose a novel approach for detecting fault-prone modules using a spam filtering technique. Because of the increase of needs for spam e-mail detection, the spam filtering technique has been progressed as a convenient and effective technique for text mining. In our approach, fault-prone modules are detected in a way that the source code modules are considered as text files and are applied to the spam filter directly. In order to show the usefulness of our approach, we conducted an experiment using source code repository of a Java based open source development. The result of experiment shows that our approach can classify more than 70% of software modules correctly.",2007,0, 2364,Characterizing Software Architecture Changes: An Initial Study,"With today's ever increasing demands on software, developers must produce software that can be changed without the risk of degrading the software architecture. Degraded software architecture is problematic because it makes the system more prone to defects and increases the cost of making future changes. The effects of making changes to software can be difficult to measure. One way to address software changes is to characterize their causes and effects. This paper introduces an initial architecture change characterization scheme created to assist developers in measuring the impact of a change on the architecture of the system. It also presents an initial study conducted to gain insight into the validity of the scheme. 
The results of this study indicated a favorable view of the viability of the scheme by the subjects, and the scheme increased the ability of novice developers to assess and adequately estimate change effort.",2007,0, 2365,An Approach to Global Sensitivity Analysis: FAST on COCOMO,"There are various models in software engineering that are used to predict quality-related aspects of the process or artefacts. The use of these models involves elaborate data collection in order to estimate the input parameters. Hence, an interesting question is which of these input factors are most important. More specifically, which factors need to be estimated best and which might be removed from the model? This paper describes an approach based on global sensitivity analysis to answer these questions and shows its applicability in a case study on the COCOMO application at NASA.",2007,0, 2366,Comparison of Outlier Detection Methods in Fault-proneness Models,"In this paper, we experimentally evaluated the effect of outlier detection methods to improve the prediction performance of fault-proneness models. Detected outliers were removed from a fit dataset before building a model. In the experiment, we compared three outlier detection methods (Mahalanobis outlier analysis (MOA), local outlier factor method (LOFM) and rule based modeling (RBM)) each applied to three well-known fault-proneness models (linear discriminant analysis (LDA), logistic regression analysis (LRA) and classification tree (CT)). As a result, MOA and RBM improved F1-values of all models (0.04 at minimum, 0.17 at maximum and 0.10 at mean) while improvements by LOFM were relatively small (-0.01 at minimum, 0.04 at maximum and 0.01 at mean).",2007,0, 2367,Assessing the Quality Impact of Design Inspections,"Inspections are widely used and studies have found them to be effective in uncovering defects. However, there is less data available regarding the impact of inspections on different defect types and almost no data quantifying the link between inspections and desired end product qualities. This paper addresses this issue by investigating whether design inspection checklists can be tailored so as to effectively target certain defect types without impairing the overall defect detection rate. The results show that the design inspection approach used here does uncover useful design quality issues and that the checklists can be effectively tailored for some types of defects.",2007,0, 2368,Testing conformance on Stochastic Stream X-Machines,"Stream X-machines have been used to specify real systems requiring to represent complex data structures. One of the advantages of using stream X-machines to specify a system is that it is possible to produce a test set that, under certain conditions, detects all the faults of an implementation. In this paper we present a formal framework to test temporal behaviors in systems where temporal aspects are critical. Temporal requirements are expressed by means of random variables and affect the duration of actions. Implementation relations are presented as well as a method to determine the conformance of an implementation with respect to a specification by applying a test set.",2007,0, 2369,Hardness for Explicit State Software Model Checking Benchmarks,"Directed model checking algorithms focus computation resources in the error-prone areas of concurrent systems. The algorithms depend on some empirical analysis to report their performance gains. 
Recent work characterizes the hardness of models used in the analysis as an estimated number of paths in the model that contain an error. This hardness metric is computed using a stateless random walk. We show that this is not a good hardness metric because models labeled hard with a stateless random walk metric have easily discoverable errors with a stateful randomized search. We present an analysis which shows that a hardness metric based on a stateful randomized search is a tighter bound for hardness in models used to benchmark explicit state directed model checking techniques. Furthermore, we convert easy models into hard models as measured by our new metric by pushing the errors deeper in the system and manipulating the number of threads that actually manifest an error.",2007,0, 2370,Hybrid Intelligent and Adaptive Sensor Systems with Improved Noise Invulnerability by Dynamically Reconfigurable Matched Sensor Electronics,"Hybrid intelligent sensor systems and networks are composed of modules of tightly co-operating software and hardware components. Bio-inspired information processing is embodied in algorithms as well as dedicated electronics for intelligent processing and system adaptation. This paper focuses on the challenges imposed on the small yet irreplaceable analog and mixed signal components in such a sensor system, which are prone to deviation and degradations. Novel architectures combine issues of rapid-prototyping, trimming, fault-tolerance, and self-repair. However, the common reconfiguration approaches cannot deal efficiently with real-world noise problems. This paper adapts effective solution strategies to advanced sensor electronics for hybrid intelligent and adaptive sensor systems in a 0.35 μm CMOS technology and reports on the design of a novel generic chip.",2007,0, 2371,Software Effort Estimation using Machine Learning Techniques with Robust Confidence Intervals,"The precision and reliability of the estimation of the effort of software projects is very important for the competitiveness of software companies. Good estimates play a very important role in the management of software projects. Most methods proposed for effort estimation, including methods based on machine learning, provide only an estimate of the effort for a novel project. In this paper we introduce a method based on machine learning which gives the estimation of the effort together with a confidence interval for it. In our method, we propose to employ robust confidence intervals, which do not depend on the form of probability distribution of the errors in the training set. We report on a number of experiments using two datasets aimed to compare machine learning techniques for software effort estimation and to show that robust confidence intervals can be successfully built.",2007,0, 2372,How good are your testers? An assessment of testing ability,"During our previous research conducted in the Sheffield Software Engineering Observatory [11], we found that test first programmers spent a higher percentage of their time testing than those testing after coding. However, as the team allocation was based on subjects' academic records and their preference, it was unclear if they were simply better testers. Thus this paper proposes two questionnaires to assess the testing ability of subjects, in order to reveal the factors that contribute to the previous findings. 
Preliminary results show that the testing ability of subjects, as measured by the survey, varies based on their professional skill level.",2007,0, 2373,Software Fault Prediction using Language Processing,"Accurate prediction of faulty modules reduces the cost of software development and evolution. Two case studies with a language-processing based fault prediction measure are presented. The measure, referred to as a QALP score, makes use of techniques from information retrieval to judge software quality. The QALP score has been shown to correlate with human judgements of software quality. The two case studies consider the measure's application to fault prediction using two programs (one open source, one proprietary). Linear mixed-effects regression models are used to identify relationships between defects and QALP score. Results, while complex, show that little correlation exists in the first case study, while statistically significant correlations exist in the second. In this second study the QALP score is helpful in predicting faults in modules (files) with its usefulness growing as module size increases.",2007,0, 2374,An Empirical Evaluation of the MuJava Mutation Operators,"Mutation testing is used to assess the fault-finding effectiveness of a test suite. Information provided by mutation testing can also be used to guide the creation of additional valuable tests and/or to reveal faults in the implementation code. However, concerns about the time efficiency of mutation testing may prohibit its widespread, practical use. We conducted an empirical study using the MuClipse automated mutation testing plug-in for Eclipse on the back end of a small web-based application. The first objective of our study was to categorize the behavior of the mutants generated by selected mutation operators during successive attempts to kill the mutants. The results of this categorization can be used to inform developers in their mutant operator selection to improve the efficiency and effectiveness of their mutation testing. The second outcome of our study identified patterns in the implementation code that remained untested after attempting to kill all mutants.",2007,0, 2375,A Study on Performance Measurement of a Plastic Packaging Organization's Manufacturing System by AHP Modeling,"By the effect of globalization, products, services, capital, technology, and people began to circulate more freely in the world. As a conclusion, in order to achieve and gain an advantage over competitors, manufacturing firms had to adapt themselves to changing conditions and evaluate their critical performance criteria. In this study, the aim is to determine general performance criteria and their characteristics and classifications from previous studies and evaluate performance criteria for a plastic packaging organization by utilizing analytic hierarchy process (AHP) modeling. A specific manufacturing organization operating in the Turkish plastic packaging sector has been selected and the manufacturing performance criteria have been determined for that specific organization. Finally, the selected criteria have been assessed according to their relative importance by utilizing the AHP approach and the Expert Choice (EC) software program. As a result of this study, operating managers chose cost, quality, customer satisfaction and time factors as criteria for this organization. 
As the findings of the study indicate, the manufacturing organization operating in the plastic packaging sector reviews its operations and measures its manufacturing performance essentially on those four criteria and their sub-criteria. Finally, the relative importance of those main measures and their sub-criteria is determined in consideration of the plastic packaging sector.",2007,0, 2376,Using Simulation to Evaluate the Impact of New Requirements Analysis Tools,"Summary form only given. Adopting new tools and technologies on a development process can be a risky endeavor. Will the project accept the new technology? What will be the impact? Far too often the project is asked to adopt the new technology without planning how it will be applied on the project or evaluating the technology's potential impact. In this paper we provide a case study evaluating one new technology. Specifically we assess the merits of an automated defect detection tool. Using process simulation, we find situations where the use of this new technology is useful and situations where the use of this new technology is useless for large-scale NASA projects that utilize a process similar to the IEEE 12207 systems development lifecycle. We also calculate the value of the tool when implemented at different points in the process. This can help project managers decide whether it would be worthwhile to invest in this new tool. The method can be applied to assessing the impact (including Return on Investment), break-even point and the overall value of applying any tool on a project.",2007,0, 2377,Study on Software of VXIbus Boundary Scan Test Generation,"The goal of this paper is to develop a software suite for boundary-scan test (BST) generation using test generation algorithms and test data. In order to get the test data quickly and effectively, an innovative method of establishing a test project description (TPD) file is presented. During the testing of two different boundary-scan circuit boards, all faults were detected, indicating that the expected design objective was achieved.",2007,0, 2378,Predict Malfunction-Prone Modules for Embedded System Using Software Metrics,"High software dependability is significant for many software systems, especially embedded systems. Dependability is usually measured from the user's viewpoint in terms of time between failures, according to an operational profile. A software malfunction is defined as a defect in an executable software product that may cause a failure. Thus, malfunctions are attributed to the software modules that cause failures. Developers tend to focus on malfunctions, because they are closely related to the amount of rework necessary to prevent future failures. This paper defines a software module as malfunction-prone, based on class cohesion metrics, when there is a high risk that malfunctions will be discovered during operation. It also proposes a novel cohesion measurement method for derived classes in embedded systems.",2007,0, 2379,Numerical Simulation of the Temperature Distribution in SiC Sublimation Growth System,"Although serious attempts have been made during the last years to develop silicon carbide bulk crystal growth technology into an industrial process, crystal quality remains deficient. One of the major problems is that the thermal field of SiC growth systems is not fully understood. Numerical simulation is considered an important tool for the investigation of the thermal field distribution inside the growth crucible system involved in SiC bulk growth. 
We employ the finite-element software package ANSYS to provide additional information on the thermal field distribution. A two-dimensional model has been developed to simulate the axisymmetric growth system consisting of a cylindrical susceptor (graphite crucible), graphite felt insulation, and a copper inductive coil. The model couples electromagnetic heating and thermal transfer. The induced magnetic field is used to predict heat generation due to magnetic induction. Conduction, convection and radiation in various components of the system are accounted for as heat transfer mechanisms. The thermal field in the SiC sublimation growth system is thus provided.",2007,0, 2380,Understanding and Building Spreadsheet Tools,"Spreadsheets are among the most widely used programming systems. Unfortunately, there is a high incidence of errors within spreadsheets that are employed for a wide variety of computations. Some of these errors have a huge impact on individuals and organizations. As part of our research on spreadsheets, we have developed several approaches that are targeted at helping end-user programmers prevent, detect, and correct faults within their spreadsheets. In this tutorial, we explain fundamental principles on which spreadsheet tools can be based. We then illustrate how some simple inference mechanisms and visualization techniques that are based on these principles can be derived to detect errors or anomalous areas within spreadsheets. We also introduce a flexible framework for the quick prototype development of such spreadsheet tools and visualizations.",2007,0, 2381,Building a Self-Healing Operating System,"User applications and data in volatile memory are usually lost when an operating system crashes because of errors caused by either hardware or software faults. This is because most operating systems are designed to stop working when some internal errors are detected despite the possibility that user data and applications might still be intact and recoverable. Techniques like exception handling, code reloading, operating system component isolation, micro-rebooting, automatic system service restarts, watchdog timer based recovery and transactional components can be applied to attempt self-healing of an operating system from a wide variety of errors. Fault injection experiments show that these techniques can be used to continue running user applications after transparently recovering the operating system in a large percentage of cases. In cases where transparent recovery is not possible, individual process recovery can be attempted as a last resort.",2007,0, 2382,Performance Analysis of CORBA Replication Models,"Active and passive replication models constitute an effective way to achieve the availability objectives of distributed real-time (DRE) systems. These two models have different impacts on the application performance. Although these models have been commonly used in practical systems, a systematic quantitative evaluation of their influence on the application performance has not been conducted. In this paper we describe a methodology to analyze the application performance in the presence of active and passive replication models. For each one of these models, we obtain an analytical expression for the application response time in terms of the model parameters. Based on these analytical expressions, we derive the conditions under which one replication model has better performance than the other. 
Our results indicate that the superiority of one replication model over the other is governed not only by the model parameters but also by the application characteristics. We illustrate the value of the analytical expressions to assess the influence of the parameters of each model on the application response time and for a comparative analysis of the two models.",2007,0, 2383,Silicon Debug for Timing Errors,"Due to various sources of noise and process variations, assuring a circuit to operate correctly at its desired operational frequency has become a major challenge. In this paper, we propose a timing-reasoning-based algorithm and an adaptive test-generation algorithm for diagnosing timing errors in the silicon-debug phase. We first derive three metrics that are strongly correlated to the probability of a candidate's being an actual error source. We analyze the problem of circuit timing uncertainties caused by delay variations and test sampling. Then, we propose a candidate-ranking heuristic, which is robust with respect to such sources of timing uncertainty. Based on the initial ranking result and the timing information, we further propose an adaptive path-selection and test-generation algorithm to generate additional diagnostic patterns for further improvement of the first-hit-rate. The experimental results demonstrate that combining the ranking heuristic and the adaptive test-generation method would result in a very high resolution for timing diagnosis.",2007,0, 2384,Machine Simulation for Workflow Integration Testing,"This paper addresses the problems of modeling and simulating physical machines, part of complex industrial production lines. The direct execution of industrial workflows on production line machines before integration testing can be very expensive and may lead to improper machine operation and even to non recoverable faults. In order to simulate the execution of industrial workflow models for integration testing purposes, we propose a physical machine simulator based on nondeterministic, probability- based state machines. For each physical machine a behavioral model is constructed using operational scenarios followed by its translation into a state machine representation. The proposed simulator was used for a sausage preparing production line in the context of the food trace project.",2007,0, 2385,Exploration of Quantitative Scoring Metrics to Compare Systems Biology Modeling Approaches,"In this paper, we report a focused case study to assess whether quantitative metrics are useful to evaluate molecular-level system biology models on cellular metabolism. Ideally, the bio-modeling community shall be able assess systems biology models based on objective and quantitative metrics. This is because metric-based model design not only can accelerate the validation process, but also can improve the efficacy of model design. In addition, the metric will enable researchers to select models with any desired quality standards to study biological pathway. In this case study, we compare popular systems biology modeling approaches such as Michaelis-Menten kinetics and generalized mass action and flux balance analysis to examine the difficulties in developing quantitative metrics for bio-model assessment. We created a set of guidelines in evaluating the efficacy of various bio-modeling approaches and system analysis in several """";bio-systems of interest"""";. 
We found that quantitative scoring metrics are essential aids for (i) model adopters and users to determine fundamental distinctions among bio-models, and (ii) model developers to improve key areas in bio-modeling. Eventually, we want to extend this evaluation practice to broad systems biology modeling.",2007,0, 2386,Determination of simple thresholds for accelerometry-based parameters for fall detection,"The increasing population of elderly people is mainly living in a home-dwelling environment and needs applications to support their independency and safety. Falls are one of the major health risks that affect the quality of life among older adults. Body attached accelerometers have been used to detect falls. The placement of the accelerometric sensor as well as the fall detection algorithms are still under investigation. The aim of the present pilot study was to determine acceleration thresholds for fall detection, using triaxial accelerometric measurements at the waist, wrist, and head. Intentional falls (forward, backward, and lateral) and activities of daily living (ADL) were performed by two voluntary subjects. The results showed that measurements from the waist and head have potential to distinguish between falls and ADL. Especially, when the simple threshold-based detection was combined with posture detection after the fall, the sensitivity and specificity of fall detection were up to 100 %. On the contrary, the wrist did not appear to be an optimal site for fall detection.",2007,0, 2387,Wavelet based approach for posture transition estimation using a waist worn accelerometer,The ability to rise from a chair is considered to be important to achieve functional independence and quality of life. This sit-to-stand task is also a good indicator to assess condition of patients with chronic diseases. We developed a wavelet based algorithm for detecting and calculating the durations of sit-to-stand and stand-to-sit transitions from the signal vector magnitude of the measured acceleration signal. The algorithm was tested on waist worn accelerometer data collected from young subjects as well as geriatric patients. The test demonstrates that both transitions can be detected by using wavelet transformation applied to signal magnitude vector. Wavelet analysis produces an estimate of the transition pattern that can be used to calculate the transition duration that further gives clinically significant information on the patients condition. The method can be applied in a real life ambulatory monitoring system for assessing the condition of a patient living at home.,2007,0, 2388,Robust Nonparametric Segmentation of Infarct Lesion from Diffusion-Weighted MR Images,"Magnetic Resonance Imaging (MRI) is increasingly used for the diagnosis and monitoring of neurological disorders. In particular Diffusion-Weighted MRI (DWI) is highly sensitive in detecting early cerebral ischemic changes in acute stroke. Cerebral infarction lesion segmentation from DWI is accomplished in this work by applying nonparametric density estimation. The quality of the class boundaries is improved by including an edge confidence map, that is the confidence of truly being in the presence of a border between adjacent regions. The adjacency graph, that is constructed with the label regions, is analyzed and pruned to merge adjacent regions. The method was applied to real images, keeping all parameters constant throughout the process for each data set. 
The combination of region segmentation and edge detection proved to be a robust automatic technique of segmentation from DWI images of cerebral infarction regions in acute ischemic stroke. In a comparison with the reference infarct lesion segmentation, the automatic segmentation presented a significant correlation (r = 0.935), and an average Tanimoto index of 0.538.",2007,0, 2389,Monitoring Cole-Cole Parameters During Haemodialysis (HD),"The investigation of the hydration process during the haemodialysis treatment sessions is very important for the development of methods for predicting the unbalanced fluid shifts and hypotension crises, hence improving the quality of the haemodialysis procedure. Bioimpedance measurements can give valuable information about the tissue under measurement, thereby characterizing the tissue. In this work we propose a non-invasive method based on local multifrequency bioimpedance measurements that allows us to determine the fluid distribution and variations during haemodialysis. Clinical measurements were done using 10 HD patients during 60 HD sessions. Bioimpedance data, ultrafiltration volume, blood volume and blood haematocrit variations were recorded continuously during the HD sessions. Bioimpedance of the local tissue was measured with a 4-electrode impedance system using surface electrodes with a sampling rate of 1 meas./4 min. at 6 different frequencies. The measured impedances were fitted to the Cole-Cole model and the Cole-Cole parameters were continuously determined for each measurement point during the HD session. The 4 Cole-Cole parameters (R∞, R0, Fc, α) and their variations were evaluated. Impedance values at infinite and zero frequencies (R∞, R0) were extrapolated from the Cole-Cole mathematical model. These values are assumed to represent the impedance of total tissue fluid and the impedance of the extracellular space respectively.",2007,0, 2390,A Robust Tool to Compare Pre- and Post-Surgical Voice Quality,"Assessing voice quality by means of objective parameters is of great relevance for clinicians. A large number of indexes have been proposed in the literature and in commercially available software tools. However, clinicians commonly resort to a small subset of such indexes, due to difficulties in managing set-up options and understanding their meaning. In this paper, the analysis has been limited to a few but effective indexes, devoting great effort to their robust and automatic evaluation. Specifically, fundamental frequency (F0), along with its irregularity (jitter (J) and relative average perturbation (RAP)), noise and formant frequencies, are tracked on voiced parts of the signal only. Mean and std values are also displayed. The underlying high-resolution estimation procedure is further strengthened by an adaptive estimation of the optimal length of signal frames for analysis, linked to varying signal characteristics. Moreover, the new tool allows for automatic analysis of any kind of signal, both as far as F0 range and sampling frequency are concerned, no manual setting being required of the user. This makes the tool feasible for application by non-expert users, also thanks to its simple interface. 
The proposed approach is applied here to patients suffering from cysts and polyps who underwent micro-laryngoscopic direct exeresis (MLSD).",2007,0, 2391,Experience at Italian National Institute of Health in the quality control in telemedicine: tools for gathering data information and quality assessing,"The authors proposed a set of tools and procedures to perform a Telemedicine Quality Control process (TM-QC) to be submitted to the telemedicine (TM) manufacturers. The proposed tools were: the Informative Questionnaire (InQu), the Classification Form (ClFo), the Technical File (TF), the Quality Assessment Checklist (QACL). The InQu served to acquire the information about the examined TM product/service; the ClFo allowed the classification of a TM product/service as belonging to one application area of TM. The TF was intended as a technical dossier of the product and forced the TM supplier to furnish only the requested documentation of its product, so as to avoid redundant information. The QACL was a checklist of requirements, regarding all the essential aspects of the telemedical applications, that each TM product/service must meet. The final assessment of the TM product/service was carried out via the QACL, by computing the number of agreed requirements: on the basis of this computation, a Quality Level (QL) was assigned to the telemedical application. Seven levels were considered, ranging from the Basic Quality Level (QL1-B) to the Excellent Quality Level (QL7-E). The TM-QC process proved to be a powerful tool to perform the quality control of the telemedical applications and should provide guidance to all TM practitioners, from the manufacturers to the expert evaluators. The proposed quality control procedures could thus be adopted in the future as routine procedures and could be useful in assessing the delivery of TM into the National Health Service versus the traditional face-to-face healthcare services.",2007,0, 2392,Reduction of False Positives in Polyp Detection Using Weighted Support Vector Machines,"Colorectal cancer is the third highest cause of cancer deaths in the US (2007). Early detection and treatment of colon cancer can significantly improve patient prognosis. Manual identification of polyps by radiologists using CT colonography can be labour intensive due to the increasing size of datasets and is error prone due to the complexity of the anatomical structures. There has been increasing interest in computer aided detection (CAD) systems for detecting polyps using CT colonography. For a typical CAD system two major steps can be identified. In the first step image processing techniques are used to detect potential polyp candidates. Many non-polyps are inevitably found in this process. The second step attempts to discount the non-polyp candidates while maintaining true polyps. In practice this is a challenging task as training data is heavily imbalanced, that is, non-polyps dominate the data. This paper describes how the weighted support vector machine (weighted-SVM) can be used to tackle the problem effectively. The weighted-SVM generalises the traditional SVM by applying different penalties to different classes. This trains the classifier to give favour to the most weighted class (in this case true polyps). In this paper the method was applied to data obtained from the intermediate results from a CAD system, originally applied to 209 cases. 
The results show that the weighted-SVM can play an important role in CAD algorithms for colorectal polyps.",2007,0, 2393,Towards Automatic Grading of Nuclear Cataract,"Objective quantification of lens images is essential for cataract assessment and treatment. In this paper, bottom-up and top-down strategies are combined to detect the lens contour from the slit-lamp images. The center of the lens is localized by horizontal and vertical intensity profile clustering and the lens contour is estimated by fitting an ellipse. A modified active shape model (ASM) is further applied to detect the contour of the lens. The average intensity inside the lens is employed as the indicator of nuclear opacity. The relationship between our automated nuclear cataract assessment and the clinical grading is analyzed. The preliminary study of forty images shows that the difference between automatic grading and clinical grading is acceptable.",2007,0, 2394,Evaluation of Medical Image Watermarking with Tamper Detection and Recovery (AW-TDR),"This paper will study and evaluate watermarking technique by Zain and Fauzi. Recommendations will then be made to enhance the technique especially in the aspect of recovery or reconstruction rate for medical images. A proposal will also be made for a better distribution of watermark to minimize the distortion of the region of interest (ROI). The final proposal will enhance AW-TDR in three aspects; firstly the image quality in the ROI will be improved as the maximum change is only 2 bits in every 4 pixels, or embedding rate of 0.5 bits/pixel. Secondly the recovery rate will also be better since the recovery bits are located outside the region of interest. The disadvantage in this is that, only manipulation done in the ROI will be detected. Thirdly the quality of the reconstructed image will be enhanced since the average of 2 x 2 pixels would be used to reconstruct the tampered image.",2007,0, 2395,Sources of Mistakes in PFD Calculations for Safety-Related Loop Typicals,"In order to prevent any harm for human beings and environment, IEC 61511 imposes strict requirements on safety instrumented functions (SIFs) in chemical and pharmaceutical production plants. As measure of quality a safety integrity level (SIL) of 1, 2, 3 or 4 is postulated for the SIF. In this context for every SIF realization, i.e. safety-related loop, a SIL-specific probability of failure on demand (PFD) must be proven. Usually, the PFD calculation is performed based on the failure rates of each loop component aided by commercial software tools. But this bottom-up approach suffers from many uncertainties. Especially a lack of reliable failure rate data causes many problems. Reference data for different environmental conditions are available to solve this situation. However, this pragmatism leads to a PFD bandwidth, not to a single PFD value as desired. In order to make a decision for a numerical value appropriate for plant applications in chemical industry, a data ascertainment has been initiated by the European NAMUR within its member companies. Combined with statistical methods their results display large deficiencies for the bottom-up approach. As one main source of mistakes the distribution of the loop PFD has been identified. 
The well known percentages for sensor, logic solver and final element part often cited in literature could not be confirmed.",2007,0, 2396,High-available grid services through the use of virtualized clustering,"Grid applications comprise several components and web-services that make them highly prone to the occurrence of transient software failures and aging problems. This type of failures often incur in undesired performance levels and unexpected partial crashes. In this paper we present a technique that offers high-availability for Grid services based on concepts like virtualization, clustering and software rejuvenation. To show the effectiveness of our approach, we have conducted some experiments with OGSA-DAI middleware. One of the implementations of OGSA-DAI makes use of use of Apache Axis V1.2.1, a SOAP implementation that suffers from severe memory leaks. Without changing any bit of the middleware layer we have been able to anticipate most of the problems caused by those leaks and to increase the overall availability of the OGSA-DAI Application Server. Although these results are tightly related with this middleware it should be noted that our technique is neutral and can be applied to any other Grid service that is supposed to be high-available.",2007,0, 2397,An Observation-Based Approach to Performance Characterization of Distributed n-Tier Applications,"The characterization of distributed n-tier application performance is an important and challenging problem due to their complex structure and the significant variations in their workload. Theoretical models have difficulties with such wide range of environmental and workload settings. Experimental approaches using manual scripts are error-prone, time consuming, and expensive. We use code generation techniques and tools to create and run the scripts for large-scale experimental observation of n-tier benchmarking application performance measurements over a wide range of parameter settings and software/hardware combinations. Our experiments show the feasibility of experimental observations as a sound basis for performance characterization, by studying in detail the performance achieved by (up to 3) database servers and (up to 12) application servers in the RUBiS benchmark with a workload of up to 2700 concurrent users.",2007,0, 2398,Historical Risk Mitigation in Commercial Aircraft Avionics as an Indicator for Intelligent Vehicle Systems,"How safety is perceived in conjunction with consumer products has much to do with its presentation to the buying public and the company reputation for performance and safety. As the automobile industry implements integrated vehicle safety and driver aid systems, the question of public perception of the true safety benefits would seem to parallel the highly automated systems of commercial aircraft, a market in which perceived benefits of flying certainly outweigh concerns of safety. It is suggested that the history of critical aircraft systems provides a model for the wide-based implementation of automated systems in automobiles. The requirement for safety in aircraft systems as an engineering design parameter takes on several forms such as wear-out, probability of catastrophic failure and mean time between replacement or repair (MTBR). For automobile systems as in aircraft, it is a multidimensional topic encompassing a variety of hardware and software functions, fail-safe or fail-operational capability and operator and control interaction. 
As with critical flight systems, the adherence to specific federal safety requirements is also a cost item to which all manufacturers must adhere, but that also provides a common baseline to which all companies must design. Long a requirement for the design of systems for military and commercial aircraft control, specific safety standards have produced methodologies for analysis and system mechanization that would suggest the operational safety design methods needed for automobiles. Ultimately, tradeoffs must be completed to attain an acceptable level of safety when compared to the cost for developing and selling the system. As seen with commercial aircraft, acceptance of product safety by the public is not based on understanding strict technical requirements but is primarily the result of witnessing many hours of fault free operation, and seeking opinions of those they feel are knowledgeable. This brief study will use data from p- reliminary concept studies for the Automated Highway System and developments by human factors analysts and sociologists concerning perceptions of risk to present an evaluation of the technological methods historically used to mitigate risk in critical aircraft systems and how they might apply to automation in automobiles.",2007,0, 2399,Mining the Lexicon Used by Programmers during Sofware Evolution,"Identifiers represent an important source of information for programmers understanding and maintaining a system. Self-documenting identifiers reduce the time and effort necessary to obtain the level of understanding appropriate for the task at hand. While the role of the lexicon in program comprehension has long been recognized, only a few works have studied the quality and enhancement of the identifiers and no works have studied the evolution of the lexicon. In this paper, we characterize the evolution of program identifiers in terms of stability metrics and occurrences of renaming. We assess whether an evolution process similar to the one occurring for the program structure exists for identifiers. We report data and results about the evolution of three large systems, for which several releases are available. We have found evidence that the evolution of the lexicon is more limited and constrained than the evolution of the structure. We argue that the different evolution results from several factors including the lack of advanced tool support for lexicon construction, documentation, and evolution.",2007,0, 2400,Evaluation of Semantic Interference Detection in Parallel Changes: an Exploratory Experiment,"Parallel developments are becoming increasingly prevalent in the building and evolution of large-scale software systems. Our previous studies of a large industrial project showed that there was a linear correlation between the degree of parallelism and the likelihood of defects in the changes. To further study the relationship between parallel changes and faults, we have designed and implemented an algorithm to detect """"direct"""" semantic interference between parallel changes. To evaluate the analyzer's effectiveness in fault prediction, we designed an experiment in the context of an industrial project. We first mine the change and version management repositories to find sample versions sets of different degrees of parallelism. We investigate the interference between the versions with our analyzer. We then mine the change and version repositories to find out what faults were discovered subsequent to the analyzed interfering versions. 
We use the match rate between semantic interference and faults to evaluate the effectiveness of the analyzer in predicting faults. Our contributions in this evaluative empirical study are twofold. First, we evaluate the semantic interference analyzer and show that it is effective in predicting faults (based on """"direct"""" semantic interference detection) in changes made within a short time period. Second, the design of our experiment is itself a significant contribution and exemplifies how to mine software repositories rather than use artificial cases for rigorous experimental evaluations.",2007,0, 2401,An Activity-Based Quality Model for Maintainability,"Maintainability is a key quality attribute of successful software systems. However, its management in practice is still problematic. Currently, there is no comprehensive basis for assessing and improving the maintainability of software systems. Quality models have been proposed to solve this problem. Nevertheless, existing approaches do not explicitly take into account the maintenance activities, that largely determine the software maintenance effort. This paper proposes a 2-dimensional model of maintainability that explicitly associates system properties with the activities carried out during maintenance. The separation of activities and properties facilitates the identification of sound quality criteria and allows to reason about their interdependencies. This transforms the quality model into a structured and comprehensive quality knowledge base that is usable in industrial project environments. For example, review guidelines can be generated from it. The model is based on an explicit quality metamodel that supports its systematic construction and fosters preciseness as well as completeness. An industrial case study demonstrates the applicability of the model for the evaluation of the maintainability of Matlab Simulink models that are frequently used in model-based development of embedded systems.",2007,0, 2402,Combinatorial Interaction Regression Testing: A Study of Test Case Generation and Prioritization,"Regression testing is an expensive part of the software maintenance process. Effective regression testing techniques select and order (or prioritize) test cases between successive releases of a program. However, selection and prioritization are dependent on the quality of the initial test suite. An effective and cost efficient test generation technique is combinatorial interaction testing, CIT, which systematically samples all t-way combinations of input parameters. Research on CIT, to date, has focused on single version software systems. There has been little work that empirically assesses the use of CIT test generation as the basis for selection or prioritization. In this paper we examine the effectiveness of CIT across multiple versions of two software subjects. Our results show that CIT performs well in finding seeded faults when compared with an exhaustive test set. We examine several CIT prioritization techniques and compare them with a re-generation/prioritization technique. 
We find that prioritized and re-generated/prioritized CIT test suites may find faults earlier than unordered CIT test suites, although the re-generated/prioritized test suites sometimes exhibit decreased fault detection.",2007,0, 2403,Fault Detection Probability Analysis for Coverage-Based Test Suite Reduction,"Test suite reduction seeks to reduce the number of test cases in a test suite while retaining a high percentage of the original suite's fault detection effectiveness. Most approaches to this problem are based on eliminating test cases that are redundant relative to some coverage criterion. The effectiveness of applying various coverage criteria in test suite reduction is traditionally based on empirical comparison of two metrics derived from the full and reduced test suites and information about a set of known faults: (1) percentage size reduction and (2) percentage fault detection reduction, neither of which quantitatively takes test coverage data into account. Consequently, no existing measure expresses the likelihood of various coverage criteria to force coverage-based reduction to retain test cases that expose specific faults. In this paper, we develop and empirically evaluate, using a number of different coverage criteria, a new metric based on the """"average expected probability of finding a fault"""" in a reduced test suite. Our results indicate that the average probability of detecting each fault shows promise for identifying coverage criteria that work well for test suite reduction.",2007,0, 2404,A User-centric Applications Sharing Model on Pervasive Computing,"Pervasive computing is booming research field, in which innovative techniques and applications are continuously forming to provide users with high quality ambient and personalized services. Applications available for end-users have become increasingly abundant, distributed and heterogeneous in pervasive computing environment. How to share and utilize these applications is a significant research problem in pervasive computing environment. This paper puts forward a User-centric Applications Sharing Model (U-ASM). The U-ASM mainly focuses on abstracting the applications from service providers and end-users, and then utilizes virtualization technology to encapsulate applications and uses ontology organize distributed applications as a logic unit with semantic relationships so that it can provide better quality of service for end-users. The research result has been applied in R&D Infrastructure and Facility Development of Ministry of Science and Technology and has great flexibility and extensibility.",2007,0, 2405,Distributed Diagnosis of Failures in a Three Tier E-Commerce System,"For dependability outages in distributed Internet infrastructures, it is often not enough to detect a failure, but it is also required to diagnose it, i.e., to identify its source. Complex applications deployed in multi-tier environments make diagnosis challenging because of fast error propagation, black-box applications, high diagnosis delay, the amount of states that can be maintained, and imperfect diagnostic tests. Here, we propose a probabilistic diagnosis model for arbitrary failures in components of a distributed application. The monitoring system (the Monitor) passively observes the message exchanges between the components and, at runtime, performs a probabilistic diagnosis of the component that was the root cause of a failure. 
We demonstrate the approach by applying it to the Pet Store J2EE application, and we compare it with Pinpoint by quantifying latency and accuracy in both systems. The Monitor outperforms Pinpoint by achieving comparably accurate diagnosis with higher precision in shorter time.",2007,0, 2406,Partial Disk Failures: Using Software to Analyze Physical Damage,"A good understanding of disk failures is crucial to ensure a reliable storage of data. There have been numerous studies characterizing disk failures under the common assumption that failed disks are generally unusable. Contrary to this assumption, partial disk failures are very common, e.g., caused by a head crash resulting in a small number of inaccessible disk sectors. Nevertheless, the damage can sometimes be catastrophic if the file system meta-data were among the affected sectors. As disk density rapidly increases, the likelihood of losing data also rises. This paper describes our experience in analyzing partial disk failures using the physical locations of damaged disk sectors to assess the extent and characteristics of the damage on disk platter surfaces. Based on our findings, we propose several fault-tolerance techniques to proactively guard against permanent data loss due to partial disk failures.",2007,0, 2407,Study and Application of FTA Software System,"Fault tree analysis (FTA) is an effective technique to analyze the reliability of a complex system. Through selecting reasonable top events, constructing the fault tree of the system, and carrying out quantitative analysis of the fault tree, the information about the system under study can be systematically understood. First, this paper studies how to calculate the occurrence probability of the top event and compute the importance degree of a basic event. Then this paper puts forward the implementation algorithm of the FTA software system. Also, it analyzes and designs the fault tree class, the minimal cut set calculation module, the tree drawing module, and the reliability characteristic calculation class. On the basis of the above study, the software system for Fault Tree Analysis is established. Making use of the fault tree analysis system, the reliability of the intelligent electrical apparatus is analyzed. The fault tree of the intelligent electrical apparatus system is established according to its composition and the related test data. It can find the minimal cut sets and obtain the occurrence probability and the configuration importance degree. These studies are helpful for the reliability design of the intelligent electrical apparatus.",2007,0, 2408,Power-Aware Control Flow Checking Compilation: Using Less Branches to Reduce Power Dissipation,"Satellite-borne embedded systems require low power consumption and reliability in the space radiation environment. Control flow checking is an effective way for running systems to prevent breakdowns caused by Single Event Upsets. Traditional software control flow checking uses a large number of branch instructions to detect errors, and thus incurs a large power dissipation overhead. In this paper, a basic block partition method is suggested. With this partition method, branch instructions are reduced greatly, while high error detection coverage remains ensured. 
The simulation results show that, compared with the traditional Control Flow Checking by Software Signatures (CFCSS) algorithm, the improved algorithm can reduce total branch instructions by over 10% and reduce power dissipation by nearly 9%, without decreasing the error detection coverage.",2007,0, 2409,An Unsupervised Intrusion Detection Method Combined Clustering with Chaos Simulated Annealing,"Keeping networks secure has never been such an imperative task as it is today. Threats come from hardware failures, software flaws, tentative probing and malicious attacks. In this paper, a new detection method, Intrusion Detection based on Unsupervised Clustering and Chaos Simulated Annealing algorithm (IDCCSA), is proposed. As a novel optimization technique, chaos has gained much attention and some applications during the past decade. For a given energy or cost function, by following chaotic ergodic orbits, a chaotic dynamic system may eventually reach the global optimum or its good approximation with high probability. To enhance the performance of simulated annealing, which is used to find a near-optimal partitioning clustering, a simulated annealing algorithm incorporating chaos is proposed. Experiments with the KDD Cup 1999 data show that simulated annealing combined with chaos can effectively enhance the search efficiency and greatly improve the detection quality.",2007,0, 2410,A Fault Detection Mechanism for Fault-Tolerant SOA-Based Applications,"Fault tolerance is an important capability for SOA-based applications, since it ensures the dynamic composition of services and improves the dependability of SOA-based applications. Fault detection is the first step of fault tolerance, so this paper focuses on fault detection and puts forward a fault detection mechanism, which is based on the theories of artificial neural networks and probability change point analysis rather than static service descriptions, to detect the services that fail to satisfy performance requirements at runtime. This paper also gives a reference model of the fault-tolerance control center of the enterprise service bus.",2007,0, 2411,A Constructive RBF Neural Network for Estimating the Probability of Defects in Software Modules,"Much of the current research in software defect prediction focuses on building classifiers to predict only whether a software module is fault-prone or not. Using these techniques, the effort to test the software is directed at modules that are labelled as fault-prone by the classifier. This paper introduces a novel algorithm based on constructive RBF neural networks aimed at predicting the probability of errors in fault-prone modules; it is called RBF-DDA with Probabilistic Outputs and is an extension of RBF-DDA neural networks. The advantage of our method is that we can inform the test team of the probability of defect in a module, instead of indicating only if the module is fault-prone or not. Experiments carried out with static code measures from well-known software defect datasets from NASA show the effectiveness of the proposed method. We also compared the performance of the proposed method in software defect prediction with kNN and two of its variants, the S-POC-NN and R-POC-NN. 
The experimental results showed that the proposed method outperforms both S-POC-NN and R-POC-NN and that it is equivalent to kNN in terms of performance with the advantage of producing less complex classifiers.",2007,0, 2412,Defect prevention and detection in software for automated test equipment,"Software for automated test equipment can be tedious and monotonous making it just as error-prone as other types of software. Active defect prevention and detection are important for test applications. Incomplete or unclear requirements, a cryptic syntax, variability in syntax or structure, and changing requirements are among the problems encountered in test applications for one tester. These issues increase the probability of error introduction during test application development. This paper describes a test application development tool designed to address these issues for the PT3800 tester, a continuity and insulation resistance tester. The tool was designed with powerful built-in defect prevention and detection capabilities. A reduction in rework and a two-fold increase in productivity are the results. The defect prevention and detection capabilities are described along with lessons learned and their applicability to other test equipment software.",2007,0, 2413,Automatic segmentation and band detection of protein images based on the standard deviation profile and its derivative,"Gel electrophoresis has significantly influenced the progress achieved in genetic studies over the last decade. Image processing techniques that are commonly used to analyze gel electrophoresis images require mainly three steps: band detection, band matching, and quantification and comparison. Although several techniques have been proposed to fully automate all steps, errors in band detection and, hence, in quantification are still important issues to address. In order to detect bands, many techniques were used, including image segmentation. In this paper, we present two novel, fully-automated techniques based on the standard deviation and its derivative to perform segmentation and to detect protein bands. Results show that even for poor quality images with faint bands, segmentation and detection are highly accurate.",2007,0, 2414,Are Two Heads Better than One? On the Effectiveness of Pair Programming,"Pair programming is a collaborative approach that makes working in pairs rather than individually the primary work style for code development. Because PP is a radically different approach than many developers are used to, it can be hard to predict the effects when a team switches to PP. Because projects focus on different things, this article concentrates on understanding general aspects related to effectiveness, specifically project duration, effort, and quality. Not unexpectedly, our meta-analysis showed that the question of whether two heads are better than one isn't precise enough to be meaningful. Given the evidence, the best answer is """"it depends"""" - on both the programmer's expertise and the complexity of the system and tasks to be solved. Two heads are better than one for achieving correctness on highly complex programming tasks. They might also have a time gain on simpler tasks. Additional studies would be useful. For example, further investigation is clearly needed into the interaction of complexity and programmer experience and how they affect the appropriateness of a PP approach; our current understanding of this phenomenon rests chiefly on a single (although large) study. 
Only by understanding what makes pairs work and what makes them less efficient can we take steps to provide beneficial work conditions, to avoid detrimental conditions, and to avoid pairing altogether when conditions are detrimental. With the right cooks and the right combination of ingredients, the broth has the potential to be very good indeed.",2007,0, 2415,Using Software Reliability Growth Models in Practice,"The amount of software in consumer electronics has grown from thousands to millions of lines of source code over the past decade. Up to a million of these products are manufactured each month for a successful mobile phone or television. Development organizations must meet two challenging requirements at the same time: be predictable to meet market windows and provide nearly fault-free software. Software reliability is the probability of failure-free operation for a specified period of time in a specified environment. The process of finding and removing faults to improve the software reliability can be described by a mathematical relationship called a software reliability growth model (SRGM). Our goal is to assess the practical application of SRGMs during integration and test and compare them with other estimation methods. We empirically validated SRGMs' usability in a software development environment. During final test phases for three embedded software projects, software reliability growth models predicted remaining faults in the software, supporting management's decisions.",2007,0, 2416,Symbolic Generation of Models for Microwave Software Tools,"In this paper, we present a use of computer algebra systems (CAS) to derive the scattering parameter description of equivalent networks which are models of linear microwave devices, such as planar transmission line discontinuities. In this case, the use of an automated symbolic technique is a natural choice because manual derivation of scattering parameters is practically impossible, error-prone, and a fatiguing labour. Benefits of the presented symbolic approach are highlighted from the viewpoint of a microwave software tool. We exemplify our original symbolic algorithm by analyzing a four-port network that represents a microstrip cross-junction.",2007,0, 2417,Mechatronic Software Testing,"The paper describes mechatronic software testing techniques. Such testing is different from common software testing and includes special features of mechatronic systems. It may be put into effect with the aim of improving quality, assessing reliability, and checking and confirming correctness. Various adapted techniques may be employed for the purpose, for example, the white-box or black-box technique for correctness testing, endurance and stress testing for reliability testing, or the use of ready-made programs for performance testing.",2007,0, 2418,Assessing and Improving the Quality of Document Images Acquired with Portable Digital Cameras,"Professionals and students of many different areas have started to use portable digital cameras to take photos of documents, instead of photocopying them. This article analyses the quality of such documents for optical character recognition and proposes ways of improving their transcription and readability.",2007,0, 2419,Automatic Document Logo Detection,"Automatic logo detection and recognition continues to be of great interest to the document retrieval community as it enables effective identification of the source of a document. 
In this paper, we propose a new approach to logo detection and extraction in document images that robustly classifies and precisely localizes logos using a boosting strategy across multiple image scales. At a coarse scale, a trained Fisher classifier performs initial classification using features from document context and connected components. Each logo candidate region is further classified at successively finer scales by a cascade of simple classifiers, which allows false alarms to be discarded and the detected region to be refined. Our approach is segmentation free and lay-out independent. We define a meaningful evaluation metric to measure the quality of logo detection using labeled groundtruth. We demonstrate the effectiveness of our approach using a large collection of real-world documents.",2007,0, 2420,A Best Practice Guide to Resource Forecasting for Computing Systems,"Recently, measurement-based studies of software systems have proliferated, reflecting an increasingly empirical focus on system availability, reliability, aging, and fault tolerance. However, it is a nontrivial, error-prone, arduous, and time-consuming task even for experienced system administrators, and statistical analysts to know what a reasonable set of steps should include to model, and successfully predict performance variables, or system failures of a complex software system. Reported results are fragmented, and focus on applying statistical regression techniques to monitored numerical system data. In this paper, we propose a best practice guide for building empirical models based on our experience with forecasting Apache web server performance variables, and forecasting call availability of a real-world telecommunication system. To substantiate the presented guide, and to demonstrate our approach in a step by step manner, we model, and predict the response time, and the amount of free physical memory of an Apache web server system, as well as the call availability of an industrial telecommunication system. Additionally, we present concrete results for a) variable selection where we cross benchmark three procedures, b) empirical model building where we cross benchmark four techniques, and c) sensitivity analysis. This best practice guide intends to assist in configuring modeling approaches systematically for best estimation, and prediction results.",2007,0, 2421,Detecting and Exploiting Symmetry in Discrete-State Markov Models,"Dependable systems are usually designed with multiple instances of components or logical processes, and often possess symmetries that may be exploited in model-based evaluation. The problem of how best to exploit symmetry in models has received much attention from the modeling community, but no solution has garnered widespread support, primarily because each solution is limited in terms of either the types of symmetry that can be exploited, or the difficulty of translating from the system description to the model formalism. We propose a new method for detecting and exploiting model symmetry in which 1) models retain the structure of the system, and 2) all symmetry inherent in the structure of the model can be detected and exploited for the purposes of state-space reduction. Composed models are constructed from models through specification of connections between models that correspond to shared state fragments. 
The composed model is interpreted as an undirected graph, and results from group theory, and graph theory are used to develop procedures for automatically detecting, and exploiting all symmetries in the composed model. We discuss the necessary algorithms to detect and exploit model symmetry, and provide a proof that the theory generates an equivalent model. After a thorough analysis of the added complexity, a state-space generator which implements these algorithms within Mobius is then presented.",2007,0, 2422,Compensated Signature Embedding Based Multimedia Content Authentication System,"Digital content authentication and preservation is an extremely challenging task in realizing decentralized digital libraries. The concept of compensated signature embedding is proposed to develop an effective multimedia content authentication system. The proposed system does not require any third party reference or side information. Towards this end, a content-based fragile signature is derived and embedded into the media using a robust watermarking technique. Since the embedding process introduces distortion in the media, it may lead to authentication failure. We propose to adjust the media samples iteratively or using a closed form process to compensate for the embedding distortion. Using an example image authentication system, we show that the proposed scheme is highly effective in detecting even minor modifications to the media.",2007,0, 2423,Detecting Hidden Messages Using Image Power Spectrum,"In this paper we present a study of the effects of data hiding on the power spectra of digital images. Several imperceptible data hiding techniques have been proposed that provide strong visual security and robustness. Although imperceptible to the human visual system, the hidden data affects the natural qualities of the image, such as the image power spectrum. In this study, we classify a large image database into a number of categories. For each category, we calculate the slope of the power spectra for the marked and unmarked images. We note that in the case of spatial data hiding the average slope of the power spectra of marked images is 54.93% higher compared to that of the unmarked images. Also in the cases of transform domain data hiding we note that the average slope of the power spectra of the images marked using a discrete cosine (wavelet) transform (DC(W)T) based technique is higher by 9.12% (38.39%). We also test a commercially available data hiding software namely Digimarc Corp.'s MyPictureMarc 2005 V1.0. In this case the average power spectra of the marked images is 35.99% higher. Hence we see that the proposed scheme is a tool for universal steganalysis with varying degrees of success depending on the type of embedding.",2007,0, 2424,Finding Two Optimal Positions of a Hand-Held Camera for the Best Reconstruction,"This paper proposes an experimental study to find the two optimal positions of a hand-held digital camera for the capture of the geometry and texture of an object. Using our improved 3D reconstruction pipeline based on a semi-dense matching between a pair of uncalibrated images, a layout of twenty-five camera positions is tested in real conditions of image acquisition and also completely simulated. The reconstruction quality is measured by assessing the accuracy of the final 3D structure in accordance with a ground truth. The results provide the optimal capturing layout.
Another interesting conclusion is that the accuracy of the reconstruction does not change much in the nearby area around the best position, which enables the hand-held capture to not strictly respect this configuration.",2007,0, 2425,Harbor: Software-based Memory Protection For Sensor Nodes,"Many sensor nodes contain resource constrained microcontrollers where user level applications, operating system components, and device drivers share a single address space with no form of hardware memory protection. Programming errors in one application can easily corrupt the state of the operating system or other applications. In this paper, we propose Harbor, a memory protection system that prevents many forms of memory corruption. We use software based fault isolation (""sandboxing"") to restrict application memory accesses and control flow to protection domains within the address space. A flexible and efficient memory map data structure records ownership and layout information for memory regions; writes are validated using the memory map. Control flow integrity is preserved by maintaining a safe stack that stores return addresses in a protected memory region. Run-time checks validate computed control flow instructions. Cross domain calls perform low-overhead control transfers between domains. Checks are introduced by rewriting an application's compiled binary. The sandboxed result is verified on the sensor node before it is admitted for execution. Harbor's fault isolation properties depend only on the correctness of this verifier and the Harbor runtime. We have implemented and tested Harbor on the SOS operating system. Harbor detected and prevented memory corruption caused by programming errors in application modules that had been in use for several months. Harbor's overhead, though high, is less than that of application-specific virtual machines, and reasonable for typical sensor workloads.",2007,0, 2426,Protection of Induction Motor Using PLC,"The goal of this paper is to protect induction motors against possible failures by increasing the reliability, the efficiency, and the performance. The proposed approach is a sensor-based technique. For this purpose, currents, voltages, speed and temperature values of the induction motor were measured with sensors. When any fault condition is detected during operation of the motor, PLC controlled on-line operation system activates immediately. The performance of the protection system proposed is discussed by means of application results. The motor protection achieved in the study can be faster than the classical techniques and applied to larger motors easily after making small modifications on both software and hardware.",2007,0, 2427,Empirical Validation of a Web Fault Taxonomy and its usage for Fault Seeding,"The increasing demand for reliable Web applications gives a central role to Web testing. Most of the existing works are focused on the definition of novel testing techniques, specifically tailored to the Web. However, no attempt was carried out so far to understand the specific nature of Web faults. This is of fundamental importance to assess the effectiveness of the proposed Web testing techniques. In this paper, we describe the process followed in the construction of a Web fault taxonomy. After the initial, top-down construction, the taxonomy was subjected to four iterations of empirical validation aimed at refining it and at understanding its effectiveness in bug classification.
The final taxonomy is publicly available for consultation and editing on a Wiki page. Testers can use it in the definition of test cases that target specific classes of Web faults. Researchers can use it to build fault seeding tools that inject artificial faults which resemble the real ones.",2007,0, 2428,Automatic Test Case Generation for Multi-tier Web Applications,"Testing multi-tier Web applications is challenging yet critical. First, because of inter-tier interactions, a fault in one tier may propagate to the others. Second, Web applications are often continuously evolving. Testing such emerging applications must efficiently generate test cases to catch up with fast-paced evolution and effectively capture cross-tier faults. We present a technique based on an inter-connection dependence model to generate sequences of Web pages that are potentially fault prone. To ensure that these sequences of Web pages will be exercised as designated, the path condition for each execution path is computed and used to determine the domain of each input parameter and database state. Input data for each Web page can then be automatically generated by using boundary value analysis. The test suite generated by our technique guarantees that inter-tier interactions will be adequately tested.",2007,0, 2429,A WSAD-Based Fact Extractor for J2EE Web Projects,"This paper describes our implementation of a fact extractor for J2EE Web applications. Fact extractors are part of each reverse engineering toolset; their output is used by reverse engineering analyzers and visualizers. Our fact extractor has been implemented on top of IBM's Websphere Application Developer (WSAD). The extractor's schema has been defined with the Eclipse Modeling Framework (EMF) using a graphical modeling approach. The extractor extensively reuses functionality provided by WSAD, EMF, and Eclipse, and is an example of component-based development. In this paper, we show how we used this development approach to accomplish the construction of our fact extractor, which, as a result, could be realized with significantly less code and in shorter time compared to a homegrown extractor implemented from scratch. We have assessed our extractor and the produced facts with a table-based and a graph-based visualizer. Both visualizers are integrated with Eclipse.",2007,0, 2430,Improving Usability of Web Pages for Blinds,"Warranting the access to Web contents to any citizen, even to people with physical disabilities, is a major concern of many government organizations. Although guidelines for Web developers have been proposed by international organisations (such as the W3C) to make Web site contents accessible, the wider part of today's Web sites are not completely usable by people with sight disabilities. In this paper, two different approaches for dynamically transforming Web pages into aural Web pages, i.e. pages that are optimised for blind people, will be presented. The approaches exploit heuristic techniques for summarising Web pages contents and providing them to blind users in order to improve the usability of Web sites. The techniques have been validated in an experiment where usability metrics have been used to assess the effectiveness of the Web page transformation techniques.",2007,0, 2431,Multi-Processor System-Level Synthesis for Multiple Applications on Platform FPGA,"Multiprocessor systems-on-chip (MPSoC) are being developed in increasing numbers to support the high number of applications running on modern embedded systems.
Designing and programming such systems prove to be a major challenge. Most of the current design methodologies rely on creating the design by hand, and are therefore error-prone and time-consuming. This also limits the number of design points that can be explored. While some efforts have been made to automate the flow and raise the abstraction level, these are still limited to single-application designs. In this paper, we present a design methodology to generate and program MPSoC designs in a systematic and automated way for multiple applications. The architecture is automatically inferred from the application specifications, and customized for it. The flow is ideal for fast design space exploration (DSE) in MPSoC systems. We present results of a case study to compute the buffer-throughput trade-offs in real-life applications, H263 and JPEG decoders. The generation of the entire project takes about 100 ms, and the whole DSE was completed in 45 minutes, including the FPGA mapping and synthesis.",2007,0, 2432,"The Andres Project: Analysis and Design of Run-Time Reconfigurable, Heterogeneous Systems","Today's heterogeneous embedded systems combine components from different domains, such as software, analogue hardware and digital hardware. The design and implementation of these systems is still a complex and error-prone task due to the different Models of Computations (MoCs), design languages and tools associated with each of the domains. Though making such systems adaptive is technologically feasible, most of the current design methodologies do not explicitly support adaptive architectures. This paper presents the ANDRES project. The main objective of ANDRES is the development of a seamless design flow for adaptive heterogeneous embedded systems (AHES) based on the modelling language SystemC. Using domain-specific modelling extensions and libraries, ANDRES will provide means to efficiently use and exploit adaptivity in embedded system design. The design flow is completed by a methodology and tools for automatic hardware and software synthesis for adaptive architectures.",2007,0, 2433,Data Quality Monitoring Framework for the ATLAS Experiment at the LHC,"Data quality monitoring (DQM) is an important and integral part of the data taking process of HEP experiments. DQM involves automated analysis of monitoring data through user-defined algorithms and relaying the summary of the analysis results while data is being processed. When DQM occurs in the online environment, it provides the shifter with current run information that can be used to overcome problems early on. During the offline reconstruction, more complex analysis of physics quantities is performed by DQM, and the results are used to assess the quality of the reconstructed data. The ATLAS data quality monitoring framework (DQMF) is a distributed software system providing DQM functionality in the online environment. The DQMF has a scalable architecture achieved by distributing execution of the analysis algorithms over a configurable number of DQMF agents running on different nodes connected over the network. The core part of the DQMF is designed to only have dependence on software that is common between online and offline (such as ROOT) and therefore is used in the offline framework as well.
This paper describes the main requirements, the architectural design, and the implementation of the DQMF.",2007,0,2606 2434,Size and Frequency of Class Change from a Refactoring Perspective,"A previous study by Bieman et al. investigated whether large, object-oriented classes were more susceptible to change than smaller classes. The measure of change used in the study was the frequency with which the features of a class had been changed over a specific period of time. From a refactoring perspective, the frequency of class change is of value. But even for a relatively simple refactoring such as 'rename method', multiple classes may undergo minor modification without any net increase in class (and system) size. In this paper, we suggest that the combination of 'versions of a class and number of added lines of code' in the bad code 'smell' detection process may give a better impression of which classes are most suitable candidates for refactoring; as such, effort in detecting bad code smells should apply to classes with a high growth rate as well as a high change frequency. To support our investigation, data relating to changes from 161 Java classes was collected. Results concluded that it is not necessarily the case that large classes are more change-prone than relatively smaller classes. Moreover, the bad code smell detection process is informed by using the combination of change frequency and class size as a heuristic.",2007,0, 2435,Defining Software Evolvability from a Free/Open-Source Software,"This paper studies various sources of information to identify factors that influence the evolvability of Free and Open-Source Software (FIOSS) endeavors. The sources reviewed to extract criteria are (1) interviews with FIOSS integrators, (2) the scientific literature, and (3) existing standard, norms as well as (4) three quality assessment methodologies specific to FIOSS, namely, QSOS, OpenBRR and Open Source Maturity Model. This effort fits in the larger scope of QUALOSS, a research project funded by the European Commission, whose goal is to develop a methodology to assess the evolvability and robustness of FIOSS endeavors.",2007,0, 2436,A Requirement Level Modification Analysis Support Framework,"Modification analysis is an essential phase of most software maintenance processes, requiring decision makers to perform and predict potential change impacts, feasibility and costs associated with a potential modification request. The majority of existing techniques and tools supporting modification analysis focus on source code level analysis and require an understanding of the system and its implementation. In this research, we present a novel approach to support the identification of potential modification and re-testing efforts associated with a modification request, without the need for analyzing or understanding the system source code. We combine Use Case Maps with Formal Concept Analysis to provide a unique modification analysis framework that can assist decision makers during modification analysis at the requirements level. We demonstrate the applicability of our approach on a telephony system case study.
Modern Web applications are characterized by dynamically evolving architectures of loosely coupled content sources, components and services from multiple organizations. The evolution of such ecosystems poses a problem to management and maintenance. Up-to-date architectural information about the components and their relationships is required in different places within the system. However, this is problematic because manual propagation of changes in system descriptions is both costly and error-prone. In this paper, we therefore describe how the publish-subscribe principle can be applied to automate the handling of architecture changes via a loosely-coupled event mechanism. We investigate relevant architecture changes and propose a concrete system of subscription topics and event compositions. The practicality of the approach is demonstrated by means of an implemented support system that is compliant with the WS-notification specification.",2007,0, 2438,Assessing the Real Worth of Software Tools to Check the Healthiness Conditions of Automotive Software,"There are a number of software-controlled features in today's automotive vehicles making them comfortable, safer, entertaining, informative and even greener! The number of features is rapidly growing and so is the software content of automotive vehicles to meet these requirements. The software code that realises any one feature is, nowadays, often distributed across several Electronic Control Units (ECUs) as well. In order to produce highly reliable automotive vehicles, their increasingly complex software has to be of high-quality and this requires sophisticated tools and techniques within the automotive industry. One such category of tools statically checks whether the software developed holds certain properties (healthiness conditions), such as checking that a variable is set before it is read, and that arithmetic operations do not lead to overflow. These tools typically generate a list of issues, which highlight potential areas of the code, where the healthiness conditions being checked might fail. In practice, the list of generated issues typically contains a significant number of false positives; i.e. issues that cannot lead to a genuine failure of a healthiness condition. This paper discusses the design of objective experiments and the initial stages of an ongoing automotive industry study to assess the real worth of such tools. Towards this end, relevant concepts such as healthiness conditions for software are explained and the various criteria used for the objective experiments are defined giving their rationale.",2007,0, 2439,Improved Non-parametric Subtraction for Detection of Wafer Defect,"Automated defect inspection for wafer has been developed since the 1990's to replace defect detection by human eye for low-cost and high-quality. Defects are detected by comparing an inspected die with a reference die in application of wafer defect inspection. Referential methods compare with reference image by computing the intensity difference pixel by pixel between a reference image and an inspected image or measuring the similarity between two images using normalized cross correlation or eigen value. These methods are problematic for defect detection due to illumination change, noise and alignment error. To reduce the sensitivity of illumination change and noise, the new image subtraction called non-parametric subtraction was proposed. 
Non-parametric subtraction can solve the problem of illumination change and noise, but sensitivity of alignment remains unsolved. This paper introduces a new approach less sensitive to alignment using non-parametric subtraction for wafer defect inspection.",2007,0, 2440,Providing Support for Model Composition in Metamodels,"In aspect-oriented modeling (AOM), a design is described using a set of design views. It is sometimes necessary to compose the views to obtain an integrated view that can be analyzed by tools. Analysis can uncover conflicts and interactions that give rise to undesirable emergent behavior. Design models tend to have complex structures and thus manual model composition can be arduous and error-prone. Tools that automate significant parts of model composition are needed if AOM is to gain industrial acceptance. One way of providing automated support for composing models written in a particular language is to define model composition behavior in the metamodel defining the language. In this paper we show how this can be done by extending the UML metamodel with behavior describing symmetric, signature-based composition of UML model elements. We also describe an implementation of the metamodel that supports systematic composition of UML class models.",2007,0, 2441,Automated Model-Based Configuration of Enterprise Java Applications,"The decentralized process of configuring enterprise applications is complex and error-prone, involving multiple participants/roles and numerous configuration changes across multiple files, application server settings, and database decisions. This paper describes an approach to automated enterprise application configuration that uses a feature model, executes a series of probes to verify configuration properties, formalizes feature selection as a constraint satisfaction problem, and applies constraint logic programming techniques to derive a correct application configuration. To validate the approach, we developed a configuration engine, called Fresh, for enterprise Java applications and conducted experiments to measure how effectively Fresh can configure the canonical Java Pet Store application. Our results show that Fresh reduces the number of lines of hand written XML code by up to 92% and the total number of configuration steps by up to 72%.",2007,0, 2442,A Scalable Parallel Deduplication Algorithm,"The identification of replicas in a database is fundamental to improve the quality of the information. Deduplication is the task of identifying replicas in a database that refer to the same real world entity. This process is not always trivial, because data may be corrupted during their gathering, storing or even manipulation. Problems such as misspelled names, data truncation, data input in a wrong format, lack of conventions (like how to abbreviate a name), missing data or even fraud may lead to the insertion of replicas in a database. The deduplication process may be very hard, if not impossible, to be performed manually, since actual databases may have hundreds of millions of records. In this paper, we present our parallel deduplication algorithm, called FERAPARDA. By using probabilistic record linkage, we were able to successfully detect replicas in synthetic datasets with more than 1 million records in about 7 minutes using a 20-computer cluster, achieving an almost linear speedup. 
We believe that our results do not have a parallel in the literature when it comes to the size of the data set and the processing time.",2007,0, 2443,Assessing the Object-level behavioral complexity in Object Relational Databases,"Object Relational Database Management Systems model a set of interrelated objects using references and collection attributes. The static metrics capture the internal quality of the database schema at the class-level during design time. Complex databases like ORDB exhibit dynamism during runtime and hence require performance-level monitoring. This is achieved by measuring the access and invocations of the objects during runtime, thus assessing the behavior of the objects. Runtime coupling and cohesion metrics are deemed as attributes of measuring the Object-level behavioral complexity. In this work, we evaluate the runtime coupling and cohesion metrics and assess their influence in measuring the behavioral complexity of the objects in ORDB. Further, these internal measures of object behavior are externalized in measuring the performance of the database in entirety. Experiments on sample ORDB schemas are conducted using statistical analysis and correlation clustering techniques to assess the behavior of the objects in real time. The results indicate the significance of the object behavior in influencing the database performance. The scope of this work and the future works in extending this research form the concluding note.",2007,0, 2444,Estimating the Required Code Inspection Team Size,"Code inspection is considered an efficient method for detecting faults in software code documents. The number of faults not detected by inspection should be small. Several methods have been suggested for estimating the number of undetected faults. These methods include the fault injection method that is considered to be quite laborious, capture recapture methods that avoid the problems of code injection and the Detection Profile Method for cases where capture recapture methods do not provide sufficient accuracy. The Kantorowitz estimator is based on a probabilistic model of the inspection process and enables estimating the number of inspectors required to detect a specified fraction of all the faults of a document as well as the number of undetected faults. This estimator has proven to be satisfactory in inspection of user requirements documents. The experiments reported in this study suggest that it is also useful for code inspection.",2007,0, 2445,Technique Integration for Requirements Assessment,"In determining whether to permit a safety-critical software system to be certified and in performing independent verification and validation (IV&V) of safety- or mission-critical systems, the requirements traceability matrix (RTM) delivered by the developer must be assessed for accuracy. The current state of the practice is to perform this work manually, or with the help of general-purpose tools such as word processors and spreadsheets. Such work is error-prone and person-power intensive. In this paper, we extend our prior work in application of Information Retrieval (IR) methods for candidate link generation to the problem of RTM accuracy assessment. We build voting committees from five IR methods, and use a variety of voting schemes to accept or reject links from given candidate RTMs. We report on the results of two experiments. 
In the first experiment, we used 25 candidate RTMs built by human analysts for a small tracing task involving a portion of a NASA scientific instrument specification. In the second experiment, we randomly seeded faults in the RTM for the entire specification. Results of the experiments are presented.",2007,0, 2446,QoS Proxy Architecture for Real Time RPC with Traffic Prediction,"Currently, there are many research works focused on the creation of architectures that support QoS guarantees for real time multimedia applications. However, those architectures do not support the traffic of RT-RPCs whose requirements (i.e. priority and deadline) are harder than those of multimedia applications. The aim of this work is to propose an architecture of QoS proxy for RT-RPCs that uses Box-Jenkins time series models in order to predict future traffic characteristics of RT-RPCs that pass through the proxy, allowing the anticipated allocation of the necessary resources to attend the predicted demand and the choice of policies aimed at the adaptation of the proxy to the states of its network environment.",2007,0, 2447,Statistical Assessment of Global and Local Cylinder Wear,"Assessment of cylindricity has been traditionally performed on the basis of cylindrical crowns containing a set of points that are supposed to belong to a controlled cylinder. As such, all sampled points must lie within a crown. In contrast, the present paper analyzes the cylindricity for wear applications, in which a statistical trend is assessed, rather than to assure that all points fall within a given tolerance. Principal component analysis is used to identify the central axis of the sampled cylinder, allowing to find the actual (expected value of the) radius and axis of the cylinder. Application of k-cluster and transitive closure algorithms allow to identify particular areas of the cylinder which are specially deformed. For both the local areas and the global cylinder, a quantile analysis allows to numerically grade the degree of deformation of the cylinder. The algorithms implemented are part of the CYLWEAR system and used to assess local and global wear cylinders.",2007,0, 2448,Developing Intentional Systems with the PRACTIONIST Framework,"Agent-based systems have become a very attractive approach for dealing with the complexity of modern software applications and have proved to be useful and successful in some industrial domains. However, engineering such systems is still a challenge due to the lack of effective tools and actual implementations of very interesting and fascinating theories and models. In this area the so-called intentional stance of systems can be very helpful to efficiently predict, explain, and define the behaviour of complex systems, without having to understand how they actually work, but explaining them in terms of some mental qualities or attitudes, rather than their physical or design stance. In this paper we present the PRACTIONIST framework, that supports the development of PRACTIcal reasONIng sySTems according to the BDI model of agency, which uses some mental attitudes such as beliefs, desires, and intentions to describe and specify the behaviour of system components. We adopt a goal-oriented approach and a clear separation between the deliberation phase and the means-ends reasoning, and consequently between the states of affairs to pursue and the way to do it. 
Moreover, PRACTIONIST allows developers to implement agents that are able to reason about their beliefs and the other agents' beliefs, expressed by modal logic formulas.",2007,0, 2449,Discovering Web Services Using Semantic Keywords,"With the increasing growth in popularity of Web services, the discovery of relevant services becomes a significant challenge. In order to enhance the service discovery, it is necessary that both the Web service description and the request for discovering a service explicitly declare their semantics. Some languages and frameworks have been developed to support rich semantic service descriptions and discovery using ontology concepts. However, the manual creation of such concepts is tedious and error-prone and many users accustomed to automatic tools might not want to invest their time in obtaining this knowledge. In this paper we propose a system that assists both service producers and service consumers in the discovery of semantic keywords which can be used to describe and discover Web services respectively. First, our system enhances semantically the list of keywords extracted from the elements that comprise the description of a Web service and the user keywords used to discover a service. Second, an ontology matching process is used to discover matchings between the ontological terms of a service description and a request for service selection. Third, a subsumption reasoning algorithm tries to find service description(s) which match the user request.",2007,0, 2450,Hardware Failure Virtualization Via Software Encoded Processing,"In future, the decreasing feature size will make it much more difficult to build reliable microprocessors. Economic pressure will most likely result in the reliability of microprocessors being tuned for the commodity market. Dedicated reliable hardware is very expensive and usually slower than commodity hardware. Thus, software implemented hardware fault tolerance (SIHFT) will become essential for building safe systems. Existing SIHFT approaches either are not broadly applicable or lack the ability to reliably deal with permanent hardware faults. In contrast, Forin (1989) introduced the vital coded microprocessor which reliably detects transient and permanent hardware failures, but is not applicable to arbitrary programs. It requires a dedicated development process and special hardware. We extend Forin's Vital Code, so that it is applicable to arbitrary binary code which enables us to apply it to existing binaries or automatically during compile time. Furthermore, our approach does not require special purpose hardware.",2007,0, 2451,The Design of a Multimedia Protocol Analysis Software Environment,"We have developed a variant of Estelle, called Time-Estelle which is able to express multimedia quality of service (QoS) parameters, synchronisation scenarios, and time-dependent and probabilistic behaviours of multimedia protocols. We have developed an approach to verifying a multimedia protocol specified in Time-Estelle. To predict the performance of a multimedia system, we have also developed a method for the performance analysis of a multimedia protocol specified in Time-Estelle. However, without the support of a software environment to automate the processes, verification and performance analysis methods would be very time-consuming. 
This paper describes the design of such a software environment.",2007,0, 2452,A Fault Detection Mechanism for Service-Oriented Architecture Based on Queueing Theory,"SOA is an ideal solution to application building, since it reuses as many existing services as possible. Fault tolerance is one important capability to ensure that SOA-based applications are highly reliable and available. However, fault tolerance is such a complex issue for most SOA providers that they hardly provide this capability in their products. This paper provides a queuing-theory-based algorithm for fault detection, which can be used to detect the services whose performance becomes unsatisfactory at runtime according to the QoS descriptor. Based on this algorithm, this paper also gives the reference models of the extended service and the architecture of fault-tolerance control center of enterprise services bus for SOA-based applications.",2007,0, 2453,Towards Automatic Measurement of Probabilistic Processes,In this paper we propose a metric for finite processes in a probabilistic extension of CSP. The kernel of the metric corresponds to trace equivalence and most of the operators in the process algebra are shown to satisfy the non-expansiveness property with respect to this metric. We also provide an algorithm to calculate the distance between two processes to a prescribed discount factor in polynomial time. The algorithm has been implemented in a tool that helps us to measure processes automatically.,2007,0, 2454,Nondeterministic Testing with Linear Model-Checker Counterexamples,"In model-based testing, software test-cases are derived from a formal specification. A popular technique is to use traces created by a model-checker as test-cases. This approach is fully automated and flexible with regard to the structure and type of test-cases. Nondeterministic models, however, pose a problem to testing with model-checkers. Even though a model-checker is able to cope with nondeterminism, the traces it returns make commitments at non-deterministic transitions. If a resulting test-case is executed on an implementation that takes a different, valid transition at such a nondeterministic choice, then the test-case would erroneously detect a fault. This paper discusses the extension of available model-checker based test-case generation methods so that the problem of nondeterminism can be overcome.",2007,0, 2455,Detecting Double Faults on Term and Literal in Boolean Expressions,"Fault-based testing aims at selecting test cases to guarantee the detection of certain prescribed faults in programs. The detection conditions of single faults have been studied and used in areas like developing test case selection strategies, establishing relationships between faults and investigating the fault coupling effect. It is common, however, for programmers to commit more than one fault. Our previous studies on the detection conditions of faults in Boolean expressions show that (1) some test case selection strategies developed for the detection of single faults can also detect all double faults related to terms, but (2) these strategies cannot guarantee to detect all double faults related to literals. This paper supplements our previous studies and completes our series of analysis of the detection condition of all double fault classes in Boolean expressions. Here we consider the fault detection conditions of combinations of two single faults, in which one is related to term and the other is related to literal. 
We find that all such faulty expressions, except two, can be detected by some test case selection strategies for single fault detection. Moreover, the two exception faulty expressions can be detected by existing strategies when used together with a supplementary strategy which we earlier developed to detect double literal faults.",2007,0, 2456,Synthesizing Component-Based WSN Applications via Automatic Combination of Code Optimization Techniques,"Wireless sensor network (WSN) applications sense events in-situ and compute results in-network. Their software components should run on platforms with stringent constraints on node resources. Developers often design their programs by trial-and-error with a view to meeting these constraints. Through numerous iterations, they manually measure and estimate how far the programs cannot fulfill the requirements, and make adjustments accordingly. Such manual process is time-consuming and error-prone. Automated support is necessary. Based on an existing task view that treats a WSN application as tasks and models resources as constraints, we propose a new component view that associates components with code optimization techniques and constraints. We develop algorithms to synthesize components running on nodes, fulfilling the constraints, and thus optimizing their quality. We evaluate our proposal by a simulation study adapted from a real-life WSN application. Keywords: Wireless sensor network, adaptive software design, resource constraint, code optimization technique.",2007,0, 2457,Automatic Quality Assessment of SRS Text by Means of a Decision-Tree-Based Text Classifier,"The success of a software project is largely dependent upon the quality of the Software Requirements Specification (SRS) document, which serves as a medium to communicate user requirements to the technical personnel responsible for developing the software. This paper addresses the problem of providing automated assistance for assessing the quality of textual requirements from an innovative point of view, namely through the use of a decision- tree-based text classifier, equipped with Natural Language Processing (NLP) tools. The objective is to apply the text classification technique to build a system for the automatic detection of ambiguity in SRS text based on the quality indicators defined in the quality model proposed in this paper. We believe that, with proper training, such a text classification system will prove to be of immense benefit in assessing SRS quality. To the authors' best knowledge, ours is the first documented attempt to apply the text classification technique for assessing the quality of software documents.",2007,0, 2458,A Multivariate Analysis of Static Code Attributes for Defect Prediction,"Defect prediction is important in order to reduce test times by allocating valuable test resources effectively. In this work, we propose a model using multivariate approaches in conjunction with Bayesian methods for defect predictions. The motivation behind using a multivariate approach is to overcome the independence assumption of univariate approaches about software attributes. Using Bayesian methods gives practitioners an idea about the defectiveness of software modules in a probabilistic framework rather than the hard classification methods such as decision trees. Furthermore the software attributes used in this work are chosen among the static code attributes that can easily be extracted from source code, which prevents human errors or subjectivity. 
These attributes are preprocessed with feature selection techniques to select the most relevant attributes for prediction. Finally we compared our proposed model with the best results reported so far on public datasets and we conclude that using multivariate approaches can perform better.",2007,0, 2459,Refinement of a Tool to Assess the Data Quality in Web Portals,"The Internet is now firmly established as an environment for the administration, exchange and publication of data. To support this, a great variety of Web applications have appeared, among these web portals. Numerous users worldwide make use of Web portals to obtain information for different purposes. These users, or data consumers, need to ensure that this information is suitable for the use to which they wish to put it. PDQM (portal data quality model) is a model for the assessment of portal data quality. It has been implemented in the PoDQA tool (portal data quality assessment tool), which can be accessed at http://podqa.webportalquality.com. In this paper we present the various refinements that it has been necessary to make in order to obtain a tool which is stable and able to make accurate and efficient calculations of the elements needed to assess the quality of the data of a Web portal.",2007,0, 2460,Uniform Selection of Feasible Paths as a Stochastic Constraint Problem,"Automatic structural test data generation is a real challenge of software testing. Statistical structural testing has been proposed to address this problem. This testing method aims at building an input probability distribution to maximize the coverage of some structural criteria. Under the all paths testing objective, statistical structural testing aims at selecting each feasible path of the program with the same probability. In this paper, we propose to model a uniform selector of feasible paths as a stochastic constraint program. Stochastic constraint programming is an interesting framework which combines stochastic decision problem and constraint solving. This paper reports on the translation of uniform selection of feasible paths problem into a stochastic constraint problem. An implementation which uses the library PCC(FD) of SICStus Prolog designed for this problem is detailed. First experimentations, conducted over a few academic examples, show the interest of our approach.",2007,0, 2461,Cohesion Metrics for Predicting Maintainability of Service-Oriented Software,"Although service-oriented computing (SOC) is a promising paradigm for developing enterprise software systems, existing research mostly assumes the existence of black box services with little attention given to the structural characteristics of the implementing software, potentially resulting in poor system maintainability. Whilst there has been some preliminary work examining coupling in a service-oriented context, there has to date been no such work on the structural property of cohesion. Consequently, this paper extends existing notions of cohesion in OO and procedural design in order to account for the unique characteristics of SOC, allowing the derivation of assumptions linking cohesion to the maintainability of service-oriented software. From these assumptions, a set of metrics are derived to quantify the degree of cohesion of service oriented design constructs. 
Such design level metrics are valuable because they allow the prediction of maintainability early in the SDLC.",2007,0, 2462,Statistical Metamorphic Testing: Testing Programs with Random Output by Means of Statistical Hypothesis Tests and Metamorphic Testing,"Testing software with random output is a challenging task as the output corresponding to a given input differs from execution to execution. Therefore, the usual approaches to software testing are not applicable to randomized software. Instead, statistical hypothesis tests have been proposed for testing those applications. To apply these statistical hypothesis tests, either knowledge about the theoretical values of statistical characteristics of the program output (e. g. the mean) or a reference implementation (e. g. a legacy system) are required to apply statistical hypothesis tests. But often, both are not available. In the present paper, it is discussed how a testing method called Metamorphic Testing can be used to construct statistical hypothesis tests without knowing exact theoretical characteristics or having a reference implementation. For that purpose, two or more independent output sequences are generated by the implementation under test (IUT). Then, these sequences are compared according to the metamorphic relation using statistical hypothesis tests.",2007,0, 2463,Abstraction in Assertion-Based Test Oracles,"Assertions can be used as test oracles. However, writing effective assertions of right abstraction levels is difficult because on the one hand, detailed assertions are preferred for thorough testing (i.e., to detect as many errors as possible), but on the other hand abstract assertions are preferred for readability, maintainability, and reusability. As assertions become a practical tool for testing and debugging programs, this is an important and practical problem to solve for the effective use of assertions. We advocate the use of model variables - specification-only variables of which abstract values are given as mappings from concrete program states - to write abstract assertions for test oracles. We performed a mutation testing experiment to evaluate the effectiveness of the use of model variables in assertion-based test oracles. According to our experiment, assertions written in terms of model variables are as effective as assertions written without using model variables in detecting (injected) faults, and the execution time overhead of model variables is negligible. Our findings are applicable to other use of runtime checkable assertions.",2007,0, 2464,Using Numerical Model to Predict Hydrocephalus Based on MRI Images,"Abnormal flow of cerebrospinal fluid (CSF) may lead to a hydrocephalus condition as a result of birth defects, accident or infection. In this paper we use raw MRI data to work out the ""tissue classification segmentation"" image and then import this realistic brain geometry into a finite element software so as to simulate CSF velocity and pressure distributions throughout the brain tissue in hydrocephalus. Such procedure is summarized using a 2D case-study.",2007,0, 2465,A Phase-Locked Loop for the Synchronization of Power Quality Instruments in the Presence of Stationary and Transient Disturbances,"Power quality instrumentation requires accurate fundamental frequency estimation and signal synchronization, even in the presence of both stationary and transient disturbances. 
In this paper, the authors present a synchronization technique for power quality instruments based on a single-phase software phase-locked loop (PLL), which is able to perform the synchronization, even in the presence of such disturbances. Moreover, PLL is able to detect the occurrence of a transient disturbance. To evaluate if and how the synchronization technique is adversely affected by the application of stationary and transient disturbing influences, appropriate testing conditions have been developed, taking into account the requirements of the in-force standards and the presence of the voltage transducer.",2007,0, 2466,Implementation of Hybrid Automata in Scicos,"Hybrid automaton is a standard model for describing a hybrid system. A hybrid automaton is a state machine augmented with differential equations and is generally represented by a graph composed of vertices and edges where vertices represent continuous activities and edges represent discrete transitions. Modeling a hybrid automaton with a large number of vertices may be difficult, time-consuming and error prone using standard modules in modeling and simulation environments such as Scicos. In this paper, we present the new Scicos automaton block used for modeling and simulation of hybrid automata.",2007,0, 2467,Trace Based Mobility Model for Ad Hoc Networks,"Mobility of the nodes in a mobile ad hoc network poses a challenge in determining stable routes in the network. It is often difficult to predict the mobility of the nodes, as they tend to be random in nature. However, a non-random component would also exist in many scenarios. It is this non-random behavior that we consider in this paper to identify the movement trace of the mobile nodes. An algorithm is proposed to model the regular movement of a node as a trace containing a list of stable positions and their associated time. We call this model a trace based mobility model (TBMM). The effectiveness of this model is shown by predicting the accuracy of a node movement by performing a ten fold cross validation. We also show the applicability of the trace information in the routing protocol to provide quality of service (QoS).",2007,0, 2468,An Integrated Design Of Fast LSP Data Plane Failure Detection In MPLS-OAM,"One desirable application of BFD (Bi-directional Forwarding Detection) in MPLS-OAM (Operation, Administration, and Maintenance) is to detect MPLS (Multiple Protocol Label Switching) LSP (Label Switching Path) data plane failures. Besides detecting failures, LSP-Ping can further verify the LSP data plane against the control plane. However, the control plane processing required for BFD control packets is relatively smaller than that for LSP-Ping messages. In this paper, we will propose how to combine LSP-Ping and BFD to provide faster data plane failure detection, which possibly operates on a greater number of LSPs.",2007,0, 2469,Statistical Hypothesis Testing for Assessing Monte Carlo Estimators: Applications to Image Synthesis,"Image synthesis algorithms are commonly compared on the basis of running times and/or perceived quality of the generated images. In the case of Monte Carlo techniques, assessment often entails a qualitative impression of convergence toward a reference standard and severity of visible noise; these amount to subjective assessments of the mean and variance of the estimators, respectively. In this paper we argue that such assessments should be augmented by well-known statistical hypothesis testing methods. 
In particular, we show how to perform a number of such tests to assess random variables that commonly arise in image synthesis such as those estimating irradiance, radiance, pixel color, etc. We explore five broad categories of tests: 1) determining whether the mean is equal to a reference standard, such as an analytical value, 2) determining that the variance is bounded by a given constant, 3) comparing the means of two different random variables, 4) comparing the variances of two different random variables, and 5) verifying that two random variables stem from the same parent distribution. The level of significance of these tests can be controlled by a parameter. We demonstrate that these tests can be used for objective evaluation of Monte Carlo estimators to support claims of zero or small bias and to provide quantitative assessments of variance reduction techniques. We also show how these tests can be used to detect errors in sampling or in computing the density of an importance function in MC integrations.",2007,0, 2470,Analysis of Anomalies in IBRL Data from a Wireless Sensor Network Deployment,"Detecting interesting events and anomalous behaviors in wireless sensor networks is an important challenge for tasks such as monitoring applications, fault diagnosis and intrusion detection. A key problem is to define and detect those anomalies with few false alarms while preserving the limited energy in the sensor network. In this paper, using concepts from statistics, we perform an analysis of a subset of the data gathered from a real sensor network deployment at the Intel Berkeley Research Laboratory (IBRL) in the USA, and provide a formal definition for anomalies in the IBRL data. By providing a formal definition for anomalies in this publicly available data set, we aim to provide a benchmark for evaluating anomaly detection techniques. We also discuss some open problems in detecting anomalies in energy constrained wireless sensor networks.",2007,0, 2471,FT-CoWiseNets: A Fault Tolerance Framework for Wireless Sensor Networks,"In wireless sensor networks (WSNs), faults may occur through malfunctioning hardware, software errors or by external causes such as fire and flood. In business applications where WSNs are applied, failures in essential parts of the sensor network must be efficiently detected and automatically recovered. Current approaches proposed in the literature do not cover all the requirements of a fault tolerant system to be deployed in an enterprise environment and therefore are not suitable for such applications. In this paper we investigate these solutions and present FT-CoWiseNets, a framework designed to improve the availability of heterogeneous WSNs through an efficient fault tolerance support. The proposed framework satisfies the requirements and demonstrates to be more adequate to business scenarios than the current approaches.",2007,0, 2472,Ultrasonic motor driving method for EMI-free image in MR image-guided surgical robotic system,"Electromagnetic interference (EMI) between magnetic resonance (MR) imager and surgical manipulator is a severe problem that degrades the image quality in MR image-guided surgical robotic systems. We propose a novel motor driving method to acquire noise-free images. Noise generation accompanied by motor actuation is permitted only during the ""dead time"" when the MR imager stops signal acquisition to wait for relaxation of protons.
For the synchronized control between MR imager and motor driving system, we adopted a radio-frequency pulse signal detected by a special antenna as a synchronous trigger. This method can be applied widely because it only senses a part of the scanning signal and requires neither hardware nor software changes to the MR imager. The evaluation results showed the feasibility of RF pulse as a synchronous trigger and the availability of sequence-based noise reduction method.",2007,0, 2473,The Dangers of Failure Masking in Fault-Tolerant Software: Aspects of a Recent In-Flight Upset Event,"On 1 August 2005, a Boeing Company 777-200 aircraft, operating on an international passenger flight from Australia to Malaysia, was involved in a significant upset event while flying on autopilot. The Australian Transport Safety Bureau's investigation into the event discovered that """"an anomaly existed in the component software hierarchy that allowed inputs from a known faulty accelerometer to be processed by the air data inertial reference unit (ADIRU) and used by the primary flight computer, autopilot and other aircraft systems."""" This anomaly had existed in original ADIRU software, and had not been detected in the testing and certification process for the unit. This paper describes the software aspects of the incident in detail, and suggests possible implications concerning complex, safety- critical, fault-tolerant software.",2007,0, 2474,Quality Assessment Based on Attribute Series of Software Evolution,"Defect density and defect prediction are essential for efficient resource allocation in software evolution. In an empirical study we applied data mining techniques for value series based on evolution attributes such as number of authors, commit messages, lines of code, bug fix count, etc. Daily data points of these evolution attributes were captured over a period of two months to predict the defects in the subsequent two months in a project. For that, we developed models utilizing genetic programming and linear regression to accurately predict software defects. In our study, we investigated the data of three independent projects, two open source and one commercial software system. The results show that by utilizing series of these attributes we obtain models with high correlation coefficients (between 0.716 and 0.946). Further, we argue that prediction models based on series of a single variable are sometimes superior to the model including all attributes: in contrast to other studies that resulted in size or complexity measures as predictors, we have identified the number of authors and the number of commit messages to versioning systems as excellent predictors of defect densities.",2007,0, 2475,A holistic test procedure for security systems software: An experience report,"This paper presents a comprehensive holistic software test procedure for security systems applications. The method evaluates the functional requirements of a security system. Based on the functional requirements and the hardware solution that satisfies the requirements, a test procedure was developed. The test procedure evaluates the correctness, robustness, efficiency, portability, integrity, verifiability and validation and ease of use. An impact assessment is done to assess the cost effectiveness, compatibility and reusability of the software with existing hardware. 
The study found that the impact of software on the business process is the most important test for any new software prior to deployment.",2007,0, 2476,Service-Oriented Business Process Modeling and Performance Evaluation based on AHP and Simulation,"With the evolution of grid technologies and the application of service-oriented architecture (SOA), more and more enterprises are integrated and collaborated with each other in a loosely coupled environment. A business process in that environment, i.e., the service-oriented business process (SOBP), shows highly flexibility for its free selection and composition of different services. The performance of the business process usually has to be evaluated and predicted before its being implemented. And it has special features since it includes both business-level and IT-level attributes. However, the existing modeling and performance evaluation methods of business process are mainly concentrated on business-level performance. And the researches on service selection and composition are usually limited to the IT-level metrics. An extended activity-network-based SOBP Model, its three-level performance metrics, and the corresponding calculation algorithm are proposed to fulfill these requirements. The advantages of our method in SOBP modeling and performance evaluation are highlighted also.",2007,0, 2477,Software Reliability Modeling with Test Coverage: Experimentation and Measurement with A Fault-Tolerant Software Project,"As the key factor in software quality, software reliability quantifies software failures. Traditional software reliability growth models use the execution time during testing for reliability estimation. Although testing time is an important factor in reliability, it is likely that the prediction accuracy of such models can be further improved by adding other parameters which affect the final software quality. Meanwhile, in software testing, test coverage has been regarded as an indicator for testing completeness and effectiveness in the literature. In this paper, we propose a novel method to integrate time and test coverage measurements together to predict the reliability. The key idea is that failure detection is not only related to the time that the software experiences under testing, but also to what fraction of the code has been executed by the testing. This is the first time that execution time and test coverage are incorporated together into one single mathematical form to estimate the reliability achieved. We further extend this method to predict the reliability of fault- tolerant software systems. The experimental results with multi-version software show that our reliability model achieves a substantial estimation improvement compared with existing reliability models.",2007,0, 2478,Towards Self-Protecting Enterprise Applications,"Enterprise systems must guarantee high availability and reliability to provide 24/7 services without interruptions and failures. Mechanisms for handling exceptional cases and implementing fault tolerance techniques can reduce failure occurrences, and increase dependability. Most of such mechanisms address major problems that lead to unexpected service termination or crashes, but do not deal with many subtle domain dependent failures that do not necessarily cause service termination or crashes, but result in incorrect results. In this paper, we propose a technique for developing selfprotecting systems. The technique proposed in this paper observes values at relevant program points. 
When the technique detects a software failure, it uses the collected information to identify the execution contexts that lead to the failure, and automatically enables mechanisms for preventing future occurrences of failures of the same type. Thus, failures do not occur again after the first detection of a failure of the same type.",2007,0, 2479,Prioritization of Regression Tests using Singular Value Decomposition with Empirical Change Records,"During development and testing, changes made to a system to repair a detected fault can often inject a new fault into the code base. These injected faults may not be in the same files that were just changed, since the effects of a change in the code base can have ramifications in other parts of the system. We propose a methodology for determining the effect of a change and then prioritizing regression test cases by gathering software change records and analyzing them through singular value decomposition. This methodology generates clusters of files that historically tend to change together. Combining these clusters with test case information yields a matrix that can be multiplied by a vector representing a new system modification to create a prioritized list of test cases. We performed a post hoc case study using this technique with three minor releases of a software product at IBM. We found that our methodology suggested additional regression tests in 50% of test runs and that the highest-priority suggested test found an additional fault 60% of the time.",2007,0, 2480,Using Machine Learning to Support Debugging with Tarantula,"Using a specific machine learning technique, this paper proposes a way to identify suspicious statements during debugging. The technique is based on principles similar to Tarantula but addresses its main flaw: its difficulty to deal with the presence of multiple faults as it assumes that failing test cases execute the same fault(s). The improvement we present in this paper results from the use of C4.5 decision trees to identify various failure conditions based on information regarding the test cases' inputs and outputs. Failing test cases executing under similar conditions are then assumed to fail due to the same fault(s). Statements are then considered suspicious if they are covered by a large proportion of failing test cases that execute under similar conditions. We report on a case study that demonstrates improvement over the original Tarantula technique in terms of statement ranking. Another contribution of this paper is to show that failure conditions as modeled by a C4.5 decision tree accurately predict failures and can therefore be used as well to help debugging.",2007,0, 2481,Correlations between Internal Software Metrics and Software Dependability in a Large Population of Small C/C++ Programs,"Software metrics are often supposed to give valuable information for the development of software. In this paper we focus on several common internal metrics: Lines of Code, number of comments, Halstead Volume and McCabe's Cyclomatic Complexity. We try to find relations between these internal software metrics and metrics of software dependability: Probability of Failure on Demand and number of defects. The research is done using 59 specifications from a programming competition---The Online Judge--on the internet. Each specification provides us between 111 and 11,495programs for our analysis; the total number of programs used is 71,917. We excluded those programs that consist of a look-up table. 
The results for the Online Judge programs are: (1) there is a very strong correlation between Lines of Code and Hal- stead Volume; (2) there is an even stronger correlation between Lines of Code and McCabe's Cyclomatic Complexity; (3) none of the internal software metrics makes it possible to discern correct programs from incorrect ones; (4) given a specification, there is no correlation between any of the internal software metrics and the software dependability metrics.",2007,0, 2482,Using In-Process Testing Metrics to Estimate Post-Release Field Quality,"In industrial practice, information on the software field quality of a product is available too late in the software lifecycle to guide affordable corrective action. An important step towards remediation of this problem lies in the ability to provide an early estimation of post-release field quality. This paper evaluates the Software Testing and Reliability Early Warning for Java (STREW-J) metric suite leveraging the software testing effort to predict post-release field quality early in the software development phases. The metric suite is applicable for software products implemented in Java for which an extensive suite of automated unit test cases are incrementally created as development proceeds. We validated the prediction model using the STREW-J metrics via a two-phase case study approach which involved 27 medium-sized open source projects, and five industrial projects. The error in estimation and the sensitivity of the predictions indicate the STREW-J metric suite can be used effectively to predict post-release software field quality.",2007,0, 2483,Data Mining Techniques for Building Fault-proneness Models in Telecom Java Software,"This paper describes a study performed in an industrial setting that attempts to build predictive models to identify parts of a Java system with a high fault probability. The system under consideration is constantly evolving as several releases a year are shipped to customers. Developers usually have limited resources for their testing and inspections and would like to be able to devote extra resources to faulty system parts. The main research focus of this paper is two-fold: (1) use and compare many data mining and machine learning techniques to build fault-proneness models based mostly on source code measures and change/fault history data, and (2) demonstrate that the usual classification evaluation criteria based on confusion matrices may not be fully appropriate to compare and evaluate models.",2007,0, 2484,Predicting Subsystem Failures using Dependency Graph Complexities,"In any software project, developers need to be aware of existing dependencies and how they affect their system. We investigated the architecture and dependencies of Windows Server 2003 to show how to use the complexity of a subsystem's dependency graph to predict the number of failures at statistically significant levels. Such estimations can help to allocate software quality resources to the parts of a product that need it most, and as early as possible.",2007,0, 2485,Fault Prediction using Early Lifecycle Data,"The prediction of fault-prone modules in a software project has been the topic of many studies. In this paper, we investigate whether metrics available early in the development lifecycle can be used to identify fault-prone software modules. More precisely, we build predictive models using the metrics that characterize textual requirements. 
We compare the performance of requirements-based models against the performance of code-based models and models that combine requirement and code metrics. Using a range of modeling techniques and the data from three NASA projects, our study indicates that the early lifecycle metrics can play an important role in project management, either by pointing to the need for increased quality monitoring during the development or by using the models to assign verification and validation activities.",2007,0, 2486,Advanced Fault Analysis Software System (or AFAS) for Distribution Power Systems,"An advanced fault analysis software system (or AFAS) is currently being developed at Concurrent Technologies Corporation (CTC) to automatically detect and locate low and high impedance, momentary and permanent faults in distribution power systems. Microsoft Visual Studio is used to integrate advanced software packages and analysis tools (including DEW, AEMPFAST, PSCAD, and CTC's DSFL) under the AFAS platform. AFAS is an intelligent, operational, decision-support fault analysis tool that utilizes PSCAD to simulate fault transients of distribution systems to improve the fault location accuracy of the DSFL tool and to enhance its capabilities for predicting low and high impedance, momentary and permanent faults. The implementation and evaluation results of this software tool are presented.",2007,0, 2487,A Structural Complexity Metric for Software Components,"At present, the number of available components is increasing rapidly and component-based software development (CBSD) is becoming an effective new software development paradigm; how to measure the reliability, maintainability and complexity of components is therefore attracting more and more attention. This paper presents a metric to assess the structural complexity of components. Moreover, it proves that the metric satisfies several desirable properties.",2007,0, 2488,Mapping CMMI Project Management Process Areas to SCRUM Practices,"Over the past years, the capability maturity model (CMM) and capability maturity model integration (CMMI) have been broadly used for assessing organizational maturity and process capability throughout the world. However, the rapid pace of change in information technology has caused increasing frustration with the heavyweight plans, specifications, and other documentation imposed by contractual inertia and maturity model compliance criteria. In light of that, agile methodologies have been adopted to tackle this challenge. The aim of our paper is to present a mapping between CMMI and one of these methodologies, Scrum. It shows how Scrum addresses the Project Management Process Areas of CMMI. This is useful for organizations that have their plan-driven process based on the CMMI model and are planning to improve their processes toward agility, or to help organizations define a new project management framework based on both CMMI and Scrum practices.",2007,0, 2489,Analyzing Software System Quality Risk Using Bayesian Belief Network,"Uncertainty during the period of software project development often brings huge risks to contractors and clients. Developing an effective method to predict the cost and quality of software projects based on facts such as project characteristics and two-side cooperation capability at the beginning of the project can aid us in finding ways to reduce the risks. Bayesian belief network (BBN) is a good tool for analyzing uncertain consequences, but it is difficult to produce a precise network structure and conditional probability table. 
In this paper, we build up the network structure by Delphi method for conditional probability table learning, and learn to update the probability table and confidence levels of the nodes continuously according to application cases, which would subsequently make the evaluation network to have learning abilities, and to evaluate the software development risks in organizations more accurately. This paper also introduces the EM algorithm to enhance the ability in producing hidden nodes caused by variant software projects.",2007,0, 2490,One in a baker's dozen: debugging debugging,"In the work of Voas (1993), they outlined 13 major software engineering issues needing further research: (1) what is software quality? (2) what are the economic benefits behind existing software engineering techniques?, (3) does process improvement matter?, (4) can you trust software metrics and measurement?, (5) why are software engineering standards confusing and hard to comply with, (6) are standards interoperable, (7) how to decommission software?, (8) where are reasonable testing and debugging stoppage criteria?, (9) why are COTS components so difficult to compose?, (10) why are reliability measurement and operational profile elicitation viewed suspiciously, (11) can we design in the """"ilities"""" both technically and economically, (12) how do we handle the liability issues surrounding certification, and (13) is intelligence and autonomic computing feasible? This paper focuses on a simple and easy to understand metric that addresses the eighth issue, a testing and debugging testing stoppage criteria based on expected probability of failure graphs.",2007,0, 2491,How can Previous Component Use Contribute to Assessing the Use of COTS?,"The intuitive notion exists in industry and among regulators that successful use of a commercially available software-based component over some years and within different application environments must imply some affirmative statement about the quality of the component and - in terms of a safety-case - that it should provide evidence to support a specific safety claim for usage of the component in a specific new environment. Yet, so far a method is lacking to investigate quantitatively how such evidence can inform and influence an estimate for example of the component's probability of failure per demand or per hour, and thus the evidence is not used. Currently there is no blueprint to show us what such evidence contributes to meeting a safety claim. In this paper a route is explored that may allow to make use of such prior evidence and combine it with fresh statistical test data pertaining to the new usage environment. The model proposed is an initial model but it is hoped that it can help to develop over time a framework that can be practically used by regulators and safety assessors to inform a safety case for COTS components containing a software part.",2007,0, 2492,"Scalable, Adaptive, Time-Bounded Node Failure Detection","This paper presents a scalable, adaptive and time-bounded general approach to assure reliable, real-time node-failure detection (NFD) for large-scale, high load networks comprised of commercial off-the-shelf (COTS) hardware and software. Nodes in the network are independent processors which may unpredictably fail either temporarily or permanently. We present a generalizable, multilayer, dynamically adaptive monitoring approach to NFD where a small, designated subset of the nodes are communicated information about node failures. 
This subset of nodes are notified of node failures in the network within an interval of time after the failures. Except under conditions of massive system failure, the NFD system has a zero false negative rate (failures are always detected with in a finite amount of time after failure) by design. The NFD system continually adjusts to decrease the false alarm rate as false alarms are detected. The NFD design utilizes nodes that transmit, within a given locality, """"heartbeat"""" messages to indicate that the node is still alive. We intend for the NFD system to be deployed on nodes using commodity (i.e. not hard-real-time) operating systems that do not provide strict guarantees on the scheduling of the NFD processes. We show through experimental deployments of the design, the variations in the scheduling of heartbeat messages can cause large variations in the false-positive notification behavior of the NFD subsystem. We present a per-node adaptive enhancement of the NFD subsystem that dynamically adapts to provide run-time assurance of low false-alarm rates with respect to past observations of heartbeat scheduling variations while providing finite node-failure detection delays. We show through experimentation that this NFD subsystem is highly scalable and uses low resource overhead.",2007,0, 2493,Behavioral Fault Modeling for Model-based Safety Analysis,"Recent work in the area of model-based safety analysis has demonstrated key advantages of this methodology over traditional approaches, for example, the capability of automatic generation of safety artifacts. Since safety analysis requires knowledge of the component faults and failure modes, one also needs to formalize and incorporate the system fault behavior into the nominal system model. Fault behaviors typically tend to be quite varied and complex, and incorporating them directly into the nominal system model can clutter it severely. This manual process is error-prone and also makes model evolution difficult. These issues can be resolved by separating the fault behavior from the nominal system model in the form of a """"fault model"""", and providing a mechanism for automatically combining the two for analysis. Towards implementing this approach we identify key requirements for a flexible behavioral fault modeling notation. We formalize it as a domain-specific language based on Lustre, a textual synchronous dataflow language. The fault modeling extensions are designed to be amenable for automatic composition into the nominal system model.",2007,0, 2494,Combining Software Quality Analysis with Dynamic Event/Fault Trees for High Assurance Systems Engineering,"We present a novel approach for probabilistic risk assessment (PRA) of systems which require high assurance that they will function as intended. Our approach uses a new model i.e., a dynamic event/fault tree (DEFT) as a graphical and logical method to reason about and identify dependencies between system components, software components, failure events and system outcome modes. The method also explicitly includes software in the analysis and quantifies the contribution of the software components to overall system risk/ reliability. The latter is performed via software quality analysis (SQA) where we use a Bayesian network (BN) model that includes diverse sources of evidence about fault introduction into software; specifically, information from the software development process and product metrics. 
We illustrate our approach by applying it to the propulsion system of the miniature autonomous extravehicular robotic camera (mini-AERCam). The software component considered for the analysis is the related guidance, navigation and control (GN&C) component. The results of SQA indicate a close correspondence between the BN model estimates and the developer estimates of software defect content. These results are then used in an existing theory of worst-case reliability to quantify the basic event probability of the software component in the DEFT.",2007,0, 2495,Preliminary Models of the Cost of Fault Tolerance,"Software cost estimation and overruns continue to plague the software engineering community, especially in the area of safety-critical systems. We provide some preliminary models to predict the cost of adding fault detection, fault-tolerance, or fault isolation techniques to a software system or subsystem if the cost of originally developing the system or subsystem is known. Since cost is a major driver in the decision to develop new safety-critical systems, such models will be useful to requirements engineers, systems engineers, decision makers, and those intending to reuse systems and components in safety-critical environments where fault tolerance is critical.",2007,0, 2496,Adding Autonomic Capabilities to Network Fault Management System,"In this paper, we propose an adaptive framework for adding the most desired aspects of autonomic capabilities into the critical components of a network fault management system. The aspects deemed as the most desirable are those that have a significant impact on system dependability, which include self-monitoring, self-healing, self-adjusting, and self-configuring. Self-monitoring oversees the environmental conditions and system behavior, building a consciousness ground to support self-awareness capabilities. It is responsible for monitoring the system states and environmental conditions, analyzing them and thus detecting and identifying system faults/failures. Upon detection, self-healing operations is enabled to respond (i.e. take proper actions) to the identified faults /failures. These actions are usually accomplished by self-configuring and self- adjusting the corresponding system configurations and operations. Together, all self-*approaches complete an adaptive framework and offer a sound solution towards high system assurance.",2007,0, 2497,Parsimonious classifiers for software quality assessment,"Modeling to predict fault-proneness of software modules is an important area of research in software engineering. Most such models employ a large number of basic and derived metrics as predictors. This paper presents modeling results based on only two metrics, lines of code and cyclomatic complexity, using radial basis functions with Gaussian kernels as classifiers. Results from two NASA systems are presented and analyzed.",2007,0, 2498,Conquering Complexity,"In safety-critical systems, the potential impact of each separate failure is normally studied in detail and remedied by adding backups. Failure combinations, though, are rarely studied exhaustively; there are just too many of them, and most have a low probability of occurrence. Defect detection in software development is usually understood to be a best effort at rigorous testing just before deployment. But defects can be introduced in all phases of software design, not just in the final coding phase. 
Defect detection therefore shouldn't be limited to the end of the process, but practiced from the very beginning. In a rigorous model-based engineering process, each phase is based on the construction of verifiable models that capture the main decisions.",2007,0, 2499,A Combinatorial Approach to Quantify Stochastic Failure of Complex Component-Based Systems_The Case of an Advanced Railway Level Crossing Surveillance System,"There are different approaches to quantify stochastic failures of complex component-based systems. Methods successfully applied such as fault tree analysis, event tree analysis, Markov analysis and failure mode and effect analysis are recommended to certain extend according to their applicability to component-based systems. A combinatorial model of fault tree analysis and Markov analysis is developed in this paper to estimate the safety state of an advanced railway level crossing surveillance system which will be implemented by Taiwan Railways Administration in the future. Based on observations of an existing level crossing system, the combinatorial model is used to determine an instantaneous risk probability function, which is dependent on the system state. The results demonstrate that the advanced railway level crossing surveillance system has a higher safety state probability than the existing one.",2007,0, 2500,AutoGrid: Towards an Autonomic Grid Middleware,"Computer grids have drawn great attention of academic and enterprise communities, becoming an attractive alternative for the execution of applications that demand huge computational power, allowing the integration of computational resources spread through different administrative domains. However, grids exhibit high variation of resource availability, node instability, variations on load distribution, and heterogeneity of computational devices and network technology. Due to those characteristics, grid management and configuration is error-prone and almost impracticable to be performed solely by human beings. This paper describes AutoGrid, an autonomic grid middleware built using Adapta reconfiguration framework and runtime system. AutoGrid introduces self-managing capabilities to the Integrade grid middleware, such as: context-awareness, self- healing, self-optimization and self-configuration. This paper also presents insights and experiments that show the benefits towards an autonomic grid infrastructure.",2007,0, 2501,Robustness of a Spoken Dialogue Interface for a Personal Assistant,"Although speech recognition systems have become more reliable in recent years, they are still highly error-prone. Other components of a spoken language dialogue system must then be robust enough to handle these errors effectively, to avoid recognition errors from adversely affecting the overall performance of the system. In this paper, we present the results of a study focusing on the robustness of our agent-based dialogue management approach. We found that while the speech recognition software produced serious errors, the dialogue manager was generally able to respond reasonably to users' utterances.",2007,0, 2502,Surface Remeshing on Triangular Domain for CAD Applications,"In this paper a systematic remeshing method of triangle mesh surface is proposed for CAD applications. A density-sensitive edge marking function is proposed for surface resampling. Surface energy minimization is used to redistribute vertices for face geometry regularization. A simulated annealing method is proposed for vertex connectivity optimization. 
During each stage, deviations from the original mesh are prevented, and the boundaries and features are well preserved. Since all modifications are performed locally, the error-prone global parameterization is entirely avoided. The advantage and robustness of this technique are verified by many examples from real-world product design.",2007,0, 2503,"Software-Based Online Detection of Hardware Defects Mechanisms, Architectural Support, and Evaluation","As silicon process technology scales deeper into the nanometer regime, hardware defects are becoming more common. Such defects are bound to hinder the correct operation of future processor systems, unless new online techniques become available to detect and to tolerate them while preserving the integrity of software applications running on the system. This paper proposes a new, software-based, defect detection and diagnosis technique. We introduce a novel set of instructions, called access-control extension (ACE), that can access and control the microprocessor's internal state. Special firmware periodically suspends microprocessor execution and uses the ACE instructions to run directed tests on the hardware. When a hardware defect is present, these tests can diagnose and locate it, and then activate system repair through resource reconfiguration. The software nature of our framework makes it flexible: testing techniques can be modified/upgraded in the field to trade off performance with reliability without requiring any change to the hardware. We evaluated our technique on a commercial chip-multiprocessor based on Sun's Niagara and found that it can provide very high coverage, with 99.22% of all silicon defects detected. Moreover, our results show that the average performance overhead of software-based testing is only 5.5%. Based on a detailed RTL-level implementation of our technique, we find its area overhead to be quite modest, with only a 5.8% increase in total chip area.",2007,0, 2504,A Discrete Differential Operator for Direction-based Surface Morphometry,"This paper presents a novel directional morphometry method for surfaces using first order derivatives. Non-directional surface morphometry has been previously used to detect regions of cortical atrophy using brain MRI data. However, evaluating directional changes on surfaces requires computing gradients to obtain a full metric tensor. Non-directionality reduces the sensitivity of deformation-based morphometry to area-preserving deformations. By proposing a method to compute directional derivatives, this paper enables analysis of directional deformations on surfaces. Moreover, the proposed method exhibits improved numerical accuracy when evaluating mean curvature, compared to the so-called cotangent formula. The directional deformation of folding patterns was measured in two groups of surfaces, and the proposed methodology allowed us to detect morphological differences that were not detected using previous non-directional morphometry. The methodology uses a closed-form analytic formalism rather than numerical approximation and is readily generalizable to any application involving surface deformation.",2007,0, 2505,Computer-Assisted Instruction in Probability and Statistics,"Because of some traditional modes of teaching, our understanding of Computer-Assisted Instruction (CAI) in probability and statistics appears narrow, making the application of CAI in probability and statistics superficial. 
Four aspects of CAI in probability and statistics are researched mainly to reform traditional teaching and enhance teaching quality and effect of probability and statistics. Firstly, the paper defines traditional teaching of probability and statistics. Secondly, the paper addresses some skills of doing multimedia courseware of probability and statistics. Thirdly, the paper considers application and development of statistical software of CAI in probability and statistics. At last, the paper stresses the leading role of teacher in CAI.",2007,0, 2506,9D-6 Signal Analysis in Scanning Acoustic Microscopy for Non-Destructive Assessment of Connective Defects in Flip-Chip BGA Devices,"Failure analysis in industrial applications often require methods working non-destructively for allowing a variety of tests at a single device. Scanning acoustic microscopy in the frequency range above 100 MHz provides high axial and lateral resolution, a moderate penetration depth and the required non-destructivity. The goal of this work was the development of a method for detecting and evaluating connective defects in densely integrated flip-chip ball grid array (BGA) devices. A major concern was the ability to automatically detect and differentiate the ball-connections from the surrounding underfill and the derivation of a binary classification between void and intact connection. Flip chip ball grid arrays with a 750 mum silicon layer on top of the BGA were investigated using time resolved scanning acoustic microscopy. The microscope used was an Evolution II (SAM TEC, Aalen, Germany) in combination with a 230 MHz transducer. Short acoustic pulses were emitted into the silicon through an 8 mm liquid layer. In receive mode reflected signals were recorded, digitized and stored at the SAM's internal hard drive. The off-line signal analysis was performed using custom-made MATLAB (The Mathworks, Natick, USA) software. The sequentially working analysis characterized echo signals by pulse separation to determine the positions of BGA connectors. Time signals originated at the connector interface were then investigated by wavelet- (WVA) and pulse separation analysis (PSA). Additionally the backscattered amplitude integral (BAI) was estimated. For verification purposes defects were evaluated by X-ray- and scanning electron microscopy (SEM). It was observed that ball connectors containing cracks seen in the SEM images show decreased values of wavelet coefficients (WVC). However, the relative distribution was broader compared to intact connectors. It was found that the separation of pulses originated at the entrance and exit of the ball array corresponded to the condition of- the connector. The success rate of the acoustic method in detecting voids was 96.8%, as verified by SEM images. Defects revealed by the acoustic analysis and confirmed by SEM could be detected by X-ray microscopy only in 64% of the analysed cases. The combined analyses enabled a reliable and non destructive detection of defect ball-grid array connectors. The performance of the automatically working acoustical method seemed superior to X-ray microscopy in detecting defect ball connectors.",2007,0, 2507,Database Isolation and Filtering against Data Corruption Attacks,"Various attacks (e.g., SQL injections) may corrupt data items in the database systems, which decreases the integrity level of the database. Intrusion detections systems are becoming more and more sophisticated to detect such attacks. 
However, more advanced detection techniques require more complicated analyses, e.g, sequential analysis, which incurs detection latency. If we have an intrusion detection system as a filter for all system inputs, we introduce a uniform processing latency to all transactions of the database system. In this paper, we propose to use a """"unsafe zone"""" to isolate user's SQL queries from a """"safe zone"""" of the database. In the unsafe zone, we use polyinstantiations and flags for the records to provide an immediate but different view from that of the safe zone to the user. Such isolation has negligible processing latency from the user's view, while it can significantly improve the integrity level of the whole database system and reduce the recovery costs. Our techniques provide different integrity levels within different zones. Both our analytical and experimental results confirm the effectiveness of our isolation techniques against data corruption attacks to the databases. Our techniques can be applied to database systems to provide multizone isolations with different levels of QoS.",2007,0, 2508,Cooperative mixed strategy for service selection in service oriented architecture,"In service oriented architecture (SOA), service brokers could find many service providers which offer same function with different quality of service (QoS). Under this condition, users may encounter difficulty to decide how to choose from the candidates to obtain optimal service quality. This paper tackles the service selection problem (SSP) of time-sensitive services using the theory of games creatively. Pure strategies proposed by current studies are proved to be improper to this problem because the decision conflicts among the users result in poor performance. A novel cooperative mixed strategy (CMS) with good computability is developed in this paper to solve such inconstant-sum non-cooperative n-person dynamic game. Unlike related researches, CMS offers users an optimized probability mass function instead of a deterministic decision to select a proper provider from the candidates. Therefore it is able to eliminate the fluctuation of queue length, and raise the overall performance of SOA significantly. Furthermore, the stability and equilibrium of CMS are proved by simulations.",2007,0, 2509,Extensible Virtual Environment Systems Using System of Systems Engineering Approach,"The development of Virtual Environment (VE) systems is a challenging endeavor with a complex problem domain. The experience in the past decade has helped contribute significantly to various measures of software quality of the resulting VE systems. However, the resulting solutions remain monolithic in nature without addressing successfully the issue of system interoperability and software aging. This paper argues that the problem resides in the traditional system centric approach and that an alternative approach based on system of systems engineering is necessary. As a result, the paper presents a reference architecture based on layers, where only the core is required for deployment and all others are optional. 
The paper also presents an evaluation methodology to assess the validity of the resulting architecture, which was applied to the proposed core layer and involving individual sessions with 12 experts in developing VE systems.",2007,0, 2510,UML-based safety analysis of distributed automation systems,"HAZOP (hazard and operability) studies are carried out to analyse complex automated systems, especially large and distributed automated systems. The aim is to systematically assess the automated system regarding possibly negative effects of deviations from standard operation on safety and performance. Today, HAZOP studies require significant manual effort and tedious work of several costly experts. The authors of this paper propose a knowledge-based approach to support the HAZOP analysis and to reduce the required manual effort. The main ideas are (1) to incorporate knowledge about typical problems in automation systems, in combination with their causes and their effects, in a rule base, and (2) to apply this rule base by means of a rule engine on the description of the automated system under consideration. This yields a list of possible dangers regarding safety risks and performance reductions. These results can be used by the automation experts to improve the system's design. Within this paper, the general approach is presented, and an example application is dealt with where the system design is given in the form of a UML class diagram, and the HAZOP study is focused on hazards caused by faulty communication within the distributed system.",2007,0, 2511,A Failure Tolerating Atomic Commit Protocol for Mobile Environments,"In traditional fixed-wired networks, standard protocols like 2-Phase-Commit are used to guarantee atomicity for distributed transactions. However, within mobile networks, a higher probability of failures including node failures, message loss, and even network partitioning makes the use of these standard protocols difficult or even impossible. To use traditional database applications within a mobile scenario, we need an atomic commit protocol that reduces the chance of infinite blocking. In this paper, we present an atomic commit protocol called multi coordinator protocol (MCP) that uses a combination of the traditional 2-Phase-Commit, 3-Phase-Commit, and consensus protocols for mobile environments. Simulation experiments comparing MCP with 2PC show how MCP enhances stability for the coordination process by involving multiple coordinators, and that the additional time needed for the coordination among multiple coordinators is still reasonable.",2007,0, 2512,Neural network controlled voltage disturbance detector and output voltage regulator for Dynamic Voltage Restorer,"This paper describes the high power DVR (Dynamic Voltage Restorer) with the neural network controlled voltage disturbance detector and output voltage regulator. Two essential parts of DVR control are how to detect the voltage disturbance such as voltage sag and how to compensate it as fast as possible respectively. The new voltage disturbance detector was implemented by using the delta rule of the neural network control. Through the proposed method, we can instantaneously track the amplitude of each phase voltage under the severe unbalanced voltage conditions. Compared to the conventional synchronous reference frame method, the proposed one shows the minimum time delay to determine the instance of the voltage disturbance event. 
Also a modified d-q transformed voltage regulator for single phase inverter was adopted to obtain the fast dynamic response and the robustness, where three independent single phase inverters are controlled by using the amplitude of source voltage obtained by neural network controller. By using the proposed voltage regulator, the voltage disturbance such as voltage sag can be compensated quickly to the nominal voltage level. The proposed disturbance detector and the voltage regulator were applied to the high power DVR (1000 kV A@440 V) that was developed for the application of semiconductor manufacture plant. The performances of the proposed DVR control were verified through computer simulation and experimental results. Finally, conclusions are given.",2007,0, 2513,Hard-to-detect errors due to the assembly-language environment,"Most programming errors are reasonably simple to understand and detect. One of the benefits of a high-level language is its encapsulation of error-prone concepts such as memory access and stack manipulation. Assembly-language programmers do not have this luxury. In our quest to create automated error-prevention and error-detection tools for assembly language, we need to create a comprehensive list of possible errors. We are not considering syntax errors or algorithmic errors. Assemblers, simple testing, and automated testing can detect those errors. We want to deal with design errors that are direct byproducts of the assembly-language environment or result from a programmer's lack of understanding of the assembly-language environment. Over many years of assembly-language instruction, we have come across a plethora of errors. Understanding the different types of errors and how to prevent and detect them is essential to our goal of creating automated error-prevention and error-detection tools. In this paper we list and explain the types of errors we have cataloged.",2007,0, 2514,Work in progress - Developing transversal skills in an online theoretical engineering subject,"In some concentration subjects of online programs, such as theoretical scientific subjects (Mathematics, Physics, etc), where most of the problems have only one solution and there are few chances for discussion, the development of transversal skills, such as team-working, leadership, or oral and written expression is forgotten. Typically, only specific competences about the subject are acquired. Due to the lack of discussions, students usually work in an autonomous way, only a few days before the exam, and this causes a high ratio of desertion. To avoid this, teamwork has been planned. In it, lecturers and students assess the correctness of the solutions, the presentation of the reports, the written expression, the pedagogical quality of their explanations, and some teamwork skills such as organization, distribution of the work, the use of e-learning tools for communication, etc. To collect and analyze the marks which students grade each other's work a simple tool has been created. This way, in a preliminary analysis it is shown that students are closer to their mates, they develop transversal skills, and they study during all the year, increasing the ratio of success.",2007,0, 2515,Automatic Test Case Generation from UML Models,"This paper presents a novel approach of generating test cases from UML design diagrams. We consider use case and sequence diagram in our test case generation scheme. 
Our approach consists of transforming a UML use case diagram into a graph called the use case diagram graph (UDG) and a sequence diagram into a graph called the sequence diagram graph (SDG), and then integrating the UDG and SDG to form the system testing graph (STG). The STG is then traversed to generate test cases. The test cases thus generated are suitable for system testing and for detecting operational, use case dependency, interaction and scenario faults.",2007,0, 2516,An Approach for Assessment of Reliability of the System Using Use Case Model,"Existing approaches to reliability assessment of a system depend entirely on the expertise and knowledge of system analysts and on the computation of the usage probability of different user operations. The existing approach therefore cannot be taken as accurate, particularly when dealing with new or unfamiliar systems. Further, modern systems are very large and complex to manipulate without any automation. Addressing these issues, we propose an analytical technique in our work. In this paper, we propose a novel approach to assess the reliability of a system using the use case model. We consider a metric to assess the reliability of a system under development.",2007,0, 2517,The research status of complex system integrated health management system (CSIHM) architecture,"Complex system integrated health management (CSIHM) technology developed from fault detection, isolation and reconfiguration (FDIR) technology; it is the integrated application of advanced reasoning technology, artificial intelligence technology, sensor technology and information management technology. CSIHM can effectively manage the health states of many kinds of complex systems, reducing the maintenance cost of the overall system and ensuring mission accomplishment and crew safety by detecting system malfunctions earlier and dealing with them automatically. In recent years, most of the health management models presented, such as ISHM, IVHM and PHM, can be placed in the category of CSIHM. This paper first presents the goals that CSIHM must achieve and its activities, then analyzes and compares some representative CSIHM architectures together with typical cases that emerged during their development, and finally argues that concurrent engineering should be used to conduct the design of CSIHM and of the complex system it manages simultaneously; the design of the user interface and the development of the core engine are the difficult and hotspot problems in CSIHM architecture research.",2007,0, 2518,Assessing tram schedules using a library of simulation components,"Assessing tram schedules is important to assure an efficient use of infrastructure and to provide a good quality service. Most existing infrastructure modeling tools provide support to assess an individual aspect of rail systems in isolation, and do not provide enough flexibility to assess many aspects that influence system performance at once. We propose a library of simulation components that enables rail designers to assess different system configurations. In this paper we show how we implemented some basic safety measures used in rail systems such as: reaction to control objects (e.g. 
traffic lights), priority rules, and block safety systems.",2007,0, 2519,Transactional Memory Execution for Parallel Multithread Programming without Lock,"With the increasing popularity of the shared-memory programming model, especially with the advent of multicore processors, applications need to become more concurrent to take advantage of the increased computational power provided by chip level multiprocessing. Traditionally, locks are used to enforce data dependence and timing constraints between the various threads. However, locks are error-prone, often leading to unwanted race conditions, priority inversion, or deadlock. Therefore, recent waves of research projects are exploring transactional memory systems as an alternative synchronization mechanism to locks. This paper presents a software transactional memory execution model for parallel multithread programming without locks.",2007,0, 2520,"Multiple signal processing techniques based power quality disturbance detection, classification, and diagnostic software","This work presents the development steps of the software PQMON, which targets power quality analysis applications. The software detects and classifies electric system disturbances. Furthermore, it also makes diagnostics about what is causing such disturbances and suggests lines of action to mitigate them. Among the disturbances that can be detected and analyzed by this software are: harmonics, sag, swell and transients. PQMON is based on multiple signal processing techniques. Wavelet transform is used to detect the occurrence of the disturbances. The techniques used to do such feature extraction are: fast Fourier transform, discrete Fourier transform, periodogram, and statistics. An adaptive artificial neural network is also used due to its robustness in extracting features such as fundamental frequency and harmonic amplitudes. The probable causes of the disturbances are contained in a database, and their association to each disturbance is made through a cause-effect relationship algorithm, which is used for diagnosis. The software also allows the users to include information about the equipment installed in the system under analysis, resulting in the direct nomination of any installed equipment during the diagnostic phase. In order to prove the effectiveness of the software, simulated and real signals were analyzed by PQMON, showing its excellent performance.",2007,0, 2521,Assess the impact of photovoltaic generation systems on low-voltage network: software analysis tool development,"The integration of photovoltaic generation systems into power networks can bring both benefits and drawbacks. However, utilities have to control and operate their systems properly, in order to assure the availability and quality of the power supply to the users. Therefore, utilities should consider technical constraints and existing regulation in order to assess the impact of photovoltaic systems and limit their integration. On the other hand, regulation includes few operation constraints and they are not implemented in current software analysis tools. In the present paper a tool for assessing the impact of PV integration on low-voltage networks is described. The voltage fluctuation, the inversion of the power flow and the increase of short-circuit capacity are the problems considered in the proposed tool. 
Further work will focus on harmonic distortion.",2007,0, 2522,Estimation of distribution algorithms for testing object oriented software,"One of the main tasks software testing involves is the generation of the test cases to be used during the test. Due to its high cost, the automation of this task has become one of the key issues in the area. While most of the work on test data generation has concentrated on procedural software, little attention has been paid to object oriented programs, even though they are common practice nowadays. We present an approach based on estimation of distribution algorithms (EDAs) for dealing with the test data generation of a particular type of objects, that is, containers. This is the first time that an EDA has been applied to testing object oriented software. In addition to automated test data generation, the EDA approach also offers the potential of modelling the fitness landscape defined by the testing problem and thus could provide some insight into the problem. Firstly, we show results from empirical evaluations and comment on some appealing properties of EDAs in this context. Next, a framework is discussed in order to deal with the generation of efficient tests for the container classes. Preliminary results are provided as well.",2007,0, 2523,Model calibration of a real petroleum reservoir using a parallel real-coded genetic algorithm,"An application of a Real-coded Genetic Algorithm (GA) to the model calibration of a real petroleum reservoir is presented. In order to shorten the computation time, the possible solutions generated by the GA are evaluated in parallel on a group of computers. This required the GA to be adapted to a multi-processor structure, so that the scalability of the computation is maximised. The best solutions of each run enter the ensemble of calibrated models, which is finally analysed using a clustering algorithm. The aim is to identify the optimal regions contained in the ensemble and thus to reveal the distinct types of reservoir models consistent with the historic production data, as a way to assess the uncertainty in the Reservoir Characterisation due to the limited reliability of optimisation algorithms. The developed methodology is applied to the characterisation of a real petroleum reservoir. Results show a large improvement with respect to previous studies on that reservoir in terms of the quality and diversity of the obtained calibrated models. Our main conclusion is that, even with regularisation, many distinct calibrated models are possible, which highlights the importance of applying optimisation methods capable of identifying all such solutions.",2007,0, 2524,Visualization for Software Evolution Based on Logical Coupling and Module Coupling,"In large-scale software projects, developers produce a large amount of software over a long period. The source code is frequently revised during such projects and evolves to become complex. Measurements of software complexity have been proposed, such as module coupling and logical coupling. In the case of module coupling, if developers copy pieces of source code to a new module, module coupling cannot detect the relationship between those pieces even though the pieces in the two modules are strongly coupled. On the other hand, with logical coupling, if two modules are accidentally revised at the same time by the same developer, logical coupling will judge the two modules to be strongly coupled even though they have no relation. 
Therefore, we propose a visualization technique and software complexity metrics for software evolution. The basic idea is that modules with strong module coupling should also have strong logical coupling. If the gap between the set of modules with strong module couplings and the set of modules with strong logical couplings is large, the software complexity will be large. In addition, our visualization technique helps developers understand changes in software complexity. As a result of experiments on open source projects, we confirmed that the proposed metrics and visualization technique were able to detect high-risk projects with many bugs.",2007,0, 2525,Injecting security as aspectable NFR into Software Architecture,"The complexity of the software development process is often increased by the presence of crosscutting concerns in software requirements; moreover, software security as a particular non-functional requirement of software systems is often addressed late in the software development process. Modeling and analyzing these concerns, especially security, in the software architecture facilitates detecting architectural vulnerabilities, decreases the cost of software maintenance, and reduces tangled and complex components in the ultimate design. Aspect-oriented ADLs have emerged to overcome this problem; however, imposing radical changes to existing architectural modeling methods is not easily acceptable by architects. In this paper, we present a method to enhance conventional software architecture description languages through utilization of aspect features with a special focus on security. To achieve this goal, aspectable NFRs have been clarified; then, for their description in the software architecture, an extension to xADL 2.0 [E.M. Dashofy, 2005] has been proposed; finally, we illustrate this material along with a case study.",2007,0, 2526,Enhancing the ESIM (Embedded Systems Improving Method) by Combining Information Flow Diagram with Analysis Matrix for Efficient Analysis of Unexpected Obstacles in Embedded Software,"In order to improve the quality of embedded software, this paper proposes an enhancement to the ESIM (embedded systems improving method) by combining an IFD (information flow diagram) with an Analysis Matrix to analyze unexpected obstacles in the software. These obstacles are difficult to predict in the software specification. Recently, embedded systems have become larger and more complicated. Theoretically, therefore, the development cycle of these systems should be longer. On the contrary, in practice the cycle has been shortened. This trend in industry has resulted in the oversight of unexpected obstacles, and consequently affected the quality of embedded software. In order to prevent the oversight of unexpected obstacles, we have already proposed two methods for requirements analysis: the ESIM using an Analysis Matrix and a method that uses an IFD. In order to improve the efficiency of unexpected obstacle analysis at reasonable cost, we now enhance the ESIM by combining an IFD with an Analysis Matrix. The enhancement is studied from the following three viewpoints. First, a conceptual model comprising both the Analysis Matrix and IFD is defined. Then, a requirements analysis procedure is proposed that uses both the Analysis Matrix and IFD, and assigns each specific role to either an expert or non-expert engineer.
Finally, to confirm the effectiveness of this enhancement, we carry out a description experiment using an IFD.",2007,0, 2527,An Approach for Specifying Access Control Policy in J2EE Applications,"Most applications based on the J2EE platform use role-based access control as an efficient mechanism to achieve security. The current approach for specifying access rules is based on the methods of Enterprise JavaBeans (EJBs). In large-scale systems, where a large number of EJBs are used and the interactions between EJBs are complex, direct use of this method-based approach is error-prone and difficult to maintain. We propose an alternative approach for specifying access control policy based on the concept of business function.",2007,0, 2528,Improving Effort Estimation Accuracy by Weighted Grey Relational Analysis During Software Development,"Grey relational analysis (GRA), a similarity-based method, presents acceptable prediction performance in software effort estimation. However, we found that conventional GRA methods only consider non-weighted conditions while predicting effort. Essentially, each feature of a project may have a different degree of relevance in the process of comparing similarity. In this paper, we propose six weighting methods, namely, non-weight, distance-based weight, correlative weight, linear weight, nonlinear weight, and maximal weight, to be integrated into GRA. Three public datasets are used to evaluate the accuracy of the weighted GRA methods. Experimental results show that the weighted GRA achieves better precision than the non-weighted GRA. Specifically, the linearly weighted GRA greatly improves accuracy compared with the other weighted methods. To sum up, the weighted GRA not only improves the accuracy of prediction but is also an alternative method to be applied across the software development life cycle.",2007,0, 2529,Automatic Test Case Generation from UML Sequence Diagram,"This paper presents a novel approach of generating test cases from UML design diagrams. Our approach consists of transforming a UML sequence diagram into a graph called the sequence diagram graph (SDG) and augmenting the SDG nodes with different information necessary to compose test vectors. This information is mined from use case templates, class diagrams and the data dictionary. The SDG is then traversed to generate test cases. The test cases thus generated are suitable for system testing and for detecting interaction and scenario faults.",2007,0, 2530,A Model-Driven Simulation for Performance Evaluation of 1xEV-DO,"Finding an appropriate approach to evaluate the capacity of a radio access technology with respect to a wide range of parameters (radio signal quality, quality of service, user mobility, network resources) has become increasingly important in today's wireless network planning. In this paper, we propose a model-based simulation to assess the capabilities of the 1xEV-DO radio access technology. Results are expressed in terms of throughputs and signal quality (C/I and Ec/Io) for the requested services and applications.",2007,0, 2531,A Survey of System Software for Wireless Sensor Networks,"Wireless sensor networks (WSNs) are increasingly used as an enabling technology in a variety of domains. They operate unattended in an autonomous way, are inherently error prone and have limited resources like energy and bandwidth. Due to these special features and constraints, the management of sensor nodes in WSNs has its own unique technical challenges as well.
For example, system software for WSNs needs to be specially tailored and additionally support specific quality of service (QoS) properties such as context-awareness, application-knowledge, reconfiguration, QoS-awareness, scalability and efficiency. A proper classification and comparison of notable system software for WSNs seems to be in order for the literature on this subject. To this end, in this paper we present a survey of such system software with the objective of classifying it especially with regard to the aforementioned QoS considerations in the middleware layer. Some classes are identified: agent-based, service-based, query-based, and event-based ones.",2007,0, 2532,Fault Detection System Using Directional Classified Rule-Based in AHU,"Monitoring systems used at present to operate air handling units (AHUs) optimally do not have a function that enables them to detect faults properly, such as failures of operating plant or falling performance, so they are unable to manage faults rapidly and operate optimally. In this paper, we have developed a fault detection system, based on directional classified rules, which can be used in an AHU system. In order to test this algorithm, it was applied to an AHU system installed inside an environment chamber (EC), which verified its practical effect and confirmed its applicability to the related field in the future.",2007,0, 2533,A SLA-Oriented Management of Containers for Hosting Stateful Web Services,"Service-oriented architectures provide integration and interoperability for independent and loosely coupled services. Web services and the associated new standards such as WSRF are frequently used to realise such service-oriented architectures. In such systems, autonomic principles of self-configuration, self-optimisation, self-healing and self-adapting are desirable to ease management and improve robustness. In this paper we focus on the extension of the self-management and autonomic behaviour of a WSRF container connected by a structured P2P overlay network to monitor and rectify its QoS to satisfy its SLAs. The SLA plays an important role during two distinct phases in the life-cycle of a WSRF container. Firstly during service deployment when services are assigned to containers in such a way as to minimise the threat of SLA violations, and secondly during maintenance when violations are detected and services are migrated to other containers to preserve QoS. In addition, as the architecture has been designed and built using standardised modern technologies and with high levels of transparency, conventional Web services can be deployed with the addition of an SLA specification.",2007,0, 2534,A New Context-aware Application Validation Method Based on Quality-driven Petri Net Models,"When developing multiple context-aware applications in context-aware middleware, there is a problem that crashes in devices or service components may occur while the context-aware applications are executing, leading to their abnormal execution. In this paper, a method based on quality-driven Petri nets (QPN), derived from Petri nets, is proposed to solve it. In this method, a context-aware application is qualified, simulated, and validated based on QPN. Through QPN, we can simulate the execution process of context-aware applications. In QPN, the behavioral properties of Petri nets involved include reachability.
Reachability analysis will detect and reflect crashes in devices or component services before the context-aware applications are executed.",2007,0, 2535,A PC-Based System for Automated Iris Recognition under Open Environment,"This paper presents an entirely automatic system designed to realize accurate and fast personal identification from iris images acquired in an open environment. The acquisition device detects the appearance of a user at any moment using an ultrasonic transducer, guides the user in positioning himself in the acquisition range and acquires the best iris image of the user through quality evaluation. Iris recognition is done using the bandpass characteristic of wavelets and wavelet transform principles for detecting singularities to extract iris features, and adopting the Hamming distance to match iris codes. The authentication service software can enroll a user's iris image into a database and perform verification of a claimed identity or identification of an unknown entity. The identification rate is high and the recognition result is available about 6 s after iris image acquisition starts. This system is promising for use in applications requiring personal identification.",2007,0, 2536,ANDES: an Anomaly Detection System for Wireless Sensor Networks,"In this paper, we propose ANDES, a framework for detecting and finding the root causes of anomalies in operational wireless sensor networks (WSNs). The key novelty of ANDES is that it correlates information from two sources: one in the data plane as a result of regular data collection in WSNs, the other in the management plane implemented via a separate routing protocol, making it resilient to routing anomalies in the data plane. Evaluation using a 32-node sensor testbed shows that ANDES is effective in detecting fail-stop failures and most routing anomalies with negligible computing and storage overhead.",2007,0, 2537,A probabilistic approach of mobile router selection algorithm in dynamic moving networks,"A dynamic moving network with unspecified multiple mobile routers or dynamic mobile routers may cause difficult problems in terms of stability and reliability in moving networks. We assume that a user terminal such as a cellular phone can be a mobile router in a dynamic moving network. The selection and maintenance of mobile routers is a very important issue in this scenario in order to support reliable communication channels to internal nodes. In the selection of representative mobile routers, the staying time of a mobile router should be one of the essential factors, together with battery power and network capacity. It is, however, not easy to estimate the staying time of each mobile router and candidate to be selected. We propose a staying-time prediction algorithm for a dynamic mobile router according to the variable status of the dynamic moving network. In this approach, the proposed algorithm is based on gathered statistical data as a probabilistic approach in order to estimate the staying time of a mobile router. Finally, simulation results show the accuracy of the probabilistic prediction.",2007,0, 2538,Fisher information-based evaluation of image quality for time-of-flight PET,"The use of time-of-flight (TOF) information during reconstruction is generally considered to improve the image quality.
In this work we quantified this improvement using two existing methods: (1) a very simple analytical expression only valid for a central point in a large uniform disk source, and (2) efficient analytical approximations for post-filtered maximum likelihood expectation maximization (MLEM) reconstruction with a fixed target resolution, predicting the image quality in a pixel or in a small region based on the Fisher information matrix. The image quality was investigated at different locations in various software phantoms. Simplified as well as realistic phantoms, measured both with TOF positron emission tomography (PET) systems and with a conventional PET system, were simulated. Since the time resolution of the system is not always accurately known, the effect on the image quality of using an inaccurate kernel during reconstruction was also examined with the Fisher information-based method. First, we confirmed with this method that the variance improvement in the center of a large uniform disk source is proportional to the disk diameter and inversely proportional to the time resolution. Next, image quality improvement was observed in all pixels, but in eccentric and high-count regions the contrast-to-noise ratio (CNR) increased slower than in central and low- or medium-count regions. Finally, the CNR was seen to decrease when the time resolution was inaccurately modeled (too narrow or too wide) during reconstruction. Although the optimum is rather flat, using an inaccurate TOF kernel might introduce artifacts in the reconstructed image.",2007,0,3981 2539,Quantifying the effects of defective block detectors in a 3D whole body pet camera,"A comparison study was conducted in order to assess the image quality of a clinical 3D whole body PET scanner (Siemens Biograph 16 HiRez) in the condition of failure of one or more block detectors. A data set was acquired using the NEMA image quality phantom when all detectors were functioning normally. The ratio of the activity in the four smallest spheres to the background region was 8.27:1. Defective blocks were then simulated by zeroing the appropriate lines of response in the sinograms. Eight different combinations of defects are considered, ranging from the case of no defect up to a complete bucket failure (12 blocks). Images were reconstructed with both OSEM and FBP using the manufacturer's software. The images were examined both qualitatively and according to the NEMA NU 2-2001 protocol for contrast, variability, and residual error. The results show that despite very visible artefacts appearing in the images the NEMA contrast analysis was very similar for all defect cases. The variability increased for all cases with simulated defective blocks. The contrast results demonstrate that a method of qualitatively evaluating the images is required in addition to the quantitative analysis. The preliminary data examined in this study suggest that data acquired when there are two defective blocks in the system might still produce clinically useable images.",2007,0, 2540,Software-based BIST for Analog to Digital Converters in SoC,"Embedded software-based self-testing has recently become the focus of intense research for microprocessors and memories in SoCs. In this paper, we used the testing microprocessor and memory for developing software-based self-testing of analog-to-digital converters in an SoC. The advantages of this methodology include at-speed testing, low cost, and short test time.
Simulation results show that the proposed method can detect not only catastrophic faults but also some parametric faults.",2007,0, 2541,Functional testing of digital microfluidic biochips,"Dependability is an important attribute for microfluidic biochips that are used for safety-critical applications such as point-of-care health assessment, air-quality monitoring, and food-safety testing. Therefore, these devices must be adequately tested after manufacture and during bioassay operations. Known techniques for biochip testing are all function-oblivious, i.e., while they can detect and locate defect sites on a microfluidic array, they cannot be used to ensure correct operation of functional units. In this paper, we introduce the concept of functional testing of microfluidic biochips. We address fundamental biochip operations such as droplet dispensing, droplet transportation, mixing, splitting, and capacitive sensing. Long electrode actuation times are avoided to ensure that there is no electrode degradation during testing. We evaluate the proposed test methods using simulations as well as experiments for a fabricated biochip.",2007,0, 2542,ACCE: Automatic correction of control-flow errors,"Detection of control-flow errors at the software level has been studied extensively in the literature. However, there has not been any published work that attempts to correct these errors. Low-cost correction of CFEs is important for real-time systems where checkpointing is too expensive or impossible. This paper presents automatic correction of control-flow errors (ACCE), an efficient error correction algorithm involving the addition of redundant code to the program. ACCE has been implemented by modifying GCC, a widely used C compiler, and performance measurements show that the overhead is very low. Fault injection experiments on SPEC and MiBench benchmark programs compiled with ACCE show that the correct output is produced with high probability and that CFEs are corrected with a latency of a few hundred instructions.",2007,0, 2543,A methodology for detecting performance faults in microprocessors via performance monitoring hardware,"Speculative execution of instructions boosts performance in modern microprocessors. Control and data flow dependencies are overcome through speculation mechanisms, such as branch prediction or data value prediction. Because of their inherent self-correcting nature, the presence of defects in speculative execution units does not affect their functionality (and escapes traditional functional testing approaches) but imposes severe performance degradation. In this paper, we investigate the effects of performance faults in speculative execution units and propose a generic, software-based test methodology, which utilizes available processor resources: hardware performance monitors and processor exceptions, to detect these faults in a systematic way. We demonstrate the methodology on a publicly available fully pipelined RISC processor that has been enhanced with the most common speculative execution unit, the branch prediction unit. Two popular schemes of predictors built around a Branch Target Buffer have been studied, and experimental results show significant improvements in both cases: fault coverage of the branch prediction units increased from 80% to 97%.
Detailed experiments for the application of a functional self-testing methodology on a complete RISC processor incorporating both a full pipeline structure and a branch prediction unit have not been previously given in the literature.",2007,0, 2544,A practical approach to comprehensive system test & debug using boundary scan based test architecture,"In this paper, we present a boundary scan based system test approach for large and complex electronic systems. Using the multi-drop architecture, a test bus is extended through the backplane and the boundary scan chain of every board is connected to this test bus through a gateway device. We present a comprehensive system test method using this test architecture to achieve high quality, reliability and efficient diagnosis of structural defects and some functional errors. This test architecture enables many advanced test methods, such as embedded test application for periodic system maintenance, high-quality backplane test for efficient diagnosis of structural defects on the backplane, and in-system remote programming of programmable devices in the field. Finally, we present a novel fault injection method to detect and diagnose various functional errors in the system software of an electronic system. These methods were implemented in various systems and we present some implementation data to show the effectiveness of these advanced test methods.",2007,0, 2545,Identification of Relational Discrepancies between Database Schemas and Source-Code in Enterprise Applications,"As enterprise applications become more and more complex, the understanding and quality assurance of these systems become an increasingly important issue. One specific concern of data reverse engineering, a necessary process for this type of applications which tackles the mentioned aspects, is to retrieve constraints which are not explicitly declared in the database schema but verified in the code. In this paper we propose a novel approach for detecting the relational discrepancies between database schemas and source-code in enterprise applications, as part of the data reverse engineering process. Detecting and removing these discrepancies allows us to ensure the accuracy of the stored data as well as to increase the level of understanding of the data involved in an enterprise application.",2007,0, 2546,A parallel controls software approach for PEP II: AIDA & MATLAB middle layer,"The controls software in use at PEP II (Stanford control program - SCP) had originally been developed in the eighties. It is very successful in routine operation but due to its internal structure it is difficult and time-consuming to extend its functionality. This is problematic during machine development and when solving operational issues. Routinely, data has to be exported from the system, analyzed offline, and calculated settings have to be reimported. Since this is a manual process, it is time-consuming and error-prone. Setting up automated processes, as is done for MIA (model independent analysis), is also time-consuming and specific to each application. Recently, there has been a trend at light sources to use MATLAB [1] as the platform to control accelerators using a ""MATLAB middle layer"" [2] (MML), and so-called channel access (CA) programs to communicate with the low level control system (LLCS). This has proven very successful, especially during machine development time and troubleshooting.
A special CA code, named AIDA (Accelerator Independent Data Access [3]), was developed to handle the communication between MATLAB, modern software frameworks, and the SCP. The MML had to be adapted for implementation at PEP II. Colliders differ significantly in their designs compared to light sources, which poses a challenge. PEP II is the first collider at which this implementation is being done. We will report on this effort, which is still ongoing.",2007,0, 2547,Multi-Agent System-based Protection Coordination of Distribution Feeders,"A protection system adopting the multi-agent concept for power distribution systems is proposed. A device agent, embedded in a protective device, detects faults or adapts to the network in an autonomous manner and changes its operating parameters to new operating conditions by collaborating with other protection agents. Simulations of the agents and their operations for a sample distribution system with a variety of cases show the feasibility of the agent-based adaptive protection of distribution networks.",2007,0, 2548,Software reliability estimation in grey system theory,"It is necessary for a software system to work with an acceptable degree of reliability and quality. Stochastic modelling, queueing systems and network models, neural network models, wavelet models, etc. are some of the interpretative methods used to forecast the reliability of a software system. But these approaches have some limitations, and software reliability prediction is still not mature. A scientific method to predict software system reliability is presented in this paper. The grey forecasting method is effective for time-series data analysis, such as software reliability, in cases of scarce information or bad data. We give the design of a grey forecasting model for software system reliability. Practical experimental data from a software system development project in a company are given, with comparisons between the forecast and actual software reliability data, to demonstrate the validity and applicability of the proposed method.",2007,0, 2549,Variable block size error concealment scheme based on H.264/AVC non-normative decoder,"As the newest video coding standard, H.264/AVC can achieve high compression efficiency. At the same time, due to the highly efficient predictive coding and the variable length entropy coding, it is more sensitive to transmission errors. So error concealment (EC) in H.264 is very important when compressed video sequences are transmitted over error-prone networks and erroneously received. To achieve higher EC performance, this paper proposes a variable block size error concealment scheme (VBSEC) by utilizing the new concept of variable block size motion estimation (VBSME) in the H.264 standard. This scheme provides four EC modes and four sub-block partitions. The whole corrupted macro-block (MB) will be divided into variable block sizes adaptively according to the actual motion. More precise motion vectors (MV) will be predicted for each sub-block. We also produce a more accurate distortion function based on the spatio-temporal boundary matching algorithm (STBMA). By utilizing the VBSEC scheme based on our STBMA distortion function, we can reconstruct the corrupted MB in the inter frame more accurately.
The experimental results show that our proposed scheme can obtain maximum PSNR gains of up to 1.72 dB and 0.48 dB compared with the boundary matching algorithm (BMA) adopted in the JM11.0 reference software and with STBMA, respectively.",2007,0, 2550,A policy controlled IPv4/IPv6 network emulation environment,"In QoS enabled IP-based networks, QoS signaling and policy control are used to control the access to network resources and their usage. The IETF proposed standard protocol for policy control is the Common Open Policy Service (COPS) protocol, which has also been adopted in 3GPP IP Multimedia Subsystem (IMS) Release 5. This paper presents a prototype for a policy-controlled IPv4/IPv6 network emulation environment, in which it is possible to specify the policy control and emulate, over a period of time, QoS parameters such as bandwidth, packet delay, jitter, and packet discard probability for media flows within an IP multimedia session. The policy control is handled by COPS, and IP channel emulation uses two existing network emulation tools, NIST Net and ChaNet, supporting IPv4 and IPv6 protocols, respectively. The scenario-based approach allows reproducible performance measurements and running various experiments by using the same network behavior. A graphical user interface has been developed to make the scenario specification more user-friendly. We demonstrate the functionality of the prototype emulation environment for IPv6 and analyze its performance.",2007,0, 2551,Neural network based BER prediction for 802.16e channel,"The prediction of the bit error rate (BER) in IEEE 802.16e mobile wireless MAN networks is investigated here. The state of the channel is estimated on a symbol-by-symbol basis in a realistic fading environment. The state of a channel is modeled as a nonlinear and temporal system. The neural network method is the best approach to predict and analyze the behaviors of such a nonlinear and temporal system. In this context, BER prediction k symbols ahead is investigated with two different recurrent neural network architectures, the recurrent radial basis function (RRBF) network and the echo state network (ESN). The predicted BER matches the simulation results very well.",2007,0, 2552,Using design based binning to improve defect excursion control for 45nm production,"For advanced devices (45 nm and below), we propose a novel method to monitor systematic and random excursions. By integrating design information and defect inspection results into automated software (DBB), we can identify design/process marginality sites with a defect inspection tool. In this study, we applied the supervised binning function (DBC) and defect criticality index (DCI) to identify systematic and random excursion problems on 45 nm SRAM wafers. With established SPC charts, we will be able to detect future excursion problems in the manufacturing line early.",2007,0, 2553,Predicting performance of software systems during feasibility study of software project management,"Software performance is an important nonfunctional attribute of software systems for producing quality software. Performance issues must be considered throughout software project development. Predicting performance early in the life cycle is addressed by many methodologies, but the data collected during the feasibility study are not considered for predicting performance. In this paper, we consider the data collected (technical and environmental factors) during the feasibility study of software project management to predict performance.
We derive an algorithm to predict the performance metrics and simulate the results using a case study on a banking application.",2007,0, 2554,PCMOS-based Hardware Implementation of Bayesian Network,"The Bayesian network (K.B. Korb and E. Nicholson, 2004) has received considerable attention in a great variety of research areas such as artificial intelligence, bioinformatics, medicine, engineering, image processing, and various kinds of decision support systems. But up till now, most of the investigation of Bayesian networks has been on their theory, algorithms and software implementations. This paper presents the Bayesian network from a totally new perspective: hardware circuit implementation. By using the new-born technology of probabilistic CMOS (PCMOS) (K.V. Palem, 2005), (S. Cheemalavagu et al.), (S. Cheemalavagu et al., 2004), (P. Korkmaz, 2006) and taking advantage of the statistical properties of simple logic gates, the Bayesian network can be constructed using hardware circuits. Such a hardware implementation reveals advantages in terms of power consumption, delay time and quality of randomness.",2007,0, 2555,Constructing the Model of Propylene Distillation Based on Neural Networks,"The model of propylene distillation helps improve the quality of propylene products. This paper proposes a methodology for constructing the model of propylene distillation based on the neural network technique. The strategy of adjusting the neural network-based model of propylene distillation with rough sets is proposed. A numerical example of the neural network-based model for actual propylene distillation is provided. A comparison is made between the predicted results from the model and the actual results, which validates the effectiveness of the model of propylene distillation.",2007,0, 2556,Software Tool for Real Time Power Quality Disturbance Analysis and Classification,"Real time detection and classification of power quality disturbances is important for quick diagnosis and mitigation of such disturbances. This paper presents the development of a software tool based on MatLab for power quality disturbance analysis and classification. Prior to the development of the software tool, the disturbance signals are captured and processed in real-time using the TMS320C6711 DSP starter kit. A digital signal processor is used to provide fast data capture, fast data processing and signal processing flexibility with increased system performance and reduced system cost. The developed software tool can be used for real-time and off-line disturbance analysis by displaying the detected disturbance, the % harmonics of a signal, the total harmonic distortion and the results of the S-transform, fast Fourier transform and continuous wavelet transform analyses. In addition, a graphical representation of the input signal and power quality indices including sag and swell magnitudes are also displayed on the graphical user interface. PQ disturbance classification results show that accurate disturbance classification can be obtained with a total % of correct classification of 99.3%. Such a software tool can serve as a simple and reliable means for PQ disturbance detection and classification.",2007,0, 2557,Efficient Development Methodology for Multithreaded Network Application,"Multithreading is becoming increasingly important for modern network programming. On inter-process communication platforms, multithreaded applications have many benefits, especially in improving an application's throughput, responsiveness and latency.
However, developing good-quality multithreaded code is difficult, because threads may interact with each other in unpredictable ways. Although modern compilers can manage threads well, in practice synchronization errors (such as data races and deadlocks) require careful management and good optimization methods. The goal of this work is to discover common pitfalls in multithreaded network applications, and to present a software development technique to detect errors and efficiently optimize multithreaded applications. We compare the performance of a single-threaded network application with multithreaded network applications, and use the Intel® VTune™ Performance Analyzer, Intel® Thread Checker and our method to efficiently fix errors and optimize performance. Our methodology is divided into three phases: the first phase is performance analysis using the Intel® VTune™ Performance Analyzer, with the aim of identifying performance optimization opportunities and detecting bottlenecks. In the second phase, with the Intel® Thread Checker we locate data races and memory leaks and debug the multithreaded applications. In the third phase, we apply tuning and optimization to the multithreaded applications. With an understanding of the common pitfalls in multithreaded network applications, and through the development and debugging methodology described above, developers are able to optimize and tune their applications efficiently.",2007,0, 2558,DCPD acquisition and analysis for HV storage capacitor based on Matlab,"The high-voltage storage capacitor is a key device in weapon systems, so high reliability is required. In the process of storage and application, effective measurements are needed to ensure the insulation capability of capacitors. Usually, partial discharge (PD) is used to detect the insulation status of capacitors. Under DC the fundamental parameter φ does not exist, so a new parameter δ(t) is introduced. Adopting a data acquisition and analysis system with single trigger based on Matlab software, the PD of four typical defect models under DC conditions is obtained. Data of DCPD signals were analyzed through q, n, δ(t) distribution figures, from which obvious differences can be observed. All results provide a data foundation for pattern recognition of high-voltage storage capacitors.",2007,0, 2559,Formal safety verification for TTP/C network in Drive-by-wire system,"TTP/C is a member of the time-triggered protocol (TTP) family that satisfies Society of Automotive Engineers Class C requirements for hard real-time fault-tolerant communication. As a communication network designed for safety-critical systems, it is essential to verify its safety using formal methods. We investigate the fault-tolerant and fault-avoidance strategies of the TTP/C network used in a drive-by-wire system with Markov modeling techniques, and evaluate the failure rate subject to different failure modes, taking into account both transient and permanent physical failures. The Generalized Stochastic Petri Net (GSPN) is selected to model concurrency and non-determinism properties and to calculate the Markov model automatically. A model with 157 states and 78 transitions is built. The result of the experiments shows that the failure probability of the TTP/C network in a 7-node DBW system varies from 10⁻⁶ to 10⁻¹⁰ with different configurations.
Diagnosis mistakes are also shown to be a critical factor for the success of the membership service.",2007,0, 2560,Adaptive OSEK Network Management for in-vehicle network fault detection,"Rapid growth in the deployment of networked electronic control units (ECUs) and enhanced software features within automotive vehicles has occurred over the past two decades. This inevitably results in difficulties and complexity in in-vehicle network fault diagnostics. To overcome these problems, a framework for on-board in-vehicle network diagnostics has been proposed and its concept has previously been demonstrated through experiments. This paper presents a further implementation of network fault detection within the framework. Adaptive OSEK Network Management, a new technique for detecting network-level faults, is presented. It is demonstrated in this paper that this technique provides more accurate fault detection and the capability to cover more fault scenarios.",2007,0, 2561,Presentation of Information Synchronized with the Audio Signal Reproduced by Loudspeakers using an AM-based Watermark,Reproducing a stego audio signal via a loudspeaker and detecting the embedded data from a sound recorded by a microphone are challenging with respect to the application of data hiding. A watermarking technique using subband amplitude modulation was applied to a system that displays text information synchronously with the watermarked audio signal transmitted in the air. The robustness of the system was evaluated by a computer simulation in terms of the correct rate of data transmission under reverberant and noisy conditions. The results showed that the performance of detection and the temporal precision of synchronization were sufficiently high. Objective measurement of the watermarked audio quality using the PEAQ method revealed that the mean objective difference grade obtained from 100 watermarked music samples exhibited an intermediate value between the mean ODGs of 96-kbps and 128-kbps MP3 encoded music samples.,2007,0, 2562,Fault Detection System Activated by Failure Information,"We propose a fault detection system activated by an application when the application recognizes the occurrence of a failure, in order to realize self-managing systems that automatically find the source of a failure. In existing detection systems, there are three issues for constructing self-managing applications: i) the detection results are not sent to the applications, ii) they cannot identify the source failure from all of the detected failures, and iii) configuring the detection system for a networked system is hard work. To overcome these issues, the proposed system takes three approaches: i) the system receives failure information from an application and returns a result set to the application, ii) the system identifies the source failure using relationships among errors, and iii) the system obtains information about the monitored system from a database. The relationship is expressed by a tree, called the error relationship tree. The database provides information on system entities such as hardware devices, software objects, and network topology. When the proposed system starts looking for the source of a failure, causal relations from the error relationship tree are referred to, and the correspondence between error definitions and actual objects is derived using the database.
We show the design of the detection operation activated by the failure information and the architecture of the proposed system.",2007,0, 2563,Quantifying Software Maintainability Based on a Fault-Detection/Correction Model,"Software fault correction profiles play significant roles in assessing the quality of software testing as well as in keeping up good software maintenance activity. In this paper we develop a quantitative method to evaluate software maintainability based on a stochastic model. The model proposed here is a queueing model with an infinite number of servers, and is related to the software fault-detection/correction profiles. Based on the familiar maximum likelihood estimation, we estimate quantitatively both the software reliability and maintainability with real project data, and discuss their applicability to software maintenance practice.",2007,0, 2564,Predicting Defective Software Components from Code Complexity Measures,"The ability to predict defective modules can help us allocate limited quality assurance resources effectively and efficiently. In this paper, we propose a complexity-based method for predicting defect-prone components. Our method takes three code-level complexity measures as input, namely Lines of Code, McCabe's Cyclomatic Complexity and Halstead's Volume, and classifies components as either defective or non-defective. We perform an extensive study of twelve classification models using the public NASA datasets. Cross-validation results show that our method can achieve good prediction accuracy. This study confirms that static code complexity measures can be useful indicators of component quality.",2007,0, 2565,Improving Dependability Using Shared Supplementary Memory and Opportunistic Micro Rejuvenation in Multi-tasking Embedded Systems,"We propose a comprehensive solution to handle memory-overflow problems in multitasking embedded systems, thereby improving their reliability and availability. In particular, we propose two complementary techniques to address two significant causes of memory-overflow problems. The first cause is errors in estimating appropriate stack and heap memory requirements. Our first technique, called shared supplementary memory (SSM), exploits the fact that the probability of multiple tasks requiring more than their estimated amount of memory concurrently is low. Using an analytical model and simulations, we show that reliability can be considerably improved when SSM is employed. Furthermore, for the same reliability, SSM reduces the total memory requirement by as much as 29.31%. The second cause is the presence of coding Mandelbugs, which can cause abnormal memory requirements. To address this, we propose a novel technique, called opportunistic micro-rejuvenation, which, when combined with SSM, provides several advantages: preventing critical-time outages, resource frugality and dependability enhancement.",2007,0, 2566,Studying effect of location and resistance of inter-turn faults on fault current in power transformers,"The inter-turn (turn-to-turn) fault is one of the most important failures which could occur in power transformers. This phenomenon could seriously reduce the useful life of transformers. Meanwhile, transformer protection schemes such as differential relays are not able to detect this kind of fault. This type of fault should be studied carefully to determine its features and characteristics. In this paper the effect of fault location and fault resistance on the amplitude of the fault current is studied.
It is found that a change of fault location along the winding has a considerable effect on the fault current amplitude. It is also shown that even a small fault resistance could have a major effect on the fault current amplitude. In this paper, a real 240/11 kV, 27 MVA transformer is used for simulation studies.",2007,0, 2567,Study of distributed generation type and islanding impact on the operation of radial distribution systems,"Nowadays, power generation from renewable energy sources is a preferred option and will continue to grow during the coming years. Distributed generation (DG) technologies include photovoltaic, wind turbine and fuel cell, etc., which use renewable energy sources. Most DG units use one of two generation types, i.e. the induction generator or the synchronous generator. It is crucial that the power system impacts be assessed accurately so that these DG units can be applied in a manner that avoids causing degradation of power quality, reliability and control of the utility system. This paper presents the dynamic modelling of the electrical network as well as DG units in order to study islanding behaviour in steady-state and transient simulation. The impact of different DG types has been studied on the IEEE 34-bus system using a commercial software tool.",2007,0, 2568,Visual Support In Automated Tracing,Automated traceability facilitates the dynamic generation of candidate links between requirements and other software artifacts. It provides an alternative option to the arduous and error-prone process of manually creating and maintaining a trace matrix. However, the result set contains both true and false links which must therefore be evaluated by an analyst. Current approaches display the candidate links to the user in a relatively bland textual format. This position paper proposes several visualization techniques for helping analysts to evaluate sets of candidate links. The techniques are illustrated using examples from the Ice Breaker System.,2007,0, 2569,Tackling feedback design complexity,"Feedback control dealt with the application of cheap commercial off-the-shelf (COTS) hardware to control problems. What does not come cheap, however, is the design, in terms of software and people skills. Engineering design was mostly based on hand calculations 30 years ago, and scientific calculators from the likes of Hewlett Packard and Texas Instruments were all the rage with designers. The feedback loop was expressed as an equation G = A/(1-AB), which still forms the basis of most simulation and design software today, where G is the gain of the system, with A representing the feed-forward element, typically an amplifier, and B representing the feedback element. The minus sign indicates negative feedback. The input to the equation is typically by means of a matrix, which is the foundation of several of today's leading software packages, including, amongst others, The MathWorks' Matlab (matrix laboratory) and Simulink, and National Instruments (NI) MatrixX and LabVIEW. Both companies are working relentlessly to complete the design cycle from design and simulation through to testing and commissioning in hardware, and even in silicon. The MathWorks has recently launched Embedded Matlab, which allows users to generate embeddable C code directly from Matlab programs, avoiding the common, time-consuming and error-prone process of rewriting Matlab algorithms in C.
Embedded Matlab supports many high-level Matlab language features, such as multidimensional arrays, real and complex numbers, structures, flow control and subscripting. The conversion to C code is performed by Real-Time Workshop 7. If Simulink is used, synthesisable Verilog and VHDL can also be generated.",2007,0, 2570,Soft-error induced system-failure rate analysis in an SoC,"This paper proposes an analytical method to assess the soft-error rate (SER) in the early stages of a System-on-Chip (SoC) platform-based design methodology. The proposed method takes an executable UML model of the SoC and the raw soft-error rate of different parts of the platform as its inputs. Soft errors in the design are modelled by disturbances on the values of attributes in the classes of the UML model and disturbances on opcodes of software cores. The Architectural Vulnerability Factor (AVF) and the raw soft-error rate of the components in the platform are used to compute the SER of cores. Furthermore, the SER and the severity of error in each core in the SoC are used to compute the System-Failure Rate (SFR) of the SoC.",2007,0, 2571,Fast implementation of an ℓ∞-ℓ1 penalized sparse representations algorithm: applications in image denoising and coding,"Sparse representation techniques have become an important tool in image processing in recent years, for coding, de-noising and in-painting purposes, for instance. They generally rely on an ℓ2-ℓ1 penalized criterion and fast algorithms have been proposed to speed up the applications. We propose to replace the ℓ2-part of the criterion, which has been chosen both for its easy implementation and its relation to the PSNR quality measure, by an ℓ∞-part. We present a new fast way to minimize an ℓ∞-ℓ1 penalized criterion and assess its potential benefits for image de-noising and coding.",2007,0, 2572,Cognitive radio Research and Implementation Challenges,"Future mobile terminals will be able to communicate with various heterogeneous systems which differ in the algorithms used to implement baseband processing and channel coding. This presents many challenges in designing flexible and energy-efficient architectures. Using the sensing phase, the mobile can sense its environment, detect spectrum holes and use them to communicate. Current research is investigating different techniques of using cognitive radio to reuse locally unused spectrum to increase the total system capacity. It also aims to develop efficient algorithms able to maximize the quality of service (QoS) for the secondary (unlicensed) users while minimizing the interference to the primary (licensed) users. However, there are many challenges across all layers of a cognitive radio system design, from its application to its implementation.",2007,0, 2573,Fault location using wavelet energy spectrum analysis of traveling waves,"Power grid faults generate traveling wave signals at the fault point. The signals transmit to both ends of the faulted transmission line, and to the whole power grid. The traveling wave signals have many components with different frequencies and all the components have fault characteristics. The signals can be employed to locate the fault accurately, and the location method is not influenced by current transformer saturation and low-frequency oscillation. The frequency band component with energy concentrated in the detected traveling wave is extracted by wavelet energy spectrum analysis.
The arrival time of the component is recorded with wavelet analysis in the time domain. The propagation velocity of the component is calculated from the last recorded traveling wave arrival time at both ends of the tested transmission line, which is generated by an outside disturbance. The fault location scheme is simulated with ATP software. Results show that the accuracy of the method is little affected by fault positions, fault types and grounding resistances. The fault location error is less than 100 m.",2007,0, 2574,Component Based Proactive Fault Tolerant Scheduling in Computational Grid,"Computational Grids have the capability to provide the main execution platform for high performance distributed applications. Grid resources, having heterogeneous architectures and being geographically distributed and interconnected via unreliable network media, are extremely complex and prone to different kinds of errors, failures and faults. The Grid is a layered architecture and most of the fault tolerant techniques developed on grids use its strict layering approach. In this paper, we have proposed a cross-layer design for handling faults proactively. In a cross-layer design, the top-down and bottom-up approach is not strictly followed, and a middle layer can communicate with the layer below or above it [1]. At each grid layer there would be a monitoring component that would decide, based on predefined factors, whether the reliability of that particular layer is high, medium or low. Based on the Hardware Reliability Rating (HRR) and Software Reliability Rating (SRR), the Middleware Monitoring Component / Cross-Layered Component (MMC/CLC) would generate a Combined Rating (CR) using CR calculation matrix rules. Each grid participating node will have a CR value generated through cross-layered communication using the HMC, MMC/CLC and SMC. All grid nodes will have their CR information in the form of a CR table, and highly rated machines would be selected for job execution on the basis of minimum CPU load along with different intensities of checkpointing. Handling faults proactively at each layer of the grid using the cross-communication model would result in overall improved dependability and increased performance with lower checkpointing overheads.",2007,0, 2575,IR thermographic detection of defects in multi-layered composite materials used in military applications,"Multi-layered composites are frequently used in many military applications as constructional materials and light armours protecting personnel and armament against fragments and bullets. Material layers can differ greatly in their physical properties. Therefore, such materials represent a difficult inspection task for many traditional techniques of non-destructive testing (NDT). Typical defects of composite materials are delaminations, a lack of adhesives, condensations and crumpling. IR thermographic NDT is considered a candidate technique to detect such defects. In order to determine the potential usefulness of the thermal methods, specialized software has been developed for computing 3D (three-dimensional) dynamic temperature distributions in anisotropic six-layer solid bodies with subsurface defects.
In this paper, both modeling and experimental results which illustrate the advantages and limitations of IR thermography in inspecting composite materials will be presented.",2007,0, 2576,A Multi-Agent Fault Detection System for Wind Turbine Defect Recognition and Diagnosis,This paper describes the use of a combination of anomaly detection and data-trending techniques encapsulated in a multi-agent framework for the development of a fault detection system for wind turbines. Its purpose is to provide early error or degradation detection and diagnosis for the internal mechanical components of the turbine with the aim of minimising overall maintenance costs for wind farm owners. The software is to be distributed and run partly on an embedded microprocessor mounted physically on the turbine and on a PC offsite. The software will corroborate events detected from the data sources on both platforms and provide information regarding incipient faults to the user through a convenient and easy-to-use interface.,2007,0, 2577,HMM Based Channel Status Predictor for Cognitive Radio,"Nowadays, many researchers are interested in cognitive radio (CR) technology. CR can be seen as an extension of software-defined radio technology. The two technologies seem similar, but there is an important difference: the sensing and channel management function. CR continuously senses for incumbent users (IUs) (or primary users) appearing on the channel the CR has been using, and the CR must vacate that channel to prevent interference to the IUs. For this purpose, the CR should include the functionality to find a new suitable channel to move to. So, the CR must evaluate the quality of empty channels. From this point of view, we propose an HMM-based channel status predictor, which helps the CR evaluate this quality. We will implement an HMM channel predictor that predicts the next channel status based on past channel states.",2007,0, 2578,Recovery of fault-tolerant real-time scheduling algorithm for tolerating multiple transient faults,"The consequences of missing deadlines of hard real-time system tasks may be catastrophic. Moreover, in the case of faults, a deadline can be missed if the time taken for recovery is not taken into account during the phase when tasks are submitted or accepted to the system. However, when faults occur, tasks may miss deadlines even if fault tolerance is employed. This is because when an erroneous task with a larger execution time executes up to the end of its total execution time even if the error is detected early, this unnecessary execution of the erroneous task provides no additional slack time in the schedule to mitigate the effect of the error by running an additional copy of the same task without missing the deadline. In this paper, a recovery mechanism is proposed to augment the fault-tolerant real-time scheduling algorithm RM-FT, which achieves node-level fault tolerance (NLFT) using the temporal error masking (TEM) technique based on the rate monotonic (RM) scheduling algorithm. Several hardware and software error detection mechanisms (EDM), i.e. watchdog processors or executable assertions, can detect an error before an erroneous task finishes its full execution, and can immediately stop execution.
In this paper, using the advantage of such early detection by EDM, a recovery algorithm RM-FT-RECOVERY is proposed to find an upper bound, denoted by Edm Bound, on the execution time of the tasks, and a mechanism is developed to provide additional slack time to a fault-tolerant real-time schedule so that additional task copies can be scheduled when an error occurs.",2007,0, 2579,Compiler-assisted architectural support for program code integrity monitoring in application-specific instruction set processors,"As application-specific instruction set processors (ASIPs) are being increasingly used in mobile embedded systems, the ubiquitous networking connections have exposed these systems under various malicious security attacks, which may alter the program code running on the systems. In addition, soft errors in microprocessors can also change program code and result in system malfunction. At the instruction level, all code modifications are manifested as bit flips. In this work, we present a generalized methodology for monitoring code integrity at run-time in ASIPs, where both the instruction set architecture (ISA) and the underlying microarchitecture can be customized for a particular application domain. Based on the microoperation-based monitoring architecture that we have presented in previous work, we propose a compiler-assisted and application-controlled management approach for the monitoring architecture. Experimental results show that compared with the OS-managed scheme and other compiler-assisted schemes, our approach can detect program code integrity compromises with much less performance degradation.",2007,0, 2580,Early Models for System-Level Power Estimation,"Power estimation and verification have become important aspects of System-on-Chip (SoC) design flows. However, rapid and effective power modeling and estimation technologies for complex SoC designs are not widely available. As a result, many SoC design teams focus the bulk of their efforts on using detailed low-level models to verify power consumption. While such models can accurately estimate power metrics for a given design, they suffer from two significant limitations: (1) they are only available late in the design cycle, after many architectural features have already been decided, and (2) they are so detailed that they impose severe limitations on the size and number of workloads that can be evaluated. While these methods are useful for power verification, architects require information much earlier in the design cycle, and are therefore often limited to estimating power using spreadsheets where the expected power dissipation of each module is summed up to predict total power. As the model becomes more refined, the frequency that each module is exercised may be added as an additional parameter to further increase the accuracy. Current spreadsheets, however, rely on aggregate instruction counts and do not incorporate either time or input data and thus have inherent inaccuracies. Our strategy for early power estimation relies on (i) measurements from real silicon, (ii) models built from those measurements, models that predict power consumption for a variety of processor micro-architectural structures, and (iii) FPGA-based implementations of those models integrated with an FPGA-based performance simulator/emulator. The models will be designed specifically to be implemented within FPGAs. The intention is to integrate the power models with FPGA-based full-system, functional and performance simulators/emulators that will provide timing and functional information including data values.
The long term goal is to provide relative power accuracy and power trends useful to architects during the architectural phase of a project, rather than precise power numbers that would require far more information than is available at that time. By implementing the power models in an FPGA and driving those power models with a system simulator/emulator that can feed the power models real data transitions generated by real software running on top of real operating systems, we hope to both improve the quality of early stage power estimation and improve power simulation performance.",2007,0, 2581,AIMS: An agent-based information management system in JBI-like environments,"One of the challenges for many mission-critical applications is how to determine the quality of information from distributed and heterogeneous data sources. In this paper we consider joint battlespace infosphere (JBI) or JBI-like collaborative information sharing environments, where the quality of published information often depends on the quality of the contributing information and of the sources that publish them. However, as the environment becomes larger and more diverse, it is becoming increasingly difficult for human operators to assess the quality of information from various data sources. To address this challenge, we develop AIMS, an agent-based information management system, to manage the quality of information in JBI-like environments, e.g., information trustworthiness. In our approach, each operator is associated with a software agent, called client agent. The client agent enables its operator to interact with the JBI repository via the services of query, publish, and subscribe. Moreover, the client agent collaborates with other agents to assess and learn the trustworthiness of information and the reliabilities of corresponding data sources from its pedigree and the feedback from operators.",2007,0, 2582,Wavelet application for determining missing cylinders,"In this article, wavelet analysis to analyze signals in the engine faults diagnose is studied. Taking advantage of the wavelet analysis tools of MATLAB 6.0 software, it calculates the wavelet transform coefficients of the normal and abnormal signals, analyzes differences between normal and abnormal signals by using the parameters, such as average evolution root, kurtosis factor and average index. Moreover, it put forward a new method, in which the missing cylinder fault can be detected by the wavelet analysis.",2007,0, 2583,Performance under failures of high-end computing,"Modern high-end computers are unprecedentedly complex. Occurrence of faults is an inevitable fact in solving large-scale applications on future Petaflop machines. Many methods have been proposed in recent years to mask faults. These methods, however, impose various performance and production costs. A better understanding of faults' influence on application performance is necessary to use existing fault tolerant methods wisely. In this study, we first introduce some practical and effective performance models to predict the application completion time under system failures. These models separate the influence of failure rate, failure repair, checkpointing period, checkpointing cost, and parallel task allocation on parallel and sequential execution times. To benefit the end users of a given computing platform, we then develop effective fault-aware task scheduling algorithms to optimize application performance under system failures.
Finally, extensive simulations and experiments are conducted to evaluate our prediction models and scheduling strategies with actual failure trace.",2007,0, 2584,Application development on hybrid systems,"Hybrid systems consisting of a multitude of different computing device types are interesting targets for high-performance applications. Chip multiprocessors, FPGAs, DSPs, and GPUs can be readily put together into a hybrid system; however, it is not at all clear that one can effectively deploy applications on such a system. Coordinating multiple languages, especially very different languages like hardware and software languages, is awkward and error prone. Additionally, implementing communication mechanisms between different device types unnecessarily increases development time. This is compounded by the fact that the application developer, to be effective, needs performance data about the application early in the design cycle. We describe an application development environment specifically targeted at hybrid systems, supporting data-flow semantics between application kernels deployed on a variety of device types. A specific feature of the development environment is the availability of performance estimates (via simulation) prior to actual deployment on a physical system.",2007,0, 2585,Performance and cost optimization for multiple large-scale grid workflow applications,"Scheduling large-scale applications on the Grid is a fundamental challenge and is critical to application performance and cost. Large-scale applications typically contain a large number of homogeneous and concurrent activities which are main bottlenecks, but open great potentials for optimization. This paper presents a new formulation of the well-known NP-complete problems and two novel algorithms that address the problems. The optimization problems are formulated as sequential cooperative games among workflow managers. Experimental results indicate that we have successfully devised and implemented one group of effective, efficient, and feasible approaches. They can produce solutions of significantly better performance and cost than traditional algorithms. Our algorithms have considerably low time complexity and can assign 1,000,000 activities to 10,000 processors within 0.4 second on one Opteron processor. Moreover, the solutions can be practically performed by workflow managers, and the violation of QoS can be easily detected, which are critical to fault tolerance.",2007,0, 2586,Remote health-care monitoring using Personal Care Connect,"Caring for patients with chronic illnesses is costly, nearly $1.27 trillion today and predicted to grow much larger. To address this trend, we have designed and built a platform, called Personal Care Connect (PCC), to facilitate the remote monitoring of patients. By providing caregivers with timely access to a patient's health status, they can provide patients with appropriate preventive interventions, helping to avoid hospitalization and to improve the patient's quality of care and quality of life. PCC may reduce health-care costs by focusing on preventive measures and monitoring instead of emergency care and hospital admissions. Although PCC may have features in common with other remote monitoring systems, it differs from them in that it is a standards-based, open platform designed to integrate with devices from device vendors and applications from independent software vendors.
One of the motivations for PCC is to create and propagate a working environment of medical devices and applications that results in innovative solutions. In this paper, we describe the PCC remote monitoring system, including our pilot tests of the system.",2007,0, 2587,Varieties of interoperability in the transformation of the health-care information infrastructure,"Health-care costs are rising dramatically. Errors in medical delivery are associated with an alarming number of preventable, often fatal adverse events. A promising strategy for reversing these trends is to modernize and transform the health-care information exchange (HIE), that is, the mobilization of health-care information electronically across organizations within a region or community. The current HIE is inefficient and error-prone; it is largely paper-based, fragmented, and therefore overly complex, often relying on antiquated IT (information technology). To address these weaknesses, projects are underway to build regional and national HIEs which provide interoperable access to a variety of data sources, by a variety of stakeholders, for a variety of purposes. In this paper we present a technologist's guide to health-care interoperability. We define the stakeholders, roles, and activities that comprise an HIE solution; we describe a spectrum of interoperability approaches and point out their advantages and disadvantages; and we look in some detail at a set of real-world scenarios, discussing the interoperability approaches that best address the needs. These examples are drawn from IBM experience with real-world HIE engagements.",2007,0, 2588,Investigation of Hyper-NA Scanner Emulation for Photomask CDU Performance,"As the semiconductor industry moves toward immersion lithography using numerical apertures above 1.0 the quality of the photomask becomes even more crucial. Photomask specifications are driven by the critical dimension (CD) metrology within the wafer fab. Knowledge of the CD values at resist level provides a reliable mechanism for the prediction of device performance. Ultimately, tolerances of device electrical properties drive the wafer linewidth specifications of the lithography group. Staying within this budget is influenced mainly by the scanner settings, resist process, and photomask quality. Tightening of photomask specifications is one mechanism for meeting the wafer CD targets. The challenge lies in determining how photomask level metrology results influence wafer level imaging performance. Can it be inferred that photomask level CD performance is the direct contributor to wafer level CD performance? With respect to phase shift masks, criteria such as phase and transmission control are generally tightened with each technology node. Are there other photomask relevant influences that effect wafer CD performance? A comprehensive study is presented supporting the use of scanner emulation based photomask CD metrology to predict wafer level within chip CD uniformity (CDU). Using scanner emulation with the photomask can provide more accurate wafer level prediction because it inherently includes all contributors to image formation related to the 3D topography such as the physical CD, phase, transmission, sidewall angle, surface roughness, and other material properties. Emulated images from different photomask types were captured to provide CD values across chip. 
Emulated scanner image measurements were completed using an AIMS(TM) 45-193i with its hyper-NA, through-pellicle data acquisition capability including the Global CDU Map(TM) software option for AIMS(TM) tools. The through-pellicle data acquisition capability is an essential prerequisite for capturing final CDU data (after final clean and pellicle mounting) before the photomask ships or for re-qualification at the wafer fab. Data was also collected on these photomasks using a conventional CD-SEM metrology system with the pellicles removed. A comparison was then made to wafer prints demonstrating the benefit of using scanner emulation based photomask CD metrology.",2007,0, 2589,Out-of-bounds array access fault model and automatic testing method study,"Out-of-bounds array access (OOB) is one of the fault models commonly employed in the object-oriented programming language. At present, the technology of code insertion and optimization is widely used in the world to detect and fix this kind of fault. Although this method can examine some of the faults in OOB programs, it cannot test programs thoroughly, neither to find the faults correctly. The way of code insertion makes the test procedures so inefficient that the test becomes costly and time-consuming. This paper uses a kind of special static test technology to realize the fault detection in OOB programs. We first establish the fault models in OOB program, and then develop an automatic test tool to detect the faults. Some experiments have been exercised and the results show that the method proposed in the paper is efficient and feasible in practical applications.",2007,0, 2590,Practical strategies to improve test efficiency,"This paper introduces strategies to detect software bugs in earlier life cycle stage in order to improve test efficiency. Static analysis tool is one of the effective methods to reveal software bugs during software development. Three popular static analysis tools are introduced, two of which, PolySpace and Splint, are compared with each other by analyzing a set of test cases generated by the authors. PolySpace can reveal 60% bugs with 100% R/W ratio (ratio of real bugs and total warnings), while Splint reveals 73.3% bugs with 44% R/W ratio. And they are good at finding different categories of bugs. Two strategies are concluded to improve test efficiency, under the guideline that static analysis tools should be used in finding different categories of bugs according to their features. The first one aims at finding as many bugs as possible, while the second concentrates to reduce the average time on bug revelation. Experimental data shows the first strategy can find 100% bugs with 60% R/W ratio, the second one finds 80% bugs with 66.7% R/W ratio. Experiment results prove that these two strategies can improve the test efficiency in both fault coverage and testing time.",2007,0, 2591,"On power control for wireless sensor networks: System model, middleware component and experimental evaluation","In this paper, we investigate strategies for radio power control for wireless sensor networks that guarantee a desired packet error probability. Efficient power control algorithms are of major concern for these networks, not only because the power consumption can be significantly decreased but also because the interference can be reduced, allowing for higher throughput. An analytical model of the Received Signal Strength Indicator (RSSI), which is a link quality metric, is proposed.
The model relates the RSSI to the Signal to Interference plus Noise Ratio (SINR), and thus provides a connection between the powers and the packet error probability. Two power control mechanisms are studied: a Multiplicative-Increase Additive-Decrease (MIAD) power control described by a Markov chain, and a power control based on the average packet error rate. A component-based software implementation using the Contiki operating system is provided for both the power control mechanisms. Experimental results are reported for a test-bed with Telos motes.",2007,0, 2592,Static Testing,This chapter contains sections titled:
Introduction
Goal of Static Testing
Candidate Documents for Static Testing
Static Testing Techniques
Tracking Defects Detected by Static Testing
Putting Static Testing in Perspective,2007,0, 2593,Chip Multiprocessor Architecture: Techniques to Improve Throughput and Latency,"Chip multiprocessors - also called multi-core microprocessors or CMPs for short - are now the only way to build high-performance microprocessors, for a variety of reasons. Large uniprocessors are no longer scaling in performance, because it is only possible to extract a limited amount of parallelism from a typical instruction stream using conventional superscalar instruction issue techniques. In addition, one cannot simply ratchet up the clock speed on today's processors, or the power dissipation will become prohibitive in all but water-cooled systems. Compounding these problems is the simple fact that with the immense numbers of transistors available on today's microprocessor chips, it is too costly to design and debug ever-larger processors every year or two. CMPs avoid these problems by filling up a processor die with multiple, relatively simpler processor cores instead of just one huge core. The exact size of a CMP's cores can vary from very simple pipelines to moderately complex superscalar processors, but once a core has been selected the CMP's performance can easily scale across silicon process generations simply by stamping down more copies of the hard-to-design, high-speed processor core in each successive chip generation. In addition, parallel code execution, obtained by spreading multiple threads of execution across the various cores, can achieve significantly higher performance than would be possible using only a single core. While parallel threads are already common in many useful workloads, there are still important workloads that are hard to divide into parallel threads. The low inter-processor communication latency between the cores in a CMP helps make a much wider range of applications viable candidates for parallel execution than was possible with conventional, multi-chip multiprocessors; nevertheless, limited parallelism in key applications is the main factor limiting acceptance of CMPs in some types of systems. After a discussion of the basic pros and cons of CMPs when they are compared with conventional uniprocessors, this book examines how CMPs can best be designed to handle two radically different kinds of workloads that are likely to be used with a CMP: highly parallel, throughput-sensitive applications at one end of the spectrum, and less parallel, latency-sensitive applications at the other. Throughput-sensitive applications, such as server workloads that handle many independent transactions at once, require careful balancing of all parts of a CMP that can limit throughput, such as the individual cores, on-chip cache memory, and off-chip memory interfaces. Several studies and example systems, such as the Sun Niagara, that examine the necessary tradeoffs are presented here. In contrast, latency-sensitive applications - many desktop applications fall into this category - require a focus on reducing inter-core communication latency and applying techniques to help programmers divide their programs into multiple threads as easily as possible. This book discusses many techniques that can be used in CMPs to simplify parallel programming, with an emphasis on research directions proposed at Stanford University.
To illustrate the advantages possible with a CMP using a couple of solid examples, extra focus is given to thread-level speculation (TLS), a way to automatically break up nominally sequential applications into parallel threads on a CMP, and transactional memory. This model can greatly simplify manual parallel programming by using hardware - instead of conventional software locks - to enforce atomic code execution of blocks of instructions, a technique that makes parallel coding much less error-prone. Contents: The Case for CMPs / Improving Throughput / Improving Latency Automatically / Improving Latency using Manual Parallel Programming / A Multicore World: The Future of CMPs",2007,0, 2594,Controlling Energy Demand in Mobile Computing Systems,"This lecture provides an introduction to the problem of managing the energy demand of mobile devices. Reducing energy consumption, primarily with the goal of extending the lifetime of battery-powered devices, has emerged as a fundamental challenge in mobile computing and wireless communication. The focus of this lecture is on a systems approach where software techniques exploit state-of-the-art architectural features rather than relying only upon advances in lower-power circuitry or the slow improvements in battery technology to solve the problem. Fortunately, there are many opportunities to innovate on managing energy demand at the higher levels of a mobile system. Increasingly, device components offer low power modes that enable software to directly affect the energy consumption of the system. The challenge is to design resource management policies to effectively use these capabilities. The lecture begins by providing the necessary foundations, including basic energy terminology and widely accepted metrics, system models of how power is consumed by a device, and measurement methods and tools available for experimental evaluation. For components that offer low power modes, management policies are considered that address the questions of when to power down to a lower power state and when to power back up to a higher power state. These policies rely on detecting periods when the device is idle as well as techniques for modifying the access patterns of a workload to increase opportunities for power state transitions. For processors with frequency and voltage scaling capabilities, dynamic scheduling policies are developed that determine points during execution when those settings can be changed without harming quality of service constraints. The interactions and tradeoffs among the power management policies of multiple devices are discussed. We explore how the effective power management on one component of a system may have either a positive or negative impact on overall energy consumption or on the design of policies for another component. The important role that application-level involvement may play in energy management is described, with several examples of cross-layer cooperation. Application program interfaces (APIs) that provide information flow across the application-OS boundary are valuable tools in encouraging development of energy-aware applications. Finally, we summarize the key lessons of this lecture and discuss future directions in managing energy demand.",2007,0, 2595,A practical method for the software fault-prediction,"In the paper, a novel machine learning method, SimBoost, is proposed to handle the software fault-prediction problem when highly skewed datasets are used.
Although the method, proved by empirical results, can make the datasets much more balanced, the accuracy of the prediction is still not satisfactory. Therefore, a fuzzy-based representation of the software module fault state has been presented instead of the original faulty/non-faulty one. Several experiments were conducted using datasets from NASA Metrics Data Program. The discussion of the results of experiments is provided.",2007,1, 2596,Predicting software defects in varying development lifecycles using Bayesian nets,"An important decision in software projects is when to stop testing. Decision support tools for this have been built using causal models represented by Bayesian Networks (BNs), incorporating empirical data and expert judgement. Previously, this required a custom BN for each development lifecycle. We describe a more general approach that allows causal models to be applied to any lifecycle. The approach evolved through collaborative projects and captures significant commercial input. For projects within the range of the models, defect predictions are very accurate. This approach enables decision-makers to reason in a way that is not possible with regression-based models.",2007,1, 2597,Identifying and characterizing change-prone classes in two large-scale open-source products,"Developing and maintaining open-source software has become an important source of profit for many companies. Change-prone classes in open-source products increase project costs by requiring developers to spend effort and time. Identifying and characterizing change-prone classes can enable developers to focus timely preventive actions, for example, peer-reviews and inspections, on the classes with similar characteristics in the future releases or products. In this study, we collected a set of static metrics and change data at class level from two open-source projects, KOffice and Mozilla. Using these data, we first tested and validated Pareto's Law which implies that a great majority (around 80%) of change is rooted in a small proportion (around 20%) of classes. Then, we identified and characterized the change-prone classes in the two products by producing tree-based models. In addition, using tree-based models, we suggested a prioritization strategy to use project resources for focused preventive actions in an efficient manner. Our empirical results showed that this strategy was effective for prioritization purposes. This study should provide useful guidance to practitioners involved in development and maintenance of large-scale open-source products.",2007,1, 2598,Empirical Analysis of Software Fault Content and Fault Proneness Using Bayesian Methods,"We present a methodology for Bayesian analysis of software quality. We cast our research in the broader context of constructing a causal framework that can include process, product, and other diverse sources of information regarding fault introduction during the software development process. In this paper, we discuss the aspect of relating internal product metrics to external quality metrics. Specifically, we build a Bayesian network (BN) model to relate object-oriented software metrics to software fault content and fault proneness. Assuming that the relationship can be described as a generalized linear model, we derive parametric functional forms for the target node conditional distributions in the BN. These functional forms are shown to be able to represent linear, Poisson, and binomial logistic regression. 
The models are empirically evaluated using a public domain data set from a software subsystem. The results show that our approach produces statistically significant estimations and that our overall modeling method performs no worse than existing techniques.",2007,1, 2599,Overhead Analysis of Scientific Workflows in Grid Environments,"Scientific workflows are a topic of great interest in the grid community that sees in the workflow model an attractive paradigm for programming distributed wide-area grid infrastructures. Traditionally, the grid workflow execution is approached as a pure best effort scheduling problem that maps the activities onto the grid processors based on appropriate optimization or local matchmaking heuristics such that the overall execution time is minimized. Even though such heuristics often deliver effective results, the execution in dynamic and unpredictable grid environments is prone to severe performance losses that must be understood for minimizing the completion time or for the efficient use of high-performance resources. In this paper, we propose a new systematic approach to help the scientists and middleware developers understand the most severe sources of performance losses that occur when executing scientific workflows in dynamic grid environments. We introduce an ideal model for the lowest execution time that can be achieved by a workflow and explain the difference to the real measured grid execution time based on a hierarchy of performance overheads for grid computing. We describe how to systematically measure and compute the overheads from individual activities to larger workflow regions and adjust well-known parallel processing metrics to the scope of grid computing, including speedup and efficiency. We present a distributed online tool for computing and analyzing the performance overheads in real time based on event correlation techniques and introduce several performance contracts as quality-of-service parameters to be enforced during the workflow execution beyond traditional best effort practices. We illustrate our method through postmortem and online performance analysis of two real-world workflow applications executed in the Austrian grid environment.",2008,0, 2600,Provable Protection against Web Application Vulnerabilities Related to Session Data Dependencies,"Web applications are widely adopted and their correct functioning is mission critical for many businesses. At the same time, Web applications tend to be error prone and implementation vulnerabilities are readily and commonly exploited by attackers. The design of countermeasures that detect or prevent such vulnerabilities or protect against their exploitation is an important research challenge for the fields of software engineering and security engineering. In this paper, we focus on one specific type of implementation vulnerability, namely, broken dependencies on session data. This vulnerability can lead to a variety of erroneous behavior at runtime and can easily be triggered by a malicious user by applying attack techniques such as forceful browsing. This paper shows how to guarantee the absence of runtime errors due to broken dependencies on session data in Web applications. The proposed solution combines development-time program annotation, static verification, and runtime checking to provably protect against broken data dependencies. 
We have developed a prototype implementation of our approach, building on the JML annotation language and the existing static verification tool ESC/Java2, and we successfully applied our approach to a representative J2EE-based e-commerce application. We show that the annotation overhead is very small, that the performance of the fully automatic static verification is acceptable, and that the performance overhead of the runtime checking is limited.",2008,0, 2601,Using the Conceptual Cohesion of Classes for Fault Prediction in Object-Oriented Systems,"High cohesion is a desirable property of software as it positively impacts understanding, reuse, and maintenance. Currently proposed measures for cohesion in Object-Oriented (OO) software reflect particular interpretations of cohesion and capture different aspects of it. Existing approaches are largely based on using the structural information from the source code, such as attribute references, in methods to measure cohesion. This paper proposes a new measure for the cohesion of classes in OO software systems based on the analysis of the unstructured information embedded in the source code, such as comments and identifiers. The measure, named the Conceptual Cohesion of Classes (C3), is inspired by the mechanisms used to measure textual coherence in cognitive psychology and computational linguistics. This paper presents the principles and the technology that stand behind the C3 measure. A large case study on three open source software systems is presented which compares the new measure with an extensive set of existing metrics and uses them to construct models that predict software faults. The case study shows that the novel measure captures different aspects of class cohesion compared to any of the existing cohesion measures. In addition, combining C3 with existing structural cohesion metrics proves to be a better predictor of faulty classes when compared to different combinations of structural cohesion metrics.",2008,0, 2602,Software Reliability Analysis and Measurement Using Finite and Infinite Server Queueing Models,"Software reliability is often defined as the probability of failure-free software operation for a specified period of time in a specified environment. During the past 30 years, many software reliability growth models (SRGM) have been proposed for estimating the reliability growth of software. In practice, effective debugging is not easy because the fault may not be immediately obvious. Software engineers need time to read, and analyze the collected failure data. The time delayed by the fault detection & correction processes should not be negligible. Experience shows that the software debugging process can be described, and modeled using queueing system. In this paper, we will use both finite, and infinite server queueing models to predict software reliability. We will also investigate the problem of imperfect debugging, where fixing one bug creates another. Numerical examples based on two sets of real failure data are presented, and discussed in detail. Experimental results show that the proposed framework incorporating both fault detection, and correction processes for SRGM has a fairly accurate prediction capability.",2008,0, 2603,An Efficient Binary-Decision-Diagram-Based Approach for Network Reliability and Sensitivity Analysis,"Reliability and sensitivity analysis is a key component in the design, tuning, and maintenance of network systems. 
Tremendous research efforts have been expended in this area, but two practical issues, namely, imperfect coverage (IPC) and common-cause failures (CCF), have generally been missed or have not been fully considered in existing methods. In this paper, an efficient approach for fully incorporating both IPC and CCF into network reliability and sensitivity analysis is proposed. The challenges are to allow multiple failure modes introduced by IPC and to cope with multiple dependent faults caused by CCF simultaneously in the analysis. Our methodology for addressing the aforementioned challenges is to separate the consideration of both IPC and CCF from the combinatorics of the solution, which is based on reduced ordered binary decision diagrams (ROBDD). Due to the nature of the ROBDD and the separation of IPC and CCF from the solution combinatorics, our approach has a low computational complexity and is easy to implement. A sample network system is analyzed to illustrate the basics and advantages of our approach. A software tool that we developed for fault-tolerant network reliability and sensitivity analysis is also presented.",2008,0, 2604,Video Error Concealment Using Spatio-Temporal Boundary Matching and Partial Differential Equation,"Error concealment techniques are very important for video communication since compressed video sequences may be corrupted or lost when transmitted over error-prone networks. In this paper, we propose a novel two-stage error concealment scheme for erroneously received video sequences. In the first stage, we propose a novel spatio-temporal boundary matching algorithm (STBMA) to reconstruct the lost motion vectors (MV). A well defined cost function is introduced which exploits both spatial and temporal smoothness properties of video signals. By minimizing the cost function, the MV of each lost macroblock (MB) is recovered and the corresponding reference MB in the reference frame is obtained using this MV. In the second stage, instead of directly copying the reference MB as the final recovered pixel values, we use a novel partial differential equation (PDE) based algorithm to refine the reconstruction. We minimize, in a weighted manner, the difference between the gradient field of the reconstructed MB in current frame and that of the reference MB in the reference frame under given boundary condition. A weighting factor is used to control the regulation level according to the local blockiness degree. With this algorithm, the annoying blocking artifacts are effectively reduced while the structures of the reference MB are well preserved. Compared with the error concealment feature implemented in the H.264 reference software, our algorithm is able to achieve significantly higher PSNR as well as better visual quality.",2008,0, 2605,Classifying Software Changes: Clean or Buggy?,"This paper introduces a new technique for predicting latent software bugs, called change classification. Change classification uses a machine learning classifier to determine whether a new software change is more similar to prior buggy changes or clean changes. In this manner, change classification predicts the existence of bugs in software changes. The classifier is trained using features (in the machine learning sense) extracted from the revision history of a software project stored in its software configuration management repository. The trained classifier can classify changes as buggy or clean, with a 78 percent accuracy and a 60 percent buggy change recall on average. 
Change classification has several desirable qualities: 1) The prediction granularity is small (a change to a single file), 2) predictions do not require semantic information about the source code, 3) the technique works for a broad array of project types and programming languages, and 4) predictions can be made immediately upon the completion of a change. Contributions of this paper include a description of the change classification approach, techniques for extracting features from the source code and change histories, a characterization of the performance of change classification across 12 open source projects, and an evaluation of the predictive power of different groups of features.",2008,0, 2606,Data Quality Monitoring Framework for the ATLAS Experiment at the LHC,"Data quality monitoring (DQM) is an integral part of the data taking process of HEP experiments. DQM involves automated analysis of monitoring data through user-defined algorithms and relaying the summary of the analysis results to the shift personnel while data is being processed. In the online environment, DQM provides the shifter with current run information that can be used to overcome problems early on. During the offline reconstruction, more complex analysis of physics quantities is performed by DQM, and the results are used to assess the quality of the reconstructed data. The ATLAS data quality monitoring framework (DQMF) is a distributed software system providing DQM functionality in the online environment. The DQMF has a scalable architecture achieved by distributing the execution of the analysis algorithms over a configurable number of DQMF agents running on different nodes connected over the network. The core part of the DQMF is designed to have dependence only on software that is common between online and offline (such as ROOT) and therefore the same framework can be used in both environments. This paper describes the main requirements, the architectural design, and the implementation of the DQMF.",2008,0, 2607,Universal Adaptive Differential Protection for Regulating Transformers,"Since regulating transformers have proved to be efficient in controlling the power flow and regulating the voltage, they are more and more widely used in today's environment of energy production, transmission and distribution. This changing environment challenges protection engineers as well to improve the sensitivity of protection, so that low-current faults could be detected (like turn-to-turn short circuits in transformer windings) and a warning message could be given. Moreover, the idea of an adaptive protection that adjusts the operating characteristics of the relay system in response to changing system conditions has became much more promising. It improves the protection sensitivity and simplifies its conception. This paper presents an adaptive adjustment concept in relation to the position change of the on load tap changer for universal differential protection of regulating transformers; such a concept provides a sensitive and cost-efficient protection for regulating transformers. Various simulations are carried out with the Electro-Magnetic Transients Program/Alternative Transients Program. The simulation results indicate the functional efficiency of the proposed concept under different fault conditions; the protection is sensitive to low level intern faults. 
The paper concludes by describing the software implementation of the algorithm on a test system based on a digital signal processor.",2008,0, 2608,"A Risk-Based, Value-Oriented Approach to Quality Requirements","When quality requirements are elicited from stakeholders, they're often stated qualitatively, such as """"the response time must be fast"""" or """"we need a highly available system"""". However, qualitatively represented requirements are ambiguous and thus difficult to verify. The value-oriented approach to specifying quality requirements uses a range of potential representations chosen on the basis of assessing risk instead of quantifying everything.",2008,0, 2609,Efficient Quality Impact Analyses for Iterative Architecture Construction,"In this paper, we present an approach that supports efficient quality impact analyses in the context of iteratively constructed architectures. Since the number of established architectural strategies and the number of inter-related models heavily increase during iterative architecture construction, the impact analysis of newly introduced quality strategies during later stages becomes highly effort-intensive and error-prone. With our approach we mitigate the effort needed for such quality impact analyses by enabling efficient separation of concerns. For achieving efficiency, we developed an aspect-oriented approach that enables the automatic weaving of quality strategies into architectural artifacts. By doing so, we are able to conduct selective quality impact evaluations with significantly reduced effort.",2008,0, 2610,Automatic Rule Derivation for Adaptive Architectures,"This paper discusses on-going work in adaptive architectures concerning automatic adaptation rule derivation. Adaptation is rule-action based but deriving rules that meet the adaptation goals are tedious and error prone. We present an approach that uses model-driven derivation and training for automatically deriving adaptation rules, and exemplify this in an environment for scientific computing.",2008,0, 2611,Model-Based Gaze Direction Estimation in Office Environment,"In this paper, we present a model-based approach for gaze direction estimation in office environment. An overlapped elliptical model is used in detection of head, and Bayesian network model is used in estimation of gaze direction. The head consists of two regions which are face and hair region, and it can be represented by two overlapped ellipses. We use its spatial layout based on relative angle of two ellipses and size ratio of two ellipses as prior information for gaze direction estimation. In an image, the face regions are detected based on color and shape information, the hair regions are detected based on color information. The head is tracked by mean shift algorithm and adjustment method for image sequence. The performance of the proposed approach is illustrated on various image sequences obtained from office environment, and we show goodness of gaze direction estimation quality.",2008,0, 2612,Analysing source code: looking for useful verb-direct object pairs in all the right places,"The large time and effort devoted to software maintenance can be reduced by providing software engineers with software tools that automate tedious, error-prone tasks. However, despite the prevalence of tools such as IDEs, which automatically provide program information and automated support to the developer, there is considerable room for improvement in the existing software tools. 
The authors' previous work has demonstrated that using natural language information embedded in a program can significantly improve the effectiveness of various software maintenance tools. In particular, precise verb information from source code analysis is useful in improving tools for comprehension, maintenance and evolution of object-oriented code, by aiding in the discovery of scattered, action-oriented concerns. However, the precision of the extraction analysis can greatly affect the utility of the natural language information. The approach to automatically extracting precise natural language clues from source code in the form of verb- direct object (DO) pairs is described. The extraction process, the set of extraction rules and an empirical evaluation of the effectiveness of the automatic verb-DO pair extractor for Java source code are described.",2008,0, 2613,Length and readability of structured software engineering abstracts,"Attempts to perform systematic literature reviews have identified a problem with the quality of software engineering abstracts for papers describing empirical studies. Structured abstracts have been found useful for improving the quality of abstracts in many other disciplines. However, there have been no studies of the value of structured abstracts in software engineering. Therefore this paper aims to assess the comparative length and readability of unstructured abstracts and structured versions of the same abstract. Abstracts were obtained from all empirical conference papers from the Evaluation and Assessment in Software Engineering Conference (EASE04 and EASE06) that did not have a structured abstract (23 in total). Two novice researchers created structured versions of the abstracts, which were checked by the papers' authors (or a surrogate). Web tools were used to extract the length in words and readability in terms of the Flesch reading ease index and automated readability index (ARI) for the structured and unstructured abstracts. The structured abstracts were on average 142.5 words longer than the unstructured abstracts (p < 0.001). The readability of the structured abstracts was better by 8.5 points on the Flesch index (p < 0.001) and 1.8 points on the ARI (p < 0.003). The results are consistent with previous studies, although the increase in length and the increase in readability are both greater than the previous studies. Future work will consider whether structured abstracts increase the content and quality of abstracts.",2008,0, 2614,A Display Simulation Toolbox for Image Quality Evaluation,"The output of image coding and rendering algorithms are presented on a diverse array of display devices. To evaluate these algorithms, image quality metrics should include more information about the spatial and chromatic properties of displays. To understand how to best incorporate such display information, we need a computational and empirical framework to characterize displays. Here we describe a set of principles and an integrated suite of software tools that provide such a framework. The display simulation toolbox (DST) is an integrated suite of software tools that help the user characterize the key properties of display devices and predict the radiance of displayed images. Assuming that pixel emissions are independent, the DST uses the sub-pixel point spread functions, spectral power distributions, and gamma curves to calculate display image radiance. 
We tested the assumption of pixel independence for two liquid crystal device (LCD) displays and two cathode-ray tube (CRT) displays. For the LCD displays, the independence assumption is reasonably accurate. For the CRT displays it is not. The simulations and measurements agree well for displays that meet the model assumptions and provide information about the nature of the failures for displays that do not meet these assumptions.",2008,0, 2615,Designing for Recovery: New Challenges for Large-Scale Complex IT Systems,"Summary form only given. Since the 1980s, the object of design for dependability has been to avoid, detect or tolerate system faults so that these do not result in failures that are detectable outside the system. Whilst this is potentially achievable in medium size systems that are controlled by a single organisations, it is now practically impossible to achieve in large-scale systems of systems where different parts of the system are owned and controlled by different organisations. Therefore, we must accept the inevitability of failure and re-orient our system design strategies to recover from those failures at minimal cost and as quickly as possible. This talk will discuss why such recovery strategies cannot be purely technical but must be socio-technical in nature and argue that design for recovery will require a better understanding of how people recover from failure and the information they need during that recovery process. I will argue that supporting recovery should be a fundamental design objective of systems and explore what this means for current approaches to large-scale systems design.",2008,0, 2616,Experience Report on the Construction of Quality Models for Some Content Management Software Domains,"In previous work, we proposed the use of software quality models for driving the formulation of requirements in the context of software package selection. Now, we report two related projects of construction of software quality models in the domains of document management, entreprise content management and Web content management. These domains may be considered particular cases of a more general category sometimes labeled as content management. The goals of these projects are several. First, to assess the scalability of our methods and artifacts. Second, to investigate the degree of reusability when working on domains so closely related. Third, in relation to the previous one, to gain more knowledge of the adequacy and effectiveness of our notion of software domains taxonomy. Fourth, to evaluate the suitability and usability of our DesCOTS system proposed as tool-support for these activities.",2008,0, 2617,Assessing What Information Quality Means in OTS Selection Processes,"OTS selection plays a crucial role in the deployment of software systems. One of its main current problems is how to deal with the vast amount of unstructured, incomplete, evolvable and widespread information that highly increases the risks of taking a wrong decision. The goal of our research is to tackle these information quality problems for facilitating the collection, storage, retrieval, analysis and reuse of OTS related information. An essential issue in this endeavor is to assess what OTS selectors mean by Information Quality and their needs to perform an informed selection. 
Therefore, we are putting forward an on-line survey to get empirical data supporting our approach.",2008,0, 2618,Dynamic coupling measurement of object oriented software using trace events,"Software metrics are increasingly playing a central role in the planning and control of software development projects. Coupling measures have important applications in software development and maintenance. They are used to reason about the structural complexity of software and have been shown to predict quality attributes such as fault-proneness, ripple effects of changes and changeability. Coupling or dependency is the degree to which each program module relies on each one of the other modules. Coupling measures characterize the static usage dependencies among the classes in an object-oriented system. Traditional coupling measures take into account only """"static"""" couplings. They do not account for """"dynamic"""" couplings due to polymorphism and may significantly underestimate the complexity of software and misjudge the need for code inspection, testing and debugging. This is expected to result in poorer predictive accuracy of the quality models that utilize static coupling measurement. In this paper, We propose dynamic coupling measurement techniques. First the source code is introspected and all the functions are added with some trace events. Then the source code is compiled and allowed to run. During runtime the trace events are logged. This log report provides the actual function call information (AFCI) during the runtime. Based on AFCI the source code is filtered to arrive the actual runtime used source code (ARUSC). The ARUSC is then given for any standard coupling technique to get the dynamic coupling.",2008,0, 2619,Jitter-Buffer Management for VoIP over Wireless LAN in a Limited Resource Device,VoIP over WLAN is a promising technology as a powerful replacement for current local wireless telephony systems. Packet timing Jitter is a constant issue in QoS of IEEE802.11 networks and exploiting an optimum jitter handling algorithm is an essential part of any VoIP over WLAN (VoWiFi) devices especially for the low cost devices with limited resources. In this paper two common algorithms using buffer as a method for Jitter handling are analyzed with relation to different traffic patterns. The effect of different buffer sizes on the quality of voice will be assessed for these patterns. Various traffic patterns were generated using OPNET and Quality of output voice was evaluated based on ITU PESQ method. It was shown that an optimum voice quality can be attained using a circular buffer with a size of around twice that of a voice packet.,2008,0, 2620,Robust Estimation of Timing Yield with Partial Statistical Information on Process Variations,"This paper illustrates the application of distributional robustness theory to compute the worst-case timing yield of a circuit. Our assumption is that the probability distribution of process variables are unknown and only the intervals of the process variables and their class of distributions are available. We consider two practical classes to group potential distributions. We then derive conditions that allow applying the results of the distributional robustness theory to efficiently and accurately estimate the worst-case timing yield for each class. Compared to other recent works, our approach can model correlations among process variables and does not require knowledge of exact function form of the joint distribution function of process variables. 
While our emphasis is on robust timing yield estimation, our approach is also applicable to other types of parametric yield.",2008,0, 2621,An On-Demand Test Triggering Mechanism for NoC-Based Safety-Critical Systems,"As embedded and safety-critical applications begin to employ many-core SoCs using sophisticated on-chip networks, ensuring system quality and reliability becomes increasingly complex. Infrastructure IP has been proposed to assist system designers in meeting these requirements by providing various services such as testing and error detection, among others. One such service provided by infrastructure IP is concurrent online testing (COLT) of SoCs. COLT allows system components to be tested in-field and during normal operation of the SoC However, COLT must be used judiciously in order to minimize excessive test costs and application intrusion. In this paper, we propose and explore the use of an anomaly-based test triggering unit (ATTU) for on-demand concurrent testing of SoCs. On-demand concurrent testing is a novel solution to satisfy the conflicting design constraints of fault-tolerance and performance. Ultimately, this ensures the necessary level of design quality for safety-critical applications. To validate this approach, we explore the behavior of the ATTU using a NoC-based SoC simulator. The test triggering unit is shown to trigger tests from test infrastructure IP within 1 ms of an error occurring in the system while detecting 81% of errors, on average. Additionally, the ATTU was synthesized to determine area and power overhead.",2008,0, 2622,Quantified Impacts of Guardband Reduction on Design Process Outcomes,"The value of guardband reduction is a critical open issue for the semiconductor industry. For example, due to competitive pressure, foundries have started to incent the design of manufacturing-friendly ICs through reduced model guardbands when designers adopt layout restrictions. The industry also continuously weighs the economic viability of relaxing process variation limits in the technology roadmap [2]. Our work gives the first-ever quantification of the impact of modeling guardband reduction on outcomes from the synthesis, place and route (SP&R) implementation flow. We assess the impact of model guard- band reduction on various metrics of design cycle time and design quality, using open-source cores and production (specifically, ARM/TSMC) 90 nm and 65 nm technologies and libraries. Our experimental data clearly shows the potential design quality and turnaround time benefits of model guardband reduction. For example, we typically (i.e., on average) observe 13% standard-cell area reduction and 12% routed wirelength reduction as the consequence of a 40% reduction in library model guardband; 40% is the amount of guardband reduction reported by IBM for a variation-aware timing methodology [8]. We also assess the impact of guardband reduction on design yield. Our results suggest that there is justification for the design, EDA and process communities to enable guardband reduction as an economic incentive for manufacturing-friendly design practices.",2008,0, 2623,Greensand Mulling Quality Determination Using Capacitive Sensors,"Cast iron foundries typically use molds made from a mixture of sand, clay, and water called greensand. The mixing of the clay and water into the sand requires the clay to be smeared around the sand particles in a very thin layer, which allows the mold to have strength while allowing gas to escape. 
This mixing is called mulling and takes place in a machine called a muller. At present, the industry uses electrical resistance measurements to determine water quantity in the muller. The resistance measurements cannot accurately predict the quality of mulling due to binding of the water to sodium and calcium ions in the clay. Poorly mixed greensand has a high resistance when the water is concentrated in a few areas, while a medium mixed greensand has a lower resistance because the water is present between the sensors, and a well mulled sand has a higher resistance when the clay binds the water. This paper investigates the feasibility of using capacitive sensors to measure mulling quality using the simulation software Ansoft Maxwell. A second investigation of this paper is to find the ability of capacitance sensors to determine the drying effect of molds delayed in the casting process.",2008,0, 2624,Momentum-Based Motion Detection Methodology for Handoff in Wireless Networks,"This paper presents a novel motion detection scheme by using the momentum of received signal strength (MRSS) to improve the quality of handoff in a general wireless network. MRSS can detect the motion state of a mobile node (MN) without assistance of any positioning service. Although MRSS is sensitive in detecting user's motion, it is static and fails to detect quickly the motion changes of users. Thus, a novel motion state dependent MRSS scheme called dynamic MRSS (DMRSS) algorithm is proposed to address this issue. Extensive simulation experiments were conducted to study performance of our presented algorithms. The simulation results show that MRSS and DMRSS can be used to assist a handoff algorithm in substantially reducing unnecessary handoff and saving power.",2008,0, 2625,SOSRAID-6: A Self-Organized Strategy of RAID-6 in the Degraded Mode,"The distinct benefit of RAID-6 is that it provides higher reliability than the other RAID levels for tolerating double disk failures. However, when a disk fails, the read/write operations on the failed disk will be redirected to all the surviving disks, which will increase the burden of the surviving disks, the probability of the disk failure and the energy consumption along with the degraded performance issue. In this paper, we present a Self-Organized Strategy (SOS) to improve the performance of RAID-6 in the degraded mode. SOS organizes the data on the failed disks to the corresponding parity locations on first access. Then the later accesses to the failed disks will be redirected to the parity locations rather than all the surviving disks. Besides the performance improvement the SOSRAID-6 reduces the failure probability of the survived disks and is more energy efficient compared with the Traditional RAID-6. With the theoretical evaluation we find that the SOSRAID-6 is more powerful than the TRAID-6.",2008,0, 2626,Experiments with Analogy-X for Software Cost Estimation,"We developed a novel method called Analogy-X to provide statistical inference procedures for analogy-based software effort estimation. Analogy-X is a method to statistically evaluate the relationship between useful project features and target features such as effort to be estimated, which ensures the dataset used is relevant to the prediction problem, and project features are selected based on their statistical contribution to the target variables.
We hypothesize that this method can be (1) easily applied to a much larger dataset, and (2) also it can be used for incorporating joint effort and duration estimation into analogy, which was not previously possible with conventional analogy estimation. To test these two hypotheses, we conducted two experiments using different datasets. Our results show that Analogy-X is able to deal with ultra large datasets effectively and provides useful statistics to assess the quality of the dataset. In addition, our results show that feature selection for duration estimation differs from feature selection for joint-effort duration estimation. We conclude Analogy-X allows users to assess the best procedure for estimating duration given their specific requirements and dataset.",2008,0, 2627,An Empirical Study into Use of Dependency Injection in Java,"Over the years many guidelines have been offered as to how to achieve good quality designs. We would like to be able to determine to what degree these guidelines actually help. To do that, we need to be able to determine when the guidelines have been followed. This is often difficult as the guidelines are often presented as heuristics or otherwise not completely specified. Nevertheless, we believe it is important to gather quantitative data on the effectiveness of design guidelines wherever possible. In this paper, we examine the use of """"dependency injection"""", which is a design principle that is claimed to increase software design quality attributes such as extensibility, modifiability, testability, and reusability. We develop operational definitions for it and analysis techniques for detecting its use. We demonstrate these techniques by applying them to 34 open source Java applications.",2008,0, 2628,Checklist Based Reading's Influence on a Developer's Understanding,"This paper addresses the influence the checklist based reading inspection technique has on a developer's ability to modify inspected code. Traditionally, inspections have been used to detect defects within the development life cycle. This research identified a correlation between the number of defects detected and the successful code extensions for new functionality unrelated to the defects. Participants reported that having completed a checklist inspection, modifying the code was easier because the inspection had given them an understanding of the code that would not have existed otherwise. The results also showed a significant difference in how developers systematically modified code after completing a checklist inspection when compared to those who had not performed a checklist inspection. This study has shown that applying software inspections for purposes other than defect detection include software understanding and comprehension.",2008,0, 2629,Assessing Value of SW Requirements,"Understanding software requirements and customer needs is vital for all SW companies around the world. Lately clearly more attention has been focused also on the costs, cost-effectiveness, productivity and value of software development and products. This study outlines concepts, principles and process of implementing a value assessment for SW requirements. The main purpose of this study is to collect experiences whether the value assessment for product requirements is useful for companies, works in practice, and what are the strengths and weaknesses of using it. 
This is done by implementing value assessment in a case company step by step to see which phases possibly work and which phases possibly do not work. The practical industrial case shows that the proposed value assessment for product requirements is useful and supports companies trying to find value in their products.",2008,0, 2630,Automated Usability Testing Using HUI Analyzer,"In this paper, we present an overview of HUI Analyzer, a tool intended for automating usability testing. The tool allows a user interface's expected and actual use to be captured unobtrusively, with any mismatches indicating potential usability problems being highlighted. HUI Analyzer also supports specification and checking of assertions governing a user interface's layout and actual user interaction. Assertions offer a low cost means of detecting usability defects and are intended to be checked iteratively during a user interface's development. Hotspot analysis is a feature that highlights the relative use of GUI components in a form. This is useful in informing form layout, for example to collocate heavily used components thereby reducing unnecessary scrolling or movement. Based on evaluation, we have found HUI Analyzer's performance in detecting usability defects to be comparable to conventional formal user testing. However the time taken by HUI Analyzer to automatically process and analyze user interactions is much less than that for formal user testing.",2008,0, 2631,Integrating RTL IPs into TLM Designs Through Automatic Transactor Generation,"Transaction Level Modeling (TLM) is an emerging design practice for overcoming increasing design complexity. It aims at simplifying the design flow of embedded systems by designing and verifying a system at different abstraction levels. In this context, transactors play a fundamental role since they allow communication between the system components, implemented at different abstraction levels. Reuse of RTL IPs into TLM systems is a meaningful example of key advantage guaranteed by exploiting transactors. Nevertheless, transactor implementation is still manual, tedious and error-prone, and the effort spent to verify their correctness often overcomes the benefits of the TLM-based design flow. In this paper we present a methodology to automatically generate transactors for RTL IPs. We show how the transactor code can be automatically generated by exploiting the testbench of any RTL IP.",2008,0, 2632,Thermal Balancing Policy for Streaming Computing on Multiprocessor Architectures,"As feature sizes decrease, power dissipation and heat generation density exponentially increase. Thus, temperature gradients in multiprocessor systems on chip (MPSoCs) can seriously impact system performance and reliability. Thermal balancing policies based on task migration have been proposed to modulate power distribution between processing cores to achieve temperature flattening. However, in the context of MPSoC for multimedia streaming computing, where timeliness is critical, the impact of migration on quality of service must be carefully analyzed. In this paper we present the design and implementation of a lightweight thermal balancing policy that reduces on-chip temperature gradients via task migration. This policy exploits run-time temperature and load information to balance the chip temperature. Moreover, we assess the effectiveness of the proposed policy for streaming computing architectures using a cycle-accurate thermal-aware emulation infrastructure.
Our results using a real-life software defined radio multitask benchmark show that our policy achieves thermal balancing while keeping migration costs bounded.",2008,0, 2633,Software Protection Mechanisms for Dependable Systems,"We expect that in future commodity hardware will be used in safety critical applications. But the used commodity microprocessors will become less reliable because of decreasing feature size and reduced power supply. Thus software-implemented approaches to deal with unreliable hardware will be required. As one basic step to software-implemented hardware-fault tolerance (SIHFT) we aim at providing failure virtualization by turning arbitrary value failures caused by erroneous execution into crash failures which are easier to handle. Existing SIHFT approaches either are not broadly applicable or lack the ability to reliably deal with permanent hardware faults. In contrast, Forin [7] introduced the Vital Coded Microprocessor which reliably detects transient and permanent hardware errors but is not applicable to arbitrary programs and requires special hardware. We discuss different approaches to generalize Forin's approach and make it applicable to modern infrastructures.",2008,0, 2634,A Coverage-Based Handover Algorithm for High-speed Data Service,"4G supports various types of services. The coverage of high-speed data service is smaller than that of low-speed data service, which makes the high-speed data service users experience dropping before reaching the handover area. In order to solve the problem, this paper proposes A Coverage-based Handover Algorithm for High-speed Data Service (CBH), which extends the coverage of high-speed data service by reducing source rate and makes these users acquire """"transient coverage"""". Meanwhile, FSES (Faint Sub-carrier Elimination Strategy) is introduced, which utilizes the """"transient handover QoS"""" and reduces the handover effect to target cell for high-speed data service users. The simulation results show that the new algorithm can improve the whole system performance, reduce the handover dropping probability and new call blocking probability, and enhance the resource utilization ratio.",2008,0, 2635,CiCUTS: Combining System Execution Modeling Tools with Continuous Integration Environments,"System execution modeling (SEM) tools provide an effective means to evaluate the quality of service (QoS) of enterprise distributed real-time and embedded (DRE) systems. SEM tools facilitate testing and resolving performance issues throughout the entire development life-cycle, rather than waiting until final system integration. SEM tools have not historically focused on effective testing. New techniques are therefore needed to help bridge the gap between the early integration capabilities of SEM tools and testing so developers can focus on resolving strategic integration and performance issues, as opposed to wrestling with tedious and error-prone low-level testing concerns. This paper provides two contributions to research on using SEM tools to address enterprise DRE system integration challenges. First, we evaluate several approaches for combining continuous integration environments with SEM tools and describe CiCUTS, which combines the CUTS SEM tool with the CruiseControl.NET continuous integration environment.
Second, we present a case study that shows how CiCUTS helps reduce the time and effort required to manage and execute integration tests that evaluate QoS metrics for a representative DRE system from the domain of shipboard computing. The results of our case study show that CiCUTS helps developers and testers ensure the performance of an example enterprise DRE system is within its QoS specifications throughout development, instead of waiting until system integration time to evaluate QoS.",2008,0, 2636,Software Configuration Management for Product Derivation in Software Product Families,"A key process in software product line (SPL) engineering is product derivation, which is the process of building software products from a base set of core assets. During product derivation, the components in both core assets and derived software products are modified to meet needs for different functionality, platforms, quality attributes, etc. However, existing software configuration management (SCM) systems do not sufficiently support the derivation process in SPL. In this paper, we introduce a novel SCM system that is well-suited for product derivation in SPL. Our tool, MoSPL handles version management at the component level via its product versioning and data models. It explicitly manages logical constraints and derivation relations among components in both core assets and derived products, thus enabling the automatic propagation of changes in the core assets to their copies in derived products and vice versa. The system can also detect conflicting changes to different copies of components in software product lines.",2008,0, 2637,Fault-Based Web Services Testing,"Web services are considered a new paradigm for building software applications that has many advantages over the previous paradigms; however, Web services are still not widely used because Service Requesters do not trust Web services that were built by others. Testing can participate in solving this problem because it can be used to assess the quality attributes of Web services and hence increase the requesters' trustworthiness. This paper proposes an approach that can be used to test the robustness and other related attribute of Web services, and that can be easily enhanced to assess other quality attributes. The framework is based on rules for test case generation that are designed by, firstly, analyzing WSDL document to know what faults could affect the robustness quality attribute of Web services, and secondly, using the fault-based testing techniques to detect such faults. A proof of concept tool that depends on these rules has been implemented in order to assess the usefulness of the rules in detecting robustness faults in different Web services platforms.",2008,0, 2638,Investigating the Efficacy of Nonlinear Dimensionality Reduction Schemes in Classifying Gene and Protein Expression Studies,"The recent explosion in procurement and availability of high-dimensional gene and protein expression profile data sets for cancer diagnostics has necessitated the development of sophisticated machine learning tools with which to analyze them. While some investigators are focused on identifying informative genes and proteins that play a role in specific diseases, other researchers have attempted instead to use patients based on their expression profiles to prognosticate disease status. 
A major limitation in the ability to accurately classify these high-dimensional data sets stems from the """"curse of dimensionality,"""" occurring in situations where the number of genes or peptides significantly exceeds the total number of patient samples. Previous attempts at dealing with this issue have mostly centered on the use of a dimensionality reduction (DR) scheme, principal component analysis (PCA), to obtain a low-dimensional projection of the high-dimensional data. However, linear PCA and other linear DR methods, which rely on Euclidean distances to estimate object similarity, do not account for the inherent underlying nonlinear structure associated with most biomedical data. While some researchers have begun to explore nonlinear DR methods for computer vision problems such as face detection and recognition, to the best of our knowledge, few such attempts have been made for classification and visualization of high-dimensional biomedical data. The motivation behind this work is to identify the appropriate DR methods for analysis of high-dimensional gene and protein expression studies. Toward this end, we empirically and rigorously compare three nonlinear (Isomap, Locally Linear Embedding, and Laplacian Eigenmaps) and three linear DR schemes (PCA, Linear Discriminant Analysis, and Multidimensional Scaling) with the intent of determining a reduced subspace representation in which the individual object classes are more easily discriminable. Owing to the inherent nonlinear structure of gene and protein expression studies, our claim is that the nonlinear DR methods provide a more truthful low-dimensional representation of the data compared to the linear DR schemes. Evaluation of the DR schemes was done by 1) assessing the discriminability of two supervised classifiers (Support Vector Machine and C4.5 Decision Trees) in the different low-dimensional data embeddings and 2) using five cluster validity measures to evaluate the size, distance, and tightness of object aggregates in the low-dimensional space. For each of the seven evaluation measures considered, statistically significant improvement in the quality of the embeddings across 10 cancer data sets via the use of three nonlinear DR schemes over three linear DR techniques was observed. Similar trends were observed when linear and nonlinear DR was applied to the high-dimensional data following feature pruning to isolate the most informative features. Qualitative evaluation of the low-dimensional data embedding obtained via the six DR methods further suggests that the nonlinear schemes are better able to identify potential novel classes (e.g., cancer subtypes) within the data.",2008,0, 2639,Algorithm-Based Fault Tolerance for Fail-Stop Failures,"Fail-stop failures in distributed environments are often tolerated by checkpointing or message logging. In this paper, we show that fail-stop process failures in ScaLAPACK matrix-matrix multiplication kernel can be tolerated without checkpointing or message logging. It has been proved in previous algorithm-based fault tolerance that, for matrix-matrix multiplication, the checksum relationship in the input checksum matrices is preserved at the end of the computation no matter which algorithm is chosen. From this checksum relationship in the final computation results, processor miscalculations can be detected, located, and corrected at the end of the computation. However, whether this checksum relationship can be maintained in the middle of the computation or not remains open.
In this paper, we first demonstrate that, for many matrix matrix multiplication algorithms, the checksum relationship in the input checksum matrices is not maintained in the middle of the computation. We then prove that, however, for the outer product version algorithm, the checksum relationship in the input checksum matrices can be maintained in the middle of the computation. Based on this checksum relationship maintained in the middle of the computation, we demonstrate that fail-stop process failures (which are often tolerated by checkpointing or message logging) in ScaLAPACK matrix-matrix multiplication can be tolerated without checkpointing or message logging.",2008,0, 2640,Decision Reuse in an Interactive Model Transformation,"Propagating incremental changes and maintaining traceability are challenges for interactive model transformations, i.e. ones that combine automation with user decisions. After evolutionary changes to the source models the transformations have to be rerun. Earlier decisions cannot be used directly, because they may have been affected by the changes. Re-doing or verifying each decision manually is error-prone and burdensome. We present a way to model user interaction for transformations that are well (but not fully) understood. We model each decision as a set of options and their consequences. Also, we model the decision context, i.e. the circumstances (including model elements) affecting the decision. When a transformation is run, user decisions and their context are recorded. After a model change, a decision can be safely reused without burdening the user, if its context has not changed. The context maps source model elements to a decision, and thus provides traceability across the decision.",2008,0, 2641,Reengineering Idiomatic Exception Handling in Legacy C Code,"Some legacy programming languages, e.g., C, do not provide adequate support for exception handling. As a result, users of these legacy programming languages often implement exception handling by applying an idiom. An idiomatic style of implementation has a number of drawbacks: applying idioms can be fault prone and requires significant effort. Modern programming languages provide support for structured exception handling (SEH) that makes idioms largely obsolete. Additionally, aspect-oriented programming (AOP) is believed to further reduce the effort of implementing exception handling. This paper investigates the gains that can be achieved by reengineering the idiomatic exception handling of a legacy C component to these modern techniques. First, we will reengineer a C component such that its exception handling idioms are almost completely replaced by SEH constructs. Second, we will show that the use of AOP for exception handling can be beneficial, even though the benefits are limited by inconsistencies in the legacy implementation.",2008,0, 2642,Visual Detection of Design Anomalies,"Design anomalies, introduced during software evolution, are frequent causes of low maintainability and low flexibility to future changes. Because of the required knowledge, an important subset of design anomalies is difficult to detect automatically, and therefore, the code of anomaly candidates must be inspected manually to validate them. However, this task is time- and resource-consuming. We propose a visualization-based approach to detect design anomalies for cases where the detection effort already includes the validation of candidates. 
We introduce a general detection strategy that we apply to three types of design anomaly. These strategies are illustrated on concrete examples. Finally we evaluate our approach through a case study. It shows that performance variability against manual detection is reduced and that our semi-automatic detection has good recall for some anomaly types.",2008,0, 2643,Modularity-Oriented Refactoring,"Refactoring, in spite of widely acknowledged as one of the best practices of object-oriented design and programming, still lacks quantitative grounds and efficient tools for tasks such as detecting smells, choosing the most appropriate refactoring or validating the goodness of changes. This is a proposal for a method, supported by a tool, for cross-paradigm refactoring (e.g. from OOP to AOP), based on paradigm and formalism-independent modularity assessment.",2008,0, 2644,CSMR 2008 - Workshop on Software Quality and Maintainability (SQM 2008),"Software is playing a crucial role in modern societies. Not only do people rely on it for their daily operations or business, but for their lives as well. For this reason correct and consistent behaviour of software systems is a fundamental part of end user expectations. Additionally, businesses require cost-effective production, maintenance, and operation of their systems. Thus, the demand for software quality is increasing and is setting it as a differentiator for the success or failure of a software product. In fact, high quality software is becoming not just a competitive advantage but a necessary factor for companies to be successful. The main question that arises now is how quality is measured. What, where and when we assess and assure quality, are still open issues. Many views have been expressed about software quality attributes, including maintainability, evolvability, portability, robustness, reliability, usability, and efficiency. These have been formulated in standards such as ISO/IEC-9126 and CMM. However, the debate about quality and maintainability between software producers, vendors and users is ongoing, while organizations need the ability to evaluate from multiple angles the software systems that they use or develop. So, is """"software quality in the eye of the beholder""""? This workshop session aims at feeding into this debate by establishing what the state of the practice and the way forward is.",2008,0, 2645,Network provisioning over IP networks with call admission control schemes,"Multimedia applications are migrating to IP networks imposing high challenges on network planners. Challenges arise due to the stringent quality of service (QoS) requirements of multimedia applications that cannot be met over enterprise IP networks unless advanced techniques and strategies are applied. Example techniques are traffic differentiation, capacity evaluation and reservation, and call admission control. In this work, we assume practical call admission control (CAC) schemes and study by simulations the distribution of traffic inside the network. We show how capacity needs of traffic are affected when various CAC schemes are employed. 
Finally, we identify a procedure to evaluate the link capacity share for realtime traffic and study the resulting tradeoff between capacity needs and QoS parameters such as packet loss and blocking probability.",2008,0, 2646,OTSX: An extended transaction service in support of FT-CORBA standard,The FT-CORBA standard that has been adopted by OMG in recent years introduces mechanisms in support of increasing the availability of systems. This standard provides an infrastructure to detect faults and replicate distributed objects. In this paper we are going to share our experiences on implementing an extended transaction service (OTSX) which provides a set of specific features in support of FT-CORBA standard. These extensions allow distributed applications developed on top of FT-CORBA to run atomic operations on multiple object groups and ignore any faults that may occur in any object. The role of some affecting parameters like object size and failure rate is also studied and reported in this paper.,2008,0, 2647,Assessing quality of web based systems,"This paper proposes an assessment model for Web-based systems in terms of non-functional properties of the system. The proposed model consists of two stages: (i) deriving quality metrics using goal-question-metric (GQM) approach; and (ii) evaluating the metrics to rank a Web based system using multi-element component comparison analysis technique. The model ultimately produces a numeric rating indicating the relative quality of a particular Web system in terms of selected quality attributes. We decompose the quality objectives of the web system into sub goals, and develop questions in order to derive metrics. The metrics are then assessed against the defined requirements using an assessment scheme.",2008,0, 2648,An evaluation method for aspectual modeling of distributed software architectures,"Dealing with crosscutting requirements in software development usually makes the process more complex. Modeling and analyzing of these requirements in the software architecture facilitate detecting architectural risks early. Distributed systems have more complexity and so these facilities are much useful in development of such systems. Aspect oriented Architectural Description Languages (ADD) have emerged to represent solutions for discussed problems; nevertheless, imposing radical changes to existing architectural modeling methods is not easily acceptable by architects. Software architecture analysis methods, furthermore, intend to verify that the quality requirements have been addressed properly. In this paper, we enhance ArchC# through utilization of aspect features with an especial focus on Non-Functional Requirements (NFR). ArchC# is mainly focused on describing architecture of distributed systems; in addition, it unifies software architecture with an object- oriented implementation to make executable architectures. Moreover, in this paper, a comparative analysis method is presented for evaluation of the result. All of these materials are illustrated along with a case study.",2008,0, 2649,A Study on the Application of Patient Location Data for Ubiquitous Healthcare System based on LBS,"Ubiquitous means the environment that users can receive the medical treatment regardless of the location and time. As the quality of the life has been improved, we are more focusing on our health and people want to be treated with the arising trend of the ubiquitous. Along with the circumstance, the interest of the remote-treatment has been increasing. 
So, systems are being developed that can check patients' health status and treat them at a distance in real time. Now, more services are being requested that can detect the patient's location and utilize this information. We studied the """"health care system"""" that can be operated by detecting the location of the patient in an urgent situation, applying """"location based services"""" to the previous remote-treatment system. This system is a service which can help to detect the location of people or things through portable equipment based on a wireless communication network. With this system we can process and manage data at the hospital or emergency room at a distance by transferring bio-data such as ECG data and pulse data as well as the user's location information.",2008,0, 2650,Dynamic reconfiguration for Java applications using AOP,"One of the characteristics of contemporary software systems is their ability to adapt to evolutionary computing needs and environments. Dynamic reconfiguration is a way to make changes to software systems at runtime, while typical software changes involve the shutting down and rebooting of software systems. Therefore, dynamic reconfiguration can provide continuous availability for software systems. However, its processes are still complicated and error-prone due to the intervention of human beings. This research describes an aspect-oriented approach to dynamic reconfiguration for Java applications, providing software maintainers with systematic and controlled reconfiguration processes. The features of aspect-oriented programming systems, such as aspect weaving and code instrumentation, are appropriate to the problems of dynamic reconfiguration. This proposed approach is intended to minimize the efforts of software engineers, to enable automated dynamic reconfiguration, and to ensure the integrity of software systems. The primary domain of the research is component-based applications where the addition, removal, and replacement of components might be needed.",2008,0, 2651,Derivation of Fault Tolerance Measures of Self-Stabilizing Algorithms by Simulation,"Fault tolerance measures can be used to distinguish between different self-stabilizing solutions to the same problem. However, derivation of these measures via analysis suffers from limitations with respect to scalability of and applicability to a wide class of self-stabilizing distributed algorithms. We describe a simulation framework to derive fault tolerance measures for self-stabilizing algorithms which can deal with the complete class of self-stabilizing algorithms. We show the advantages of the simulation framework in contrast to the analytical approach not only by means of accuracy of results, range of applicable scenarios and performance, but also for investigation of the influence of schedulers on a meta level and the possibility to simulate large scale systems featuring dynamic fault probabilities.",2008,0, 2652,An Innovative Transient-Based Protection Scheme for MV Distribution Networks with Distributed Generation,This paper presents a new transient based scheme to detect single phase to ground faults (or grounded faults in general) in distribution systems with high penetration of distributed generation. This algorithm combines the fault direction based approach and the distance estimation based approach in order to determine the faulted section.
The wavelet coefficients of the transient fault currents measured at the interconnection points of the network are used to determine the direction of fault currents and to estimate the distance of the fault. The simulations have been carried out by using DigSilent software package and the results have been processed in MATLAB using the Wavelet Toolbox.,2008,0, 2653,NDT. A Model-Driven Approach for Web Requirements,"Web engineering is a new research line in software engineering that covers the definition of processes, techniques, and models suitable for Web environments in order to guarantee the quality of results. The research community is working in this area and, as a very recent line, they are assuming the Model-Driven paradigm to support and solve some classic problems detected in Web developments. However, there is a lack in Web requirements treatment. This paper presents a general vision of Navigational Development Techniques (NDT), which is an approach to deal with requirements in Web systems. It is based on conclusions obtained in several comparative studies and it tries to fill some gaps detected by the research community. This paper presents its scope, its most important contributions, and offers a global vision of its associated tool: NDT-Tool. Furthermore, it analyzes how Web Engineering can be applied in the enterprise environment. NDT is being applied in real projects and has been adopted by several companies as a requirements methodology. The approach offers a Web requirements solution based on a Model-Driven paradigm that follows the most accepted tendencies by Web engineering.",2008,0, 2654,Quality-Aware Retrieval of Data Objects from Autonomous Sources for Web-Based Repositories,"The goal of this paper is to develop a framework for designing good data repositories for Web applications. The central theme of our approach is to employ statistical methods to predict quality metrics. These prediction quantities can be used to answer important questions such as: How soon should the local repository be synchronized to have a quality of at least 90% precision with certain confidence level? Suppose the local repository was synchronized three days ago, how many objects could have been deleted at the remote source since then?",2008,0, 2655,A New Grid Resource Management Mechanism Based on D-S Theory,"There may be some malicious nodes in the Grid environment because the resource of nodes can access the Grid system freely. The number and qualities of this resource can change dramatically and optionally, so it is possible to affect the utilization of Grid resource. Hence, the trust mechanism is widely used in the management of Grid. This paper proposes a new method to detect the behaviors of resource providers in Grid environments based on D-S theory. Through simulating experiments, this mechanism can record the behaviors exactly and prevent the malicious ones. Then it gives a strong support for resource scheduling in the Grid.",2008,0, 2656,Resource Optimization for 60 GHz Indoor Networks Using Dynamic Extended Cell Formation,"With the advent of opening up 60 GHz Radio with 5 GHz of available spectrum, many bandwidth-hungry applications can be supported. The immediate concern, however, is the constraint on the line-of-sight transmission as well as the short transmission range of signals. As a result, in an indoor network, a mobile user might experience frequent breaks or losses of connection when one moves from one cell to another.
To mitigate this problem, the Extended Cell (EC) architecture is proposed. In this paper, a dynamic Extended Cell formation algorithm is proposed based on the actual floor plan and the traffic situation under the network. Moreover, by applying this dynamic EC formation algorithm, we show that the call blocking probability is reduced. The dynamic EC formation also eases the deployment and maintenance cost due to its adaptability to the changing environment.",2008,0, 2657,Model Based Requirements Specification and Validation for Component Architectures,"Requirements specification is a major component of the system development cycle. Mistakes and omissions in requirements documents lead to ambiguous or wrong interpretation by engineers and, in turn, cause errors that trickle down in design and implementation with consequences on the overall development cost. In this paper we describe a methodology for requirements specification that aims to alleviate the above issues and that produces models for functional requirements that can be automatically validated for completeness and consistency. This methodology is part of the Requirements Driven Design Automation framework (RDDA) that we develop for component-based system development. The RDDA framework uses an ontology-based language for semantic description of functional product requirements, UML/SysML structure diagrams, component constraints, and Quality of Service. The front end method for requirements specification is the SysML editor in Rhapsody. A requirements model in OWL is converted from SysML XMI representation. The specification is validated for completeness and consistency with a rule-based system implemented in Prolog. With our methodology, omissions and several types of consistency errors present in the requirements specification are detected early on, before the design stage.",2008,0, 2658,The Role of System Behavior in Diagnostic Performance,"The diagnostic performance of system built-in-test has historically suffered from deficiencies such as high false alarm rates, high undetected failure rates and high fault isolation ambiguity. In general these deficiencies impose a burden on maintenance resources and can affect mission readiness and effectiveness. Part of the problem has to do with the blurred distinction between physical faults and the test failures used to detect those faults. A greater part of the problem has to do with the test limits used to establish pass/fail criteria. If the limits do not reflect system behavior that is to be expected, given its current no fault (or fault) status, then a test fail result can often be a false alarm, and a test pass result can often constitute an undetected fault. A model based approach to prediction of system behavior can do much to alleviate the problem.",2008,0, 2659,Realization of an Adaptive Distributed Sound System Based on Global-Time-Based Coordination and Listener Localization,"This paper discusses the benefits of exploiting 1) the principle of global-time-based coordination of distributed computing actions (TCoDA) and 2) a high-level component-/object-based programming approach in developing real-time embedded computing software. The benefits are discussed in the context of a concrete case study. A new major type of distributed multimedia processing applications, called Adaptive Distributed Sound Systems (ADSSs), is presented here to show the compelling nature of the TCoDA exploitation. High-quality ADSSs impose stringent real-time distributed computing requirements.
They require a global-time base with precision better than 100 μs. For efficient implementation, the TMO programming scheme and associated tools are shown to be highly useful. In addition, a prototype TMO-based ADSS has been developed and its most important quality attributes have been empirically evaluated. The prototype ADSS has also turned out to be a cost-effective tool for assessing the quality of service of a TMO execution engine.",2008,0, 2660,Matrix converter: improvement on the start-up and the switching behavior,"The matrix converter (MC) presents a promising topology that needs to overcome certain barriers (protection systems, durability, the development of converters for real applications, etc.) in order to gain a foothold in the market. Taking into consideration that the great majority of efforts are being oriented towards control algorithms and modulation, this article focuses on MC hardware. In order to improve the switching speed of the MC and thus obtain signals with less harmonic distortion, several different IGBT excitation circuits are being studied. Below, the appropriate topology is selected for the MC and a configuration is presented which reduces the excursion range of the drivers and optimizes the switching speed of the IGBTs. Inadequate driver control can lead to the destruction of the MC due to its low ride-through capability. Moreover, this converter is specially sensitive during start-up, as at that moment there are high overcurrents and overvoltages. With the aim of finding a solution for starting-up the MC, a circuit is presented (separate from the control software) which ensures a correct sequencing of supplies, thus avoiding a short-circuit between input phases. Moreover, it detects overcurrent, connection/disconnection, and converter supply faults. Faults cause the circuit to protect the MC by switching off all the IGBT drivers without latency. All this operability is guaranteed even when the supply falls below the threshold specified by the manufacturers for the correct operation of the circuits. All these features are demonstrated with experimental results. For all these reasons, it can be said that the techniques proposed in this article substantially improve the MC start-up cycle, representing a step forward towards the development of reliable matrix converters for real applications.",2008,0, 2661,A new power quality detection device based on embedded technique,"A new kind of monitoring device on power quality based on digital signal processing (DSP) chip and advanced RISC machine (ARM) microprogrammed control unit (MCU) is introduced in this thesis. And its power quality detection arithmetic and its hardware realization and software realization are expatiated on. The device uses a kind of reformative FFT to measure harmonic frequency difference, with a digital filter method to survey voltage fluctuation and flicker, in a wavelet transform to detect transient power quality disturbance, on an embedded Linux operation system to develop management software, by DSP chip to implement power quality calculations and through ARM MCU to perform data management, so that the device can implement real-time, comprehensive and high-precision monitoring and management for all power quality parameters.
Thus, the device not only structures a self-governed power quality monitoring system, but also acts as a front placed part or local area network of a wide area power quality monitoring system.",2008,0, 2662,Three-phase four-wire DSTATCOM based on a three-dimensional PWM algorithm,"A modified voltage space vector pulse-width modulated (PWM) algorithm for three-phase four-wire distribution static compensator (DSTATCOM) is described in this paper. The mathematical model of shunt-connected three-leg inverter in three-phase four-wire system is studied in a-b-c frame. The p-q-o theory based on the instantaneous reactive power theory is applied to detect the reference current. A fast and generalized applicable three-dimensional space vector modulation (3DSVM) is proposed for controlling a three-leg inverter. The reference voltage vector is decomposed into an offset vector and a two-level vector. So identification of neighboring vectors and dwell times calculation are all settled by a general two-level 3DSVM control. This algorithm can also be applied to multilevel inverter. The zero-sequence component of each vector is considered in order to implement the neutral current compensation. The simulation is performed by EMTDC/PSCAD software. The neutral current, harmonics current, unbalance current and reactive current can be compensated. The result shows the validity of the proposed 3DSVM, which can be applied to compensate power quality problems in three-phase four-wire systems.",2008,0, 2663,Optimisation of wirebond interconnects by automated parameter variation,"A numerical optimisation strategy for interconnections in electronic packaging is demonstrated. The method is based on a toolbox for the parametric generation of finite-element models of package types such as Chip Scale Package (CSP), Micro Lead Package (MLP) or Ball Grid Array (BGA). The novelty of this work is the combination of this modeling toolbox with optimisation software for automatic parameter variation, resulting in a convenient tool to investigate the influence of geometry on the relevant quality characteristics of the device. Users can set the parameters to be varied, the ranges of parameter variation and the number of iterations. The optimisation software automatically generates the parameter sets depending on the number of iterations. The generation of a finite-element model for each parameter set, the meshing and the implementation of the required material properties are also automated by the toolbox. Thereafter, the simulation of the desired load conditions results in quality characteristics such as the maximum mechanical stress for each set. After completion of all iterations, the optimisation software provides a user interface for statistical analysis and graphic visualisation of the results. The wirebond geometry is also included in the toolbox. Influence on maximum mechanical stress and fatigue properties under thermal loads is examined during this study. As an example, the effect of the bonding tool geometry on the locations and the value of the maximum mechanical stress in the wirebond material during thermal shocking is determined. This combination of parametric finite-element model generation and automatic parameter variation represents a powerful tool for design automation in packaging technology and product development. The effects of several geometrical parameters on the thermal and mechanical behaviour of packaging interconnects can be predicted.
In a virtual product-development process, time- and cost-intensive prototyping and testing can thus be reduced.",2008,0, 2664,Interactive Software and Hardware Faults Diagnosis Based on Negative Selection Algorithm,"Both hardware and software of computer systems are subject to faults. However, traditional methods, ignoring the relationship between software fault and hardware fault, are ineffective to diagnose complex faults between software and hardware. On the basis of defining the interactive effect to describe the process of the interactive software and hardware fault, this paper presents a new matrix-oriented negative selection algorithm to detect faults. Furthermore, the row vector distance and matrix distance are constructed to measure elements between the self set and detector set. The experiment on a temperature control system indicates that the proposed algorithm has good fault detection ability, and the method is applicable to diagnose interactive software and hardware faults with small samples.",2008,0, 2665,Practical Combinatorial Testing: Beyond Pairwise,"With new algorithms and tools, developers can apply high-strength combinatorial testing to detect elusive failures that occur only when multiple components interact. In pairwise testing, all possible pairs of parameter values are covered by at least one test, and good tools are available to generate arrays with the value pairs. In the past few years, advances in covering-array algorithms, integrated with model checking or other testing approaches, have made it practical to extend combinatorial testing beyond pairwise tests. The US National Institute of Standards and Technology (NIST) and the University of Texas, Arlington, are now distributing freely available methods and tools for constructing large t-way combination test sets (known as covering arrays), converting covering arrays into executable tests, and automatically generating test oracles using model checking (http://csrc.nist.gov/acts). In this review, we focus on real-world problems and empirical results from applying these methods and tools.",2008,0, 2666,Fault Tolerant ICAP Controller for High-Reliable Internal Scrubbing,"High reliable reconfigurable applications today require system platforms that can easily and quickly detect and correct single event upsets. This capability, however, can be costly for FPGAs. This paper demonstrates a technique for detecting and repairing SEUs within the configuration memory of a Xilinx Virtex-4 FPGA using the ICAP interface. The internal configuration access port (ICAP) provides a port internal to the FPGA for configuring the FPGA device. An application note demonstrates how this port can be used for both error injection and scrubbing (L. Jones, 2007). We have extended this work to create a fault tolerant ICAP scrubber by triplicating the internal ICAP circuit using TMR and block memory scrubbing. This paper will describe the costs, benefits, and reliability of this fault-tolerant ICAP controller.",2008,0, 2667,Automated Generation and Assessment of Autonomous Systems Test Cases,"Verification and validation testing of autonomous spacecraft routinely culminates in the exploration of anomalous or faulted mission-like scenarios. Prioritizing which scenarios to develop usually comes down to focusing on the most vulnerable areas and ensuring the best return on investment of test time.
Rules-of-thumb strategies often come into play, such as injecting applicable anomalies prior to, during, and after system state changes; or, creating cases that ensure good safety-net algorithm coverage. Although experience and judgment in test selection can lead to high levels of confidence about the majority of a system's autonomy, it's likely that important test cases are overlooked. One method to fill in potential test coverage gaps is to automatically generate and execute test cases using algorithms that ensure desirable properties about the coverage. For example, generate cases for all possible fault monitors, and across all state change boundaries. Of course, the scope of coverage is determined by the test environment capabilities, where a faster-than-real-time, high-fidelity, software-only simulation would allow the broadest coverage. Even real-time systems that can be replicated and run in parallel, and that have reliable set-up and operations features provide an excellent resource for automated testing. Making detailed predictions for the outcome of such tests can be difficult, and when algorithmic means are employed to produce hundreds or even thousands of cases, generating predicts individually is impractical, and generating predicts with tools requires executable models of the design and environment that themselves require a complete test program. Therefore, evaluating the results of a large number of mission scenario tests poses special challenges. A good approach to address this problem is to automatically score the results based on a range of metrics. Although the specific means of scoring depends highly on the application, the use of formal scoring metrics has high value in identifying and prioritizing anomalies, and in presenting an overall picture of the state of the test program. In this paper we present a case study based on automatic generation and assessment of faulted test runs for the Dawn mission, and discuss its role in optimizing the allocation of resources for completing the test program.",2008,0, 2668,Using Sequence Diagrams to Detect Communication Problems between Systems,"Many software systems are evolving complex system of systems (SoS) for which inter-system communication is both mission-critical and error-prone. Such communication problems ideally would be detected before deployment. In a NASA-supported Software Assurance Research Program (SARP) project, we are researching a new approach addressing such problems. In this paper, we show that problems in the communication between two systems can be detected by using sequence diagrams to model the planned communication and by comparing the planned sequence to the actual sequence. We identify different kinds of problems that can be addressed by modeling the planned sequence using different levels of abstraction.",2008,0, 2669,A Methodology for Performance Modeling and Simulation Validation of Parallel 3-D Finite Element Mesh Refinement With Tetrahedra,"The design and implementation of parallel finite element methods (FEMs) is a complex and error-prone task that can benefit significantly by simulating models of them first. However, such simulations are useful only if they accurately predict the performance of the parallel system being modeled. The purpose of this contribution is to present a new, practical methodology for validation of a promising modeling and simulation approach for parallel 3-D FEMs.
To meet this goal, a parallel 3-D unstructured mesh refinement model is developed and implemented based on a detailed software prototype and parallel system architecture parameters in order to simulate the functionality and runtime behavior of the algorithm. Estimates for key performance measures are derived from these simulations and are validated with benchmark problem computations obtained using the actual parallel system. The results illustrate the potential benefits of the new methodology for designing high performance parallel FEM algorithms.",2008,0, 2670,Benchmarking Classification Models for Software Defect Prediction: A Proposed Framework and Novel Findings,"Software defect prediction strives to improve software quality and testing efficiency by constructing predictive classification models from code attributes to enable a timely identification of fault-prone modules. Several classification models have been evaluated for this task. However, due to inconsistent findings regarding the superiority of one classifier over another and the usefulness of metric-based classification in general, more research is needed to improve convergence across studies and further advance confidence in experimental results. We consider three potential sources for bias: comparing classifiers over one or a small number of proprietary data sets, relying on accuracy indicators that are conceptually inappropriate for software defect prediction and cross-study comparisons, and, finally, limited use of statistical testing procedures to secure empirical findings. To remedy these problems, a framework for comparative software defect prediction experiments is proposed and applied in a large-scale empirical comparison of 22 classifiers over 10 public domain data sets from the NASA Metrics Data repository. Overall, an appealing degree of predictive accuracy is observed, which supports the view that metric-based classification is useful. However, our results indicate that the importance of the particular classification algorithm may be less than previously assumed since no significant performance differences could be detected among the top 17 classifiers.",2008,1, 2671,Preventing Lithography-Induced Maverick Yield Events With A Dispense System Advanced Equipment Control Method,"As semiconductor manufacturers march to the drum beat of Moore's law there is very little room for yield mavericks, especially those that can be prevented. Critical process errors are costly and photolithography is one of the few processes in semiconductor manufacturing where there is an opportunity to correct errors. Small changes in photo resist dispensed volume may have severe impact on film thickness uniformity and can ultimately affect patterning. It is important to monitor photo-dispense conditions to detect real-time events that may have a direct negative impact on process yield and be able to react to these events as quickly as possible. This paper presents an evaluation of the IntelliGen® Mini, a photo resist dispense system manufactured by Entegris, Inc. This system utilizes advanced equipment control software, known as dispense confirmation, to detect variations in photo dispense. These variations, caused by bubbles in the dispense line, valve errors, and accidentally changed chemistries can all create maverick yield events that can go undetected until metrology, defect inspection, or wafer final test.
The ability of an advanced dispense system to detect events and create alerts is a very powerful tool, but it can be most effective when that information is collected and analyzed by an automated system. In a modern fabricator this is most likely a statistical process control chart that is monitoring a track's progress and is ready to stop the track when a maverick event occurs or alert personnel to trends they may not otherwise catch with other inline metrology data. Dispense confirmation, when combined with networking capabilities, can meet this need. After a brief description of the pump, data from simulated yield-affecting events will be examined to evaluate the IntelliGen® Mini's ability to detect them. This discussion will conclude with a brief analysis of the ultimate time and cost savings of utilizing dispense confirmation with networking capabilities to detect and eliminate poorly coated wafers.",2008,0, 2672,ATP-Based Automated Fault Simulation,"As a free Electromagnetic Transient (EMT) simulation program, the Alternative Transient Program (ATP) cannot simulate in batch mode. Manual operation is very boring and prone to error when thousands of faults are to be simulated. In order to automate the process, based on close observation and analysis of the operation mechanism of the ATP, the letter concludes some useful rules and develops a software package to automate the ATP-based EMT simulation.",2008,0, 2673,Hardening XDS-Based Architectures,"Healthcare is an information-intensive domain and therefore information technologies are playing an ever-growing role in this sector. They are expected to increase the efficiency of the delivery of healthcare services in order to both improve the quality and reduce the costs. In this context, security has been identified as a priority although several gaps still exist. This paper reports on the results of assessing the threats to XDS-based architectures. Accordingly, an architectural solution to the identified threats is presented.",2008,0, 2674,FEDC: Control Flow Error Detection and Correction for Embedded Systems without Program Interruption,"This paper proposes a new technique called CFEDC to detect and correct control flow errors (CFEs) without program interruption. The proposed technique is based on the modification of application software and minor changes in the underlying hardware. To demonstrate the effectiveness of CFEDC, it has been implemented on an OpenRISC 1200 as a case study. Analytical results for three workload programs show that this technique detects all CFEs and corrects on average about 81.6% of CFEs. These figures are achieved with zero error detection/correction latency. According to the experimental results, the overheads are generally low as compared to other techniques; the performance overhead and the memory overhead are on average 8.5% and 9.1%, respectively. The area overhead is about 4% and the power dissipation increases by the amount of 1.5% on average.",2008,0, 2675,Effective RTL Method to Develop On-Line Self-Test Routine for the Processors Using the Wavelet Transform,"In this paper, we introduce a new efficient register transfer level (RTL) method to develop on-line self-test routines. We consider some prioritizations to select the components and instructions of the processor. In addition, we choose test patterns based on spectral RTL test pattern generation (TPG) strategy. For the purpose of spectral analysis, we use the wavelet transform.
Also, we use a few extra instructions for the purpose of signature monitoring to detect control flow errors. We demonstrate that the combination of these three strategies is effective for developing small test programs with high fault coverage in a small test development time. In this case, we only need the instruction set architecture (ISA) and RTL information. Our method not only provides a simple and fast algorithm for on-line self-test applications, but also gains the advantages of lower memory usage and reduced test generation time complexity in comparison with previously proposed methods. We focus on the application of this approach to the Parwan processor. We develop a self-test routine for the Parwan processor using our proposed method and demonstrate the effectiveness of the methodology for on-line testing by presenting experimental results for this processor.",2008,0, 2676,A Value-Added Predictive Defect Type Distribution Model Based on Project Characteristics,"In software project management, there are three major factors to predict and control: size, effort, and quality. Much software engineering work has focused on these. When it comes to software quality, there are various possible quality characteristics of software, but in practice, quality management frequently revolves around defects, and delivered defect density has become the current de facto industry standard. Thus, research related to software quality has been focused on modeling residual defects in software in order to estimate software reliability. Currently, software engineering literature still does not have a complete defect prediction for a software product although much work has been performed to predict software quality. On the other hand, the number of defects alone cannot be sufficient information to provide the basis for planning quality assurance activities and assessing them during execution. That is, for project management to be improved, we need to predict other possible information about software quality such as in-process defects, their types, and so on. In this paper, we propose a new approach for predicting the distribution of defects and their types based on project characteristics in the early phase. For this approach, the model for prediction was established using the curve-fitting method and regression analysis. The maximum likelihood estimation (MLE) was used in fitting the Weibull probability density function to the actual defect data, and regression analysis was used in identifying the relationship between the project characteristics and the Weibull parameters. The research model was validated by cross-validation.",2008,0, 2677,Presenting A Method for Benchmarking Application in the Enterprise Architecture Planning Process Based on Federal Enterprise Architecture Framework,"One of the main challenges of the enterprise architecture planning process is that it is time consuming and, to some extent, yields unrealistic results under the heading of target architecture products. Using best practices in this area can be largely effective in speeding up and enhancing the quality of the results of enterprise architecture planning. Utilization of best practices has been recommended in most methodologies and enterprise architecture planning process guidelines, namely the EAP Methodology presented by Steven Spewak [14] and the BSP Methodology produced by IBM [15].
However, no process or specific method has been presented that would enable benchmarking at the enterprise architecture planning level. In this paper, a systematic and documented approach to employing benchmarking in the enterprise architecture planning process is presented, which can be used to assess equally successful enterprises as best practices in target architecture documentation or in building a transition plan within the enterprise architecture planning process. In order to have a basic and specific framework, and also because of their wide application in governmental and nongovernmental organizations, the federal enterprise architecture reference models are utilized, though other frameworks and their reference models can also be used. Results obtained from the proposed approach indicate reduced enterprise architecture planning time, especially for target architecture documentation, as well as reduced risk in this process and increased reliability in production.",2008,0, 2678,Admon: ViSaGe Administration And Monitoring Service For Storage Virtualization in Grid Environment,"This work is part of the ViSaGe project. This project concerns storage virtualization applied in the grid environment. Its goal is to create and manage virtual storage resources by aggregating geographically distributed physical storage resources shared on a Grid. To these shared resources, a quality of service will be associated and related to the data storage performance. In a Grid environment, sharing resources may improve performances if these resources are well managed and if the management software obtains sufficient knowledge about the grid resources workload (computing nodes, storage nodes and links). The grid resources workload is mainly perceived by a monitoring system. Several existing monitoring systems are available for monitoring the grid resources and applications. Each one provides information according to its aim. For example, Network Weather Service [6] is designed to monitor grid computing nodes and links. On the other hand, Netlogger [8] monitors grid resources and applications in order to detect application bottlenecks. These systems are useful for a post mortem analysis. However, in ViSaGe, we need a system that analyzes the state of the necessary nodes during execution time. This system is represented by the ViSaGe Administration and monitoring service """"Admon"""". In this paper, we present our scalable distributed system: Admon. Admon traces applications, and calculates system resource consumption (CPU, Disks, Links). Its originality consists in providing a new opportunity to improve virtual data storage performance by using workload constraints (e.g., maximum CPU usage percentage). Many constraints will be discussed (such as time constraints). These constraints allow performance improvement by assigning nodes to ViSaGe's jobs in an effective manner.",2008,0, 2679,Delay-Differentiated Gossiping in Delay Tolerant Networks,"Delay Tolerant Networks are increasingly being envisioned for a wide range of applications. Many of these applications need support for quality of service (QoS) differentiation from the network. This paper proposes a method for providing probabilistic delay assurances in DTNs. In particular, the paper presents a method called Delay-Differentiated Gossiping to assure a certain probability of meeting the packets' delay requirements while using as few network resources as possible.
The idea is to adapt a set of forwarding probabilities and time-to-live parameters to control the usage of network resources based on how the delay requirements are being met. Empirical results evaluating the effectiveness of the proposed method are also included. The results show that there are simple ways of assuring the delay requirements while making effective use of the network resources.",2008,0, 2680,Fault Tolerance Management for a Hierarchical GridRPC Middleware,"The GridRPC model is well suited for high performance computing on grids thanks to efficiently solving most of the issues raised by geographically and administratively split resources. Because of large scale, long range networks and heterogeneity, Grids are extremely prone to failures. GridRPC middleware are usually managing failures by using 1) TCP or other link network layer provided failure detector, 2) automatic checkpoints of sequential jobs and 3) a centralized stable agent to perform scheduling. Most recent developments have provided some new mechanisms like the optimal Chandra & Toueg & Aguillera failure detector, most numerical libraries now providing their own optimized checkpoint routine and distributed scheduling GridRPC architectures. In this paper we aim at adapting to these novelties by providing the first implementation and evaluation in a grid system of the optimal fault detector, a novel and simple checkpoint API allowing to manage both service provided checkpoint and automatic checkpoint (even for parallel services) and a scheduling hierarchy recovery algorithm tolerating several simultaneous failures. All those mechanisms are implemented and evaluated on a real grid in the DIET middleware.",2008,0, 2681,Fault Tolerance and Recovery of Scientific Workflows on Computational Grids,"In this paper, we describe the design and implementation of two mechanisms for fault-tolerance and recovery for complex scientific workflows on computational grids. We present our algorithms for over-provisioning and migration, which are our primary strategies for fault-tolerance. We consider application performance models, resource reliability models, network latency and bandwidth and queue wait times for batch-queues on compute resources for determining the correct fault-tolerance strategy. Our goal is to balance reliability and performance in the presence of soft real-time constraints like deadlines and expected success probabilities, and to do it in a way that is transparent to scientists. We have evaluated our strategies by developing a Fault-Tolerance and Recovery (FTR) service and deploying it as a part of the Linked Environments for Atmospheric Discovery (LEAD) production infrastructure. Results from real usage scenarios in LEAD show that the failure rate of individual steps in workflows decreases from about 30% to 5% by using our fault-tolerance strategies.",2008,0, 2682,Research and Development of Print Quality Inspection System of Biochips,"In order to automatically detect the print defects of PET board of a kind of biochip, integrated electro-mechanical method was employed to design the experimental platform for the recognition and marker of PET board, therefore the automatic control of mechanical inspection system was realized; CCD sensor was used to get the image of PET board, and specific computer software was developed to achieve the image processing and recognition of PET board; thus, the automatic inspection system of print quality of biochips was constructed. 
Practical results indicated that this system can precisely and effectively achieve real-time detection.",2008,0, 2683,Making an SCI fabric dynamically fault tolerant,"In this paper we present a method for dynamic fault tolerant routing for SCI networks implemented on Dolphin Interconnect Solutions hardware. By dynamic fault tolerance, we mean that the interconnection network reroutes affected packets around a fault, while the rest of the network is fully functional. To the best of our knowledge this is the first reported case of dynamic fault tolerant routing available on commercial off the shelf interconnection network technology without duplicating hardware resources. The development is focused around a 2-D torus topology, and is compatible with the existing hardware, and software stack. We look into the existing mechanisms for routing in SCI. We describe how to make the nodes that detect the faulty component do routing decisions, and what changes are needed in the existing routing to enable support for local rerouting. The new routing algorithm is tested on clusters with real hardware. Our tests show that distributed databases like MySQL can run uninterruptedly while the network reacts to faults. The solution is now part of Dolphin Interconnect Solutions SCI driver, and hardware development to further decrease the reaction time is underway.",2008,0, 2684,"Service replication in Grids: Ensuring consistency in a dynamic, failure-prone environment","A major challenge in a service-oriented environment as a Grid is fault tolerance. The more resources and services involved, the more complicated and error-prone becomes the system. Migol (Luckow and Schnor, 2008) is a Grid middleware, which addresses the fault tolerance of Grid applications and services. Migol's core component is its registry service called application information service (AIS). To achieve fault tolerance and high availability the AIS is replicated on different sites. Since a registry is a stateful Web service, the replication of the AIS is no trivial task. In this paper, we present our concept for active replication of Grid services. Migol's Replication Service uses a token-based algorithm and certificate-based security to provide secure group communication. Further, we show in different experiments that active replication in a real Grid environment is feasible.",2008,0, 2685,Model-based fault localization in large-scale computing systems,"We propose a new fault localization technique for software bugs in large-scale computing systems. Our technique always collects per-process function call traces of a target system, and derives a concise execution model that reflects its normal function calling behaviors using the traces. To find the cause of a failure, we compare the derived model with the traces collected when the system failed, and compute a suspect score that quantifies how likely a particular part of call traces explains the failure. The execution model consists of a call probability of each function in the system that we estimate using the normal traces. Functions with low probabilities in the model give high anomaly scores when called upon a failure. Frequently-called functions in the model also give high scores when not called. Finally, we report the function call sequences ranked with the suspect scores to the human analyst, narrowing further manual localization down to a small part of the overall system. 
We have applied our proposed method to fault localization of a known non-deterministic bug in a distributed parallel job manager. Experimental results on a three-site, 78-node distributed environment demonstrate that our method quickly locates an anomalous event that is highly correlated with the bug, indicating the effectiveness of our approach.",2008,0, 2686,Investigating software Transactional Memory on clusters,"Traditional parallel programming models achieve synchronization with error-prone and complex-to-debug constructs such as locks and barriers. Transactional Memory (TM) is a promising new parallel programming abstraction that replaces conventional locks with critical sections expressed as transactions. Most TM research has focused on single address space parallel machines, leaving the area of distributed systems unexplored. In this paper we introduce a flexible Java Software TM (STM) to enable evaluation and prototyping of TM protocols on clusters. Our STM builds on top of the ProActive framework and has as an underlying transactional engine the state-of-the-art DSTM2. It does not rely on software or hardware distributed shared memory for the execution. This follows the transactional semantics at object granularity level and its feasibility is evaluated with non-trivial TM-specific benchmarks.",2008,0, 2687,Improving software reliability and productivity via mining program source code,"A software system interacts with third-party libraries through various APIs. Insufficient documentation and constant refactorings of third-party libraries make API library reuse difficult and error prone. Using these library APIs often needs to follow certain usage patterns. These patterns aid developers in addressing commonly faced programming problems such as what checks should precede or follow API calls, how to use a given set of APIs for a given task, or what API method sequence should be used to obtain one object from another. Ordering rules (specifications) also exist between APIs, and these rules govern the secure and robust operation of the system using these APIs. These patterns and rules may not be well documented by the API developers. Furthermore, usage patterns and specifications might change with library refactorings, requiring changes in the software that reuses the library. To address these issues, we develop novel techniques (and their supporting tools) based on mining source code, assisting developers in productively reusing third party libraries to build reliable and secure software.",2008,0, 2688,PLP: Towards a realistic and accurate model for communication performances on hierarchical cluster-based systems,"Today, due to many reasons, such as the inherent heterogeneity, the diversity, and the continuous evolution of actual computational supports, writing efficient parallel applications on such systems represents a great challenge. One way to answer this problem is to optimize communications of such applications. Our objective within this work is to design a realistic model able to accurately predict the cost of communication operations on execution environments characterized by both heterogeneity and hierarchical structure. We principally aim to guarantee a good quality of prediction with negligible additional overhead.
The proposed model was applied to point-to-point and collective communication operations, and experiments on a hierarchical cluster-based system with heterogeneous resources showed that the predicted performances are close to the measured ones.",2008,0, 2689,Modifying Weber fraction law to postprocessing and edge detection applications,"A postprocessing scheme is proposed to enhance compressed images. The main objective is to obtain improvements that are pertinent to the properties of the human visual system. The proposed scheme implements the Weber fraction (also called contrast sensitivity) to enhance the appearance of the current block by incorporating information from adjacent blocks. The ratio ΔI/I is found between the mean of a line of pixels in the current block and two points, each residing on the boundary of an adjacent block, that are the continuation of the chosen line. To avoid biasing toward low intensity values and to preserve the symmetry of the sensitivity curve, I was replaced by the maximum of the actual mean value and the corresponding value of the negative image or simply max(mean, 255-mean). If ΔI/I is less than a threshold, the chosen line is replaced with one fitting the original data and the two boundary points. Although PSNR improvement is <0.3 dB, the resultant image is visually more pleasing as will be demonstrated experimentally. The algorithm can be easily modified to perform as an edge detection scheme by finding ΔI/I between any pixel and its 8 neighbours. The maximum is then taken. A new histogram thresholding is then applied to discriminate the edge pixels. Experimental results indicate a superior capability of the proposed scheme to detect edges of objects that are close in intensity to their background. Some comparisons with the Sobel operator are also demonstrated.",2008,0, 2690,"Relationships between Test Suites, Faults, and Fault Detection in GUI Testing","Software-testing researchers have long sought recipes for test suites that detect faults well. In the literature, empirical studies of testing techniques abound, yet the ideal technique for detecting the desired kinds of faults in a given situation often remains unclear. This work shows how understanding the context in which testing occurs, in terms of factors likely to influence fault detection, can make evaluations of testing techniques more readily applicable to new situations. We present a methodology for discovering which factors do statistically affect fault detection, and we perform an experiment with a set of test-suite- and fault-related factors in the GUI testing of two fielded, open-source applications. Statement coverage and GUI-event coverage are found to be statistically related to the likelihood of detecting certain kinds of faults.",2008,0, 2691,On the Predictability of Random Tests for Object-Oriented Software,"Intuition suggests that random testing of object-oriented programs should exhibit a significant difference in the number of faults detected by two different runs of equal duration. As a consequence, random testing would be rather unpredictable. We evaluate the variance of the number of faults detected by random testing over time. We present the results of an empirical study that is based on 1215 hours of randomly testing 27 Eiffel classes, each with 30 seeds of the random number generator.
Analyzing over 6 million failures triggered during the experiments, the study provides evidence that the relative number of faults detected by random testing over time is predictable but that different runs of the random test case generator detect different faults. The study also shows that random testing quickly finds faults: the first failure is likely to be triggered within 30 seconds.",2008,0, 2692,Traffic-aware Stress Testing of Distributed Real-Time Systems Based on UML Models in the Presence of Time Uncertainty,"In a previous work, we reported and experimented with a stress testing methodology to detect network traffic-related real-time (RT) faults in distributed real-time systems (DRTSs) based on the design UML models. The stress methodology, referred to as time-shifting stress test methodology (TSSTM), aimed at increasing the chances of discovering RT faults originating from network traffic overloads in DRTSs. The TSSTM uses the UML 2.0 model of a system under test (SUT), augmented with timing information, and is based on an analysis of the control flow in UML sequence diagrams. In order to devise deterministic test requirements (from a time point of view) that yield the maximum stress test scenario in terms of network traffic in a SUT, the TSSTM methodology requires that the timing information of messages in sequence diagrams is available and as precise as possible. In reality, however, the timing information of messages is not always available and precise. As we demonstrate using a case study in this work, the effectiveness of the stress test cases generated by TSSTM is very sensitive to such time uncertainty. In other words, TSSTM might generate imprecise and not necessarily maximum stressing test cases in the presence of such time uncertainty and, thus, it might not be very effective in revealing RT faults. To address the above limitation of TSSTM, we present in this article a modified testing methodology which can be used to stress test systems when the timing information of messages is imprecise or unpredictable. The stress test results of applying the new test methodology to a prototype DRTS indicate that, in the presence of uncertainty in timing information of messages, the new methodology is more effective in detecting RT faults when compared to our previous methodology (i.e., TSSTM) and also test cases based on an operational profile.",2008,0, 2693,UML Activity Diagram Based Testing of Java Concurrent Programs for Data Race and Inconsistency,"A data race occurs when multiple threads simultaneously access shared data without appropriate synchronization, and at least one access is a write. A system with a data race is nondeterministic and may generate different outputs even with the same input, according to different interleaving of data access. We present a model-based approach for detecting data races in concurrent Java programs. We extend UML Activity diagrams with data operation tags, to model program behavior. The program under test (PUT) is instrumented according to the model. It is then executed with random test cases generated based on path analysis of the model. Execution traces are reverse engineered and used for post-mortem verification. First, data races are identified by searching the time overlaps of entering and exiting critical sections of different threads. Second, implementation could be inconsistent with the design. The problem may tangle with race conditions and make it hard to detect races. We compare the event sequences with the behavior model for consistency checking.
Identified inconsistencies help debuggers locate the defects in the PUT. A prototype tool named tocAj implements the proposed approach and was successfully applied to several case studies.",2008,0, 2694,The Use of Intra-Release Product Measures in Predicting Release Readiness,"Modern business methods apply micro management techniques to all aspects of systems development. We investigate the use of product measures during the intra-release cycles of an application as a means of assessing release readiness. The measures include those derived from the Chidamber and Kemerer metric suite and some coupling measures of our own. Our research uses successive monthly snapshots during systems re-structuring, maintenance and testing cycles over a two year period on a commercial application written in C++. We examine the prevailing trends which the measures reveal at both component class and application level. By applying criteria to the measures we suggest that it is possible to evaluate the maturity and stability of the application thereby facilitating the project manager in making an informed decision on the application's fitness for release.",2008,0, 2695,An Evaluation of Two Bug Pattern Tools for Java,"Automated static analysis is a promising technique to detect defects in software. However, although considerable effort has been spent on developing sophisticated detection possibilities, the effectiveness and efficiency have not been treated in equal detail. This paper presents the results of two industrial case studies in which two tools based on bug patterns for Java are applied and evaluated. First, the economic implications of the tools are analysed. It is estimated that only 3-4 potential field defects need to be detected for the tools to be cost-efficient. Second, the capabilities of detecting field defects are investigated. No field defects have been found that could have been detected by the tools. Third, the identification of fault-prone classes based on the results of such tools is investigated and found to be possible. Finally, methodological consequences are derived from the results and experiences in order to improve the use of bug pattern tools in practice.",2008,0, 2696,An Empirical Study on Bayesian Network-based Approach for Test Case Prioritization,"A cost effective approach to regression testing is to prioritize test cases from a previous version of a software system for the current release. We have previously introduced a new approach for test case prioritization using Bayesian Networks (BN) which integrates different types of information to estimate the probability of each test case finding bugs. In this paper, we enhance our BN-based approach in two ways. First, we introduce a feedback mechanism and a new change information gathering strategy. Second, a comprehensive empirical study is performed to evaluate the performance of the approach and to identify the effects of using different parameters included in the technique. The study is performed on five open source Java objects. The obtained results show the relative advantage of using the feedback mechanism for some objects in terms of early fault detection. They also provide insight into the costs and benefits of the various parameters used in the approach.",2008,0, 2697,Test Instrumentation and Pattern Matching for Automatic Failure Identification,"An increasing emphasis on test automation presents test teams with a growing set of results to review and investigate.
The lifetime of a given failure typically spans multiple automation runs, making the necessary due-diligence effort time consuming, labor intensive, error prone and often redundant. When faced with this condition within their teams, the authors addressed the problem from two approaches: pattern recognition and instrumentation. Pattern recognition sought to automate some of the manual processes of failure identification while instrumentation provided a detailed and consistent description of a failure. In practice, the problem divides into three domains: instrumentation, annotation, and recognition. Instrumentation forms the basis for the solution. Annotation provides the ability to refine a given pattern based on the investigation results. Recognition forms the basis for identification. The effectiveness of the solution in a production environment will be discussed. Best practices and lessons learned will be shared.",2008,0, 2698,The Role of Stability Testing in Heterogeneous Application Environment,"This paper presents an approach to system stability tests performed in the Motorola private radio networks (PRN) department. The stability tests are among the crucial elements of the department's testing strategy, together with functional testing, regression testing and stress testing. The gravity of the subject is illustrated with an example of a serious system defect: a memory leak, which was detected in the Solaris operating system during system stability tests. The paper provides technical background essential to understand the problem, and emphasizes the role of the tests in solving the problem. The following approaches to memory leak detection are discussed: code review, memory debugging and system stability tests. The article presents several guidelines on stability test implementation and mentions the crucial elements of the PRN Department testing strategy: load definition, testing period and the system monitoring method.",2008,0, 2699,Collaborative Target Detection in Wireless Sensor Networks with Reactive Mobility,"Recent years have witnessed the deployment of wireless sensor networks in a class of mission-critical applications such as object detection and tracking. These applications often impose stringent QoS requirements including high detection probability, low false alarm rate and bounded detection delay. Although a dense all-static network may initially meet these QoS requirements, it does not adapt to unpredictable dynamics in network conditions (e.g., coverage holes caused by death of nodes) or physical environments (e.g., changed spatial distribution of events). This paper exploits reactive mobility to improve the target detection performance of wireless sensor networks. In our approach, mobile sensors collaborate with static sensors and move reactively to achieve the required detection performance. Specifically, mobile sensors initially remain stationary and are directed to move toward a possible target only when a detection consensus is reached by a group of sensors. The accuracy of the final detection result is then improved as the measurements of mobile sensors have higher signal-to-noise ratios after the movement. We develop a sensor movement scheduling algorithm that achieves near-optimal system detection performance within a given detection delay bound.
The effectiveness of our approach is validated by extensive simulations using the real data traces collected by 23 sensor nodes.",2008,0, 2700,Adaptation of high level behavioral models for stuck-at coverage analysis,"There has been increasing effort in recent years for defining test strategies at the behavioral level. Due to the lack of suitable coverage metrics and tools to assess the quality of a testbench, these strategies have not been able to play an important role in stuck-at fault simulation. The work we are presenting here proposes a new coverage metric that employs back-annotation of post-synthesis design properties into pre-synthesis simulation models to estimate the stuck-at fault coverage of a testbench. The effectiveness of this new metric is evaluated for several example circuits. The results show that the new metric provides a good evaluation of high level testbenches for detection of stuck-at faults.",2008,0, 2701,A system architecture for collaborative environmental modeling research,"This relates to early stage research that aims to build an integrated toolbox of instruments that can be used for environmental modeling tasks. The application area described is grape growing and wine production. A comparative study including data gathered in both New Zealand and Chile is described. Using both passive and sensor technology, data is gathered from the atmosphere, vines, and soil. Human sensory perceptions relating to wine taste and quality are also gathered. The project proposes a synthesizer which collects and analyzes data in real time. Computational neural network modeling methods and geographic information systems are used for result depiction. This convergence of computational techniques and information processing methods is proposed as being an example of software and systems collaboration. The project called Eno-Humanas is so named because of the blend of the precise enological data and less qualitative human perception data. It is expected that the discrete input elements of the architecture here will be demonstrably dependency-related and derived from correlation values once data gathering instruments and analytical software have been developed. At this stage of the project, these tools and methods are being built and tested. This is the first stage of the project and of the proposed research that will come from it, in order to answer wide questions such as the ordinal set of data values necessarily present to predict climate conditions and the relationship between vine sap rise and dew point calibrations, towards addressing the popular question of 'what makes for a good year for wine'. In addition to the bringing together of various technologies, methods and kinds of data (geo-referential, climatic, atmospheric, terrain, plant biological and qualitative sensory expressions), the paper also describes an international research collaboration and its parameters.",2008,0, 2702,Enabling executable architecture by improving the foundations of DoD architecting,"Architecture is an intrinsic quality or property of a system. It consists of the arrangement and inter-relationships, both static and dynamic, among the components of the system as well as their externally visible properties. We commonly think of this property as the structure or form of the system. Through the practice of architecting, we seek to make apparent the architecture of a system through the creation of architecture descriptions. Architecture descriptions are representations or conceptualizations of the form of a system.
In architecting, our goal is to understand, affect, predict, or manage this architecture property in order to achieve other system properties that are dependent upon it. In creating such descriptions, we often employ an architecture framework as a way of conceptualizing the form of the system. A framework consists of a set of assumptions, concepts, values, and practices that constitutes a way of viewing reality. Applying an architecture framework results in creation of a representation of the system that is at least two steps removed from the reality of the system: first, through our interpretation of that reality, and second, through the application of a framework to shape our interpretation. Most of the architecture descriptions produced by DoD architects are static. They portray system properties at a single point in time. However, system properties may in fact change over time due to the interaction of components of the system's architecture via their established relationships. Successfully achieving the goal of creating executable system models at various phases of a system's life cycle is principally dependent upon expressing such models at a sufficient level of formality and characterization.",2008,0, 2703,Adaptive Kalman Filtering for anomaly detection in software appliances,"Availability and reliability are often important features of key software appliances such as firewalls, web servers, etc. In this paper we seek to go beyond the simple heartbeat monitoring that is widely used for failover control. We do this by integrating more fine grained measurements that are readily available on most platforms to detect possible faults or the onset of failures. In particular, we evaluate the use of adaptive Kalman Filtering for automated CPU usage prediction that is then used to detect abnormal behaviour. Examples from experimental tests are given.",2008,0, 2704,A Survey of Automated Techniques for Formal Software Verification,"The quality and the correctness of software are often the greatest concern in electronic systems. Formal verification tools can provide a guarantee that a design is free of specific flaws. This paper surveys algorithms that perform automatic static analysis of software to detect programming errors or prove their absence. The three techniques considered are static analysis with abstract domains, model checking, and bounded model checking. A short tutorial on these techniques is provided, highlighting their differences when applied to practical problems. This paper also surveys tools implementing these techniques and describes their merits and shortcomings.",2008,0, 2705,Assessment of a New High-Performance Small-Animal X-Ray Tomograph,"We have developed a new X-ray cone-beam tomograph for in vivo small-animal imaging using a flat panel detector (CMOS technology with a microcolumnar CsI scintillator plate) and a microfocus X-ray source. The geometrical configuration was designed to achieve a spatial resolution of about 12 lpmm with a field of view appropriate for laboratory rodents. In order to achieve high performance with regard to per-animal screening time and cost, the acquisition software takes advantage of the highest frame rate of the detector and performs on-the-fly corrections on the detector raw data. These corrections include geometrical misalignments, sensor non-uniformities, and defective elements. The resulting image is then converted to attenuation values. 
We measured detector modulation transfer function (MTF), detector stability, system resolution, quality of the reconstructed tomographic images and radiated dose. The system resolution was measured following the standard test method ASTM E 1695 -95. For image quality evaluation, we assessed signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) as a function of the radiated dose. Dose studies for different imaging protocols were performed by introducing TLD dosimeters in representative organs of euthanized laboratory rats. Noise figure, measured as standard deviation, was 50 HU for a dose of 10 cGy. Effective dose with standard research protocols is below 200 mGy, confirming that the system is appropriate for in vivo imaging. Maximum spatial resolution achieved was better than 50 micron. Our experimental results obtained with image quality phantoms as well as with in-vivo studies show that the proposed configuration based on a CMOS flat panel detector and a small micro-focus X-ray tube leads to a compact design that provides good image quality and low radiated dose, and it could be used as an add-on for existing PET or SPECT scanners.",2008,0, 2706,Notice of Violation of IEEE Publication Principles
Dynamic Binding Framework for Adaptive Web Services,"
Dynamic selection and composition of autonomous and loosely-coupled Web services is increasingly used to automate business processes. The typical long-running characteristic of business processes imposes new management challenges such as dynamic adaptation of running process instances. To address this, we developed a policy-based framework, named manageable and adaptable service compositions (MASC), to declaratively specify policies that govern: (1) discovery and selection of services to be used, (2) monitoring to detect the need for adaptation, (3) reconfiguration and adaptation of the process to handle special cases (e.g., context-dependent behavior) and recover from typical faults in service-based processes. The identified constructs are executed by a lightweight service-oriented management middleware named MASC middleware. We implemented a MASC proof-of-concept prototype and evaluated it on stock trading case study scenarios. We conducted extensive studies to demonstrate the feasibility of the proposed techniques and illustrate the benefits of our approach in providing adaptive composite services using the policy-based approach. Our performance and scalability studies indicate that MASC middleware is scalable and the introduced overheads are acceptable.",2008,0, 2707,Generic CSSA-Based Pattern over Boolean Data for an Improved WS-BPEL to Petri Net Mapping,"Formal methods, like Petri nets, provide a means to analyse BPEL processes, detecting weaknesses and errors in the process model already at design-time. However, in most approaches proposed so far, the analysis is restricted to the control flow only. Analysing quality properties of BPEL processes might therefore yield false-negative results. In this paper, we are presenting an enhanced BPEL to Petri net mapping that incorporates relevant data aspects by doing a CSSA-based analysis and applying novel Petri net patterns. The resulting formal model allows for a more precise analysis of critical properties, such as controllability and behavioural compatibility.",2008,0, 2708,Prediction-based Haptic Data Reduction and Compression in Tele-Mentoring Systems,"In this paper, a novel haptic data reduction and compression technique to reduce haptic data traffic in networked haptic tele-mentoring systems is presented. The suggested method follows a two-step procedure: (1) haptic data packets are not transmitted when they can be predicted within a predefined tolerable error; otherwise, (2) data packets are compressed prior to transmission. The prediction technique relies on the least-squares method. Knowledge from human haptic perception is incorporated into the architecture to assess the perceptual quality of the prediction results. Packet-payload compression is performed using uniform quantization and adaptive Golomb-Rice codes. The preliminary experimental results demonstrate the algorithm's effectiveness as great haptic data reduction and compression is achieved, while preserving the overall quality of the tele-mentoring environment.",2008,0, 2709,Traffic and Quality Characterization of Single-Layer Video Streams Encoded with the H.264/MPEG-4 Advanced Video Coding Standard and Scalable Video Coding Extension,"The recently developed H.264/AVC video codec with the scalable video coding (SVC) extension compresses non-scalable (single-layer) and scalable video significantly more efficiently than MPEG-4 Part 2.
Since the traffic characteristics of encoded video have a significant impact on its network transport, we examine the bit rate-distortion and bit rate variability-distortion performance of single-layer video traffic of the H.264/AVC codec and SVC extension using long CIF resolution videos. We also compare the traffic characteristics of the hierarchical B frames (SVC) versus classical B frames. In addition, we examine the impact of frame size smoothing on the video traffic to mitigate the effect of bit rate variabilities. We find that compared to MPEG-4 Part 2, the H.264/AVC codec and SVC extension achieve lower average bit rates at the expense of significantly increased traffic variabilities that remain at a high level even with smoothing. Through simulations we investigate the implications of this increase in rate variability on (i) frame losses when transmitting a single video, and (ii) a bufferless statistical multiplexing scenario with restricted link capacity and information loss. We find increased frame losses, and rate-distortion/rate-variability/encoding complexity tradeoffs. We conclude that solely assessing bit rate-distortion improvements of video encoder technologies is not sufficient to predict the performance in specific networked application scenarios.",2008,0, 2710,Biological Sensor System Design for Gymnasium Indoor Air Protection,"Increasing attention is being directed to the vulnerability of public buildings and national defense facilities to terrorist attack or the accidental release of biological pathogens. Many biological sensors have been developed for protecting the indoor air quality. However, there is a lack of fundamental system-level research on developing sensor networks for indoor air protection. The optimal design of a sensor system is affected by sensor parameters, such as sensitivity, probability of correct detection, false positive rate, and response time. This study applies CONTAM in the sensor system design. Common building biological attack scenarios are simulated for a gymnasium. A genetic algorithm (GA) is then applied to optimize the sensor sensitivity, location, and quantity, thus achieving the best system behavior and reducing the total system cost. Assuming that each attack scenario has the same probability of occurrence, optimal system designs that account for the simulated possible attack scenarios are obtained.",2008,0, 2711,Structure and Interpretation of Computer Programs,"Call graphs depict the static, caller-callee relation between """"functions"""" in a program. With most source/target languages supporting functions as the primitive unit of composition, call graphs naturally form the fundamental control flow representation available to understand/develop software. They are also the substrate on which various inter-procedural analyses are performed and are an integral part of program comprehension/testing. Given their universality and usefulness, it is imperative to ask if call graphs exhibit any intrinsic graph theoretic features - across versions, program domains and source languages.
This work is an attempt to answer these questions: we present and investigate a set of meaningful graph measures that help us understand call graphs better; we establish how these measures correlate, if any, across different languages and program domains; we also assess the overall, language independent software quality by suitably interpreting these measures.",2008,0, 2712,Full automated packaging of high-power diode laser bars,"Full automated packaging of high power diode laser bars on passive or micro channel heat sinks requires a high precision measurement and handling technology. The metallurgic structure of the solder and intrinsic stress of the laser bar are largely influenced by the conditions of the mounting process. To avoid thermal deterioration, the tolerance for the overhang between laser bar and heat sink is a few microns at maximum. Due to the growing number of applications and systems, there is a need for automatic manufacturing, not just for cost efficiency but also for yield and product quality reasons. In this paper we describe the demands on fully automated packaging, the realized design and finally the test results of bonded devices. The design of the automated bonding system includes an air-cushioned, 8-axis system on a granite frame. Each laser bar is picked up by a vacuum tool from a special tray or directly out of the gel pak. The reflow oven contains a ceramic heater with low thermal capacity and reaches a maximum of 400°C with a heating rate up to 100 K/s and a cooling rate up to 20 K/s. It is suitable for all common types of heat sinks and submounts which are fixed onto the heater by vacuum. The soldering process is performed under atmospheric pressure, while the oven is filled with inert gas. Additionally, reactive gases can be used to perform the reduction of the solder. Three high precision optical sensors for distance measurement detect the relative position of laser bar and heat sink. The high precision alignment uses a special algorithm for final positioning. For the alignment of the tilt and roll angles between the laser bar and the heat sink, two optical distance sensors and the two goniometers below the oven are used. To detect the angular orientation of the heat sink's upper surface, a downwards looking optical sensor system is used. The upwards pointing optical sensor is used to measure the orientation of the laser bar's lower side. These measurements provide the data needed to calculate the angles that the heat sink needs to be tilted and rolled by the two goniometers, in order to get its upper surface parallel to the lower surface of the laser bar. For the measurement of the laser bar overhang and yaw, an optical distance sensor is mounted in front of the oven. Overhang and yaw are aligned by using high precision rotary and translation stages. A software tool calculates the displacement necessary to get a parallel orientation and a desired overhang of the laser bar relative to the heat sink. A post bonding accuracy of +/- 1 micron and of +/- 0.2 mrad respectively is achieved. To demonstrate the performance and reliability of the bonding system, the bonded devices were characterized by tests such as smile test, shear test, and burn-in test.
The results will be presented, as well as additional aspects of automated manufacturing such as part identification and part tracking.",2008,0, 2713,Clustering Analysis for the Management of Self-Monitoring Device Networks,"The increasing computing and communication capabilities of multi-function devices (MFDs) have enabled networks of such devices to provide value-added services. This has placed stringent QoS requirements on the operations of these device networks. This paper investigates how the computational capabilities of the devices in the network can be harnessed to achieve self-monitoring and QoS management. Specifically, the paper investigates the application of clustering analysis for detecting anomalies and trends in events generated during device operation, and presents a novel decentralized cluster and anomaly detection algorithm. The paper also describes how the algorithm can be implemented within a device overlay network, and demonstrates its performance and utility using simulated as well as real workloads.",2008,0, 2714,Automating ITSM Incident Management Process,"Service desks are used by customers to report IT issues in enterprise systems. Most of these service requests are resolved by level-1 persons (service desk attendants) by providing information/quick-fix solutions to customers. For each service request, level-1 personnel identify important keywords and see if the incoming request is similar to any historic incident. Otherwise, an incident ticket is created and, with other related information, forwarded to the incident's subject matter expert (SME). The incident management process is used for managing the life cycle of all incidents. An organization spends lots of resources to keep its IT resources incident free and, therefore, timely resolution of incoming incidents is required to attain that objective. Currently, the incident management process is largely manual, error prone and time consuming. In this paper, we use information integration techniques and machine learning to automate various processes in the incident management workflow. We give a method for correlating the incoming incident with configuration items (CIs) stored in the configuration management database (CMDB). Such a correlation can be used for correctly routing the incident to SMEs, incident investigation and root cause analysis. In our technique, we discover relevant CIs by exploiting the structured and unstructured information available in the incident ticket. We present an efficient algorithm which gives more than 70% improvement in the accuracy of identifying the failing component by efficiently browsing relationships among CIs.",2008,0, 2715,Goal-Centric Traceability: Using Virtual Plumblines to Maintain Critical Systemic Qualities,"Successful software development involves the elicitation, implementation, and management of critical systemic requirements related to qualities such as security, usability, and performance. Unfortunately, even when such qualities are carefully incorporated into the initial design and implemented code, there are no guarantees that they will be consistently maintained throughout the lifetime of the software system. Even though it is well known that system qualities tend to erode as functional and environmental changes are introduced, existing regression testing techniques are primarily designed to test the impact of change upon system functionality rather than to evaluate how it might affect more global qualities.
The concept of using goal-centric traceability to establish relationships between a set of strategically placed assessment models and system goals is introduced. This paper describes the process, algorithms, and techniques for utilizing goal models to establish executable traces between goals and assessment models, detect change impact points through the use of automated traceability techniques, propagate impact events, and assess the impact of change upon systemic qualities. The approach is illustrated through two case studies.",2008,0, 2716,Study on Fuzzy Theory Based Web Access Control Model,"Along with the constant development of online services, the application of the traditional RBAC model for user-role assignment and the maintenance of the user-role assignment become arduous and error-prone tasks. In order to solve these problems, this paper proposes a trust-based user-role assignment model for assigning roles to users. It is based on the users' trustworthiness in the system, which is a fuzzy concept; the application of fuzzy theory to calculate the trust degree of users and the trust degree that a role requires provides a new method for user-role assignment.",2008,0, 2717,Quality of service investigation for multimedia transmission over UWB networks,"In this paper, the Quality of Service (QoS) for multimedia traffic of the Medium Access Control (MAC) protocol for Ultra Wide-Band (UWB) networks is investigated. A protocol is proposed to enhance the network performance and increase its capacity. This enhancement comes from using the Wise Algorithm for Link Admission Control (WALAC). The QoS of multimedia transmission is determined in terms of average delay, loss probability, utilization, and the network capacity.",2008,0, 2718,An adaptive CAC algorithm based on fair utility for low earth orbit satellite networks,"A novel adaptive call admission control algorithm for multimedia low orbit satellite networks is proposed. Based on the real-time call dropping probability of the destination cell, the algorithm is able to reserve bandwidth for handoff calls by combining a probability threshold method and a fair utility allocation scheme. Simulation results show that the proposed algorithm presents satisfactory new call blocking probability and greatly reduces handoff call dropping probability, while guaranteeing high bandwidth utilization.",2008,0, 2719,Pre-emption based call admission control with QoS and dynamic bandwidth reservation for cellular networks,"Call admission control (CAC) is a very important process in the provision of good quality of service (QoS) in cellular mobile networks. With micro/pico cellular architectures that are now used to provide higher capacity, the cell size decreases with a drastic increase in the handoff rate. In this paper, we present modeling and simulation results to help in better understanding of the performance and efficiency of CAC in cellular networks. Handoff prioritization is a common characteristic, which is achieved through the threshold bandwidth reservation policy framework. Combined with this framework, we use a pre-emptive call admission scheme and elastic bandwidth allocation for data calls in order to gain a near optimal QoS. In this paper, we also use a genetic algorithm (GA) based approach to optimize the fitness function, which we obtained by calculating the mean square error between predicted rejection values and the actual ones.
The predicted values are calculated using a linear model, which relates the rejection ratios with different threshold values.",2008,0, 2720,Control theory-based DVS for interactive 3D games,"We propose a control theory-based dynamic voltage scaling (DVS) algorithm for interactive 3D game applications running on battery-powered portable devices. Using this scheme, we periodically adjust the game workload prediction based on the feedback from recent prediction errors. Although such control-theoretic feedback mechanisms have been widely applied to predict the workload of video decoding applications, they heavily rely on estimating the queue lengths of video frame buffers. Given the interactive nature of games - where game frames cannot be buffered - the control-theoretic DVS schemes for video applications can no longer be applied. Our main contribution is to suitably adapt these schemes for interactive games. Compared to history-based workload prediction schemes - where the workload of a game frame is predicted by averaging the workload of the previously-rendered frames - our proposed scheme yields significant improvement on different platforms (e.g. a laptop and a PDA) both in terms of energy savings as well as output quality.",2008,0, 2721,Checklist Inspections and Modifications: Applying Bloom's Taxonomy to Categorise Developer Comprehension,"Software maintenance can consume up to 70% of the effort spent on a software project, with more than half of this devoted to understanding the system. Performing a software inspection is expected to contribute to comprehension of the software. The question is: at what cognition levels do novice developers operate during a checklist-based code inspection followed by a code modification? This paper reports on a pilot study of Bloom's taxonomy levels observed during a checklist-based inspection and while adding new functionality unrelated to the defects detected. Bloom's taxonomy was used to categorise think-aloud data recorded while performing these activities. Results show the checklist-based reading technique facilitates inspectors to function at the highest cognitive level within the taxonomy and indicates that using inspections with novice developers to improve cognition and understanding may assist integrating developers into existing project teams.",2008,0, 2722,Evaluating the Reference and Representation of Domain Concepts in APIs,"As libraries are the most widespread form of software reuse, the usability of their APIs substantially influences the productivity of programmers in all software development phases. In this paper we develop a framework to characterize domain-specific APIs along two directions: 1) how can the API users reference the domain concepts implemented by the API; 2) how are the domain concepts internally represented in the API. We define metrics that allow the API developer for example to assess the conceptual complexity of his API and the non-uniformity and ambiguities introduced by the API's internal representations of domain concepts, which makes developing and maintaining software that uses the library difficult and error-prone. The aim is to be able to predict these difficulties already during the development of the API, and based on this feedback be able to develop better APIs up front, which will reduce the risks of these difficulties later.",2008,0, 2723,Atom-Aid: Detecting and Surviving Atomicity Violations,"Writing shared-memory parallel programs is error-prone.
Among the concurrency errors that programmers often face are atomicity violations, which are especially challenging. They happen when programmers make incorrect assumptions about atomicity and fail to enclose memory accesses that should occur atomically inside the same critical section. If these accesses happen to be interleaved with conflicting accesses from different threads, the program might behave incorrectly. Recent architectural proposals arbitrarily group consecutive dynamic memory operations into atomic blocks to enforce memory ordering at a coarse grain. This provides what we call implicit atomicity, as the atomic blocks are not derived from explicit program annotations. In this paper, we make the fundamental observation that implicit atomicity probabilistically hides atomicity violations by reducing the number of interleaving opportunities between memory operations. We then propose Atom-Aid, which creates implicit atomic blocks intelligently instead of arbitrarily, dramatically reducing the probability that atomicity violations will manifest themselves. Atom-Aid is also able to report where atomicity violations might exist in the code, providing resilience and debuggability. We evaluate Atom-Aid using buggy code from applications including Apache, MySQL, and XMMS, showing that Atom-Aid virtually eliminates the manifestation of atomicity violations.",2008,0, 2724,The Allure and Risks of a Deployable Software Engineering Project: Experiences with Both Local and Distributed Development,"The student project is a key component of a software engineering course. What exact goals should the project have, and how should the instructors focus it? While in most cases projects are artificially designed for the course, we use a deployable, realistic project. This paper presents the rationale for such an approach and assesses our experience with it, drawing on this experience to present guidelines for choosing the theme and scope of the project, selecting project tasks, switching student groups, specifying deliverables and grading scheme. It then expands the discussion to the special but exciting case of a project distributed between different universities, the academic approximation of globalized software development as practiced today by the software industry.",2008,0, 2725,An Experience on Applying Learning Mechanisms for Teaching Inspection and Software Testing,"Educational modules, concise units of study capable of integrating theoretical/practical content and supporting tools, are relevant mechanisms to improve learning processes. In this paper we briefly discuss the establishment of mechanisms to ease the development of educational modules - a Standard Process for Developing Educational Modules and an Integrated Modeling Approach for structuring their learning content. The proposed mechanisms have been investigated in the development of the ITonCode module - an educational module for teaching inspection and testing techniques. Aiming at evaluating the module we have replicated an extended version of the Basili & Selby experiment, originally used for comparing V&V techniques, now considering the educational context. 
The obtained results were mainly analyzed in terms of the students' uniformity in detecting existing faults, giving us very preliminary evidence of the learning effectiveness provided by the module produced.",2008,0, 2726,Anshan: Wireless Sensor Networks for Equipment Fault Diagnosis in the Process Industry,"Wireless sensor networks provide an opportunity to enhance the current equipment diagnosis systems in the process industry, which have been based so far on wired networks. In this paper, we use our experience in the Anshan Iron and Steel Factory, China, as an example to present the issues from the real field of process industry, and our solutions. The challenges are threefold: First, very high reliability is required; second, energy consumption is constrained; and third, the environment is very challenging and constrained. To address these issues, it is necessary to put systematic effort into network topology and node placement, network protocols, embedded software, and hardware. In this paper, we propose two technologies, i.e., design for reliability and energy efficiency (DRE) and design for reconfiguration (DRC). Using these techniques we developed Anshan, a wireless sensor network for monitoring the temperature of rollers in a continuous annealing line and detecting equipment failures. Project Anshan includes 406 sensor nodes and has been running for four months continuously.",2008,0, 2727,The reliability analysis of thermal design software system,"For software reliability testing, a software reliability estimation model based on the random Poisson process has been used, which determines and allows forecasting of the software fault probability and its reliability at a given moment in time. A software environment for computer-aided testing has been developed for the verification of computational software for the solution of thermal conductivity problems.",2008,0, 2728,Computationally efficient algorithms for predicting the file size of JPEG images subject to changes of quality factor and scaling,"To enable the delivery of multimedia content to mobile devices with limited capabilities, high volume transcoding servers must rely on efficient adaptation algorithms. Our objective in addressing the case of JPEG image adaptation was to find computationally efficient algorithms to accurately predict the compressed file size of images subject to simultaneous changes in quality factor (QF) and resolution. In this paper, we present two new prediction algorithms which use only information readily available from the file header. The first algorithm, QF Scaling-Aware Prediction, predicts file size based on the QF of the original picture, as well as a target QF and scaling. The second algorithm, Clustered QF Scaling-Aware Prediction, also takes into account the resolution of the original picture for improved prediction accuracy. As both algorithms rely on machine-learning strategies, a large corpus of representative JPEG images was assembled. We show that both prediction algorithms lead to acceptably small relative prediction errors in adaptation scenarios of interest.",2008,0, 2729,Formally comparing user and implementer model-based testing methods,"There are several methods to assess the capability of a test suite to detect faults in a potentially wrong system. We explore two methods based on considering some probabilistic information. In the first one, we assume that we are provided with a probabilistic user model.
This is a model denoting the probability that the entity interacting with the system takes each available choice. In the second one, we suppose that we have a probabilistic implementer model, that is, a model denoting the probability that the implementer makes each possible fault while constructing the system. We show that both testing scenarios are strongly related. In particular, we prove that any user can be translated into an implementer model in such a way that the optimality of tests is preserved, that is, a test suite is optimal for the user if and only if it is optimal for the resulting implementer. Another translation, working in the opposite direction, fulfills the reciprocal property. Thus, we conclude that any test selection criterion designed for one of these testing problems can be used for the other one, once the model has been properly translated.",2008,0, 2730,Testing and Validating the Quality of Specifications,"Model-based testing of state based systems is known to be able to spot non-conformance issues. However, up to half of these issues appear to be errors in the model rather than in the system under test. Errors in the specification at least hamper the prompt delivery of the software, so it is worthwhile to invest in the quality of the specification. Worse, errors in the specification that are also present in the system under test cannot be detected by model-based testing. In this paper we show how very desirable properties of specifications can be checked by systematic automated testing of the specifications themselves. We show how useful properties of specifications can be found by generalization of incorrect transitions encountered in simulation of the model.",2008,0, 2731,Checking Properties on the Control of Heterogeneous Systems,"We present a component-based description language for heterogeneous systems composed of several data flow processing components and a unique event-based controller. Descriptions are used both for generating and deploying implementation code and for checking safety properties on the system. The only constraint is to specify the controller in a synchronous reactive language. We propose an analysis tool which transforms temporal logic properties of the system as a whole into properties on the events of the controller, and hence into synchronous reactive observers. If checks succeed, the final system is therefore correct by construction. When it is not possible to generate observers that correspond exactly to the specified properties, our tool is capable of generating approximate observers. Although the results given by these are subject to interpretation, they can nevertheless prove useful and help detect defects or even guarantee the correctness of a system.",2008,0, 2732,Investigating the dimensionality problem of Adaptive Random Testing incorporating a local search technique,"Adaptive random testing (ART) has been proposed to enhance the effectiveness of random testing. By spreading test cases evenly within the input domain, ART techniques may reduce the number of test cases necessary to detect the first failure by up to 50%. However, the most effective ART strategies are far less effective in higher dimensions. This fact distinctly affects their applicability since in a real testing area input domains usually are far from being one- or two-dimensional. The present work addresses this problem. It discusses the shortcomings of existing solutions and describes how prior knowledge can help solve the problem.
Since in general no prior knowledge is available, this work proposes a solution which, though not fully solving the dimensionality problem, seems to be very close to the theoretical optimum. The proposed approach is based on the ideas of the local search technique 'Hill Climbing'.",2008,0, 2733,Test Policy: Gaining Control on IT Quality and Processes,Too often projects deliver software whose quality is difficult to predict. Sometimes the project completion is delayed due to the continuous change of requirements while the software is still being built. The quality level must align with the company's needs. It is extremely important that the planned benefits of an IT system are reached. When the benefits are not achieved it will cause much more damage than just the cost of delay. The reputation of a company might be at stake!,2008,0, 2734,Multi-Dimensional Measures for Test Case Quality,"Choosing the right test cases is an important task in software development due to high costs of software testing as well as the significance of software failures. Therefore, evaluating the quality of test techniques and test suites may help improve test results. Benchmarking has been successfully applied to various domains such as database performance. However, the difficulty in benchmarking test case quality is to find suitable measures. In this paper, a multi-dimensional measure of test case quality is proposed. It has been shown that not only the number of detected faults but also other aspects such as development artefacts like source code or usage profiles are important. Consequences of this multi-dimensional measure on creating a test benchmark are described.",2008,0, 2735,Software Self-Testing of a Symmetric Cipher with Error Detection Capability,"Cryptographic devices are recently implemented with different countermeasures against side channel attacks and fault analysis. Moreover, some usual testing techniques, such as scan chains, are not allowed or restricted for security requirements. In this paper, we analyze the impact that error detecting schemes have on the testability of an implementation of the advanced encryption standard, in particular when software-based self-test techniques are envisioned. We show that protection schemes can improve concurrent error detection, but make initial testing more difficult.",2008,0, 2736,On Line Testing of Single Feedback Bridging Fault in Cluster Based FPGA by Using Asynchronous Element,"In this paper, we present a novel technique for online testing of feedback bridging faults in the interconnects of the cluster-based FPGA. The detection circuit will be implemented using a BISTER configuration. We have configured the Block Under Test (BUT) with a pseudo-delay independent asynchronous element. Since we have exploited the concept of an asynchronous element known as the Muller-C element in order to detect the fault, the fault has a strong component of delay-dependent properties due to variation of the feedback path delay. Xilinx Jbits 3.0 API (Application Program Interface) is used to implement the BISTER structure in the FPGA. By using Jbits, we can dynamically reconfigure the device, in which the partial bit stream only affects part of the device. In comparison to the traditional FPGA development tool (ISE), Jbits is faster at mapping a specific portion of the circuit to a specific tile.
We also have more control over the utilization of the internal resources of the FPGA, so that we can perform this partial reconfiguration.",2008,0, 2737,Building a reliable internet core using soft error prone electronics,"This paper describes a methodology for building a reliable internet core router that considers the vulnerability of its electronic components to single event upset (SEU). It begins with a set of meaningful system level metrics that can be related to product reliability requirements. A specification is then defined that can be effectively used during the system architecture, silicon and software design process. The system can then be modeled at an early stage to support design decisions and trade-offs related to potentially costly mitigation strategies. The design loop is closed with an accelerated measurement technique using neutron beam irradiation to confirm that the final product meets the specification.",2008,0, 2738,The Checkpoint Interval Optimization of Kernel-Level Rollback Recovery Based on the Embedded Mobile Computing System,"Due to the limited resources of embedded mobile computing systems, such as wearable computers, PDAs or sensor nodes, reducing the overhead of the software-implemented fault tolerance mechanism is a key factor in reliability design. Two checkpoint interval optimization techniques of the kernel-level rollback recovery mechanism are discussed. The step checkpointing algorithm modulates checkpoint intervals on-line according to a characteristic of software- or hardware-environment-dependent systems, namely that the failure rate fluctuates acutely shortly after the system fails. The checkpoint size monitoring and threshold-control technique adjusts the checkpoint interval by predicting the amount of data to be saved. Combining these two techniques can effectively improve the performance of the embedded mobile computer system.",2008,0, 2739,Test results for the WAAS Signal Quality Monitor,"The signal quality monitor (SQM) is an integrity monitor for the wide area augmentation system (WAAS). The monitor detects L1 signal waveform deformation of a GPS or a geosynchronous (GEO) satellite monitored by WAAS should that event occur. When a signal deformation occurs, the L1 correlation function measured by the receiver becomes distorted. The distortion will result in an error in the L1 pseudorange. The size of the error depends on the design characteristics of the user receiver. This paper describes test results for the WAAS SQM conducted using prototype software. There are two groups of test cases: the nominal testing and the negative path testing. For nominal test cases, recorded data are collected from a test facility in four 5-day periods. These four data sets include SQM correlation values for SV-receiver pairs, and satellite error bounds for satellites. They are used as input to the prototype. The prototype processes these data sets, executes the algorithm, and records test results. Parameters such as the ""maximum median-adjusted detection metric over threshold"" (i.e., the maximum detection test), ""UDRE forwarded from upstream integrity monitors,"" and ""UDRE supported by SQM"" are shown and described. The magnitude of the maximum detection test for all GPS and GEO satellites is also shown. For negative path testing, this paper describes two example simulated signal deformation test cases. A 2-day data set is collected from the prototype.
A few example ICAO signal deformations are simulated based on this data set and are inserted in different time slots in the 2-day period. The correlator measurements for selected satellites are pre-processed to simulate the signal deformation. The results demonstrate the sensitivity of the Signal Quality Monitor to the simulated deformation, and show when the event is detected and subsequently cleared. It also shows that the SQM will not adversely affect WAAS performance.",2008,0, 2740,A Software Implementation of the Duval Triangle Method,"Monitoring and diagnosis of electrical equipment, in particular power transformers, has attracted considerable attention for many years. It is of great importance for the utilities to find the incipient faults in these transformers as early as possible. Dissolved gas analysis (DGA) is one of the most useful techniques to detect incipient faults in oil-filled power transformers. Various methods have been developed to interpret DGA results such as the IEC ratio code, the Rogers method and the Duval triangle method. One of the most frequently used DGA methods is the Duval triangle. It is a graphical method that allows one to follow the faults more easily and more precisely. In this paper a detailed implementation of the Duval triangle method is presented for researchers and utilities interested in visualizing their own DGA results using a software program. The Java language is used for this software because of its growing importance in modern application development.",2008,0, 2741,System-Level Performance Estimation for Application-Specific MPSoC Interconnect Synthesis,"We present a framework for development of streaming applications as concurrent software modules running on multi-processor systems-on-chip (MPSoC). We propose an iterative design space exploration mechanism to customize the MPSoC architecture for given applications. Central to the exploration engine is our system-level performance estimation methodology, which both quickly and accurately determines the quality of candidate architectures. We implemented a number of streaming applications on candidate architectures that were emulated on an FPGA. Hardware measurements show that our system-level performance estimation method incurs only 15% error in predicting application throughput. More importantly, it always correctly guides design space exploration by achieving 100% fidelity in quality-ranking candidate architectures. Compared to behavioral simulation of compiled code, our system-level estimator runs more than 12 times faster, and requires 7 times less memory.",2008,0, 2742,Measuring Package Cohesion Based on Context,"Packages play a critical role in understanding, constructing and maintaining large-scale software systems. As an important design attribute, cohesion can be used to predict the quality of packages. Although a number of package cohesion metrics have been proposed in the last decade, they mainly converge on intra-package data dependences between components, which are inadequate to represent the semantics of packages in many cases. To address this problem, we propose a new cohesion metric for packages, called SCC, on the assumption that two components are related tightly if they have similar contexts. Compared to existing works, SCC uses the common context of two components to infer whether they have close relation or not, which involves both inter- and intra-package data dependences. It is hence able to reveal semantic relations between components.
We demonstrate the effectiveness of SCC through case studies.",2008,0, 2743,An Approach to Evaluation of Arguments in Trust Cases,"Trustworthiness of IT systems can be justified using the concept of a trust case. A trust case is an argument structure which encompasses justification and evidence supporting claimed properties of a system. It represents explicitly an expert's way of assessing that a certain object has certain properties. Trust cases can be developed collaboratively on the basis of evidence and justification of varying quality. They can be complex structures impossible to comprehend fully by a non-expert. A postulated model of communicating trust case contents to an 'ordinary' user is an expert acting on the user's behalf and communicating his/her assessment to the user. Therefore, a mechanism for issuing and aggregating experts' assessments is required. The paper proposes such a mechanism which enables assessors to appraise the strength of arguments included in a trust case. The mechanism uses Dempster-Shafer's model of beliefs to deal with uncertainty resulting from the lack of knowledge of the expert. Different types of argumentation strategies were identified and for each of them appropriate combination rules were presented.",2008,0, 2744,Dependable SoPC-Based On-board Ice Protection System: From Research Project to Implementation,"Both the dependability of computer on-board systems (CBS) and the size and weight limitations are very important characteristics. Basic aviation and aerospace CBS requirements and some development principles are considered. The multi-version lifecycle of an FPGA-based CBS as a system-on-programmable-chip (SoPC) is described. Several dependable SoPC architectures are researched and assessed: one-version two-channel, two-version two-channel and two-version four-channel systems. The method of architectural adaptation is considered as the means for tolerating physical and design faults. It is based on the composition of a few versions embedded into the chip. The checking and reconfiguration block as an intellectual property core and elements of the ice protection system development and implementation process are given as a practical example of the application of the proposed technique.",2008,0, 2745,Tool-Supported Advanced Mutation Approach for Verification of C# Programs,"Mutation testing is a fault-based testing technique used to inject faults into an existing program and see if its test suite is sensitive enough to detect common faults. We are interested in using mutation analysis to evaluate, compare and improve quality assurance techniques for testing object-oriented mechanisms and other advanced features of C# programs. This paper provides an overview of a current version of the CREAM system (creator of mutants), and reports on its use in experimental research. We apply advanced, object-oriented mutation operators to testing of open-source C# programs and discuss the results.",2008,0, 2746,Reputation-Based Service Discovery in Multi-agents Systems,"Reputation has recently received considerable attention within a number of disciplines such as distributed artificial intelligence, economics and evolutionary biology, among others. Most papers about reputation provide an intuitive approach to reputation which appeals to common experiences without clarifying whether their use of reputation is similar to or different from that of others. The DF provides a Yellow Pages service.
Agents in the FIPA-compliant agent system can provide services to others, and store these services in the DF of the multiagent system. However, the existing DF cannot detect a fake service that is registered by a malicious agent. So, a user may retrieve these fake services. In this paper, we analyze the DF's problem and propose a solution. We describe the reputation mechanism for detecting these fake services. The reputation function assumes the presence of other agents who can provide ratings for other agents that are reflective of the performance or behavior of the corresponding agents.",2008,0, 2747,Automatic model-based service hosting environment migration,"The proper operation of Service-Oriented Architecture (SOA) depends on underlying system services of operating systems, so efficient and effective migration of the Service Hosting Environment is critical to cope with the intrinsically changing nature of SOA. However, due to the large number of configuration items, complicated mappings and complex dependency relationships among system services, migrating into a new Service Hosting Environment satisfying the operation requirement of SOA becomes an error-prone and time-consuming task. The SCM project in IBM develops a novel approach to migrate Service Hosting Environments shaped in Unix-like systems. Firstly, this approach builds a set of configuration models to describe various system services. Then, based on these models, it presents knowledge-based mapping to translate system service configurations between Service Hosting Environments. Finally, it designs a dependency hierarchy deduction algorithm to compute the dependency relationship among system services for migration traceability and error determination. An SCM prototype has performed well, largely reducing time, labor and errors in real migration cases.",2008,0, 2748,Real-time problem localization for synchronous transactions in HTTP-based composite enterprise applications,"Loosely-coupled composite enterprise applications based on modern Web technologies are becoming increasingly popular. While composing such applications is appealing for a number of reasons, the distributed nature of the applications makes problem determination difficult. Stringent service level agreements in these environments require rapid localization of failing and poorly performing services. We present in this paper a method that performs real-time transaction level problem determination by tracking synchronous transaction flows in HTTP based composite enterprise applications. Our method relies on instrumentation of service requests and responses to transmit downstream path and monitoring information in realtime. Further, our method applies change-point based techniques on monitored information at the point of origin of a transaction, and quickly detects anomalies in the performance of invoked services. Since our method performs transaction level monitoring, it avoids the pitfalls associated with techniques that use aggregate performance metrics. Additionally, since we use change-point based techniques to detect problems, our method is more robust than error-prone static threshold based techniques.",2008,0, 2749,Formal specification and verification of a protocol for consistent diagnosis in real-time embedded systems,"This paper proposes a membership protocol for fault-tolerant distributed systems and describes the usage of formal verification methods to ascertain its correctness.
The protocol allows nodes in a synchronous system to maintain consensus on the set of operational nodes, i.e., the membership, in the presence of omission failures and node restarts. It relies on nodes observing the transmissions of other nodes to detect failures. Consensus is maintained by exchanging a configurable number of acknowledgements for each node's message. Increasing this number makes the protocol resilient to a greater number of simultaneous or near-coincident failures. We used the SPIN model checker to formally verify the correctness of the membership protocol. This paper describes how we modeled the protocol and presents the results of the exhaustively verified model instances.",2008,0, 2750,Assessing Web Applications Consistently: A Context Information Approach,"In order to assess Web applications in a more consistent way we have to deal not only with non-functional requirement specification, measurement and evaluation (M&E) information but also with the context information about the evaluation project. When organizations record the collected data from M&E projects, the context information is very often neglected. This can jeopardize the validity of comparisons among similar evaluation projects. We highlight this concern by introducing a quality in use assessment scenario. Then, we propose a solution by representing the context information as a new add-in to the INCAMI M&E framework. Finally, we show how context information can improve Web application evaluations, particularly, data analysis and recommendation processes.",2008,0, 2751,Specification Patterns for Formal Web Verification,"Quality assurance of Web applications is usually an informal process. Meanwhile, formal methods have been proven to be reliable means for the specification, verification, and testing of systems. However, the use of these methods requires learning their mathematical foundations, including temporal logics. Specifying properties using temporal logic is often complicated even for experts, while it is a daunting and error-prone task for non-expert users. To assist web developers and testers in formally specifying web-related properties, we elaborate a library of web specification patterns. The current version of the library of 119 functional and non-functional patterns is a result of scrutinizing various resources in the field of quality assurance of Web Applications, which characterize successful web applications using a set of standardized attributes.",2008,0, 2752,A Service Oriented Approach to Traffic Dependent Navigation Systems,"Navigation systems play an important role in planning of routes for transportation and individual traffic. The calculated routes are not always optimal, because routing relies on static speeds for different road types and on outdated and error-prone traffic message channel (TMC) data which do not reflect current traffic situations. This leads to the effect that vehicles are directed to ""preferred"" routes for at least some time, resulting in traffic jams on these routes. In order to overcome the suboptimal solution, floating car data (FCD) reflecting the real-time traffic have to be integrated into the calculation. In this paper we present an architecture for real-time traffic dependent navigation systems that is based on a service oriented approach. This architecture considers mediation of real-time traffic input, data preprocessing ensuring correct and reliable traffic data, and service provisioning.
We discuss the challenges in each of these areas.",2008,0, 2753,Modeling Business Process Availability,"In a world where on-demand and trustworthy service delivery is one of the main preconditions for successful business, the availability of services and business processes is of paramount importance and cannot be compromised. We present a framework for modeling business process availability that takes into account services, the underlying ICT-infrastructure and people. Based on a fault model, we develop a methodology to map dependencies between ICT-components, services and business processes. The mapping enables us to model and analytically assess steady-state, interval and user-perceived availability at all levels, up to the level of the business process.",2008,0, 2754,Rapid Deployment of SOA Solutions via Automated Image Replication and Reconfiguration,"Deployment is an important aspect of software solutions' life-cycle and is repeatedly employed at many stages including development, testing, delivery, and demonstration. Traditional script-based approaches for deployment are primarily manual and hence error-prone, resulting in wasted time and labor. In this paper we propose a framework and approach for faster redeployment of distributed software solutions. In our approach the solution is first deployed on virtual machines using traditional methods. Then environment-dependent configurations of the solution are discovered and preserved along with the images of virtual machines. For subsequent deployments, the preserved images are provisioned, and the deployer is provided an opportunity to change a subset of the recorded configurations that cannot be automatically derived, e.g. IP addresses and ports. The remaining recorded configurations are derived by executing meta-model level constraints on the solution configuration model. Finally, the virtual machines are updated with new configurations by leveraging the semantics of appropriate scripts. Our framework allows product experts to describe the configuration meta-model, constraints, and script semantics. This product knowledge is specified only once, and is reused across solutions for automatic configuration discovery and re-configuration. We demonstrate with case studies that our approach reduces the time for repeated deployments of a solution from an order of weeks to an order of hours.",2008,0, 2755,wsrbench: An On-Line Tool for Robustness Benchmarking,"Testing Web services for robustness is a difficult task. In fact, existing development support tools do not provide any practical means to assess Web services robustness in the presence of erroneous inputs. Previous works proposed that Web services robustness testing should be based on a set of robustness tests (i.e., invalid Web services call parameters) that are applied in order to discover both programming and design errors. Web services can be classified based on the failure modes observed. In this paper we present and discuss the architecture and use of an on-line tool that provides an easy interface for Web services robustness testing. This tool is publicly available and can be used by both web services providers (to assess the robustness of their Web services code) and consumers (to select the services that best fit their requirements).
The tool is demonstrated by testing several Web services available on the Internet.",2008,0, 2756,A Framework for Model-Based Continuous Improvement of Global IT Service Delivery Operations,"In recent years, the ability to deliver IT infrastructure services from multiple geographically distributed locations has given rise to an entirely new IT services business model. In this model, called the ""Global Delivery Model"", clients outsource components of their IT infrastructure operations to multiple service providers, who in turn use a combination of onsite and offsite (including offshore) resources to manage the components on behalf of their clients. Since the components of services provided can be assembled and processed at any of the delivery centers, a framework for continuous monitoring of quality and productivity of the delivery processes is essential to pinpoint and remedy potential process inefficiencies. In this paper, we describe a framework implemented by a large global service provider that uses continuous monitoring and process behavior charts to detect any potential shifts in its global IT service delivery environment. Using this framework, the service provider has already improved several of its IT delivery processes resulting in improved quality and productivity. We discuss the major components of the framework, challenges in deploying such a system for global processes whose lifecycle spans multiple delivery centers, and present examples of process improvements that resulted from deploying the framework.",2008,0, 2757,A Novel Adaptive Failure Detector for Distributed Systems,"Combining an adaptive heartbeat mechanism with a fuzzy grey prediction algorithm, a novel implementation of a failure detector is presented. The main parts of the implementation are an adaptive grey prediction layer and an adaptive fuzzy rule-based classification layer. The former layer employs a GM(1,1) unified-dimensional new message model, which needs only a small volume of sample data, to predict heartbeat arrival times dynamically. Then, the predicted value and the message loss rate in a specific period act as input variables for the latter layer to decide failure/non-failure. Furthermore, algorithms for predicting arrival times and for constructing the adaptive fuzzy rule-based classification system are presented. Experimental results validate the availability of our failure detector in detail.",2008,0, 2758,A Novel Embedded Accelerator for Online Detection of Shrew DDoS Attacks,"As one type of stealthy and hard-to-detect attack, the low-rate TCP-targeted DDoS attack can seriously throttle the throughput of normal TCP flows for a long time without being noticed. Power Spectral Density (PSD) analysis in the frequency domain can detect this type of attack accurately. However, the computational complexity of PSD analysis makes a software implementation impossible in a high-speed network. Taking advantage of powerful computing capability and software-like flexibility, an embedded accelerator using an FPGA for PSD analysis has been proposed. An optimized design of the autocorrelation calculation algorithm and DFT processing makes our scheme more suitable for high-speed real-time processing with limited resources.
Simulation verifies that, even working at a very low system clock frequency, our design can still provide quality service for malicious-traffic detection in a multi-gigabyte-rate network.",2008,0, 2759,APART: Low Cost Active Replication for Multi-tier Data Acquisition Systems,"This paper proposes APART (a posteriori active replication), a novel active replication protocol specifically tailored for multi-tier data acquisition systems. Unlike existing active replication solutions, APART does not rely on a-priori coordination schemes determining the same schedule of events across all the replicas, but it ensures replica consistency by means of an a-posteriori reconciliation phase. The latter is triggered only in case the replicated servers externalize their state by producing an output event towards a different tier. On one hand, this allows coping with non-deterministic replicas, unlike existing active replication approaches. On the other hand, it allows attaining striking performance gains in the case of silent replicated servers, which only sporadically, yet unpredictably, produce output events in response to the receipt of a (possibly large) volume of input messages. This is a common scenario in data acquisition systems, where sink processes, which filter and/or correlate incoming sensor data, produce output messages only if some application relevant event is detected. Further, the APART replica reconciliation scheme is extremely lightweight as it exploits the cross-tier communication pattern spontaneously induced by the application logic to avoid explicit replicas coordination messages.",2008,0, 2760,Adaptive Checkpoint Replication for Supporting the Fault Tolerance of Applications in the Grid,"A major challenge in a dynamic Grid with thousands of machines connected to each other is fault tolerance. The more resources and components involved, the more complicated and error-prone the system becomes. Migol is an adaptive Grid middleware, which addresses the fault tolerance of Grid applications and services by providing the capability to recover applications from checkpoint files automatically. A critical aspect for an automatic recovery is the availability of checkpoint files: If a resource becomes unavailable, it is very likely that the associated storage is also unreachable, e.g. due to a network partition. A strategy to increase the availability of checkpoints is replication. In this paper, we present the Checkpoint Replication Service. A key feature of this service is the ability to automatically replicate and monitor checkpoints in the Grid.",2008,0, 2761,An Experimental Evaluation of the Reliability of Adaptive Random Testing Methods,"Adaptive random testing (ART) techniques have been proposed in the literature to improve the effectiveness of random testing (RT) by evenly distributing test cases over the input space. Simulations and mutation analyses of various ART techniques have demonstrated their improvements in fault-detecting ability when measured by the number of test cases required to detect the first fault. In this paper, we report an experiment with ART using mutants to evaluate ART's reliability in fault-detecting ability. Our experiment discovered that ART is more reliable than RT in the sense that its degree of variation in fault-detecting ability is significantly lower than that of RT.
It is also recognized from the experimental data that the two main factors that affect ART's reliability are the failure rate of the system under test and the regularity of the failure domain measured by the standard deviation of random test results.",2008,0, 2762,Historical Value-Based Approach for Cost-Cognizant Test Case Prioritization to Improve the Effectiveness of Regression Testing,"Regression testing has been used to support software testing activities and assure the attainment of appropriate quality through several versions of a software program. Regression testing, however, is too expensive because it requires many test case executions, and the number of test cases increases sharply as the software evolves. In this paper, we propose the Historical Value-Based Approach, which is based on the use of historical information, to estimate the current cost and fault severity for cost-cognizant test case prioritization. We also conducted a controlled experiment to validate the proposed approach, the results of which proved the proposed approach's usefulness. As a result of the proposed approach, software testers who perform regression testing are able to prioritize their test cases so that their effectiveness can be improved in terms of the average percentage of faults detected per cost.",2008,0, 2763,Reliability Improvement of Real-Time Embedded System Using Checkpointing,"The checkpointing problem in real-time embedded systems is dealt with from a reliability point of view. Transient faults are assumed to be detected in a non-concurrent manner (e.g., periodically). The probability of successful real-time task completion in the presence of transient faults is derived with the consideration of the effects of the transient faults that may occur during checkpointing or recovery operations. Based on this, an optimal equidistant checkpointing strategy that maximizes the probability of task completion is proposed.",2008,0, 2764,A Model of Bug Dynamics for Open Source Software,"We present a model to describe open source software (OSS) bug dynamics. We validated the model using real world data and performed simulation experiments. The results show that the model has the ability to predict bug occurrences and failure rates. The results also reveal that there exists an optimal release cycle for effectively managing OSS quality.",2008,0, 2765,A New Method for Measuring Single Event Effect Susceptibility of L1 Cache Unit,"Cache SEE susceptibility measurements are required for predicting a processor's soft error rate in space missions. Previous dynamic or static real beam test based approaches are only tenable for processors which have optional cache operating modes such as disable (bypass)/enable, frozen, etc. As L1 caches are indispensable to the processor's total performance, some newly introduced processors no longer have such cache management schemes, thus making the existing methods inapplicable. We propose a novel way to determine cache SEE susceptibility for any kind of processor, whether cache bypass mode is supported or not, by combining heavy ion dynamic testing with software-implemented fault injection approaches.",2008,0, 2766,An Estimation Model of Vulnerability for Embedded Microprocessors,"Embedded systems, and also embedded microprocessors, have encountered a reliability challenge because the probability of soft errors occurring has a rising trend. When they are applied to safety-critical applications, designs that take fault tolerance into consideration are required.
For complicated embedded systems or IP-based systems-on-chip (SoC), it is impractical and not cost-effective to protect the entire system or SoC. Analyzing the vulnerability of systems can help designers not only invest limited resources in the most crucial regions but also understand the gain derived from the investment. In this paper we propose a model to quickly estimate the microprocessor's vulnerability with only slight simulation effort. From our assessment results, the ranking of component vulnerability related to the probability of causing microprocessor failure can be acquired. By choosing one of the mainstream microprocessors - a VLIW (Very Long Instruction Word) processor - as an example, the practical usefulness of our estimation model is demonstrated.",2008,0, 2767,Early Reliability Prediction: An Approach to Software Reliability Assessment in Open Software Adoption Stage,Conventional software reliability models are not adequate to assess the reliability of a software system in which OSS (Open Source Software) is adopted as a new feature add-on because OSS can be modified while the inside of COTS (Commercial Off-The-Shelf) products cannot be changed. This paper presents an approach to software reliability assessment of OSS-adopted software systems in the early stage. We identified the software factors that affect the reliability of the software system when a large software system adopts OSS and we assess software reliability using those factors. They are code modularity and code maintainability in software modules related to system requirements. We used them to calculate the initial fault rate with a weight index (the correlation value between requirement and module) which represents the degree of code modification. We apply the proposed initial fault rate to a reliability model to assess software reliability in the early stage of a software life cycle. Early software reliability assessment in OSS adoption helps to make effective development and testing strategies for improving the reliability of the whole system.,2008,0, 2768,New condition monitoring techniques for reliably drive systems operation,"The dominant application of electronics today is to process information. The computer industry is the biggest user of semiconductor devices and consumer electronics. Due to the successful development of semiconductors, electronic systems and controls have gained wide acceptance in power and computing technology and due to the continuous use of drive systems (rotating machines, controlling thyristors and associated electronic components) in industry and in power stations, and the need to keep such systems running reliably, the detection of defects and anomalies is of increasing importance, and on-line monitoring to detect any fault in these systems is now a strong possibility and certainly periodic monitoring of drive systems in strategic situations. The principal aim of the paper is to use both software and hardware and develop a fault diagnosis knowledge-based system, which will analyze and manipulate the output obtained from sensors using a microcomputer for acquiring the plant condition data and subsequently interpreting them; the collected data can be analyzed using suitable computer programs, and any trends can be identified and compared with the knowledge base. The probability of a certain condition can then be diagnosed and compared, providing the necessary information on which subsequent decisions can be based and providing any necessary alarms to the operator.
To achieve this objective, the simulation and experimental technique considered is to use sensors placed in the wedges closing the stator slots to sense the induced voltage. The induced voltage for each fault is shown to have a unique voltage pattern; thus, fault identification through voltage pattern recognition formed the basic rules for the development of the knowledge base. The predicted results are verified by measurements on a model system in which known faults can be established.",2008,0, 2769,New Protection Circuit for High-Speed Switching and Start-Up of a Practical Matrix Converter,"The matrix converter (MC) presents a promising topology that needs to overcome certain barriers (protection systems, durability, the development of converters for real applications, etc.) in order to gain a foothold in the market. Taking into consideration that the great majority of efforts are being oriented toward control algorithms and modulation, this paper focuses on MC hardware. In order to improve the switching speed of the MC and thus obtain signals with less harmonic distortion, several different insulated-gate bipolar transistor (IGBT) excitation circuits are being studied. Here, the appropriate topology is selected for the MC, and a recommended configuration is selected, which reduces the excursion range of the drivers, optimizes the switching speed of the IGBTs, and presents high immunity to common-mode voltages in the drivers. Inadequate driver control can lead to the destruction of the MC due to its low ride-through capability. Moreover, this converter is especially sensitive during start-up, as, at that moment, there are high overcurrents and overvoltages. With the aim of finding a solution for starting up the MC, a circuit is presented (separate from the control software), which ensures correct sequencing of supplies, thus avoiding a short circuit between input phases. Moreover, it detects overcurrent, connection/disconnection, and converter supply faults. Faults cause the circuit to protect the MC by switching off all the IGBT drivers without latency. All this operability is guaranteed even when the supply falls below the threshold specified by the manufacturers for the correct operation of the circuits. All these features are demonstrated with experimental results. Lastly, an analysis is made of the interaction that takes place during the start-up of the MC between the input filter, clamp circuit, and the converter. A variation of the clamp circuit and start-up strategy is presented, which minimizes the overcurrents that circulate through the converter. For all these reasons, it can be said that the techniques described in this paper substantially improve the MC start-up cycle, representing a step forward toward the development of reliable MCs for real applications.",2008,0, 2770,Fault tolerant IPMS motor drive based on adaptive backstepping observer with unknown stator resistance,"This work considers the problem of designing a fault-tolerant system for an IPMS motor drive subject to current sensor faults. To achieve this goal, two control strategies are considered. The first is based on field-oriented control and a developed adaptive backstepping observer, which are used simultaneously in the fault-free case. The second approach proposed is concerned with a fault-tolerant strategy based on the observer for faulty conditions. Stator resistance, as a possible source of system uncertainty, is taken into account under different operating conditions.
Current sensor failures are detected, and an observer based on the adaptive backstepping approach is used to estimate currents and stator resistance. The nonlinear observer stability study, based on Lyapunov theory, guarantees the stability and convergence of the estimated quantities, provided that appropriate adaptation laws are designed and the persistency-of-excitation condition is satisfied. In our control approach, references for the d-q axis currents are generated on the basis of a maximum power factor per ampere control scheme for the IPMSM drive. The complete proposed scheme is simulated using MATLAB/Simulink software. Simulations are carried out to illustrate the proposed strategy.",2008,0, 2771,A layered approach to semantic similarity analysis of XML schemas,"One of the most critical steps to integrating heterogeneous e-Business applications using different XML schemas is schema mapping, which is known to be costly and error-prone. Past research on schema mapping has not fully utilized semantic information in the XML schemas. In this paper, we propose a semantic similarity analysis approach to facilitate XML schema mapping, merging and reuse. Several key innovations are introduced to better utilize available semantic information. These innovations include: 1) a layered semantic structure of XML schemas, 2) layer-specific similarity measures using an information content based approach, and 3) a scheme for integrating similarities at all layers. Experimental results using two different schemas from a real-world application demonstrate that the proposed approach is valuable for addressing difficulties in XML schema mapping.",2008,0, 2772,Non-FPGA-based Field-programmable Self-repairable (FPSR) Microarchitecture,"A non-FPGA-based adaptable microarchitecture is presented for fault/defect-tolerance. This paper also introduces an architecture-level fault/defect recovery capability implemented in an adaptable architecture with field-programmable self-repair (FPSR). This FPSR scheme, which dynamically cures delay/permanent faults and soft-errors detected at circuit- and architecture-level, respectively, is demonstrated as a means to overcome the limitations of circuit-level fault/defect tolerance. The FPSR adaptable architecture was developed without employing reconfigurable devices (e.g., FPGAs). This architecture is adaptable enough to fix errors by reasserting different patterns and delays of the recovering signals via rerouted alternative resources/paths for the same operation without causing the same faults again. In order to dynamically respond to FPSR operations with less redundancy, the adaptable microarchitecture generates and delivers alternative sequences of repair signal patterns via its adaptable architecture; these can be implemented in ASIC, while continuously and seamlessly supporting post-fabrication defect-prevention in both hardware and software at system levels.",2008,0, 2773,SCARS: Scalable Self-Configurable Architecture for Reusable Space Systems,"Creating an environment of ""no doubt"" for mission success is essential to most critical embedded applications. With reconfigurable devices such as field programmable gate arrays (FPGAs), designers are provided with a seductive tool to use as a basis for sophisticated but highly reliable platforms. We propose a two-level self-healing methodology for increasing the probability of success in critical missions. Our proposed system first undertakes healing at node-level. 
If the system cannot be rectified at node level, network-level healing is undertaken. We have designed a system based on Xilinx Virtex-5 FPGAs and Cirronet DM2200 wireless mesh nodes to demonstrate autonomous wireless healing capability among networked node devices.",2008,0, 2774,Code and carrier divergence technique to detect ionosphere anomalies,"Single- and dual-frequency smoothing techniques implemented to detect ionosphere anomalies for a GBAS (ground based augmentation system) are discussed in this paper. An ionosphere storm is considered a dominant threat to using differential navigation satellite systems in landing applications. To detect these occurrences, a number of algorithms have been developed. Some of them are intended to meet the integrity requirements of CAT III landing and are based on multi-frequency GPS techniques. Depending on the combination of frequencies used during code and carrier phase measurements, the smoothed pseudorange achieves a different level of accuracy. For this reason, the most popular algorithms, e.g. the divergence-free and ionosphere-free smoothing algorithms, are analyzed and compared. The article also presents the work carried out at the Institute of Radioelectronics in connection with GBAS applications. The investigations were conducted using actual GPS signals and signals from a GNSS simulator. Software prepared in-house was used to analyze the results.",2008,0, 2775,Case studies in arc flash reduction to improve safety and productivity,"With the advent of new power system analysis software, a more detailed arc flash analysis can be performed under various load conditions. These new ""tools"" can also evaluate equipment damage, design systems with lower arc flash, and predict electrical fire locations based on high arc flash levels. This paper demonstrates how arc flash levels change with available utility MVA (mega volt amperes), additions in connected load, and selection of system components. This paper summarizes a detailed analysis of several power systems to illustrate possible misuses of 2004 NFPA 70E Risk Category Classification Tables while pointing toward future improvements of the Standards. In particular, findings indicate upstream protection may not open quickly enough for a fault on the secondary of a transformer or at the far end of a long cable due to the increase in system impedance. Several examples of how these problem areas can be dealt with are described in detail.",2008,0, 2776,Safety verification of fault tolerant goal-based control programs with estimation uncertainty,"Fault tolerance and safety verification of control systems that have state variable estimation uncertainty are essential for the success of autonomous robotic systems. A software control architecture called mission data system, developed at the Jet Propulsion Laboratory, uses goal networks as the control program for autonomous systems. Certain types of goal networks can be converted into linear hybrid systems and verified for safety using existing symbolic model checking software. A process for calculating the probability of failure of certain classes of verifiable goal networks due to state estimation uncertainty is presented. 
A verifiable example task is presented and the failure probability of the control program based on estimation uncertainty is found.",2008,0, 2777,Fault diagnosis and isolation in aircraft gas turbine engines,"This paper formulates and validates a novel methodology for diagnosis and isolation of incipient faults in aircraft gas turbine engines. In addition to abrupt large faults, the proposed method is capable of detecting and isolating slowly evolving anomalies (i.e., deviations from the nominal behavior), based on analysis of time series data observed from the instrumentation in engine components. The fault diagnosis and isolation (FDI) algorithm is based upon Symbolic Dynamic Filtering (SDF), which has been recently reported in the literature and relies on the principles of Symbolic Dynamics, Statistical Pattern Recognition and Information Theory. Validation of the concept is presented and a real-life software architecture is proposed, based on the simulation model of a generic two-spool turbofan engine, for diagnosis and isolation of incipient faults.",2008,0, 2778,Fault detection and isolation based on system feedback,"This paper presents a method to detect transducer faults in closed-loop control systems. The requirements imposed on the fault detection algorithm are: a rapid response when a fault occurs; a reduced risk of raising false alarms; and a low computational effort. The paper presents the equations of the fault detection structure, which suggest the software algorithms. In the last part of the paper, the algorithm is verified on the steam overhead equations developed in this paper.",2008,0, 2779,Business intelligence as a competitive differentiator,"The successes of organizations vary greatly from industry to industry. For every business in every industry, revenue growth remains the most fundamental indicator, and by far the most critical. Lately, marketplace realities are making revenue targets harder and harder to reach. That's why every organization must infuse strategic and tactical decisions with the knowledge necessary to maximize revenue, reduce costs, minimize risk and achieve competitive advantage. Business intelligence is defined as getting the right information to the right people at the right time. The term encompasses all the capabilities required to turn data into intelligence that everyone in an organization can trust and use for more effective decision making. BI is a sustainable competitive advantage. It allows the organization to drive revenues, manage costs, and realize consistent levels of profitability. An 'intelligent enterprise' - one that uses BI to advance its business - is better able to predict how future economic and market changes will affect its business. Such an organization is able to adapt to the new changes in order to gain. A business intelligence competency center can achieve more intelligence for the organization at less cost by supporting the corporate strategy with a BI strategy on a continuous basis.",2008,0, 2780,Adaptation platform for autonomic context-aware services,"Most context-aware services are not autonomic for two main reasons. The first one is that a context-aware service reacts only to context states that are entirely predicted by the developer. The second reason is that the adaptation control is based on predefined, application- and context-specific policies. 
In this paper we propose a solution based on an application-context description, which allows the machine to autonomously discover the context structure and the adaptation strategies. We have tested our model using a simple scenario where a forum service is adapted to the user language.",2008,0, 2781,A methodology for testbed validation and performance assessment of network/service management systems,"Delivery of multimedia real time flows over multi-domain IP networks require end-to-end quality of service guarantees. In order to manage and control the high-level services (video on demand, IPTV, etc.) as well as the network connectivity services across multiple domains in a coherent way, a distributed but integrated management system is proposed. Such a system has been defined, specified, and is currently implemented in the framework of the ENTHRONE European project. The system is being validated and assessed through several complex interconnected test-beds/pilots. This paper proposes a test methodology for validating the network service management functionalities on a multi-domain test-bed environment.",2008,0, 2782,Computer vision based decision support tool for hydro-dams surface deterioration assessment and visualization using fuzzy sets and pseudo-coloring,"Hydro-dams safety represents an important concern since their failure could be critical for the society. A key part of the hydro-dams surveillance programs is their visual inspection. However few computer vision support tools for implementing semi-automatically and objectively the visual surveillance and observation of the hydro-dams components exist. One of the issues addressed during the visual inspection, important in the preservation of a good condition of the concrete, is the examination of surface deterioration in respect to small patterned cracks and roughness on the downstream wall. This is particularly a task where digital image enhancement and analysis can bring significant benefit, not only by presenting the user with a more relevant image of the surface deterioration, but also by providing - through suitable numerical descriptors, correlated with linguistic descriptors- subjective and examiner-independent information on the surface state. The correlation of extracted numerical descriptors used to quantify the surface roughness with linguistic qualifiers of the deterioration state of the hydro-dam wall should be determined using information gathered from observers, since it must be compliant to the human expert interpretation of visual data in assessing the concrete surface deterioration. Such an approach would result in a computer vision decision support tool embedding expert knowledge, as designed, implemented and proposed in this paper. The resulting software system was verified on a set of images acquired from a Romanian hydro-dam. The compliance of the linguistic results with the human observation proves its functionality as a semi-automatic tool for hydro-dams surveillance.",2008,0, 2783,The emergence of the web,"Predicting the future is always a dicey business, and never more so than when the subject is the Web. The Web has been evolving so quickly that some say one Web year is the equivalent of three real years. Progress in communication technology has been characterized by a movement from lower to higher levels of abstraction. The semantic Web is not just for the World Wide Web. It represents a set of technologies that will work equally well on internal corporate intranets. 
This is analogous to Web services representing services not only across the Internet but also within a corporation's intranet. So, the semantic Web will resolve several key problems facing current information technology architectures.",2008,0, 2784,Behavioral Dependency Measurement for Change-Proneness Prediction in UML 2.0 Design Models,"During the development and maintenance of object-oriented (OO) software, information on the classes which are more prone to be changed is very useful. Developers and maintainers can make software more flexible by modifying the parts of classes which are sensitive to changes. Traditionally, most change-proneness prediction has been studied based on source code. However, change-proneness prediction in the early phase of software development can provide an easier way for developing stable software by modifying the current design or choosing alternative designs before implementation. To address this need, we present a systematic method for calculating the behavioral dependency measure (BDM) which helps to predict change-proneness in UML 2.0 models. The proposed measure has been evaluated on a multi-version, medium-size open-source project, namely JFreeChart. The obtained results show that the BDM is a useful indicator and can be complementary to existing OO metrics for change-proneness prediction.",2008,0, 2785,Ontology Model-Based Static Analysis on Java Programs,"Typical enterprise and military software systems consist of millions of lines of code with complicated dependence on diverse library abstractions. Manually debugging such code imposes an overwhelming workload and difficulties on developers. To address software quality concerns efficiently, this paper proposes an ontology-based static analysis approach to automatically detect bugs in the source code of Java programs. First, we elaborate the collected bug list, classify bugs into different categories, and translate bug patterns into SWRL (semantic Web rule language) rules using an ontology tool, Protege. An ontology model of Java programs is created according to the Java program specification using Protege as well. Both SWRL rules and the program ontology model are exported in OWL (Web ontology language) format. Second, the Java source code under analysis is parsed into an abstract syntax tree (AST), which is automatically mapped to the individuals of the program ontology model. The SWRL bridge takes in the exported OWL file (representing the SWRL rules model and program ontology model) and the individuals created for the Java code, passes them to Jess (a rule engine), and obtains inference results indicating any bugs. We perform experiments to compare bug detection capability with the well-known FindBugs tool. A prototype bug detector tool is developed to show the validity of the proposed static analysis approach.",2008,0, 2786,A Systematic Approach for Integrating Fault Trees into System Statecharts,"As software systems encompass a wide range of fields and applications, software reliability becomes a crucial concern. Safety analysis and test cases that have a high probability of uncovering plausible faults are necessities in proving software quality. System models that represent only the operational behavior of a system are incomplete sources for deriving test cases and performing safety analysis before the implementation process. Therefore, a system model that encompasses faults is required. 
This paper presents a technique that formalizes a safety model through the incorporation of faults with system specifications. The technique focuses on introducing semantic faults through the integration of fault trees with system specifications or statechart. The method uses a set of systematic transformation rules that tries to maintain the semantics of both fault trees and statechart representations during the transformation of fault trees into statechart notations.",2008,0, 2787,Implicit Social Network Model for Predicting and Tracking the Location of Faults,"In software testing and maintenance activities, the observed faults and bugs are reported in bug report managing systems (BRMS) for further analysis and repair. According to the information provided by bug reports, developers need to find out the location of these faults and fix them. However, bug locating usually involves intensively browsing back and forth through bug reports and software code and thus incurs unpredictable cost of labor and time. Hence, establishing a robust model to efficiently and effectively locate and track faults is crucial to facilitate software testing and maintenance. In our observation, some related bug locations are tightly associated with the implicit links among source files. In this paper, we present an implicit social network model using PageRank to establish a social network graph with the extracted links. When a new bug report arrives, the prediction model provides users with likely bug locations according to the implicit social network graph constructed from the co-cited source files. The proposed approach has been implemented in real-world software archives and can effectively predict correct bug locations.",2008,0, 2788,Analyzing BPEL Compositionality Based on Petri Nets,"Process of service composition is complex and error-prone, which makes a formal modeling and analysis method highly desirable. This paper presents a Petri net-based approach to analyzing the soundness and compositionality of services in BPEL. A set of translation rules is proposed to transform BPEL processes into Petri nets, by which behaviors of the BPEL processes are articulated. The instantiation net of target services are used to capture all of the possible implementation flows of composition processes. Based on theories of Petri nets, the principles for analyzing soundness and compositionality of Web services are provided. A detailed example is given to demonstrate the applicability of our method.",2008,0, 2789,Keynote: Hierarchical Fault Detection in Embedded Control Software,"We propose a two-tiered hierarchical approach for detecting faults in embedded control software during their runtime operation: The observed behavior is monitored against the appropriate specifications at two different levels, namely, the software level and the controlled-system level. (The additional controlled- system level monitoring safeguards against any possible incompleteness at the software level monitoring.) A software fault is immediately detected when an observed behavior is rejected by a software level monitor. In contrast, when a system level monitor rejects an observed behavior it indicates a system level failure, and an additional isolation step is required to conclude whether a software fault occurred. This is done by tracking the executed behavior in the system model comprising of the models for the software and those for the nonfaulty hardware components: An acceptance by such a model indicates the presence of a software fault. 
The design of both the software-level and system-level monitors is modular and hence scalable (there exists one monitor for each property), and further the monitors are constructed directly from the property specifications and do not require any software or system model. Such models are required only for the fault isolation step when the detection occurs at the system level. We use input-output extended finite automata (I/O- EFA) for software as well as system level modeling, and also for modeling the property monitors. Note since the control changes only at the discrete times when the system/environment states are sampled, the controlled- system has a discrete-time hybrid dynamics which can be modeled as an I/O-EFA.",2008,0, 2790,Metamodeling Autonomic System Management Policies - Ongoing Works,"Autonomic computing is recognized as one of the most promising solution to address the increasingly complex task of distributed environments' administration. In this context, many projects relied on software components and architectures to organize such an autonomic management software. However, we observed that the interfaces of a component model are too low-level, difficult to use and still error prone. Therefore, we introduced higher-level languages for the modeling of deployment and management policies. These domain specific languages enhance simplicity and consistency of the policies. Our current work is to formally describe the metamodels and the semantics associated with these languages.",2008,0, 2791,Towards a Process Maturity Model for Open Source Software,"For traditional software development, process maturity models (CMMI, SPICE) have long been used to assess product quality and project predictability. For OSS, on the other hand, these models are generally perceived as inadequate. In practice, though, many OSS communities are well-organized, and there is evidence of process maturity in OSS projects. This position paper presents work in progress on developing a process maturity model specifically for OSS projects. 1.",2008,0, 2792,Improving the Quality of GNU/Linux Distributions,"The widespread adoption of free and open source software (FOSS) has lead to a freer and more agile marketplace where there is a higher number of components that can be used to build systems in many original and often unforeseen ways. One of the most prominent examples of complex systems built with FOSS components are GNU/Linux-based distributions. In this paper we present some tools that aim at helping distribution editors with maintaining the huge package bases associated with these distributions, and improving their quality, by detecting errors and inconsistencies in an effective, fast and automatic way.",2008,0, 2793,Error Modeling in Dependable Component-Based Systems,"Component-based development (CBD) of software, with its successes in enterprise computing, has the promise of being a good development model due to its cost effectiveness and potential for achieving high quality of components by virtue of reuse. However, for systems with dependability concerns, such as real-time systems, a major challenge in using CBD consists of predicting dependability attributes, or providing dependability assertions, based on the individual component properties and architectural aspects. In this paper, we propose a framework which aims to address this challenge. 
Specifically, we present a revised error classification together with error propagation aspects, and briefly sketch how to compose error models within the context of component-based systems (CBS). The ultimate goal is to perform the analysis on a given CBS, in order to find bottlenecks in achieving dependability requirements and to provide guidelines to the designer on the usage of appropriate error detection and fault tolerance mechanisms.",2008,0, 2794,Unknown non-self detection & robustness of distributed artificial immune system with normal model,"The biological immune system is a typical distributed parallel system for processing biological information to defend the body against viruses and diseases. Inspired by nature, a distributed artificial immune system with a normal model is proposed for detecting unknown non-selfs such as worms and software faults. Traditional approaches are used to learn the unknown features and types of the unknown non-selfs, but the learning problem cannot be solved by the human immune system in a short time, nor by machines. A new detection approach based on the normal model of the system is proposed, in which the selfs of the system are represented and detected first. Depending on the strictness and completeness of the normal model, the selfs are known, and the process for detecting the selfs is much easier and more accurate than that for the non-selfs. Not only can the artificial immune system detect the non-selfs, but it can also eliminate the non-selfs and repair the damaged parts of the system by itself. Minimization of the non-selfs and maximization of the selfs show the robustness of the artificial immune system, and the robustness of the distributed artificial immune system can be reduced according to each independent module.",2008,0, 2795,Research on double-end cooperative congestion control mechanism of random delay network,"Propagation delay is the main factor affecting network performance. Random propagation delay has an adverse impact on the stability of the feedback control mechanism. We argue that some existing schemes which try to control the node-end-systems queue don't work well when random propagation delay acts on the models. To find a solution to this problem, a double-end cooperative congestion control mechanism is proposed and analyzed. We have studied the performance of the control mechanism via simulations on OPNET software. Simulations show that the mechanism can improve network performance under random propagation delay, and that the cell discard probability in the node-end-systems is lower.",2008,0, 2796,Stroke detection and reconstruction of characters pressed on metal label,"In order to detect features of protuberant characters, a novel stroke detection method based on Gabor filters is proposed. First, the gray images of protuberant characters are preprocessed using a morphological algorithm. Next, a set of Gabor filters is used to break down an image of protuberant characters into four directional images, which contain the stroke information of four directions. Then, a reconstruction experiment is carried out with the Gabor characters. The results show that the Gabor representation has strong reconstruction power. Finally, a BP neural network is introduced to classify the Gabor features, and the experimental results show that the Gabor features have good discriminative capability. 
All of the above proves that the proposed method can be reliably used for feature extraction of pressed characters in low-quality images.",2008,0, 2797,Extending ATAM to assess product line architecture,"Software architecture is a core asset for any organization that develops software-intensive systems. Unsuitable architecture can precipitate disaster because the architecture determines the structure of the project. To prevent this, software architecture must be evaluated. The current evaluation methods, however, focus on single product architectures, not product line architectures, and they hardly consider the characteristics of the product lines, such as the variation points. This paper describes the extension of a scenario-based analysis technique for software product architecture, called EATAM, which not only analyzes the variation points of the quality attribute using feature modeling but also creates variability scenarios for the derivation of the variation points using the extended PLUC tag approach. This is a method that aims to consider the tradeoffs in the variability scenarios of the software product family architecture. The method has been validated through a case study involving a microwave oven software product line in the appliance domain.",2008,0, 2798,A new priority based congestion control protocol for Wireless Multimedia Sensor Networks,"New applications made possible by the rapid improvements and miniaturization in hardware have motivated recent developments in wireless multimedia sensor networks (WMSNs). As multimedia applications produce high volumes of data which require high transmission rates, multimedia traffic is usually high speed. This may cause congestion in the sensor nodes, leading to impairments in the quality of service (QoS) of multimedia applications. Thus, to meet the QoS requirements of multimedia applications, a reliable and fair transport protocol is mandatory. An important function of the transport layer in WMSNs is congestion control. In this paper, we present a new queue based congestion control protocol with priority support (QCCP-PS), using the queue length as an indication of congestion degree. The rate assignment to each traffic source is based on its priority index as well as its current congestion degree. Simulation results show that the proposed QCCP-PS protocol can detect congestion better than previous mechanisms. Furthermore, it has a good achieved priority close to the ideal and near-zero packet loss probability, which makes it an efficient congestion control protocol for multimedia traffic in WMSNs. As congestion wastes scarce energy due to a large number of retransmissions and packet drops, the proposed QCCP-PS protocol can save energy at each node, given the reduced number of retransmissions and packet losses.",2008,0, 2799,Scalable architecture for context-aware activity-detecting mobile recommendation systems,"One of the main challenges in building multi-user mobile information systems for real-world deployment lies in the development of scalable systems. Recent work on scaling infrastructure for conventional web services using distributed approaches can be applied to the mobile space, but limitations inherent to mobile devices (computational power, battery life) and their communication infrastructure (availability and quality of network connectivity) challenge system designers to carefully design and optimize their software architectures. 
Additionally, notions of mobility and position in space, unique to mobile systems, provide interesting directions for the segmentation and scalability of mobile information systems. In this paper we describe the implementation of a mobile recommender system for leisure activities, codenamed Magitti, which was built for commercial deployment under stringent scalability requirements. We present concrete solutions addressing these scalability challenges, with the goal of informing the design of future mobile multi-user systems.",2008,0, 2800,A software tool to relate technical performance to user experience in a mobile context,"Users in today's mobile ICT environment are confronted with more and more innovations and an ever increasing technical quality, which makes them more demanding and harder to please. It is often hard to measure and to predict the user experience during service consumption. This is nevertheless a very important dimension that should be taken into account while developing applications or frameworks. In this paper we demonstrate a software tool that is integrated in a wireless living lab environment in order to validate and quantify actual user experience. The methodology to assess the user experience combines both technological and social assets. The user experience of a Wineguide application on a PDA is related to signal strength, monitored during usage of the application. Higher signal strengths correspond to a better experience (e.g. speed). Finally, differences in experience among users are discussed.",2008,0, 2801,SDTV Quality Assessment Using Energy Distribution of DCT Coefficients,"The VQM (Video Quality Measurement) scheme is a methodology that measures the difference in quality between the distorted video signal and the reference video signal. In this paper, we propose a novel video quality measurement method that extracts features in the DCT (Discrete Cosine Transform) domain of H.263 SDTV. The main idea of the proposed method is to utilize the texture pattern and edge-oriented information that is generated in the DCT domain. For this purpose, the energy distribution of the reordered DCT coefficients is considered to obtain unique information for each video file. Then, we measure the difference in the probability distribution of context information between the original video and the distorted one. The simulation results show that the proposed algorithm can correctly represent the video quality and gives a high correlation with the video DMOS.",2008,0, 2802,A novel high-capability control-flow checking technique for RISC architectures,"Nowadays, smaller and smaller transistors make microprocessors more susceptible to transient faults, which then induce control-flow errors. Software-based signature monitoring is widely used for control-flow error detection. When previous signature monitoring techniques are applied to RISC architectures, there exist some branch-errors that they cannot detect. This paper proposes a novel software-based signature monitoring technique: CFC-End (Control-Flow Checking in the End). One property of CFC-End is that it uses two global registers for storing the run-time signature alternately. Another property of CFC-End is that it compares the run-time signature with the assigned signature at the end of every basic block. CFC-End is better than previous techniques in the sense that it can detect any single branch-error when applied to RISC architectures. 
CFC-End has a similar performance overhead in comparison with the RCF (Region based Control-Flow checking) technique, which has the highest capability of branch-error detection among previous techniques.",2008,0, 2803,Evaluating the Effectiveness of Random and Partition Testing by Delivered Reliability,"The software engineering literature is full of test data selection and adequacy strategies. However, it is still a question whether these adequacy strategies are effective or not. So it is necessary to research how to evaluate the effectiveness of a test strategy. The effectiveness of random and subdomain testing methodologies is normally evaluated by failure-detecting ability. However, detecting more failures does not guarantee that the software is more reliable, because those failures detected may be small and subtle ones that will seldom occur in reality. So in this paper, delivered reliability, which represents the reliability of software after testing, is introduced to evaluate their effectiveness. The better method delivers higher reliability after all test failures have been eliminated.",2008,0, 2804,Towards Embedded Artificial Intelligence Based Security for Computer Systems,"This paper presents experiments using Artificial Intelligence (AI) algorithms for online monitoring of integrated computer systems, including System-on-Chip based embedded systems. This new framework introduces an AI-led infrastructure that is intended to operate in parallel with conventional monitoring and diagnosis techniques. Specifically, an initial application is presented, where each of the system's software tasks are characterised online during their execution by a combination of novel hardware monitoring circuits and background software. These characteristics then stimulate a Self-Organising Map based classifier which is used to detect abnormal system behaviour, as caused by failure and malicious tampering including viruses. The approach provides a system-level perspective and is shown to detect subtle anomalies.",2008,0, 2805,Development of customized distribution automation system (DAS) for secure fault isolation in low voltage distribution system,"This paper presents the development of a customized distribution automation system (DAS) for secure fault isolation at the low voltage (LV) downstream level, 415/240 V, using the Tenaga Nasional Berhad (TNB) distribution system. It is the first DAS research work done on a customer-side substation for operating and controlling between the consumer-side system and the substation in an automated manner. Most of the work is focused on developing very secure fault isolation whereby the fault is detected, identified, isolated and remedied in a few seconds. Supervisory Control and Data Acquisition (SCADA) techniques have been utilized to build a Human Machine Interface (HMI) that provides graphical operator interface functions to monitor and control the system. Microprocessor-based Remote Monitoring Devices have been used for customized software to be downloaded to the hardware. Power Line Carrier (PLC) has been used as the communication medium between the consumer and the substation. As a result, a complete DAS fault isolation system has been developed for cost reduction, maintenance time saving and less human intervention during faults.",2008,0, 2806,Performance evaluation of a connection-oriented Internet service based on a queueing model with finite capacity,"The operating mechanism of a connection-oriented Internet service is analyzed. 
Considering the finite buffer in a connection-oriented Internet service, we establish a Geom/G/1/K queueing model with setup, close-delay and close-down periods for user-initiated sessions. Using the approach of an embedded Markov chain and supplementary variables, we derive the probability distribution for the steady-state queue length and the probability generating function for the waiting time. Correspondingly, we study the performance measures of quality of service (QoS) in terms of system throughput, system response time, and system blocking probability. Based on the simulation results, we discuss the influence of the upper limit of the close-delay period and the capacity of the queueing model on the performance measures, which have potential application in the design, resource assignment, and optimal setting of the next generation Internet.",2008,0, 2807,Assuring information quality in Web service composition,"As organizations have begun increasingly to communicate and interact with consumers via the Web, so the information quality (IQ) of their offerings has become a central issue, since it ensures service usability and utility for each visitor and, in addition, improves server utilization. In this article, we present an IQ-enabled Web service architecture, IQEWS, by introducing an IQ broker module between service clients and providers (servers). The functions of the IQ broker module include assessing the IQ of servers, making selection decisions for clients, and negotiating with servers to get IQ agreements. We study an evaluation scheme aimed at measuring the information quality of Web services used by IQ brokers acting as the front-end of servers. This methodology is composed of two main components, an evaluation scheme to analyze the information quality of Web services and a measurement algorithm to generate the linguistic recommendations.",2008,0, 2808,Assessment driven process modeling for software process improvement,"Software process improvement (SPI) is used to develop processes to meet the software organization's business goals more effectively. Improvement opportunities can be exposed by conducting an assessment. A disciplined process assessment evaluates an organization's processes against a process assessment model, which usually includes good software practices as indicators. Many benefits of SPI initiatives have been reported, but some improvement efforts have failed, too. Our aim is to increase the probability of success by integrating software process modeling with assessments. A combined approach is known to provide more accurate process ratings and higher quality process models. In this study we have revised the approach by extending the scope of modeling further. Assessment Driven Process Modeling for SPI uses assessment evidence to create a descriptive process model of the assessed processes. The descriptive model is revised into a prescriptive process model, which illustrates an organization's processes after the improvements. The prescriptive model is created using a process library that is based on the indicators of the assessment model. Modeling during assessment is driven by both process performance and process capability indicators.",2008,0, 2809,Cross-Layer Transmission Scheme with QoS Considerations for Wireless Mesh Networks,"IEEE 802.11 wireless networks utilize a hard handoff scheme when a station travels from one area of coverage to another within a transmission duration. 
In IEEE 802.11s wireless mesh networks, during the handoff procedure the transmitted data will first be buffered in the source MAP and not relayed to the target MAP until the handoff procedure is finished. Besides, there are multiple hops in the path between the source station and the destination station. In each pair of neighboring MAPs, contention is needed to transmit data. The latency for successfully transmitting data is seriously lengthened, so that the deadlines of data frames are missed with high probabilities. In this paper, we propose a cross-layer transmission (CLT) scheme with QoS considerations for IEEE 802.11 wireless mesh networks. By utilizing CLT, the ratios of missed deadlines will be significantly improved to conform to the strict timing requirements of real-time multimedia applications. We develop a simulation model to investigate the performance of CLT. The capability of the proposed scheme is evaluated by a series of experiments, for which we have encouraging results.",2008,0, 2810,A fault tolerant approach in cluster computing system,"A long-term trend in high performance computing is the increasing number of nodes in parallel computing platforms, which entails a higher failure probability. Hence, fault tolerance becomes a key property for parallel applications running on parallel computing systems. The message passing interface (MPI) is currently the programming paradigm and communication library most commonly used on parallel computing platforms. MPI applications may be stopped at any time during their execution due to an unpredictable failure. In order to avoid complete restarts of an MPI application because of only one failure, a fault tolerant MPI implementation is essential. In this paper, we propose a fault tolerant approach in a cluster computing system. Our approach is based on the reassignment of tasks to the remaining system, and message logging is used to handle message losses. This system consists of two main parts, failure diagnosis and failure recovery. Failure diagnosis is the detection of a failure, and failure recovery is the action needed to take over the workload of a failed component. This fault tolerant approach is implemented as an extension of the message passing interface.",2008,0, 2811,Adaptive multi-language code generation using YAMDAT,"In the current environment of accelerating technological change, software development continues to be difficult, unpredictable, expensive, and error-prone. Model driven architecture (MDA), sometimes known as Executable UML, offers a possible solution. MDA provides design notations with precisely defined semantics. Using these notations, developers can create a design model that is detailed and complete enough that the model can be verified and tested via simulation (""execution""). Design faults, omissions, and inconsistencies can be detected without writing any code. Furthermore, implementation code can be generated directly from the model. In fact, implementations in different languages or for different platforms can be generated from the same model.",2008,0, 2812,MPEG-4 video mobile uplink caching algorithm,"Digital cellular mobile technologies have developed rapidly since analog technology was introduced. This has led to many new applications, especially multimedia. More interestingly, new mobile phones have embedded digital cameras, so that the user can take photos, record videos and make video calls. 
Although many applications of mobile uplink video exist, we cannot see the video clearly because of bandwidth limitations and the error-prone environment. Therefore, we propose a mobile video uplink caching algorithm that can diminish PSNR variations, which may result in better subjective video quality with a simple implementation.",2008,0, 2813,An Executable Interface Specification for Industrial Embedded System Design,"Nowadays, designers resort to abstraction techniques to conquer the complexity of industrial embedded systems during the design process. However, due to the large semantic gap between the abstractions and the implementation, designers often fail to apply the abstraction techniques. In this paper, an EIS-based (executable interface specification) approach is proposed for embedded system design. The proposed approach starts with using interface state diagrams to specify system architectures. A set of rules is introduced to transfer these diagrams into an executable model (EIS model) consistently. By making use of simulation/verification techniques, many architectural design errors can be detected in the EIS model at an early design stage. In the end, the EIS model can be systematically transferred into an interpreted implementation or a compiled implementation based on the constraints of the embedded platform. In this way, the inconsistencies between the high-level abstractions and the implementation can largely be reduced.",2008,0, 2814,MUSIC: Mutation-based SQL Injection Vulnerability Checking,"SQL injection is one of the most prominent vulnerabilities for web-based applications. Exploitation of SQL injection vulnerabilities (SQLIV) through successful attacks might result in severe consequences such as authentication bypassing, leaking of private information etc. Therefore, testing an application for SQLIV is an important step for ensuring its quality. However, it is challenging as the sources of SQLIV vary widely, which include the lack of effective input filters in applications, insecure coding by programmers, inappropriate usage of APIs for manipulating databases etc. Moreover, existing testing approaches do not address the issue of generating adequate test data sets that can detect SQLIV. In this work, we present a mutation-based testing approach for SQLIV testing. We propose nine mutation operators that inject SQLIV into application source code. The operators result in mutants, which can be killed only with test data containing SQL injection attacks. By this approach, we force the generation of an adequate test data set containing effective test cases capable of revealing SQLIV. We implement a MUtation-based SQL Injection vulnerabilities Checking (testing) tool (MUSIC) that automatically generates mutants for applications written in Java Server Pages (JSP) and performs mutation analysis. We validate the proposed operators with five open source web-based applications written in JSP. We show that the proposed operators are effective for testing SQLIV.",2008,0, 2815,Path-Sensitive Reachability Analysis of Web Service Interfaces (Short Paper),"WCFA (Web service interface control flow automata) is enhanced by allowing pre/post-conditions for certain Web service invocations to be declared. The formal definition of WCFA is given. Global behaviors of web service compositions (described by a set of WCFA) are captured by ARG (abstract reachability graph), in which each control point is equipped with a state formula and a call stack. 
The algorithm for constructing ARG uses a path-sensitive analysis to compute the state formulas. Pre/post-conditions are verified during the construction, where unreachable states are detected and pruned. Assertions can be made at nodes of ARG to express both safety properties and call stack inspection properties. Then a SAT solver is used to check whether the assertions are logical consequences of the state formulas(or/and call stacks).",2008,0, 2816,Does Adaptive Random Testing Deliver a Higher Confidence than Random Testing?,"Random testing (RT) is a fundamental software testing technique. Motivated by the rationale that neighbouring test cases tend to cause similar execution behaviours, adaptive random testing (ART) was proposed as an enhancement of RT, which enforces random test cases evenly spread over the input domain. ART has always been compared with RT from the perspective of the failure-detection capability. Previous studies have shown that ART can use fewer test cases to detect the first software failure than RT. In this paper, we aim to compare ART and RT from the perspective of program-based coverage. Our experimental results show that given the same number of test cases, ART normally has a higher percentage of coverage than RT. In conclusion, ART outperforms RT not only in terms of the failure-detection capability, but also in terms of the thoroughness of program-based coverage. Therefore, ART delivers a higher confidence of the software under test than RT even when no failure has been revealed.",2008,0, 2817,An Approach to Merge Results of Multiple Static Analysis Tools (Short Paper),"Defects have been compromising quality of software and costing a lot to find and fix. Thus a number of effective tools have been built to automatically find defects by analyzing code statically. These tools apply various techniques and detect a wide range of defects, with a little overlap among defect libraries. Unfortunately, the advantages of tools' defect detection capacity are stubborn to combine, due to the unique style each tool follows when generating analysis reports. In this paper, we propose an approach to merge results from different tools and report them in a universal manner. Besides, two prioritizing policies are introduced to rank results so as to raise users' efficiency. Finally, the approach and prioritizing policies are implemented in an integrated tool by merging results from three independent analyzing tools. In this way, end users may comfortably benefit from more than one static analysis tool and thus improve software's quality.",2008,0, 2818,Looking for More Confidence in Refactoring? How to Assess Adequacy of Your Refactoring Tests,"Refactoring is an important technique in today's software development practice. If applied correctly, it can significantly improve software design without altering behavior. During refactoring, developers rely on regression testing. However, without further knowledge about the test suite, how can we be confident that regression testing will detect potential refactoring faults? To get more insight into adequacy of refactoring tests, we therefore suggest test coverage of a refactoring's scope of impact as a quantitative measure of confidence. This paper shows how to identify a refactoring's scope of impact and proposes scope-based test coverage criteria. 
An example is included that illustrates how to use the new test coverage criteria for assessing the adequacy of refactoring tests.",2008,0, 2819,How to Measure Quality of Software Developed by Subcontractors (Short Paper),"In Japan, where the multiple subcontracting is very common in software development, it is difficult to measure the quality of software developed by subcontractors. Even if we request them for process improvement based on CMMI including measurement and analysis, it will not work immediately. Using """"the unsuccessful ratio in the first time testing pass"""" as measure to assess software quality, we have had good results. We can get these measures with a little effort of both outsourcer and subcontractor. With this measure, we could identify the defect-prone programs and conduct acceptance testing for these programs intensively. Thus, we could deliver the system on schedule. The following sections discuss why we devised this measure and its trial results.",2008,0, 2820,Towards a Method for Evaluating the Precision of Software Measures (Short Paper),"Software measurement currently plays a crucial role in software engineering given that the evaluation of software quality depends on the values of the measurements carried out. One important quality attribute is measurement precision. However, this attribute is frequently used indistinctly and confused with accuracy in software measurement. In this paper, we clarify the meaning of precision and propose a method for assessing the precision of software measures in accordance with ISO 5725. This method was used to assess a functional size measurement procedure. A pilot study was designed for the purpose of revealing any deficiencies in the design of our study.",2008,0, 2821,Architecture Compliance Checking at Runtime: An Industry Experience Report,"In this paper, we report on our experiences we made with architecture compliance checking at run-time. To that end, we constructed hierarchical colored Petri nets (CP-nets), using existing general purpose functional programming languages, for bridging the abstraction gap between architectural views and run-time traces. In an industry example, we were able to extract views that helped us to identify a number of architecturally relevant issues (e.g., style constraint violations) that would not have been detected otherwise. Finally, we demonstrate how to systematically design reusable hierarchical CP-nets, and package valuable experiences and lessons learned from the example application.",2008,0, 2822,Path and Context Sensitive Inter-procedural Memory Leak Detection,This paper presents a practical path and context sensitive inter-procedural analysis method for detecting memory leaks in C programs. A novel memory object model and function summary system are used. Preliminary experiments show that the method is effective. Several memory leaks have been found in real programs including which and wget.,2008,0, 2823,Adaptive Random Testing,"Summary form only given. Random testing is a basic testing technique. Motivated by the observation that neighboring inputs normally exhibit similar failure behavior, the approach of adaptive random testing has recently been proposed to enhance the fault detection capability of random testing. The intuition of adaptive random testing is to evenly spread the randomly generated test cases. Experimental results have shown that adaptive random testing can use as fewer as 50% of test cases required by random testing with replacement to detect the first failure. 
These results have very significant impact in software testing, because random testing is a basic and popular technique in software testing. In view of such a significant improvement of adaptive random testing over random testing, it is very natural to consider to replace random testing by adaptive random testing. Hence, many works involving random testing may be worthwhile to be reinvestigated using adaptive random testing instead. Obviously, there are different approaches of evenly spreading random test cases. In this tutorial, we are going to present several approaches, and discuss their advantages and disadvantages. Furthermore, the favorable and unfavorable conditions for adaptive random testing would also be discussed. Most existing research on adaptive random testing involves only numeric programs. The recent success of applying adaptive random testing for non-numeric programs would be discussed.",2008,0, 2824,Architecture-Based Assessment of Software Reliability,"With the growing advent of object-oriented and component-based software development paradigms, architecture-based software reliability analysis has emerged as an attractive alternative to the conventional black-box analysis based on software reliability growth models. The primary advantage of the architecture-based approach is that it explicitly relates the application reliability to component reliabilities, which eases the identification of components that are critical from a reliability perspective. Furthermore, these techniques can be used for an early assessment of the application reliability. These two features together can provide valuable information to practitioners and architects who design software applications, and managers who plan the allocation of resources to achieve the desired reliability targets in a cost effective manner.The objective of this tutorial is to discuss techniques to assess the reliability of a software application taking into consideration its architecture and the failure behavior of its components. The tutorial will also present how the architecture-based approach could be used to analyze the sensitivity of the application reliability to component and architectural parameters and to compute the importance measures of the application components. We will demonstrate the potential of the techniques presented in the tutorial through a case study of the IP multimedia subsystem (IMS).",2008,0, 2825,Where's My Jetpack?,"Software development tools often fail to deliver on inflated promises. Rather than the predicted progression toward ever-increasing levels of abstraction, two simple trends have driven the evolution of currently available software development tools: integration at the source-code level and a focus on quality. Thus source code has become the bus that tools tap into for communicating with other tools. Also, focus has shifted from defect removal in the later phases to defect prevention in the earlier phases. In the future, tools are likely to support higher levels of abstraction, perhaps in the form of domain-specific languages communicated using XML.",2008,0, 2826,Using Static Analysis to Find Bugs,"Static analysis examines code in the absence of input data and without running the code. It can detect potential security violations (SQL injection), runtime errors (dereferencing a null pointer) and logical inconsistencies (a conditional test that can't possibly be true). 
Although a rich body of literature exists on algorithms and analytical frameworks used by such tools, reports describing experiences in industry are much harder to come by. The authors describe FindBugs, an open source static-analysis tool for Java, and experiences using it in production settings. FindBugs evaluates what kinds of defects can be effectively detected with relatively simple techniques and helps developers understand how to incorporate such tools into software development.",2008,0, 2827,A Probabilistic Reliability Evaluation of Korea Power System,"Reliability and power quality have been increasingly important in recent years due to a number of black-out events occurring throughout the world. This paper presents a practical method of probabilistic reliability evaluation of Korea Power system by using the Probabilistic Reliability Assessment (PRA) program and Physical and Operational Margins (POM). The case study computes the Probabilistic Reliability Indices (PRI) of Korea Power system as applied PRA and POM. It takes a large number of contingency in load simulations and combines them with a practical method of characterizing the effect of the availabilities of generators, lines and transformers. The effectiveness and future works are illustrated by demonstrations of case study. The case studies of Korea power system are shown that these packages are effective in identifying possible weak points and root causes for likely reliability problems. The potential for these software packages is being explored further for assisting system operators with managing Korea power system.",2008,0, 2828,A Novel Watermarking-Based Reversible Image Authentication Scheme,"Nowadays, digital watermarking algorithms are widely applied to ownership protection and tampering detection of digital images. In this paper, we propose a novel reversible image authentication scheme based on watermarking techniques, which is employed to protect the rightful ownership and detect malicious manipulation over embedded images. In our scheme, the original image is firstly split into many non- overlapping blocks, then one-way function MD5 is used to compute the digest of every block and the digest is then inserted into the least significant bits of some selected pixels, the experimental results show that the proposed scheme is quite simple and the execution time is short. Moreover, the quality of the embedded image is very high, and if the image is authentic, the distortion due to embedding can be completely removed from the watermarked image after the hidden data has been extracted, in the meantime, the positions of the tampered parts are located correctly.",2008,0, 2829,On the Trend of Remaining Software Defect Estimation,"Software defects play a key role in software reliability, and the number of remaining defects is one of most important software reliability indexes. Observing the trend of the number of remaining defects during the testing process can provide very useful information on the software reliability. However, the number of remaining defects is not known and has to be estimated. Therefore, it is important to study the trend of the remaining software defect estimation (RSDE). In this paper, the concept of RSDE curves is proposed. An RSDE curve describes the dynamic behavior of RSDE as software testing proceeds. Generally, RSDE changes over time and displays two typical patterns: 1) single mode and 2) multiple modes. 
This behavior is due to the different characteristics of the testing process, i.e., testing under a single testing profile or multiple testing profiles with various change points. By studying the trend of the estimated number of remaining software defects, RSDE curves can provide further insights into the software testing process. In particular, in this paper, the Goel-Okumoto model is used to estimate this number on actual software failure data, and some properties of RSDE are derived. In addition, we discuss some theoretical and application issues of the RSDE curves. The concept of the proposed RSDE curves is independent of the selected model. The methods and development discussed in this paper can be applied to any valid estimation model to develop and study its corresponding RSDE curve. Finally, we discuss several possible areas for future research.",2008,0, 2830,ZigBee technology applied to supervisory system of boiler welding quality,"The boiler welding process, the detected parameters, and their application features and requirements are introduced; the communication technology, protocol and characteristics are dissected; the design principles of the wireless network platform are discussed; the implementation details of the system software are analyzed; and certain problems related to industrial application are addressed.",2008,0, 2831,An approach of fault detection based on multi-mode,"Conventional multi-scale principal component analysis (MSPCA) only detects faults, but it cannot detect fault types. To address these problems, a multi-mode fault detection method that incorporates MSPCA into an adaptive resonance (ART) neural network is presented. Firstly, this method applies a wavelet transform to the sample data, and principal component analysis can be used to analyze the data at each scale. Then ART is used to classify the reconstructed data. It can detect faults effectively, and with wavelet denoising ART2 can easily classify the faults and successfully separate them in the system. Finally, multi-mode fault detection is developed for an autocorrelated system application through computer simulation experiments. The theory and simulation experiments show that this method has wide application prospects.",2008,0, 2832,An Approach to Separating Security Concerns in E-Commerce Systems at the Architecture Level,"Security is a requisite and vital concern that should be addressed in e-commerce systems. Traditionally, to add security properties to the application, developers had to specify when, where and how to apply what security policies manually. Such a process is often complicated and error-prone. This paper describes an aspect oriented approach to separating security and application concerns at the architecture level. In the approach, security and application concerns are specified in security aspect models and a base model separately. By specifying the crosscutting relationship between them, the two kinds of models are combined together through weaving. The weaving is based on process algebras and is automatic. Separating security aspects at the early stage of software development can promote maintainability and traceability of the system.",2008,0, 2833,HGRID: An Adaptive Grid Resource Discovery,"Grid resource discovery service is a fundamental problem that has been the focus of research in the recent past.
We propose a scheme that has essential characteristics for efficient, self-configuring and fault-tolerant resource discovery and is able to handle dynamic attributes, such as memory capacity. Our approach consists of an overlay network with a hypercube topology connecting the grid nodes and a scalable, fault-tolerant, self-configuring and adaptive search algorithm. Every grid node keeps a small routing table of only log2N entries. The search algorithm is executed in less than (log2N +1) time steps and each grid node is queried only once. By design, the algorithm improves the probability of reaching all working nodes in the system even in the presence of non-alive nodes (inaccessible, crashed or heavily loaded nodes). We analyze the static resilience of the approach presented, which is the measure of how well the algorithm can discover resources without having to update the routing tables. This is done before the routing recovery is processed in order to reconfigure the overlay to avoid non-alive nodes. The results show that our approach has a significantly high static resilience for a grid environment.",2008,0, 2834,Automatic pixel-shift detection and restoration in videos,"A common form of serious defect in video is pixel-shift. It is caused by the consecutive pixels loss introduced by video transmission systems. Pixel-shift means a large amount of pixel shifts one by one due to a small quantity of image data loss. The damaged region in affected frame is usually large, and thus the visual effect is often very disturbing. So far there is no method of automatically treating pixel-shift. This paper focuses on a difficult issue to locate pixel-shift in videos. We propose an original algorithm of automatically detecting and restoring pixel-shift. Pixel-shift frames detection relies on spatio-temporal information and motion estimation. Accurate measure of pixels shift is best achieved based on the analysis of temporal-frequency information. Restoration is accomplished by reversing the pixels shift and spatio-temporal interpolation. Experimental results show that our completely automatic algorithm can achieve very good performances.",2008,0, 2835,Effective self-test routine for on-line testing of processors implemented in harsh environments,"Today, it is a common practice to test commercial off-the-shelf (COTS) processors with self-test routines. Faults in processors may cause failure in self-test routine execution, which is one of the essential disadvantages of these routines. In this paper, we present an effective register transfer level (RTL) method to develop on-line self-test routines. Our proposed method prioritizes components and instructions of processor to select instructions, and applies spectral RTL test pattern generation (TPG) strategy to select test patterns. This method analyzes the spectrum and the noise level with Walsh functions. Also, we use a few extra instructions for the purpose of the signature monitoring to detect control flow errors. We demonstrate that the combination of these three strategies is effective for developing small test programs with high fault coverage in a small test development time. This approach requires only instruction set architecture (ISA) and RTL information of the processors. Since proposed method is based on RTL test generation, it has the advantages of lower memory and test generation time complexities. 
We develop a self-test routine using our proposed method for the Parwan processor and demonstrate the effectiveness of our proposed methodology for on-line testing by presenting experimental results for the Parwan processor.",2008,0, 2836,An Analysis of Missed Structure Field Handling Bugs,"Despite the importance and prevalence of structures (or records) in programming, no study till now has deeply analyzed the bugs made in their usage. This paper makes a first step to fill that gap by systematically and deeply analyzing a subset of structure usage bugs. The subset, referred to as MSFH bugs, are errors of omission associated with structure fields when they are handled in a grouped context. We analyze the nature of these bugs by providing a taxonomy, root cause analysis, and barrier analysis. The analysis provided many new insights, which suggested new solutions for preventing and detecting the MSFH bugs.",2008,0, 2837,Finding Narrow Input/Output (NIO) Sequences by Model Checking,"Conformance test sequences for communication protocols specified by finite state machines (FSM) often use unique input/output (UIO) sequences to detect state transition transfer faults. Since a UIO sequence may not exist for every state of an FSM, in the previous research, we extended UIO sequence to introduce a new concept called narrow input/output (NIO) sequence. The general computation of NIO sequences may lead to state explosion when an FSM is very large. In this paper, we present an approach to find NIO sequences using symbolic model checking. Constructing a Kripke structure and a computation tree logic (CTL) formula for such a purpose is described in detail. We also illustrate the method using the model checker SMV.",2008,0, 2838,Coping with unreliable channels: Efficient link estimation for low-power wireless sensor networks,"The dynamic nature of wireless communication and the stringent energy constraints are major challenges for the design of low-power wireless sensor network applications. The link quality of a wireless link is known for its great variability, dependent on the distance between nodes, the antenna's radiation characteristic, multipath, diffraction, scattering and many more. Especially for indoor and urban deployments, there are numerous factors impacting the wireless channel. In an extensive experimental study contained in the first part of this paper, we show the magnitude of this problem for current Wireless Sensor Networks (WSNs) and that based on the overall connectivity graph of a typical multihop WSN, a large portion of the links actually exhibit very poor characteristics. We present a pattern based estimation technique that allows assessing the quality of a link at startup and as a result to construct an optimal neighbor table right at the beginning using a minimum of resources only. Our estimation technique is superior compared to other approaches where protocols continue to decide on the fly which links to use, expending valuable energy both for unnecessary retransmissions and recursive link estimation.",2008,0, 2839,Hardware accelerated Scalable Parallel Random Number Generators for Monte Carlo methods,"Monte Carlo methods often demand the generation of many random numbers to provide statistically meaningful results. Because generating random numbers is time consuming and error-prone, the Scalable Parallel Random Number Generators (SPRNG) library is widely used for Monte Carlo simulation. SPRNG supports fast, scalable random number generation with good statistical properties.
In order to accelerate SPRNG, we develop a hardware accelerated version of SPRNG that produces identical results. To demonstrate HASPRNG for Reconfigurable Computing (RC) applications, we develop a Monte Carlo pi-estimator for the Cray XD1 and XUP platforms. The RC MC pi-estimator shows 8.1 times speedup over the 2.2 GHz AMD Opteron processor in the Cray XD1.",2008,0, 2840,A New Vocoder based on AMR 7.4kbit/s Mode in Speaker Dependent Coding System,"A new code excited linear predictive (CELP) vocoder based on Adaptive Multi Rate (AMR) 7.4 kbit/s mode is proposed in this paper. The proposed vocoder achieves a better compression rate in an environment of Speaker Dependent Coding System (SDSC) and is efficiently used for systems, such as OGM (Outgoing message) and TTS (Text To Speech), that store the speech data of a particular speaker. In order to enhance the compression rate of a coder, a new Line Spectral Pairs (LSP) codebook is employed by using the Centroid Neural Network (CNN) algorithm. Moreover, applying the predicted pulses used in fixed code book searching enhances the quality of the synthesized speech. In comparison with the original (traditional) AMR 7.4 kbit/s coder, the new coder shows a superior compression rate and an equivalent quality to the AMR coder in terms of informal subjective testing Mean Opinion Score (MOS).",2008,0, 2841,Mathematical Modelling for the Design of an Edge Router,"This paper presents the formulation, modelling and analysis of network traffic for the purpose of designing an edge router, catering for a large campus. Presuming packet arrivals to follow a Poisson process, and the departure or service time distribution to be exponential, a Markov model has been developed. The stochastic characteristics of the traffic have been monitored and studied over long durations through different times of a day and different days of a week. Design parameters like buffer capacity, link speed and various other key parameters for an edge router have been derived. Additionally, an algorithm has been suggested to manipulate the weights of the queues dynamically in a weighted round robin scheduling, thereby changing the packet departure rate of the flows. The algorithm eventually provides a control over the load factors and the probability of packet.",2008,0, 2842,Meta-heuristic Enabled MAS Optimization in Supply Chain Procurement,"This paper introduces a meta-heuristic enabled multi-agent optimization architecture for dynamic transportation planning in supply chain procurement (SCP) plans. When multi-agent systems (MAS) are used for real-time dynamic optimization, agents seek the solution using distributed heuristics. However, distributed heuristics based on local information are prone to converge at local optimality. To escape from local optimality toward higher quality solutions, we introduce meta-heuristics over agent interactions to advise the agents' searching process. In this paper, we mainly propose a variable neighborhood search meta-heuristic (VNS-MH) over the distributed market based heuristic (DMBH), a distributed heuristic based on market interactions for transportation planning. The numerical results show that VNS-MH performs better on achieving optimality than DMBH.",2008,0, 2843,A Crossover Game Routing Algorithm for Wireless Multimedia Sensor Networks,"The multi-constrained QoS-based routing problem of wireless multimedia sensor networks is an NP hard problem. Genetic algorithms (GAs) have been used to handle these NP hard problems in wireless networks.
Because the crossover probability is a key factor in the behavior and performance of GAs, affects their convergence, and is very difficult to select, we propose a novel method - a crossover game - instead of probabilistic crossover. The crossover game in routing problems is based on the fact that each node has restricted energy and tends to obtain the maximal whole-network benefit while paying the minimum cost. The players of the crossover game are individual routes. An individual performs the crossover operator if it is a Nash equilibrium of the crossover game. The simulation results demonstrate that this method is effective and efficient.",2008,0, 2844,Finding Causes of Software Failure Using Ridge Regression and Association Rule Generation Methods,"An important challenge in finding latent errors in software is to find predicates which have the most effect on program failure. Since predicates have mutual effects on each other, it is not a good solution to analyze them in isolation, without considering the simultaneous effects of other predicates on failure. The aim is to detect those predicates which are the best bug predictors and meanwhile have the least effects among themselves. To achieve this, a recursive ridge regression method has been applied. In order to determine the main causes of program failure, association rule generation is used to detect those predicates which are most often observed with bug predictors in faulty executions. Based on the detected predicates, the faulty paths in the control flow graph are introduced to the debugger. Our empirical results on two well-known test suites, EXIF and Siemens, imply that the proposed approach can detect the main causes of program failure with greater accuracy.",2008,0, 2845,Safety supervision layer,"This work covers a generic approach to fault detection for operating systems in fail-safe environments. A safety supervision layer between the application layer and the operating system interface is discussed. It is an attempt to detect operating system and hardware faults in an end-to-end way. Standard POSIX system calls are wrapped by procedures that provide fault detection features. Furthermore, potentials of an additional watchdog module on top of the operating system interface are analyzed. Applications that use the Safety Supervision Layer are notified of detected faults and deal with them by providing specific handlers to bring the fail-safe system to its safe state. The goal of the presented layer is to encapsulate the operating system and hardware layers a safety-critical application resides on, in order to detect faults produced by those and bring the system to a safe state. Advantages of such an attempt are portability, lower time-to-market, higher cost efficiency in building fail-safe systems and - most important - reduced error detection latency compared to usual periodic supervision approaches.",2008,0, 2846,An integrated framework of the modeling of failure-detection and fault-correction processes in software reliability analysis,"Failure detection and fault correction are critical processes in attaining good software quality. In this paper, we propose several improvements on the conventional software reliability growth models (SRGMs) to describe the actual software development process by eliminating some unrealistic assumptions. Most of these models have focused on the failure detection process and not given equal priority to modeling the fault correction process.
However, most latent software errors may remain uncorrected for a long time even after they are detected, which increases their impact. The remaining software faults are often one of the main causes of unreliable software quality. Therefore, we develop a general framework for the modeling of the failure detection and fault correction processes. Furthermore, we also analyze the effect of applying the delay-time non-homogeneous Poisson process (NHPP) models. Finally, numerical examples are shown to illustrate the results of the integration of the detection and correction processes.",2008,0, 2847,Automatic model generation of IEC 61499 function block using net condition/event systems,"The IEC 61499 standard establishes a framework specifically designed for the implementation of decentralized reconfigurable industrial automation systems. However, the process of distributed system validation and verification is difficult and error-prone. This paper discusses the need for model generators which are capable of automatically translating IEC 61499 function blocks into formal models following specific execution semantics. In particular, this paper introduces the prototype Net Condition/Event Systems model generator and aims to summarize the generic techniques of model translation.",2008,0, 2848,MultiAgent architecture for function blocks: Intelligent configuration strategies allocation,"This paper presents a multiagent architecture which detects faults in process automation and allocates intelligent algorithms in field device function blocks to solve these faults. This architecture is a FIPA-standard based agent platform and was developed using JADE and foundation fieldbus technology. The main objective is to enable problem detection activities independent of the user's intervention. The use of artificial neural network (ANN) based algorithms enables the agents to find out about problem patterns and to make decisions about which algorithm can be used in which situations. With this we intend to reduce the supervisor's intervention in selecting and implementing an appropriate structure of function block algorithms. Furthermore, these algorithms, when implemented in device function blocks, provide a solution at fieldbus level, reducing data traffic between gateway and device, and speeding up the process of dealing with the problem. An example is demonstrated with a laboratory test process where fault scenarios have been imitated.",2008,0, 2849,Workflow mining: Extending the algorithm to mine duplicate tasks,"Designing a workflow model is a complicated, time-consuming and error-prone process. A possible solution is workflow mining, which extracts workflow models from workflow logs. Considerable research has been done to develop heuristics to mine event-data logs in order to make a workflow model. However, if there are cyclic tasks in workflow traces, the current research in workflow mining still has problems in mining duplicate tasks. Based on the alpha-algorithm, an improved workflow mining algorithm called the alpha#-algorithm is presented. Complete experiments have been done to evaluate the proposed algorithm.",2008,0, 2850,Graphical Representation as a Factor of 3D Software User Satisfaction: A Metric Based Approach,"During the last few years, an increase in the development and research activity on 3D applications, mainly motivated by the rigorous growth of the game industry, is observed. This paper deals with assessing user satisfaction, i.e.
a critical aspect of 3D software quality, by measuring technical characteristics of virtual worlds. Such metrics can be easily calculated in games and virtual environments of different themes and genres. In addition to that, the metric suite would provide an objective mean of comparing 3D software. In this paper, metrics concerning the graphical representation of a virtual world are introduced and validated through a pilot experiment.",2008,0, 2851,A BBN Based Approach for Improving a Telecommunication Software Estimation Process,"This paper describes analytically a methodology for improving the estimation process of a small-medium telecommunication (TLC) company. All the steps required for the generation of estimates such as data collection, data transformation, estimation model extraction and finally exploitation of the knowledge explored are described and demonstrated as a case study involving a Greek TLC company. Based on this knowledge certain interventions are suggested in the current process of the company under study in order to include formal estimation procedures in each development phase.",2008,0, 2852,Rating Agencies Interoperation for Peer-to-Peer Online Transactions,"In current peer-to-peer systems users interact with unknown services and users for the purpose of online transactions such as file sharing and trading of commodities. Peer-to-Peer reputation systems allow users to assess the trustworthiness of unknown entities based on subjective feedback from the other peers. However, this cannot constitute sufficient proof for many transactions like service composition, negotiations and coalition formation in which users require more solid proof of the quality of unknown services. Ratings certified by trusted third parties in the form of a security token are objective and reliable and, hence, allow building trust between peers. Because of the decentralized and distributed nature of peer-to-peer networks, a central authority (or hierarchy of them) issuing such certificates would not scale up. We propose a framework for peer-to-peer agencies interoperation based on rating certificates and meta certificates describing bilateral agencies relations.",2008,0, 2853,Dynamic Multipath Allocation in Ad Hoc Networks,"Ad hoc networks are characterized by fast dynamic changes in the topology of the network. A known technique to improve QoS is to use multipath routing where packets (voice/video/...) from a source to a destination travel in two or more maximal disjoint paths. We observe that the need to find a set of maximal disjoint paths can be relaxed by finding a set of paths S wherein only bottlenecked links are bypassed. In the proposed model we assume that there is only one edge along a path in S is a bottleneck and show that by selecting random paths in S the probability that bottlenecked edges get bypassed is high. We implemented this idea in the MRA system which is a highly accurate visual ad hoc simulator currently supporting two routing protocols AODV and MRA. We have extended the MRA protocol to use multipath routing by maintaining a set of random routing trees from which random paths can be easily selected. Random paths are allocated/released by threshold rules monitoring the session quality. 
The experiments show that: (1) session QoS is significantly improved, (2) the fact that many sessions use multiple paths in parallel does not degrade overall performance, (3) the overhead of maintaining multipath in the MRA algorithm is negligible.",2008,0, 2854,Development of Fault Detection System in Air Handling Unit,"Monitoring systems currently used to operate air handling units (AHU) optimally do not have a function to detect faults properly, such as failures of operating plant or falling performance, so they are unable to manage faults rapidly and operate optimally. In this paper, we have developed a classified rule-based fault detection system which can be used inclusively in the AHU system of a building with the sensors that compose the AHU system, and which requires low cost compared to model-based fault detection systems that can be used only in a special building or system. In order to test this algorithm, it was applied to an AHU system installed inside an environment chamber (EC), which verified its practical effect and confirmed its applicability to the related field in the future.",2008,0, 2855,The Study of Response Model & Mechanism Against Windows Kernel Compromises,"Malicious codes have been widely documented and detected in information security breach occurrences on the Microsoft Windows platform. Legacy information security systems are particularly vulnerable to breaches due to Windows kernel-based malicious codes, which penetrate existing protection and remain undetected. To date there has not been enough quality study into, and information sharing about, the Windows kernel and its inner code mechanisms, and this is the core reason for the success of these codes in entering systems and remaining undetected. This paper focuses on the classification and formalization of the type, target and mechanism of various Windows kernel-based attacks, and presents suggestions for effective response methodologies in the categories of ""Kernel memory protection"", ""process & driver protection"" and ""File system & registry protection"". An effective Windows kernel protection system will be presented through the collection and analysis of the Windows kernel and its inside mechanisms, and through suggestions for implementation methodologies of new, unreleased Windows kernel protection techniques. Results presented in this paper will show that the suggested system is highly effective and has more accurate intrusion detection ratios than the current legacy security systems (i.e., virus vaccines, Windows IPS, etc.). Thus, it is expected that the suggested system provides a good solution for protecting IT infrastructure from complicated and intelligent Windows kernel attacks.",2008,0, 2856,A Scalable Method for Improving the Performance of Classifiers in Multiclass Applications by Pairwise Classifiers and GA,"In this paper, a new combinational method for improving the recognition rate of multiclass classifiers is proposed. The main idea behind this method is using pairwise classifiers to enhance the ensemble. Because of their higher accuracy, they can decrease the error rate in error-prone regions of the feature space. Firstly, a multiclass classifier has been trained. Then, according to the confusion matrix and evaluation data, the pair-classes that have the most errors are derived. After that, pairwise classifiers have been trained and added to the ensemble of classifiers.
Finally, a weighted majority vote for combining the primary results is applied. In this paper, a multilayer perceptron is used as the base classifier. Also, a GA determines the optimized weights in the final classifier. This method is evaluated on a Farsi handwritten digit dataset. Using the proposed method, the recognition rate of the simple multiclass classifier has been improved from 97.83 to 98.89, which shows an adequate improvement.",2008,0, 2857,Dynamic Equilibrium Replica Location Algorithms in Data Grid,"Replica location is one of the key issues of data management in grid environments. Existing replica location services employ statistical location information to locate replicas, lack a proactive detection mechanism for updates, and equalize location information at large cost. In this paper, aimed at these existing problems and combined with the characteristics of the data grid, a dynamic equilibrium replica location algorithm (DERLS) was proposed. The idea underlying our DERLS algorithms consists of three parts: (1) The replica location service locates physical replicas according to the density of pheromone spread by artificial ants. (2) A dynamic equilibrium technique is proposed to prevent the positive feedback of basic ant algorithms. (3) An update strategy based on improved ant algorithms is used to detect new replicas or recover faulty replicas. Experimental results show the availability and validity of the replica location in detail.",2008,0, 2858,Hardware/Software Design Considerations for Automotive Embedded Systems,"An increasing number of safety-critical functions is taken over by embedded systems in today's automobiles. While standard microcontrollers are the dominant hardware platform in these systems, the decreasing costs of new devices such as field programmable gate arrays (FPGAs) make it interesting to consider them for automotive applications. In this paper, a comparison of microcontrollers and FPGAs with respect to safety and reliability properties is presented. For this comparison, hardware fault handling was considered as well as software fault handling. Our own empirical evaluations in the area of software fault handling identified advantages of FPGAs with respect to the encapsulation of real-time functions. On the other hand, several dependent failures were detected in versions developed independently on microcontrollers and FPGAs.",2008,0, 2859,A Fault Tolerance Scheme for Hierarchical Dynamic Schedulers in Grids,"In dynamic grid environments failures (e.g. link down, resource failures) are frequent. We present a fault tolerance scheme for a hierarchical dynamic scheduler (HDS) for grid workflow applications. In HDS all resources are arranged in a hierarchy tree and each resource acts as a scheduler. The fault tolerance scheme is fully distributed and is responsible for maintaining the hierarchy tree in the presence of failures. Our fault tolerance scheme handles root failures specially, which avoids the root becoming a single point of failure. The resources detecting failures are responsible for taking appropriate actions. Our fault tolerance scheme uses randomization to get rid of multiple simultaneous failures. Our simulation results show that the recovery process is fast and that failures affect the scheduling process minimally.",2008,0, 2860,A Simulation Framework for Dependable Distributed Systems,"The use of discrete-event simulators in the design and development of distributed systems is appealing due to their efficiency and scalability.
Their core abstractions of process and event map neatly to the components and interactions of modern-day distributed systems and allow designing realistic simulation scenarios. MONARC, a multi-threaded, process oriented simulation framework designed for modeling large scale distributed systems, allows the realistic simulation of a wide-range of distributed system technologies, with respect to their specific components and characteristics. In this paper we present an innovative solution to the problem of evaluating the dependability characteristic of distributed systems. Our solution is based on several proposed extensions to the simulation model of the MONARC simulation framework. These extensions refer to fault tolerance and system orchestration mechanisms being added in order to assess the reliability and availability of distributed systems. The extended simulation model includes the necessary components to describe various actual failure situations and provides the mechanisms to evaluate different strategies for replication and redundancy procedures, as well as security enforcement mechanisms.",2008,0, 2861,Improving the Efficiency of Misuse Detection by Means of the q-gram Distance,"Misuse detection-based intrusion detection systems (IDS) perform search through a database of attack signatures in order to detect whether any of them are present in incoming traffic. For such testing, fault-tolerant distance measures are needed. One of the appropriate distance measures of this kind is constrained edit distance, but the time complexity of its computation is too high. We propose a two-phase indexless search procedure for application in misuse detection-based IDS that makes use of q-gram distance instead of the constrained edit distance. We study how well q-gram distance approximates edit distance with special constraints needed in IDS applications. We compare the performances of the search procedure with the two distances applied in it. Experimental results show that the procedure with the q-gram distance implemented achieves for higher values of q almost the same accuracy as the one with the constrained edit distance implemented, but the efficiency of the procedure that implements the q-gram distance is much better.",2008,0, 2862,DAST: A QoS-Aware Routing Protocol for Wireless Sensor Networks,"In wireless sensor networks (WSNs), a challenging problem is how to advance network QoS. Energy-efficiency, network communication traffic and failure-tolerance, these important factors of QoS are closely related with the applied performance of WSNs. Hence a QoS-aware routing protocol called directed alternative spanning tree (DAST) is proposed to balance the above three factors of QoS. A directed tree-based model is constructed to bring data transmission more motivated and efficient. Based on Markov, a communication state predicted mechanism is proposed to choose reasonable parent, and packet transmission to double-parent is submitted with alternative algorithm. For enhancing network failure-tolerance, routing reconstruction is studied on. With the simulations, the proposed protocol is evaluated in comparison with the existing protocols from energy efficiency to the failure-tolerance. 
The performance of DAST is verified to be efficient and available, and it is competent for satisfying QoS of WSNs.",2008,0, 2863,Design and Analysis of Embedded GPS/DR Vehicle Integrated Navigation System,"Global Position system (GPS) is a positioning system with superior long-term error performance, while Dead Reckoning (DR) system has good positioning precision in short-term, through advantage complementation, a GPS/DR integration provides position data with high reliability for vehicle navigation system. This paper focuses on the design of the embedded GPS/DR vehicle integrated navigation system using the nonlinear Kalman filtering approach. The signal's observation gross errors are detected and removed at different resolution levels based on statistic 3sigma- theory, and navigation data are solved with Extended Kalman filter in real-time, the fault tolerance and precision of the vehicle integrated navigation system are improved greatly.",2008,0, 2864,Middleware for Dependable Computing,"As applications become more distributed and complex, the probability of faults undoubtedly increases. Distributed systems often face some challenges, such as node failure, object crash, network partition, value fault in applications, and so on. To support designers building dependable applications, research in the field of middleware systems has proliferated. In this paper, we examine some key issues of dependable middleware systems, introduce several basic concepts related to dependable middleware, present a detailed of review of the major dependable middleware systems in this field. Finally, we point out future directions of research and conclude the paper.",2008,0, 2865,Energy-Aware Multi-Path Streaming of MPEG-4 FGS Video over Wireless,"The wireless mobile nodes are self-powered and energy-sensitive. It is critical to prevent rapid energy dissipation while streaming high quality video over wireless. We investigate the energy consumption mode of mobile nodes and propose an energy-aware scheme for efficient streaming of MPEG-4 FGS video over multiple paths in wireless. We calculate the decoding aptitude of each FGS-coded frame before its decoding deadline, based on the available energy of a mobile node, in order to fully utilize its capacity to decode frames and avoid energy waste. We give the multi-path selection model that tries to minimize the packets drop probability on each path, while taking congestion, contention, channel error, interference and mobility into considerations. By incorporating the decoding aptitude with the path selection model, packets in each frame can be transmitted according to the available energy and throughput between mobile nodes, thus no energy is wasted. If the decoding aptitude is higher than the bandwidth on a single path, more packets can be transmitted over another path, thus the quality of received video can be progressively improved. This scheme is validated on Xscale-based mobile nodes.",2008,0, 2866,Study on Recognition Characteristics of Acoustic Emission Based on Fractal Dimension,"It is difficult to recognise acoustic emission (AE) signal because of serious pollution by noise. Fractal dimension is a new method to describe the characteristics of AE signal. According to the complex computation and low precision of box counting dimension, correlation dimension and Katz dimension, an algorithm of logarithmic fractal dimension based on waveform length was proposed in this paper the deduction process was introduced. 
The experimental data were rub-impact AE signals sampled from a rotating test stand. Gaussian white noise and non-stationary noise were added to simulate field AE signals that are seriously polluted by noise. Then, three algorithms based on the box counting dimension, the Katz dimension and the logarithmic fractal dimension were compared in AE signal recognition. The results show that the logarithmic-dimension algorithm distinguishes rub-impact AE signals from strong noise more effectively, and has lower computational cost and higher precision than the others. It provides a new approach to identifying the characteristics of AE signals and detecting rub-impact faults of rotating machinery.",2008,0, 2867,Effective Web Service Composition in Diverse and Large-Scale Service Networks,"The main research focus of Web services is to achieve the interoperability between distributed and heterogeneous applications. Therefore, flexible composition of Web services to fulfill the given challenging requirements is one of the most important objectives in this research field. However, until now, service composition has been largely an error-prone and tedious process. Furthermore, as the number of available web services increases, finding the right Web services to satisfy the given goal becomes intractable. In this paper, toward these issues, we propose an AI planning-based framework that enables the automatic composition of Web services, and explore the following issues. First, we formulate the Web-service composition problem in terms of AI planning and network optimization problems to investigate its complexity in detail. Second, we analyze publicly available Web service sets using network analysis techniques. Third, we develop a novel Web-service benchmark tool called WSBen. Fourth, we develop a novel AI planning-based heuristic Web-service composition algorithm named WSPR. Finally, we conduct extensive experiments to verify WSPR against state-of-the-art AI planners. It is our hope that both WSPR and WSBen will provide useful insights for researchers to develop Web-service discovery and composition algorithms, and software.",2008,0, 2868,Automated duplicate detection for bug tracking systems,"Bug tracking systems are important tools that guide the maintenance activities of software developers. The utility of these systems is hampered by an excessive number of duplicate bug reports-in some projects as many as a quarter of all reports are duplicates. Developers must manually identify duplicate bug reports, but this identification process is time-consuming and exacerbates the already high cost of software maintenance. We propose a system that automatically classifies duplicate bug reports as they arrive to save developer time. This system uses surface features, textual semantics, and graph clustering to predict duplicate status. Using a dataset of 29,000 bug reports from the Mozilla project, we perform experiments that include a simulation of a real-time bug reporting environment. Our system is able to reduce development cost by filtering out 8% of duplicate bug reports while allowing at least one report for each real defect to reach developers.",2008,0, 2869,Using likely program invariants to detect hardware errors,"In the near future, hardware is expected to become increasingly vulnerable to faults due to continuously decreasing feature size. Software-level symptoms have previously been used to detect permanent hardware faults. However, they cannot detect a small fraction of faults, which may lead to silent data corruptions (SDCs).
In this paper, we present a system that uses invariants to improve the coverage and latency of existing detection techniques for permanent faults. The basic idea is to use training inputs to create likely invariants based on value ranges of selected program variables and then use them to identify faults at runtime. Likely invariants, however, can have false positives, which makes them challenging to use for permanent faults. We use our on-line diagnosis framework for detecting false positives at runtime and limit the number of false positives to keep the associated overhead minimal. Experimental results using microarchitecture level fault injections in full-system simulation show a 28.6% reduction in the number of undetected faults and a 74.2% reduction in the number of SDCs over existing techniques, with reasonable overhead for checking code.",2008,0, 2870,Tempest: Towards early identification of failure-prone binaries,"Early estimates of failure-proneness can be used to help inform decisions on testing, refactoring, design rework etc. Often such early estimates are based on code metrics like churn and complexity. But such estimates of software quality rarely make their way into a mainstream tool and find industrial deployment. In this paper we discuss the Tempest tool that uses statistical failure-proneness models based on code complexity and churn metrics across the Microsoft Windows code base to identify failure-prone binaries early in the development process. We also present the tool architecture and its usage as of date at Microsoft.",2008,0, 2871,ConfErr: A tool for assessing resilience to human configuration errors,"We present ConfErr, a tool for testing and quantifying the resilience of software systems to human-induced configuration errors. ConfErr uses human error models rooted in psychology and linguistics to generate realistic configuration mistakes; it then injects these mistakes and measures their effects, producing a resilience profile of the system under test. The resilience profile, capturing succinctly how sensitive the target software is to different classes of configuration errors, can be used for improving the software or to compare systems to each other. ConfErr is highly portable, because all mutations are performed on abstract representations of the configuration files. Using ConfErr, we found several serious flaws in the MySQL and Postgres databases, Apache web server, and BIND and djbdns name servers; we were also able to directly compare the resilience of functionally-equivalent systems, such as MySQL and Postgres.",2008,0, 2872,AGIS: Towards automatic generation of infection signatures,"An important yet largely uncharted problem in malware defense is how to automate generation of infection signatures for detecting compromised systems, i.e., signatures that characterize the behavior of malware residing on a system. To this end, we develop AGIS, a host-based technique that detects infections by malware and automatically generates an infection signature of the malware. AGIS monitors the runtime behavior of suspicious code according to a set of security policies to detect an infection, and then identifies its characteristic behavior in terms of system or API calls. AGIS then statically analyzes the corresponding executables to extract the instructions important to the infection's mission. These instructions can be used to build a template for a static-analysis-based scanner, or a regular-expression signature for legacy scanners.
AGIS also detects encrypted malware and generates a signature from its plaintext decryption loop. We implemented AGIS on Windows XP and evaluated it against real-life malware, including keyloggers, mass-mailing worms, and a well-known mutation engine. The experimental results demonstrate the effectiveness of our technique in detecting new infections and generating high-quality signatures.",2008,0, 2873,"An integrated approach to resource pool management: Policies, efficiency and quality metrics","The consolidation of multiple servers and their workloads aims to minimize the number of servers needed thereby enabling the efficient use of server and power resources. At the same time, applications participating in consolidation scenarios often have specific quality of service requirements that need to be supported. To evaluate which workloads can be consolidated to which servers we employ a trace-based approach that determines a near optimal workload placement that provides specific qualities of service. However, the chosen workload placement is based on past demands that may not perfectly predict future demands. To further improve efficiency and application quality of service we apply the trace-based technique repeatedly, as a workload placement controller. We integrate the workload placement controller with a reactive controller that observes current behavior to i) migrate workloads off of overloaded servers and ii) free and shut down lightly-loaded servers. To evaluate the effectiveness of the approach, we developed a new host load emulation environment that simulates different management policies in a time effective manner. A case study involving three months of data for 138 SAP applications compares our integrated controller approach with the use of each controller separately. The study considers trade-offs between i) required capacity and power usage, ii) resource access quality of service for CPU and memory resources, and iii) the number of migrations. We consider two typical enterprise environments: blade and server based resource pool infrastructures. The results show that the integrated controller approach outperforms the use of either controller separately for the enterprise application workloads in our study. We show the influence of the blade and server pool infrastructures on the effectiveness of the management policies.",2008,0, 2874,Hot-spot prediction and alleviation in distributed stream processing applications,"Many emerging distributed applications require the real-time processing of large amounts of data that are being updated continuously. Distributed stream processing systems offer a scalable and efficient means of in-network processing of such data streams. However, the large scale and the distributed nature of such systems, as well as the fluctuation of their load render it difficult to ensure that distributed stream processing applications meet their Quality of Service demands. We describe a decentralized framework for proactively predicting and alleviating hot-spots in distributed stream processing applications in real-time. We base our hot-spot prediction techniques on statistical forecasting methods, while for hot-spot alleviation we employ a non-disruptive component migration protocol. 
The experimental evaluation of our techniques, implemented in our Synergy distributed stream processing middleware over PlanetLab, using a real stream processing application operating on real streaming data, demonstrates high prediction accuracy and substantial performance benefits.",2008,0, 2875,A recurrence-relation-based reward model for performability evaluation of embedded systems,"Embedded systems for closed-loop applications often behave as discrete-time semi-Markov processes (DTSMPs). Performability measures most meaningful to iterative embedded systems, such as accumulated reward, are thus difficult to solve analytically in general. In this paper, we propose a recurrence-relation-based (RRB) reward model to evaluate such measures. A critical element in RRB reward models is the notion of state-entry probability. This notion enables us to utilize the embedded Markov chain in a DTSMP in a novel way. More specifically, we formulate state-entry probabilities, state-occupancy probabilities, and expressions concerning accumulated reward solely in terms of state-entry probability and its companion term, namely the expected accumulated reward at the point of state entry. As a result, recurrence relations abstract away all the intermediate points that lack the memoryless property, enabling a solvable model to be directly built upon the embedded Markov chain. To show the usefulness of RRB reward models, we evaluate an embedded system for which we leverage the proposed notion and methods to solve a variety of probabilistic measures analytically.",2008,0, 2876,Beliefs learning in fuzzy constraint-directed agent negotiation,"This paper presents a belief learning model for fuzzy constraint-directed agent negotiation. The main features of the proposed model include: 1) fuzzy probability constraints for increasing the efficiency on the convergence of behavior patterns, and eliminating the noisy hypotheses or beliefs, 2) fuzzy instance matching method for reusing the prior opponent knowledge to speed up the problem-solving, and inferring the proximate regularities to acquire a desirable result on forecasting opponent behavior, and 3) adaptive interaction for making a dynamic concession to fulfill a desirable objective. Experimental results suggest that the proposed framework can improve both negotiation qualities.",2008,0, 2877,An approximate muscle guided global optimization algorithm for the Three-Index Assignment Problem,"The three-index assignment problem (AP3) is a famous NP-hard problem with wide applications. Since it is intractable, many heuristics have been proposed to obtain near optimal solutions in reasonable time. In this paper, a new meta-heuristic was proposed for solving the AP3. Firstly, we introduced the concept of the muscle (the union of optimal solutions) and proved that it is intractable to obtain the muscle under the assumption that P ≠ NP. Moreover, we showed that the whole muscle can be approximated by the union of local optimal solutions. Therefore, the approximate muscle guided global optimization (AMGO) is proposed to solve the AP3. AMGO employs a global optimization strategy to search in a search space reduced by the approximate muscle, which is constructed by a multi-restart scheme. During the global optimization procedure, the running time can be dramatically saved by detecting feasible solutions and extracting poor partial solutions.
Extensive experimental results on the standard AP3 benchmark indicate that the new algorithm outperforms the state-of-the-art heuristics in terms of solution quality. The work in this paper not only provides a new meta-heuristic for NP-hard problems, but also shows that global optimization can provide promising results in reasonable time by restricting it to a fairly reduced search space.",2008,0, 2878,A Grammatical Swarm for protein classification,"We present a grammatical swarm (GS) for the optimization of an aggregation operator. This combines the results of several classifiers into a unique score, producing an optimal ranking of the individuals. We apply our method to the identification of new members of a protein family. Support vector machine and naive Bayes classifiers exploit complementary features to compute probability estimates. A great advantage of the GS is that it produces an understandable algorithm revealing the interest of the classifiers. Due to the large volume of candidate sequences, ranking quality is of crucial importance. Consequently, our fitness criterion is based on the area under the ROC curve rather than on the classification error rate. We discuss the performance obtained for a particular family, the cytokines, and show that this technique is an efficient means of ranking the protein sequences.",2008,0, 2879,Model-based optimization revisited: Towards real-world processes,"The application of empirically determined surrogate models provides a standard solution to expensive optimization problems. Over the last decades several variants based on DACE (design and analysis of computer experiments) have provided excellent optimization results in cases where only a few evaluations could be made. In this paper these approaches are revisited with respect to their applicability in the optimization of production processes, which are in general multiobjective and allow no exact evaluations. The comparison to standard methods of experimental design shows significant improvements with respect to prediction quality and accuracy in detecting the optimum even if the experimental outcomes are highly distorted by noise. The universally assumed sensitivity of DACE models to nondeterministic data can therefore be refuted. Additionally, a practical example points out the potential of applying EC-methods to production processes by means of these models.",2008,0, 2880,Module documentation based testing using Grey-Box approach,"Testing plays an important role in assuring the quality of software. Testing is a process of detecting errors that can be highly effective if performed rigorously. The use of formal specifications provides a significant opportunity to develop effective testing techniques. The grey-box testing approach is usually based on knowledge obtained from the specification and source code, while the design specification is seldom considered. In this paper, we propose an approach for testing a module with internal memory from its formal specification based on the grey-box approach. We use formal specifications that are documented using Parnas's Module Documentation (MD) method. The MD provides us with information on the external and internal views of a module, which is useful in the grey-box testing approach.",2008,0, 2881,Resistance factors in the implementation of software process improvement project,"Over the decades, software models for improving the quality of software through management of the software process have become significant in the software industry.
Many companies are now being assessed according to standards such as the CMM, SIX-SIGMA or ISO 9000, which have brought substantial profit to the companies that utilize them to improve the quality of software products. Several companies in Malaysia have carried out software process improvement projects. However, a software process improvement initiative is still sometimes delayed, costs are over budget and some of them surrender before the project ends. Therefore, this paper attempts to analyze and identify the resistance factors which influence the implementation of the software process improvement project initiated by the company. This paper will serve as a reference for professionals in the area. On the other hand, it may also help other companies to manage future projects through the use of preventive actions that will eliminate or at least lessen the consequences of the resistance factors during the implementation of software process improvement projects. This paper presents a survey of 8 Malaysian companies around Kuala Lumpur and Selangor which have experience in initiating and conducting software process improvement projects. A total of 117 respondents from various backgrounds participated in this survey.",2008,0, 2882,Software quality prediction using Affinity Propagation algorithm,"Software metrics are collected at various phases of the software development process. These metrics contain information about the software and can be used to predict software quality in the early stages of the software life cycle. Intelligent computing techniques such as data mining can be applied in the study of software quality by analyzing software metrics. Clustering analysis, which can be considered as one of the data mining techniques, is adopted to build the software quality prediction models in the early period of software testing. In this paper, a new clustering method called Affinity Propagation is investigated for the analysis of two software metric datasets extracted from real-world software projects. Meanwhile, the K-Means clustering method is also applied for comparison. The numerical experiment results show that the Affinity Propagation algorithm can be applied well in software quality prediction in the very early stage, and it is more effective in reducing Type II error.",2008,0, 2883,A model for long-term environmental sound detection,"Knowledge of the primary processing of sound by the human auditory system has increased tremendously. This paper exploits the opportunities this creates for assessing the impact of (unwanted) environmental noise on quality of life of people. In particular the effect of auditory attention in a multisource context is focused on. The typical application envisaged here is characterized by very long term exposure (days) and multiple listeners (thousands) that need to be assessed. Therefore, the proposed model introduces many simplifications. The results obtained show that the approach is nevertheless capable of generating insight into the emergence of annoyance and the appraisal of open area soundscapes.",2008,0, 2884,Selecting software reliability models with a neural network meta classifier,"Software reliability is one of the most important quality characteristics for almost all systems. The use of a software reliability model to estimate and predict the system reliability level is fundamental to ensure software quality. However, the selection of an appropriate model for a specific case can be very difficult for project managers.
This is because there are several models that can be used and none has proved to perform well across different projects and databases. Each model is valid only if its assumptions are satisfied. To aid in the task of choosing the best software reliability model for a dataset, this paper presents a meta-learning approach and describes experimental results from the use of a neural network meta classifier for selection among different kinds of reliability models. The obtained results validate the idea and are very promising.",2008,0, 2885,Efficient clustered BVH update algorithm for highly-dynamic models,"We present a new algorithm that efficiently updates a bounding volume hierarchy (BVH) for ray tracing. Our algorithm is applicable in handling various types of highly-dynamic models. The algorithm produces an SAH-based BVH of good quality for rendering. The algorithm unites the advantages of some previously developed methods and offers techniques and extensions to reduce the number of per-frame BVH update operations. It works with a binary BVH where every leaf is associated with a triangle. The algorithm always tries to perform less costly operations on BVH-clusters and avoids unnecessary work if it is possible. Firstly, it detects BVH-clusters of triangles that move coherently with each other, and reinserts only cluster-roots in the proper positions of the BVH. Thus it allows efficient handling of the structural motion. Secondly, the algorithm detects the exploded BVH-clusters for performing rebuild-operations on them. Careful and efficient localizing of rebuild-space into non-overlapping clusters greatly reduces the number of rebuild-operations. It can allow independent rebuilding of all detected clusters, even if one cluster is represented by an ancestor of another. Our algorithm accelerates the total BVH update time by 2-4 times on average in comparison to the full SAH-based binned-rebuild with the set of all triangles as input.",2008,0, 2886,"Perfect Generation, Monotonicity and Finite Queueing Networks","Perfect generation, also called perfect or exact simulation, provides a new technique to sample steady-state and avoids the burn-in time period. When the simulation algorithm stops, the returned state value is in steady-state. Initiated by Propp and Wilson in the context of statistical physics, this technique is based on a coupling from the past scheme that, provided some conditions on the system, ensures convergence in a finite time to steady-state. This approach has been successfully applied in various domains including stochastic geometry, interacting particle systems, statistical physics, networking. The aim of this tutorial is to introduce the concept of perfect generation and discuss the algorithmic design of perfect samplers. To improve the efficiency of such samplers, structural properties of models such as monotonicity are enforced in the algorithm to drastically improve the complexity. Such samplers can then be used in a brute-force manner to estimate low-probability events in finite queueing networks.",2008,0, 2887,Correctness Verification and Quantitative Evaluation of Timed Systems Based on Stochastic State Classes,"This tutorial addresses the integration of correctness verification and quantitative evaluation of timed systems, based on the stochastic extension of the theory of DBM state classes.
In the first part, we recall symbolic state space analysis of non-deterministic models based on DBM state classes, describing the algorithms for state space enumeration and for timing analysis of individual traces.",2008,0, 2888,"Fault Detection, Isolation, and Localization in Embedded Control Software","Embedded control software reacts to plant and environment conditions in order to enforce a desired functionality, and exhibits hybrid dynamics: control-loops together with switching logic. Control software can contain errors (faults), and fault-tolerance methods must be developed to enhance system safety and reliability. We present an approach for fault detection and isolation that is key to achieving fault-tolerance. The detection approach is hierarchical, involving monitoring of both the control software and the controlled system. The latter is necessary to safeguard against any incompleteness of software-level properties. A model of the system being monitored is not required, and further the approach is modular and hence scalable. When a fault is detected at the system level, isolation of a software fault is achieved by using residue methods to rule out any hardware (plant) fault. We also propose a method to localize a software fault (to those lines of code that contain the fault). The talk will be illustrated through a servo control application.",2008,0, 2889,A novel method for DC system grounding fault monitoring on-line and its realization,"On the basis of a comparison and analysis of present grounding fault monitoring methods, such as the AC injection method and the DC leakage method, this paper points out their shortcomings in practical applications. A novel method, named the method of different frequency signals, for detecting grounding faults in DC systems is advanced, which can overcome the adverse influence of the distributed capacitance between the ground and the branches. Finally a new kind of detector based on the proposed method is introduced. The detector, with a C8051F041 as its kernel and adopting the method of different frequency signals, realizes accurate on-line grounding fault monitoring. The principles, hardware and software design are introduced in detail. The experimental results and practical operations show that the detector has the advantages of high precision, good anti-interference, a high degree of automation, low cost, etc.",2008,0, 2890,Novelty detection with instance-based learning for optical character quality control,"Novelty detection involves modeling the normal behavior of a system and detecting any divergence from normality which may indicate the onset of damage or faults. Using instance-based learning, a novelty detection approach for optical character quality control in a machine vision inspection application is given in this paper. A normal character information pattern adapted to the specific application can be established by training, and product information can be effectively inspected with no delay, since print errors can be automatically distinguished from print quality in the process, which has been verified by experiment.",2008,0, 2891,A New Mitigation Approach for Soft Errors in Embedded Processors,"Embedded processors, such as processor macros inside modern FPGAs, are becoming widely used in many applications. As soon as these devices are deployed in radioactive environments, designers need hardening solutions to mitigate radiation-induced errors.
When low-cost applications have to be developed, the traditional hardware redundancy-based approaches exploiting m-way replication and voting are no longer viable as too expensive, and new mitigation techniques have to be developed. In this paper we present a new approach, based on processor duplication, checkpoint and rollback, to detect and correct soft errors affecting the memory elements of embedded processors. Preliminary fault injection results performed on a PowerPC-based system confirmed the efficiency of the approach.",2008,0, 2892,Type Highlighting: A Client-Driven Visual Approach for Class Hierarchies Reengineering,"Polymorphism and class hierarchies are key to increasing the extensibility of an object-oriented program but also raise challenges for program comprehension. Despite many advances in understanding and restructuring class hierarchies, there is no direct support to analyze and understand the design decisions that drive their polymorphic usage. In this paper we introduce a metric-based visual approach to capture the extent to which the clients of a hierarchy polymorphically manipulate that hierarchy. A visual pattern vocabulary is also presented in order to facilitate the communication between analysts. Initial evaluation shows that our techniques aid program comprehension by effectively visualizing large quantities of information, and can help detect several design problems.",2008,0, 2893,Performance Prediction Model for Service Oriented Applications,"Software architecture plays a significant role in determining the quality of a software system. It exposes important system properties for consideration and analysis. Performance related properties are frequently of interest in determining the acceptability of a given software design. This paper focuses mainly on developing an architectural model for applications that use service oriented architecture (SOA). This enables predicting the performance of the application even before it is completely developed. The performance characteristics of the components and connectors are modeled using queuing network model. This approach facilitates the performance prediction of service oriented applications. Further, it also helps in identification of various bottlenecks. A prototype service oriented application has been implemented and the actual performance is measured. This is compared against the predicted performance in order to analyze the accuracy of the prediction.",2008,0, 2894,Fault tolerant multipath routing with overlap-aware path selection and dynamic packet distribution on overlay network for real-time streaming applications,In this paper we propose overlap-aware path selection and dynamic packet distribution due to failure detection in multipath routing overlay network. Real-time communications that utilize UDP do not ensure reliability for realizing fast transmission. Therefore congestion or failure in a network deteriorates the quality of service significantly. The proposed method seeks an alternate path that hardly overlaps an IP path so as to improve its reliability. The proposed method also detects congestion or failure by differential of packet loss rate and apportion packets to the IP path and the alternate path dynamically. 
Evaluation on PlanetLab shows the proposed method avoids congestion. Consequently the influence of congestion and failure lessens and the proposed multipath routing improves reliability so that it can be used for real-time communications.,2008,0, 2895,Exploring the evolution of software quality with animated visualization,"Assessing software quality and understanding how events in its evolution have led to anomalies are two important steps toward reducing costs in software maintenance. Unfortunately, evaluation of large quantities of code over several versions is a task too time-consuming, if not overwhelming, to be applicable in general. To address this problem, we designed a visualization framework as a semi-automatic approach to quickly investigate programs composed of thousands of classes, over dozens of versions. Programs and their associated quality characteristics for each version are graphically represented and displayed independently. Real-time navigation and animation between these representations recreate visual coherences often associated with coherences intrinsic to subsequent software versions. Exploiting such coherences can reduce cognitive gaps between the different views of software, and allow human experts to use their visual capacity and intuition to efficiently investigate and understand various quality aspects of software evolution. To illustrate the interest of our framework, we report our results on two case studies.",2008,0, 2896,Generic and reflective graph transformations for the checking and enforcement of modeling guidelines,"In the automotive industry, the model driven development of software, today considered as the standard paradigm, is generally based on the use of the tool MATLAB Simulink/Stateflow. To increase the quality, the reliability, and the efficiency of the models and the generated code, checking and elimination of detected guideline violations defined in huge catalogues have become an essential task in the development process. It represents such a tremendous amount of boring work that it must necessarily be automated. In the past we have shown that graph transformation tools like Fujaba/MOFLON allow for the specification of single modeling guidelines on a very high level of abstraction and that guideline checking tools can be generated from these specifications easily. Unfortunately, graph transformation languages do not offer appropriate concepts for reuse of specification fragments - a MUST, when we deal with hundreds of guidelines. As a consequence we present an extension of MOFLON that supports the definition of generic rewrite rules and combines them with the reflective programming mechanisms of Java and the model repository interface standard JMI.",2008,0, 2897,Evaluating Models for Model-Based Debugging,"Developing model-based automatic debugging strategies has been an active research area for several years, with the aim of locating defects in a program by utilising fully automated generation of a model of the program from its source code. We provide an overview of current techniques in model-based debugging and assess strengths and weaknesses of the individual approaches. An empirical comparison is presented that investigates the relative accuracy of different models on a set of test programs and fault assumptions, showing that our abstract interpretation based model provides high accuracy at significantly less computational effort than slightly more accurate techniques.
We compare a range of model-based debugging techniques with other state-of-the-art automated debugging approaches and outline possible future developments in automatic debugging using model-based reasoning as the central unifying component in a comprehensive framework.",2008,0, 2898,Test-Suite Augmentation for Evolving Software,"One activity performed by developers during regression testing is test-suite augmentation, which consists of assessing the adequacy of a test suite after a program is modified and identifying new or modified behaviors that are not adequately exercised by the existing test suite and, thus, require additional test cases. In previous work, we proposed MATRIX, a technique for test-suite augmentation based on dependence analysis and partial symbolic execution. In this paper, we present the next step of our work, where we (1) improve the effectiveness of our technique by identifying all relevant change-propagation paths, (2) extend the technique to handle multiple and more complex changes, (3) introduce the first tool that fully implements the technique, and (4) present an empirical evaluation performed on real software. Our results show that our technique is practical and more effective than existing test-suite augmentation approaches in identifying test cases with high fault-detection capabilities.",2008,0, 2899,Cleman: Comprehensive Clone Group Evolution Management,"Recent research results have shown more benefits in managing code clones than in detecting and removing them. However, existing management approaches for code clone group evolution are still ad hoc, unsatisfactory, and limited. In this paper, we introduce a novel method for comprehensive code clone group management in evolving software. The core of our method is Cleman, an algorithmic framework that allows for a systematic construction of efficient and accurate clone group management tools. Clone group management is rigorously formulated by a formal model, which provides the foundation for the Cleman framework. We use the Cleman framework to build a clone group management tool that is able to detect high-quality clone groups and efficiently manage them when the software evolves. We also conduct an empirical evaluation on real-world systems to show the flexibility of the Cleman framework and the efficiency, completeness, and incremental updatability of our tool.",2008,0, 2900,A Comprehensive Ontology-Based Approach for SLA Obligations Monitoring,"Specifying clear quality of service (QoS) agreements between service providers and consumers is particularly important for the successful deployment of service-oriented architectures. The related challenges include correctly elaborating and monitoring QoS contracts (SLA: service level agreement) to detect and handle their violations. In this paper, first, we study and analyze existing SLA-related models. Then, we elaborate a complete, generic and semantically richer ontology-based model of SLA. We used the Semantic Web Rule Language (SWRL) to express SLA obligations in our model. This language facilitates the SLA monitoring process and the eventual action triggering in case of violations. We used this model to automatically generate semantic-enabled QoS obligations monitors. We have also developed a prototype to validate our model and our monitoring approach.
Finally, we believe that this work is a step toward the total automation of the SLA management process.",2008,0, 2901,Intelligent Java Analyzer,"This paper presents a software metric working prototype to evaluate Java programmers' profiles. In order to automatically detect source code patterns, a Multi Layer Perceptron neural network is applied. Features determined from such patterns constitute the basis for the system's programmer profiling. Results presented here show that the proposed prototype is a confident approach for support in the software quality assurance process.",2008,0, 2902,Fault Detection of Bloom Filters for Defect Maps,"Bloom filters can be used as a data structure for defect maps in nanoscale memory. Unlike most other applications of Bloom filters, both false positives and false negatives induced by a fault cause a fatal error in the memory system. In this paper, we present a technique for detecting faults in Bloom filters for defect maps. Spare hashing units and a simple coding technique for bit vectors are employed to detect faults during normal operation. Parallel write/read is also proposed to detect faults with high probability even without spare hashing units.",2008,0, 2903,Efficient modeling of a combined overhead-cable line for grounding-system analysis,"Simple compact models for combined overhead-cable lines supplying a substation are presented as an extension of a previous paper for grounding system analysis. The overhead line section can be equipped with uniform or combined ground wires, whereas the cable line section can consist of coated metal sheathed cables, with/without intermediate grounding, or uncoated metal sheathed cables in continuous contact with the earth. Besides the calculation of the earth current at the faulted substation, the proposed modeling method allows the evaluation of the leakage current at the transition station, where cables are connected to the overhead line, as well as at critical overhead line towers. In this manner, the effects of the so-called 'fault application transfer' phenomenon can be conveniently estimated at the design stage in order to assess the most appropriate safety conditions. Some numerical examples are given by applying a computer program based on the proposed methodology.",2008,0, 2904,Differential protection of three-phase transformers using Wavelet Transforms,"This paper proposes a novel formulation for differential protection of three-phase transformers using Wavelet Transforms (WTs). The new proposed methodology implements the WTs to extract predominant transient signals originating from transformer internal faults and captured from the current transformers. The Wavelet Transform is an efficient signal processing tool used to study non-stationary signals with fast transitions (high-frequency components), mapping the signal into a time-frequency representation. The three-phase differential currents are the input signals used on-line to detect internal faults. The performance of this algorithm is demonstrated through simulation of different internal faults and switching conditions on a power transformer using ATP/EMTP software. The analyzed data is obtained from simulation of different normal and faulty operating conditions such as internal faults (phase/phase, phase/ground), magnetizing inrush and external faults.
The case study shows that the new algorithm is highly accurate and effective.",2008,0, 2905,Modeling and control of grid-connected photovoltaic energy conversion system used as a dispersed generator,"This paper proposes a detailed mathematical model and a multi-level control scheme of a three-phase grid-connected photovoltaic (PV) system used as a dispersed generator, including the PV array and the electronic power conditioning (PCS) system, based on the Matlab/Simulink software. The model of the PV array proposed uses theoretical and empirical equations together with data provided by the manufacturer, solar radiation and cell temperature, among other variables, in order to accurately predict the current-voltage curve. The PCS utilizes a two-stage energy conversion system topology that meets all the requirements of high-quality electric power, flexibility and reliability imposed on applications of modern distributed energy resources (DER). The control approach incorporates a maximum power point tracker (MPPT) for dynamic active power generation jointly with reactive power compensation of the distribution power system. Validation of simulation results has been carried out by using a 250 Wp PV experimental set-up.",2008,0, 2906,Assessing learning progress and quality of teaching in large groups of students,"The classic tools for assessing learning progress are written tests and assignments. In large groups of students the workload often does not allow in-depth evaluation during the course. Thus our aim was to modify the course to include active learning methods and student centered teaching. We changed the course structure only slightly and established new assessment methods like minute papers, short tests, mini-projects and a group project at the end of the semester. The focus was to monitor the learning progress during the course so that problematic issues could be addressed immediately. The year before the changes, 26.76 % of the class failed the course with a grade average of 3.66 (Pass grade is 4.0/30 % of achievable marks). After introducing student centered teaching, only 14 % of students failed the course and the average grade was 3.01. Grades were also distributed more evenly with more students achieving better results. We have shown that even in large groups of students with > 100 participants, student centered and active learning is possible. Although it requires a great work overhead on the part of the teaching staff, the quality of teaching and the motivation of the students are increased, leading to a better learning environment.",2008,0, 2907,Stool detection in colonoscopy videos,"Colonoscopy is the accepted screening method for detection of colorectal cancer or its precursor lesions, colorectal polyps. Indeed, colonoscopy has contributed to a decline in the number of colorectal cancer related deaths. However, not all cancers or large polyps are detected at the time of colonoscopy, and methods to investigate why this occurs are needed. One of the main factors affecting the diagnostic accuracy of colonoscopy is the quality of bowel preparation. The quality of bowel cleansing is generally assessed by the quantity of solid or liquid stool in the lumen. Despite a large body of published data on methods that could optimize cleansing, a substantial level of inadequate cleansing occurs in 10% to 75% of patients in randomized controlled trials. In this paper, a machine learning approach to the detection of stool in images of digitized colonoscopy video files is presented.
The method involves classification based on color features using a support vector machine (SVM) classifier. Our experiments show that the proposed stool image classification method is very accurate.",2008,0, 2908,Coal Management Module (CMM) for power plant,"Coal management in a power plant is very significant and also one of the most critical areas in view of plant operation as well as cost involvement, so it forms an important part of the management process in a power plant. It deals with the management of commercial, operational and administrative functions pertaining to estimating coal requirements, selection of coal suppliers, coal quality check, transportation and coal handling, payment for coal received, consumption and calculation of coal efficiency. The results are then used for cost benefit analysis to suggest further plant improvement. At various levels, management information reports need to be extracted to communicate the required information across various levels of management. The core processes of coal management involve a huge amount of paper work and manual labour, which makes them tedious, time-consuming and prone to human error. Moreover, the time taken at each stage as well as the transparency of the relevant information has a direct bearing on the economics and efficient operation of the power plant. Both system performance and information transparency can be enhanced by the introduction of Information Technology in managing this area. This paper reports on the design & development of the Coal Management Module (CMM) Software, which aims at systematic functioning of the Core Business Processes of Coal Management of a typical coal-fired power plant.",2008,0, 2909,Unifying Models of Test Cases and Requirements,"In industry, due to market pressures, it is common that the system requirements are out of date or incomplete for certain parts of the system. Nevertheless, we can always find up-to-date test cases which implicitly complement the related requirements. Therefore, instead of simply using test cases to detect software failures, in this paper we present an approach to update requirements using test cases. To accomplish this, we first assume that both requirements and test cases are formally documented; we reuse previous works that provide such models automatically as CSP formal specifications. Thus, we formally define a merge operation using the operational semantics of CSP. Finally, we use part of a real case study to exercise the proposed approach.",2008,0, 2910,A Strategy for Automatic Conformance Testing in Embedded Systems,"Software testing is an expensive and time-consuming activity; it is also error-prone due to human factors. But it still is the most common effort used in the software industry to achieve an acceptable level of quality for its products. An alternative is to use formal verification approaches, although they are not widespread in industry yet. This paper proposes an automatic verification approach to aid system testing based on refinement checking, where the underlying formalisms are hidden from the developers. Our approach consists in using a controlled natural language (a subset of English) to describe requirements (which are automatically translated into the formal specification language CSP) and extracting a model directly from a mobile phone using developed tool support; these artifacts are normalized to the same abstraction level and compared using the refinement checker FDR.
This approach is being used at Motorola, the source of our case study.",2008,0, 2911,A Study of Analogy Based Sampling for interval based cost estimation for software project management,"Software cost estimation is one of the most challenging activities in software project management. Since software cost estimation affects almost all activities of software project development, such as bidding, planning, and budgeting, accurate estimation is very crucial to the success of software project management. However, due to the inherent uncertainties in the estimation process and other factors, accurate estimates are often obtained with great difficulty. Therefore, it is safer to generate interval based estimates with a certain probability over them. In the literature, many approaches have been proposed for interval estimation. In this study, we propose a novel method, namely Analogy Based Sampling (ABS), and compare ABS against the well-established Bootstrapped Analogy Based Estimation (BABE), which is the only existing variant of the analogy based method with the capability to generate interval predictions. The results and comparisons show that ABS could improve the performance of BABE with much higher efficiencies and more accurate interval predictions.",2008,0, 2912,Generating Version Convertors for Domain-Specific Languages,"Domain-specific languages (DSLs) improve programmer productivity by providing high-level abstractions for the development of applications in a particular domain. However, the smaller distance to the application domain entails more frequent changes to the language. As a result, existing DSL models need to be converted to the new version. Manual conversion is tedious and error prone. This paper presents an approach to support DSL evolution by generation of convertors between DSLs. By analyzing the differences between DSL meta-models, a mapping is reverse engineered which can be used to generate reengineering tools to automatically convert models between different versions of a DSL. The approach has been implemented for the Microsoft DSL Tools infrastructure in two tools called DSLCompare and ConverterGenerator. The approach has been evaluated by means of three case studies taken from the software development practice at the company Avanade.",2008,0, 2913,I2V Communication Driving Assistance System: On-Board Traffic Light Assistant,"Cooperative systems based on V2X wireless communications offer promising opportunities for automotive safety and traffic efficiency improvement. Under preventive safety, cooperative assistance systems increase in-vehicle integrated safety systems functionality, enlarging driver's time-space perception as well as the quality and reliability of the environment data, and therefore enhancing his response to incoming events. In this paper, an on-board driving assistance system that brings traffic light information inside the vehicle is presented. Making use of positioning and cooperative I2V communications technologies, this system predicts the forthcoming traffic light state and assists the driver by means of an intuitive graphical interface (HMI).",2008,0, 2914,Towards scalable proofs of robot swarm dependability,The concept of the robot swarm has demonstrated its relevance in many safety critical applications as a cost-effective solution providing natural fault-tolerance by a large number of mutually replacing agents.
A critical factor for the swarm functionality is the high complexity of intra-swarm coordination. We propose a fully distributed coordination algorithm that uses parameters like bidding distance and random waiting time between decision and action. Another key result is a formal method for predicting the success of swarm missions that rely on a given coordination algorithm. The scalability of the model checking based proof method is addressed and a state symmetry based solution is proposed.,2008,0, 2915,Quality improvement for adaptive deblocking filter in H.264/AVC System,"Blocking artifacts are among the most important factors influencing image quality at low bit-rates. In the new-generation video coding standard H.264, the adaptive deblocking filter plays a very important role in detecting and analyzing real and artificial edges on coded blocks. This paper presents a new approach for the adaptive deblocking filter of the H.264/AVC in order to improve quality. Compared with the standard algorithm, the experimental results demonstrate improvement in both the objective and the subjective quality, achieving an improvement of about 0.25~0.35 dB PSNR on average compared with the original H.264/AVC reference software JM11.0.",2008,0, 2916,Refinement and test case generation in Unifying Theory of Programming,"This talk presents a theory of testing that integrates into Hoare and He's Unifying Theory of Programming (UTP). We give test cases a denotational semantics by viewing them as specification predicates. This reformulation of test cases allows for relating test cases via refinement to specifications and programs. Having such a refinement order that integrates test cases, we develop a testing theory for fault-based testing. Fault-based testing uses test data designed to demonstrate the absence of a set of pre-specified faults. A well-known fault-based technique is mutation testing. In mutation testing, first, faults are injected into a program by altering (mutating) its source code. Then, test cases that can detect these errors are designed. The assumption is that other faults will be caught, too. We apply the mutation technique to both specifications and programs. Using our theory of testing, two new test case generation laws for detecting injected (anticipated) faults are presented: one is based on the semantic level of design specifications, the other on the algebraic properties of a programming language.",2008,0, 2917,Application of system models in regression test suite prioritization,"During regression testing, a modified system needs to be retested using the existing test suite. Since test suites may be very large, developers are interested in detecting faults in the system as early as possible. Test prioritization orders test cases for execution to potentially increase the chances of early fault detection during retesting. Most of the existing test prioritization methods are based on the code of the system, but model-based test prioritization has been recently proposed. System modeling is a widely used technique to model state-based systems. The existing model based test prioritization methods can only be used when models are modified during system maintenance. In this paper, we present model-based prioritization for a class of modifications for which models are not modified (only the source code is modified). After identification of elements of the model related to source-code modifications, information collected during execution of a model is used to prioritize tests for execution.
In this paper, we discuss several model-based test prioritization heuristics. The major motivation to develop these heuristics was simplicity and effectiveness in early fault detection. We have conducted an experimental study in which we compared model-based test prioritization heuristics. The results of the study suggest that system models may improve the effectiveness of test prioritization with respect to early fault detection.",2008,0, 2918,Using random test selection to gain confidence in modified software,"This paper presents a method that addresses two practical issues concerning the use of random test selection for regression testing: the number of random samples needed from the test suite to provide reliable results, and the confidence levels of the predictions made by the random samples. The method applies the Chernoff bound, which has been applied in various randomized algorithms, to compute the error bound for random test selection. The paper presents three example applications, based on the method, for regression testing. The main benefits of the method are that it requires no distribution information about the test suite from which the samples are taken, and the computation of the confidence level is independent of the size of the test suite. The paper also presents the results of an empirical evaluation of the technique on a set of C programs, which have been used in many testing experiments, along with three of the GCC compilers. The results demonstrate the effectiveness of the method and show its potential for regression testing on real-world, large-scale applications.",2008,0, 2919,Assessing the value of coding standards: An empirical study,"In spite of the widespread use of coding standards and tools enforcing their rules, there is little empirical evidence supporting the intuition that they prevent the introduction of faults in software. Not only can compliance with a set of rules having little impact on the number of faults be considered wasted effort, but it can actually result in an increase in faults, as any modification has a non-zero probability of introducing a fault or triggering a previously concealed one. Therefore, it is important to build a body of empirical knowledge, helping us understand which rules are worthwhile enforcing, and which ones should be ignored in the context of fault reduction. In this paper, we describe two approaches to quantify the relation between rule violations and actual faults, and present empirical data on this relation for the MISRA C 2004 standard on an industrial case study.",2008,0, 2920,Constructive architecture compliance checking an experiment on support by live feedback,"This paper describes our lessons learned and experiences gained from turning an analytical reverse engineering technology - architecture compliance checking - into a constructive quality engineering technique. Constructive compliance checking constantly monitors the modifications made by developers. When a structural violation is detected, the particular developer receives live feedback allowing prompt removal of the violations and hence, training the developers on the architecture. An experiment with six component development teams gives evidence that this training pro-actively prevents architecture decay. The three teams supported by the live compliance checking inserted about 60% less structural violations into the architecture than did the three other development teams. 
Based on the results, we claim that constructive compliance checking is a promising application of reverse engineering technology to the software implementation phase.",2008,0, 2921,Supporting software evolution analysis with historical dependencies and defect information,"More than 90% of the cost of software is due to maintenance and evolution. Understanding the evolution of large software systems is a complex problem, which requires the use of various techniques and the support of tools. Several software evolution approaches put the emphasis on structural entities such as packages, classes and structural relationships. However, software evolution is not only about the history of software artifacts, but it also includes other types of data such as problem reports, mailing list archives etc. We propose an approach which focuses on historical dependencies and defects. We claim that they play an important role in software evolution and they are complementary to techniques based on structural information. We use historical dependencies and defect information to learn about a software system and detect potential problems in the source code. Moreover, based on design flaws detected in the source code, we predict the location of future bugs to focus maintenance activities on the buggy parts of the system. We validated our defect prediction by comparing it with the actual defects reported in the bug tracking system.",2008,0, 2922,Goal trees and fault trees for root cause analysis,"Typical enterprise applications are built upon different platforms, operate in a heterogeneous, distributed environment, and utilize different technologies, such as middleware, databases and Web services. Diagnosing the root causes of problems in such systems is difficult in part due to the number of possible configuration and tuning parameters. Today a variety of tools are used to aid operators of enterprise applications in identifying root causes. For example, a user input validation tool detects and prevents Website intrusions or a log analysis tool identifies malfunctioning components. Searching for the root causes of such failures in a myriad of functional and non-functional requirements poses significant challenges-not only for users, but also for experienced operators when monitoring, auditing, and diagnosing systems. We propose the notion of a guide map-a set of goal trees and fault trees-to aid users in the process of choosing (supported by high level goal trees) and applying (supported by low level fault trees) suitable diagnostic tools. In this paper we discuss two case studies to illustrate how the guide map aids users in applying two home-grown diagnostic tools.",2008,0, 2923,An Ultrasonic System for Detecting Channel Defects in Flexible Packages,"An ultrasonic system was developed for detecting channel defects embedded in bonded 2-sheet flexible packaging film. The hardware system consisted of a spherically focused 22.66-MHz ultrasonic transducer, a four-axis precision positioning system, an NI PXI-bus embedded controller and an ultrasonic pulser-receiver. The software system was designed based on modularization and realized on-line echo signal processing using the ultrasonic backscattered echo envelope integral (BEEI) imaging method. Some experimental results were presented, and the BEEI-mode imaging of a channel defect was shown.
The system can be easily used to detect the channel defects in flexible packages.",2008,0, 2924,Power Quality Auto-monitoring for Distributed Generation Based on Virtual Instrument,"Distributed generation (DG) brings new green energy to the power system; meanwhile, it also brings more power quality disturbances. According to the power quality disturbances of grid-connected distributed generation systems, a new idea of power quality auto-monitoring for distributed generation based on a virtual instrument is proposed in this paper. The realization of its hardware system and software system is analyzed, and its main functions and characteristics are expatiated on. The theoretical algorithm of auto-monitoring is researched and simulation analysis is carried out in detail. The results of simulation and experiment show that the auto-monitoring system can detect the power quality disturbances of distributed generation comprehensively, in real time and accurately.",2008,0, 2925,Research and Design of Multifunctional Intelligent Melted Iron Analyzer,"A multifunctional intelligent melted iron analyzer is researched and designed. This device combines the functions of thermal analysis for quality analysis of melted iron and ultrasonic measurement for nodularity. By thermal analysis, an equation of linear regression was used and the percentage compositions of carbon, silicon, phosphorus, etc. were obtained. In addition, thermal analysis was used to predict gray iron inoculation results, structure and performance. Therefore, ultrasonic measurement was applied in this study to survey the nodularity of spheroidal graphite iron. In order to keep the analyzer far away from the boiling melted iron, wireless temperature collection was adopted. The system is composed of a PC104, a single-chip SPCE061 and other peripheral circuits. With the temperature collection and ultrasonic measuring modules designed, the data analysis and management software is programmed in VB6.0 based on PC104 and the Windows operating system. After running for more than one year, the analyzer has performed very well in the foundry.",2008,0, 2926,Molecular imaging of the myoskeletal system through Diffusion Weighted and Diffusion Tensor Imaging with parallel imaging techniques,"Diffusion weighted imaging (DWI) and diffusion tensor imaging (DTI) are useful tools when used in combination with standard imaging methods that may offer a significant advantage in certain clinical applications such as orthopedics and myoskeletal tissue imaging. Incorporation of these tools in clinical practice is limited due to the considerable amount of user intervention that apparent diffusion coefficient (ADC) and anisotropy data require in terms of processing and quantification, and due to the importance of acquisition parameter optimization in image quality. In this work various acquisition parameters and their effects on DWI and DTI are investigated. To assess the quality of these techniques, a series of experiments were conducted using a phantom. The application of lipid suppression techniques and their compatibility with other parameters were also investigated. Artifacts were provoked to study the effects on imaging quality. All the data were processed with specialized software to analyze various aspects of the measurements and quantify various parameters such as signal to noise ratio (SNR), contrast to noise ratio (CNR), and the accuracy of ADC and fractional anisotropy values.
The experience acquired from the experiments was applied in acquisition parameter optimization and improvement of clinical applications for the rapid screening and differential diagnosis of myoskeletal pathologies.",2008,0, 2927,Assessment and optimization of TEA-PRESS sequences in 1H MRS and MRSI of the breast,"Magnetic Resonance Spectroscopy (MRS) and Magnetic Resonance Spectroscopic Imaging (MRSI) are useful tools when used in combination with standard imaging methods that may offer a significant advantage in certain clinical applications such as cancer localization and staging. Incorporation of these tools in clinical practice is, however, limited due to the considerable amount of user intervention that spectrum processing and quantification requires and due to the importance of acquisition parameter optimization in spectrum quality. In this work various acquisition parameters and their effects in spectrum quality are investigated. In order to assess the quality of various spectroscopic techniques, a series of experiments were conducted using a standard solution. The application of water and fat suppression techniques and their compatibility with other parameters were also investigated. A number of artifacts were provoked to study the effects in spectrum quality. The stability of the equipment, the appearance of errors and artifacts and the reproducibility of the results were also examined to obtain useful conclusions for the interaction of acquisition parameters. All the data were processed with specialized computer software (jMRUI 2.2, FUNCTOOL) to analyze various aspects of the measurements and quantify various parameters such as signal to noise ratio (SNR), full width at half maximum (FWHM), peak height and j-modulation. The experience acquired from the conducted experiments was successfully applied in acquisition parameter optimization and improvement of clinical applications for the biochemical analysis of breast lesions by significantly improving the spectrum quality, SNR and spatial resolution.",2008,0, 2928,Towards validated network configurations with NCGuard,"Today, most IP networks are still configured manually on a router-by-router basis. This is error-prone and often leads to misconfiguration. In this paper, we describe the Network Configuration Safeguard (NCGuard), a tool that allows the network architect to apply a safer methodology. The first step is to define his design rules. Based on a survey of the networking literature, we classify the most common types of rules in three main patterns: presence, uniqueness and symmetry and provide several examples. The second step is to write a high-level representation of his network. The third step is to validate the network representation and generate the configuration of each router. This last step is performed automatically by our prototype. Finally, we describe our prototype and apply it to the Abilene network.",2008,0, 2929,Fault detection and visualization through micron-resolution X-ray imaging,"This paper describes a novel, non-intrusive method for the detection of faults within printed circuit boards (PCBs) and their components using digital imaging and image analysis techniques. High-resolution X-ray imaging systems provide a means to detect and analyze failures and degradations down to micron-levels both within the PCB itself and the components that populate the board. 
Further, software tools can aid in the analysis of circuit features to determine whether a failure has occurred, and to obtain positive visual confirmation that a failure has occurred. Many PCB and component failures previously undetectable through today's test methodologies are now detectable using this approach.",2008,0, 2930,Automating control and evaluation of FPGA testing using SJ BIST,"SJ BIST® is a method to detect intermittent faults in ball grid array (BGA) packages of field programmable gate arrays (FPGAs). Failure of monitored I/O pins on operational, fully-programmed FPGAs is detected and reported by SJ BIST to provide positive indication of damage to one or more I/O solder-joint networks of an FPGA on an electronic digital board. The board can then be replaced before accumulated fatigue damage results in intermittent or long-lasting operational faults. This paper presents the test procedures to provide a lap-top-based test bed for controlling SJ BIST in the FPGAs on those evaluation boards. The procedures include using a Spartan 3™ development kit, a Verilog-based test program, and a MATLAB® program for collecting, saving and displaying test data, all of which reside on a lap-top PC with a serial data port. The FPGA on a HALT evaluation board is programmed with SJ BIST (patent pending).",2008,0, 2931,Regression analysis of automated measurement systems,"Automated measurement systems are dependent upon successful application of multiple integrated systems to perform measurement analysis on various units-under-test (UUTs). Proper testing, fault isolation and detection of a UUT are contingent upon accurate measurements of the automated measurement system. This paper extends a previous presentation from 2007 AUTOTESTCON on the applicability of measurement system analysis for automated measurement systems. The motivation for this research was to reduce the risk of transportability issues from legacy measurement systems to emerging systems. Improving regression testing by utilizing parametric metadata for large-scale automated measurement systems over existing regression testing techniques provides engineers, developers and management increased confidence that mission performance is not compromised. The utilization of existing software statistical tools such as Minitab® provides the necessary statistical techniques to evaluate measurement capability of automated measurement systems. Measurement system analysis is applied to assess the measurement variability between the US Navy's two prime automated test systems, the Consolidated Automated Support System (CASS) and the Reconfigurable-Transportable Consolidated Automated Support System (RTCASS). Measurement system analysis shall include capability analysis between one selected CASS and RTCASS instrument to validate measurement process capability; a general linear model to assess variability between stations, multivariate analysis to analyze measurement variability of UUTs between measurement systems, and gage repeatability and reproducibility analysis to isolate sources of variability at the UUT testing level.",2008,0, 2932,Scheduling on the Grid via multi-state resource availability prediction,"To make the most effective application placement decisions on volatile large-scale heterogeneous Grids, schedulers must consider factors such as resource speed, load, and reliability.
Including reliability requires availability predictors, which consider different periods of resource history, and use various strategies to make predictions about resource behavior. Prediction accuracy significantly affects the quality of the schedule, as does the method by which schedulers combine various factors, including the weight given to predicted availability, speed, load, and more. This paper explores the question of how to consider predicted availability to improve scheduling, concentrating on multi-state availability predictors. We propose and study several classes of schedulers, and a method for combining factors. We characterize the inherent tradeoff between application makespan and the number of evictions due to failure, and demonstrate how our schedulers can navigate this tradeoff under various scenarios. We vary application load and length, and the percentage of jobs that are checkpointable. Our results show that the only other multi-state prediction based scheduler causes up to 51% more evicted jobs while simultaneously increasing average job makespan by 18% when compared with our scheduler.",2008,0, 2933,An IDE framework for grid application development,"Grid computing enables the aggregation of a large number of computational resources for solving complex scientific and engineering problems. However, writing, deploying, and testing grid applications over highly heterogeneous and distributed infrastructure are complex and error prone. A number of grid integrated development environments (IDEs) have been proposed and implemented to simplify grid application development. This paper presents an extension to our previous work on a grid IDE in the form of a software framework with a well-defined API and an event mechanism. It provides novel tools to automate routine grid programming tasks and allow programmable actions to be invoked based on certain events. Its system model regards resources as first-class objects in the IDE and allows tight integration between the execution platforms and the code development process. We discuss how the framework improves the process of grid application development.",2008,0, 2934,A Heuristic on Job Scheduling in Grid Computing Environment,"This paper introduces a model and a job scheduling algorithm in grid computing environments. In grid computing, several applications require numerous resources for execution which are often not available to them; thus the presence of a scheduling system to allocate resources to input jobs is vital. The resource selection criteria in the proposed algorithm are based on input jobs, communication links and resource computational capability. Then, the proposed algorithm is assessed in a simulated grid environment with statistical patterns of job insertion into the system, each of which follows a normal, Poisson or exponential distribution. The results show that the new proposed algorithm has better efficiency in comparison with other known algorithms.",2008,0, 2935,ADOM: An Adaptive Objective Migration Strategy for Grid Platform,"Object migration is the movement of objects from one machine to another during execution. It can be used to enhance the efficiency and the reliability of grid systems, such as to balance load distribution, to enable fault resilience, to improve system administration, and to minimize communication overhead. Most existing schemes apply fixed object migration strategies, which are unadaptable to changing requirements of applications.
In this paper, we address the issue of object migration for large-scale grid systems with multiple object levels. First, we devise a probabilistic object tree model and formulate the object migration problem as an optimization problem. Then we propose an adaptive object migration algorithm called ADOM to solve the problem. The ADOM algorithm applies a breadth-first search scheme to traverse the object tree and migrates objects adaptively according to their access probability. Finally, we evaluate the performance of different object migration algorithms on our grid platform, which shows that the ADOM algorithm outperforms other algorithms for large object tree sizes.",2008,0, 2936,A Method for Layout Evaluation of Online Handwritten Chinese Character Quality Based on Template,"Evaluation of Chinese handwriting character quality is an important function of computer-assisted Chinese learning technology; it can point out the errors in a handwritten character and make an objective assessment of the writing quality of the character. However, only a few studies on this new research topic have been reported in the literature. In this paper, the main target of handwriting evaluation is presented. The common layout errors in handwriting samples are summarized and then a new layout evaluation method is proposed. The method consists of three parts - stroke layout evaluation, component layout evaluation and entire character shape evaluation - through the application of nine assessment rules. The experimental results show that the method is capable of detecting layout errors in handwriting samples and making an objective assessment of whether a character is written well or not.",2008,0, 2937,Dynamic Web Service Selection for Reliable Web Service Composition,"This paper studies the dynamic web service selection problem in a failure-prone environment, which aims to determine a subset of Web services to be invoked at run-time so as to successfully orchestrate a composite web service. We observe that both the composite and constituent web services often constrain the sequences of invoking their operations and therefore propose to use finite state machines to model the permitted invocation sequences of Web service operations. We assign each state of execution an aggregated reliability to measure the probability that the given state will lead to successful execution in the context where each web service may fail with some probability. We show that the computation of aggregated reliabilities is equivalent to eigenvector computation and adopt the power method to efficiently derive aggregated reliabilities. In orchestrating a composite Web service, we propose two strategies to select Web services that are likely to successfully complete the execution of a given sequence of operations. A prototype that implements the proposed approach using BPEL for specifying the invocation order of a web service is developed and serves as a testbed for comparing our proposed strategies and other baseline Web service selection strategies.",2008,0, 2938,Usage of MCA8 software for improving reliability of electrical networks,"The MCA8 software application is described and applied to a model task in this paper. This software application was developed for the purpose of supporting multi-criteria decision-making in the field of electrical power engineering. The MCA8 runs on Windows with .NET framework 2.0 and it is user-friendly.
The MCA8 offers six methods of multi-criteria analysis (MCA) for solving multi-criteria decision-making tasks. These methods are WSA, IPA, TOPSIS, AGREPREF, CDA and PROMETHEE. We can use them, for example, for selecting the most suitable old devices in electrical networks that need to be replaced by new devices (Reclosers in this paper). The application of these remote-controlled devices accelerates fault handling and thus shortens the duration of a fault in the network. This raises the probability of faultless service and thus the reliability of the electrical energy supply.",2008,0, 2939,A Block-Structured Mining Approach to Support Process Discovery,"Deploying process-driven information systems is time-consuming and error-prone. Constructing process models from scratch is a complicated, time-consuming task that often requires high expertise. In addition, there are discrepancies between the actual workflow processes and the processes as perceived by the management. Therefore, techniques for discovering process models have been developed. Process mining attempts to improve this by automatically generating a process model from sets of systems' executions (audit logs). In this paper, a block-structured process mining approach from audit logs to support process discovery is designed. Compared with other algorithms, this approach produces a more visible and understandable process model. This approach is applied to a widely used commercial tool for the visualization and analysis of process models.",2008,0, 2940,The Copper Surface Defects Inspection System Based on Computer Vision,"Surface defects in copper strips severely affect the quality of copper, so detecting surface defects in copper strip is of great significance for improving quality. This paper presents a copper strip surface inspection system based on computer vision, which uses a modularized hardware framework and image processing software. The paper adopts a self-adaptive weight averaging filtering method to preprocess the image, and uses moment invariants to extract the features of typical defects, whose eigenvectors are identified with RBF neural networks. Experiments show that the real-time method can effectively detect copper strip surface defects in the production line.",2008,0, 2941,On Co-Training Style Algorithms,"During the past few years, semi-supervised learning has become a hot topic in machine learning and data mining, since manually labeling training examples is a tedious, error-prone and time-consuming task in many practical applications. As one of the most predominant semi-supervised learning algorithms, co-training has drawn much attention and shown its superiority in many applications. So far, there have been a variety of variants of co-training algorithms aiming to solve practical problems. In order to launch an effective co-training process, these variants as a whole create their diversities in four different ways, i.e. the two-view level, the underlying classifiers level, the datasets level and the active learning level. This paper reviews co-training style algorithms from this view and presents typical examples and analysis for each level.",2008,0, 2942,The Evaluation of Reliability Based on the Software Architecture in Neural Networks,"Software reliability is one of the key quality attributes. To avoid rework after developing software, this attribute must be evaluated correctly.
Therefore, estimation and prediction of reliability during system development are very important. In this article, a method to predict the failure probability of the whole system using neural networks is presented. The model of this neural network varies based on the architectural style of the software. An evaluation model has been implemented for one architectural style of software as a case study.",2008,0, 2943,Data Flow Testing of SQL-Based Active Database Applications,"The relevance of reactive capabilities as a unifying paradigm for handling a number of database features and applications is well-established. Active database systems have been used to implement the persistent data requirements of applications in several knowledge domains. They extend passive ones by automatically performing predefined actions in response to events that they monitor. These reactive abilities are generally expressed with active rules defined within the database itself. We investigate the use of data flow-based testing to identify the presence of faults in active rules written in SQL. The goal is to improve reliability and overall quality in this realm. Our contribution is the definition of a family of adequacy criteria, which require the coverage of inter-rule persistent data flow associations, and an evaluation of their effectiveness at various data flow analysis precisions. Both theoretical and empirical investigations show that the criteria have strong fault detecting ability at a polynomial complexity cost.",2008,0, 2944,A Comparative Evaluation of Using Genetic Programming for Predicting Fault Count Data,"There have been a number of software reliability growth models (SRGMs) proposed in the literature. For several reasons, such as violation of model assumptions and model complexity, practitioners face difficulties in knowing which models to apply in practice. This paper presents a comparative evaluation of traditional models and the use of genetic programming (GP) for modeling software reliability growth based on weekly fault count data of three different industrial projects. The motivation for using a GP approach is its ability to evolve a model based entirely on prior data without the need to make underlying assumptions. The results show the strengths of using GP for predicting fault count data.",2008,0, 2945,Automatic evaluation of flickering sensitivity of fluorescent lamps caused by interharmonic voltages,"Recent studies have shown that fluorescent lamps are also prone to light flickering due to the increasing level of inter-harmonics in power systems. This possibility and the levels of problematic inter-harmonics were confirmed through laboratory tests. These tests were manually conducted, and therefore were laborious and time-consuming. It would be convenient to automate the testing by using the advanced features of a data acquisition system to save time and manpower. This paper describes the development of an automated flicker measurement and testing system. It consists of a lighting booth, a programmable AC source, a photo-sensing circuit, a data acquisition device and automated flicker measurement and testing software. It is developed in accordance with the IEC 61000-4-15 standard.
Tests were later conducted on various types of compact fluorescent lamps, confirming their sensitivity to interharmonic voltages.",2008,0, 2946,Development of Functional Delay Tests,"With ever-shrinking geometries, growing density and increasing clock rates of chips, delay testing is gaining more and more industry attention to maintain test quality for speed-related failures. The aim of this paper is to explore how functional delay tests constructed at the algorithmic level detect transition faults at the gate level. The main attention was paid to investigating the possibilities of improving transition fault coverage using n-detection functional delay fault tests. The proposed functional delay test construction approaches allowed achieving 99% transition fault coverage, which is acceptable even for manufacturing test.",2008,0, 2947,Real time system-in-the-loop simulation of tactical networks,"Military mission success probability is closely related to careful planning of communication infrastructures for Command and Control Information Systems (C2IS). Over recent years, the designers of tactical networks have realized a growing need for using simulation tools in the process of designing networks with optimal performance with regard to terrain conditions. One of the most demanding simulation problems is the modeling of protocols and devices, especially on the application layer, because the credibility of simulation results mainly depends on the quality of modeling. A new branch of communications network simulations has appeared for resolving these kinds of problems - simulations with real communication devices in the simulation loop. Such simulations introduce real time into the simulation process. The results of our research are aimed at simulation methodology. Using this system, military command personnel can perform realistic training on real C2IS equipment connected to the simulation tool by modeled wireless links over a virtual terrain. In our research work, we used the OPNET Modeler simulation tool, with additional modules.",2008,0, 2948,Dynamic QoS for commercial VoIP in heterogeneous networks,"The R equation, derived by ITU for network planning that forms part of the E-Model (ITU-T G.107), has been the backbone of quality-of-service (QoS) prediction of telephony for decades and it has been very successful in predicting the Mean Opinion Score (MOS) and QoS for telephone systems. In this paper we question its applicability to Voice-over-IP (VoIP) systems and propose a preliminary VoIP-eM equation and research directions to examine this question. The issues to examine include the linearity of adding the disparate transmission parameters, the random assumption behind the packet loss and interpretation of Id and Ie equation parameters representing impairments due to echo and delay, and impairments from transmission equipment, respectively. Further examination into the combined behavior of packet loss and packet delay, and other impairment scenarios including environmental factors such as background conditions, will also form a part of this investigation. In addition, demographics and languages spoken may show the variability in the R to MOS prediction and will therefore be investigated. The new QoS models will be evaluated for the CODECs G.711, G.729, AMR, and GSM-FR.
The proposed work will lead towards developing a dynamic QoS monitoring and active control system for future heterogeneous VoIP networks.",2008,0, 2949,Automating construction project quality assurance with RFID- and mobile technologies,"The most critical part of a construction project is normally the construction elements that form the structure of the building. Concrete construction elements are often used as support structures and on the facade of the building, but they are vulnerable to manufacturing defects as well as being damaged on-site. Such defects can create a lot of additional costs for all parties involved in the project. Discovering and communicating about errors can be problematic as normally quality assurance is done manually and data is stored in paper format. Our research concentrated on automating the quality assurance of concrete elements. The task was approached by embedding RFID tags in the elements, enabling them to be identified wirelessly and associated with information in a data system. In field conditions, users identify the construction elements with a mobile phone and interact electronically with the quality assurance system. As the information is created in digital form, the system can analyze it to detect errors and react automatically, notifying the people responsible for faults. This paper presents the implementation of the system and discusses challenges and benefits.",2008,0, 2950,Predicting Fault Proneness of Classes Through a Multiobjective Particle Swarm Optimization Algorithm,"Software testing is a fundamental software engineering activity for quality assurance that is also traditionally very expensive. To reduce the effort of testing strategies, some design metrics have been used to predict the fault-proneness of a software class or module. Recent works have explored the use of machine learning (ML) techniques for fault prediction. However, most of the ML techniques used cannot deal with unbalanced data, and their results are usually difficult to interpret. Because of this, this paper introduces a multi-objective particle swarm optimization (MOPSO) algorithm for fault prediction. It allows the creation of classifiers composed of rules with specific properties by exploring Pareto dominance concepts. These rules are more intuitive and easier to understand because they can be interpreted independently of each other. Furthermore, an experiment using the approach is presented and the results are compared to other techniques explored in the area.",2008,0, 2951,HistoSketch: A Semi-Automatic Annotation Tool for Archival Documents,"This article describes a sketch-based framework for semi-automatic annotation of historical document collections. It is motivated by the fact that fully automatic methods, while helpful for extracting metadata from large collections, have two main drawbacks in a real-world application: (i) they are error-prone and (ii) they only capture a subset of all the knowledge in the document base, both meaning that manual intervention is always required. Therefore, we have developed a practical framework for allowing experts to extract knowledge from document collections in a sketch-based scenario.
The main possibilities of the proposed framework are: (a) browsing the collection efficiently, (b) providing gestures for metadata input, (c) supporting handwritten notes and (d) providing gestures for launching automatic extraction processes such as OCR or word spotting.",2008,0, 2952,Automatically Determining Compatibility of Evolving Services,"A major advantage of Service-Oriented Architectures (SOA) is composition and coordination of loosely coupled services. Because the development lifecycles of services and clients are decoupled, multiple service versions have to be maintained to continue supporting older clients. Typically, versions are managed within the SOA by updating service descriptions using conventions on version numbers and namespaces. In all cases, the compatibility among service descriptions must be evaluated, which can be hard, error-prone and costly if performed manually, particularly for complex descriptions. In this paper, we describe a method to automatically determine when two service descriptions are backward compatible. We then describe a case study to illustrate how we leveraged version compatibility information in a SOA environment and present initial performance overheads of doing so. By automatically exploring compatibility information, a) service developers can assess the impact of proposed changes; b) proper versioning requirements can be put in client implementations guaranteeing that incompatibilities will not occur during run-time; and c) messages exchanged in the SOA can be validated to ensure that only expected messages or compatible ones are exchanged.",2008,0, 2953,Building Profit-Aware Service-Oriented Business Applications,"Service composition is becoming a prevalent way of building service-oriented business applications (SOBAs). In an open service environment, the profit of composition (PoC) is a primary concern in building such applications. How to improve the PoC is a significant issue in developing SOBAs but has been largely overlooked by current research. In particular, the modeling and prediction of PoC should play a key role in driving the composition process. In this paper, we focus on how to model and predict PoC in SOBAs. We regard the PoC of a composite service as a function of the quality of service (QoS) attributes, defined in the service level agreement (SLA) between the service and its external partners. Based on the PoC prediction approach, we further propose a profit-driven composition methodology to assist enterprises in making more profit in their SOBAs.",2008,0, 2954,Invited Talk: The Role of Empiricism in Improving the Reliability of Future Software,"This talk first bemoans the general absence of empiricism in the evolution of software system building and then goes on to show the results of some experiments in attempting to understand how defects appear in software, what factors affect their appearance and their relationship to testing generally.
It challenges a few cherished beliefs along the way and demonstrates, in no particular order, at least the following: 1) the equilibrium state of a software system appears to conserve defects; 2) there is strong evidence in quasi-equilibrated systems for x log x growth in defects, where x is a measure of the lines of code; 3) component sizes in OO and non-OO software systems appear to be scale-free (this is intimately related to the first two bullet points); 4) software measurements (also known rather inaccurately as metrics) are effectively useless in determining the defect behaviour of a software system; 5) most such measurements (including the ubiquitous cyclomatic complexity) are almost as highly correlated with lines of code as the relationship between temperature in degrees Fahrenheit and degrees Centigrade measured with a slightly noisy thermometer. In other words, lines of code are just about as good as anything else when estimating defects; 6) 'gotos considered irrelevant'. The goto statement has no obvious relationship with defects even when studied over very long periods. It probably never did; 7) checklists in code inspections appear to make no significant difference to the efficiency of the inspection; and 8) when you find a defect, there is an increasing probability of finding another in the same component. This strategy is effective up to a surprisingly large number of defects in youthful systems but not at all in elderly systems.",2008,0, 2955,Exploring the Relationship of a File's History and Its Fault-Proneness: An Empirical Study,"Knowing which particular characteristics of software are indicators for defects is very valuable for testers in order to allocate testing resources appropriately. In this paper, we present the results of an empirical study exploring the relationship between history characteristics of files and their defect count. We analyzed nine open source Java projects across different versions in order to answer the following questions: 1) Do past defects correlate with a file's current defect count? 2) Do late changes correlate with a file's defect count? 3) Is the file's age a good indicator for its defect count? The results are partly surprising. Only 4 of 9 programs show moderate correlation between a file's defects in previous and in current releases in more than half of the analysed releases. In contrast to our expectations, the oldest files represent the most fault-prone files. Additionally, late changes influence a file's defect count only partly.",2008,0, 2956,Improving Fault Injection of Soft Errors Using Program Dependencies,"Research has shown that modern micro-architectures are vulnerable to soft errors, i.e., temporary errors caused by voltage spikes produced by cosmic radiation. Soft-error impact is usually evaluated using fault injection, a black-box testing approach similar to mutation testing. In this paper, we complement an existing evaluation of a prototype brake-by-wire controller, developed by Volvo Technology, with static-analysis techniques to improve test effectiveness. The fault-injection tests are both time- and data-intensive, which renders their qualitative and quantitative assessment difficult. We devise a prototype visualization tool, which groups experiments by injection point and provides an overview of both instruction and fault coverage, and the ability to detect patterns and anomalies.
We use the program-dependence graph to identify experiments with a priori known outcome, and implement a static analysis to reduce the test volume. The existing pre-injection heuristic is extended with liveness analysis to enable an unbiased fault-to-failure probability.",2008,0, 2957,WebVizOr: A Visualization Tool for Applying Automated Oracles and Analyzing Test Results of Web Applications,"Web applications are used extensively for a variety of critical purposes and, therefore, must be reliable. Since Web applications often contain large amounts of code and frequently undergo maintenance, testers need automated tools to execute large numbers of test cases to determine if an application is behaving correctly. Evaluating the voluminous output-typically Web pages full of content-is tedious and error-prone. To ease the difficulty, testers can apply automated oracles, which have tradeoffs in false positives and false negatives. In this paper, we present the design, implementation, and evaluation of WebVizOr, a tool that aids testers by applying a set of oracles to the output from test cases and highlighting the symptoms of possible faults. Using WebVizOr, a tester can compare the test results from several executions of a test case and can more easily determine if a test case exposes a fault.",2008,0, 2958,Fuzzy critical analysis for an electric generator protection system,"The paper explains fuzzy critical analysis for the electric generator (EG) protection system. A power system electric generator (EG) is protected against various types of faults and abnormal operating conditions. The protection system (PS) is composed of waiting subsystems, which must properly respond to each kind of dangerous event. An original fuzzy logic system enables us to analyze the qualitative evaluation of the event-tree, modeling PS behavior. Fuzzy-set logic is used to account for imprecision and uncertainty in data while employing event-tree analysis. The fuzzy event-tree logic allows the use of verbal statements for the probabilities and consequences, such as very high, moderate and low probability.",2008,0, 2959,A Bayesian approach for software quality prediction,"Many statistical algorithms have been proposed for software quality prediction of fault-prone and non-fault-prone program modules. The main goal of these algorithms is the improvement of software development processes. In this paper, we introduce a new software prediction algorithm. Our approach is purely Bayesian and is based on finite Dirichlet mixture models. The implementation of the Bayesian approach is done through the use of the Gibbs sampler. Experimental results are presented using simulated data, and a real application for software module classification is also included.",2008,0, 2960,A Brief History of Software Technology,"To mark IEEE Software's 25th anniversary, Software Technology column editor Christof Ebert presents a review and road map of major software technologies, starting with the magazine's inauguration in 1984. Learning from the many hypes and often long introduction cycles, he provides some timeless principles of technology evaluation and introduction. Good car drivers assess situations past, present, and future with a mix of skills and qualities. They make unconscious decisions and meld impressions, experiences, and skills into appropriate real-time actions. The same holds for assessing software technology.
When reflecting on which technologies have had the most impact in the past 25 years, we can assess it quantitatively, by looking at research papers or ""hype-cycle"" duration, for example. Alternatively, we might judge it like the expert driver, intuitively evaluating what was achieved compared to what was promised from a user perspective. Of course, many major technology breakthroughs happened before 1984: Milestones such as the IBM OS/360 and the microprocessor, and even many still-relevant software engineering practices, had been developed much earlier.",2008,0, 2961,Research on rotary dump health monitoring expert system based on causality diagram theory,"Causality diagram theory is a kind of uncertainty reasoning theory based on the belief network. It expresses knowledge and causality relationships in diagrammatic form with direct causality intensities. Furthermore, it resolves the shortcomings of the belief network, and realizes a hybrid model which can process discrete and continuous variables. The theory of the causality diagram model and the steps of the causality diagram reasoning methodology are studied in this paper, and a model of a rotary dump health monitoring expert system is proposed. In addition, this paper establishes the causality diagram of the rotary dump and converts it to a causality tree. According to the causality tree of the rotary dump, the causality diagram reasoning methodology composed of four steps is described. Finally, an application of the rotary dump health monitoring expert system is shown, and the system performance analysis is discussed.",2008,0, 2962,Reasoning of fuzzy Causality Diagram with interval probability,"The causality diagram is a probabilistic reasoning method. Fuzzy set theory is introduced to develop the causality diagram methodology after discussing the development and the restrictions of the conventional causality diagram. The application of the causality diagram is extended to the fuzzy field by introducing fuzzy set theory. The fuzzy causality diagram can overcome the difficulty of obtaining accurate event probabilities in the conventional causality diagram. Interval numbers can express all kinds of fuzzy numbers, so it is necessary to discuss the reasoning of the fuzzy causality diagram with interval probability. Based on interval numbers, operators, fuzzy conditional probability and the normalization method are discussed in this paper. Then two reasoning algorithms for the single-value fuzzy causality diagram are proposed, and some remarks about these algorithms are given. The result of numerically simulating a subsystem in a nuclear plant is consistent with the facts, and it shows that the normalization method is effective. The research shows that the interval fuzzy causality diagram is effective in fault analysis, and it is more flexible and adaptive than the conventional method.",2008,0, 2963,ACAR: Adaptive Connectivity Aware Routing Protocol for Vehicular Ad Hoc Networks,"Developing routing protocols for vehicular ad hoc networks (VANET) is a challenging task due to potentially large network sizes, rapidly changing topology and frequent network disconnections, which can cause failure or inefficiency in traditional ad hoc routing protocols. We propose an adaptive connectivity aware routing (ACAR) protocol that addresses these problems by adaptively selecting an optimal route with the best network transmission quality based on the statistical and real-time density data that are gathered through an on-the-fly density collection process.
The protocol consists of two parts: (1) select an optimal route, consisting of road segments, with the best estimated transmission quality; (2) in each road segment in the selected route, select the most efficient multi-hop path that will improve delivery ratio and throughput. The optimal route can be selected using our new connectivity model that takes into account vehicle densities and traffic light periods to estimate transmission quality at road segments, which considers the probability of connectivity and data delivery ratio for transmitting packets. In each road segment along the optimal path, each hop is selected to minimize the packet error rate of the entire path. Our simulation results show that the proposed ACAR protocol outperforms existing VANET routing protocols in terms of data delivery ratio, throughput and data packet delay. In addition, ACAR works very well even if accurate statistical data is not available.",2008,0, 2964,Usefulness and effectiveness of HW and SW protection mechanisms in a processor-based system,"Fault-injection based dependability analysis has proved to be an efficient means to predict the behavior of a circuit in the presence of faults. Emulation-based approaches enable fast and flexible analyses of significant designs such as processors running significant application software. This paper presents the results obtained with an encryption application and questions the usefulness and the effectiveness of detection mechanisms in both hardware and software.",2008,0, 2965,Numerical simulation of low pressure die-cast of magnesium alloy wheel,"The low-pressure die casting process of a magnesium alloy wheel is simulated. Using professional casting software, the temperature field during the filling and solidification process is simulated, and then the potential defects are predicted and previewed. The analysis shows that there will be shrinkage at the center of the hub, and this shrinkage cannot be eliminated by reducing the pouring velocity. However, installing a cooling pipe system in the top mold alone is a valid way to enhance the cooling capacity in the areas near the center. The solidification order is adjusted by the cooling pipe system, and the shrinkage in the center of the hub is eliminated.",2008,0, 2966,Predicting the SEU error rate through fault injection for a complex microprocessor,"This paper deals with the prediction of SEU error rate for an application running on a complex processor. Both radiation ground testing and fault injection were performed while the selected processor, a PowerPC 7448, executed software taken from a real space application. The predicted error rate shows that generally used strategies, based on static cross-section, significantly overestimate the application error rate.",2008,0, 2967,The distributed diagnosis system of vehicles based on TH-OSEK embedded platform,"We propose a fully software-based distributed failure diagnosis system for vehicles based on the TH-OSEK real-time embedded OS platform we previously developed. The diagnosis system puts all the ECUs into a virtual logical ring and uses the MR (Maintain Ring) algorithm and the OL (Off Line) algorithm to detect a faulted ECU and isolate it without destroying the structure of the logical ring. When a faulted ECU is recovered, the proposed algorithms also allow the system to add it back to the logical ring by updating the predecessor and successor of every node in the ring in time.
The experimental result on the TH-OpenECU platform is also presented, which shows that the system works well and is useful for diagnosing vehicle faults.",2008,0, 2968,The Application of Improved Genetic Algorithm in Optimization of Function,"This paper points out defects of the traditional genetic algorithm (TGA) and improves upon it. A combined optimization strategy is described: the improved genetic algorithm (IGA) is used to search for a better solution in the whole feasible domain, and the TGA is used to find the best solution in the local domain. The example shows the rationality and efficiency of this algorithm. This algorithm improves population diversity in the process of evolution, adopting larger probabilities of crossover and mutation.",2008,0, 2969,A Hierarchy Management Framework for Automated Network Fault Identification,"An autonomous diagnosis approach for faulty links is proposed in this paper. Given information about the paths by which a designated network node with management responsibilities can communicate with certain other nodes and cannot communicate with another set of nodes, and with the help of building a diagnosis model and computing link failure probabilities, the node with management responsibilities identifies as quickly as possible a ranked list of the most probably failed network links, and furthermore accurately checks which links have failed by testing. Based on this approach, a hierarchical network management architecture is designed to deal with fault diagnosis for a heterogeneous network environment. The simulation shows that this approach is real-time, highly accurate and autonomous; in particular, it occupies only a few bits of bandwidth or even requires no bandwidth at all.",2008,0, 2970,Research on Enterprise Culture Maturity Evaluation Basing on KPA,"Based on the selection of the key process areas (KPA) in the development of enterprise culture, a model is created in this thesis to assess enterprise culture maturity. This maturity model serves as a core guideline for the enterprise culture evaluation system, which is used to assess the dynamic process of the development of enterprise culture, and it also reveals the key areas and the key operable problems in the evaluation of enterprise culture.",2008,0, 2971,Use of Data Mining to Enhance Security for SOA,"Service-oriented architecture (SOA) is an architectural paradigm for developing distributed applications so that their design is structured on loosely coupled services such as Web services. One of the most significant difficulties with developing SOA concerns its security challenges, since the responsibilities of SOA security are based on both the servers and the clients. In recent years, a lot of solutions have been implemented, such as the Web services security standards, including WS-Security and WS-SecurityPolicy. However, those standards are completely insufficient for the promising new generations of Web applications, such as Web 2.0 and its upgraded edition, Web 3.0. In this work, we are proposing an intelligent security service for SOA using data mining to predict the attacks that could arise with SOAP (Simple Object Access Protocol) messages.
Moreover, this service will validate the new security policies before deploying them on the service provider side by testing the probability of their vulnerability.",2008,0, 2972,API Fuzz Testing for Security of Libraries in Windows Systems: From Faults To Vulnerabilities,"Application programming interface (API) fuzz testing is used to insert unexpected data into the parameters of functions and to monitor for resulting program errors or exceptions in order to test the security of APIs. However, vulnerabilities through which a user cannot insert data into API parameters are not security threats, because attackers cannot exploit such vulnerabilities. In this paper, we propose a methodology that can automatically find paths between inputs of programs and faulty APIs. Where such paths exist, faults in APIs represent security threats. We call our methodology Automated Windows API Fuzz Testing II (AWAFTII). This method extends our previous research for performing API fuzz testing into the AWAFTII process. The AWAFTII process consists of finding faults using API fuzz testing, analyzing those faults, and searching for input data related to parameters of APIs with faults. We implemented a practical tool for AWAFTII and applied it to programs in the system folder of Windows XP SP2. Experimental results show that AWAFTII can detect paths between inputs of programs and APIs with faults.",2008,0, 2973,Event-Based Data Dissemination on Inter-Administrative Domains: Is it Viable?,"Middleware for timely and reliable data dissemination is a fundamental building block of the event driven architecture (EDA), an ideal platform for developing air traffic control, defense systems, etc. Many of these middlewares are compliant with the data distribution service (DDS) specification, and they have traditionally been designed to be deployed in managed environments where they show predictable behaviors. However, the enterprise setting can be unmanaged and characterized by geographic inter-domain scale and heterogeneous resources. In this paper we present a study aimed at assessing the strengths and weaknesses of a commercial DDS implementation deployed in an unmanaged setting. Our experimental campaign shows that, if the application manages a small number of homogeneous resources, this middleware performs in a timely and reliable manner. In a more general setting with fragmentation and heterogeneous resources, reliability and timeliness rapidly degrade, pointing out a need for research in self-configuring, scalable event dissemination with QoS guarantees in unmanaged settings.",2008,0, 2974,Human-Intention Driven Self Adaptive Software Evolvability in Distributed Service Environments,"Evolvability is essential to adapting to the dynamic and changing requirements in response to the feedback from context awareness systems. However, most current context models have limited capability in exploring human intentions that often drive system evolution. To support service requirements analysis of real-world applications in distributed service environments, this paper focuses on human-intention driven software evolvability. In our approach, requirements analysis via an evolution cycle provides the means of speculating about requirement changes, predicting possible new generations of system behaviors, and assessing the corresponding quality impacts.
Furthermore, we also discuss evolvability metrics by observing intentions from user contexts.",2008,0, 2975,A New Method to Predict Software Defect Based on Rough Sets,"High quality software should have as few defects as possible. Many modeling techniques have been proposed and applied for software quality prediction. Software projects vary in size and complexity, programming languages, development processes, etc. We research the correlation of software metrics, focusing on the data sets of software defect prediction. A rough set model is presented in this paper to reduce the attributes of software defect prediction data sets. Experiments show its excellent performance.",2008,0, 2976,Fault detection for OSPF based E-NNI routing with probabilistic testing algorithm,"In this paper, a probabilistic testing algorithm is proposed to increase the fault coverage for OSPF based E-NNI routing protocol testing. It automatically constructs random network topologies and checks database information consistency with the real optical network topology and resources for each generated topology. Theoretical analysis indicates that our algorithm can efficiently increase the fault coverage. This algorithm has been implemented in a software test tool called E-NNI Routing Testing System (ERTS). Experimental results based on ERTS are also reported.",2008,0, 2977,Supporting Requirements Change Management in Goal Oriented Analysis,"Requirements changes frequently occur at any time during a software development process, and their management is a crucial issue in developing high-quality software. Meanwhile, goal-oriented analysis techniques have recently been put into practice to elicit requirements. In this situation, the change management of goal graphs and its support is necessary. This paper presents two topics related to change management of goal graphs: 1) version control of goal graphs and 2) impact analysis on a goal graph when modifications occur. In our version control system, we extract the differences between successive versions of a goal graph by means of monitoring modification operations performed through a goal graph editor, and store them in a repository. Our impact analysis detects conflicts that arise when a new goal is added, and investigates the achievability of the other goals when an existing goal is deleted.",2008,0, 2978,Rule-Based Maintenance of Post-Requirements Traceability Relations,"An accurate set of traceability relations between software development artifacts is desirable to support evolutionary development. However, even where an initial set of traceability relations has been established, their maintenance during subsequent development activities is time-consuming and error-prone, which results in traceability decay. This paper focuses solely on the problem of maintaining a set of traceability relations in the face of evolutionary change, irrespective of whether generated manually or via automated techniques, and it limits its scope to UML-driven development activities post-requirements specification. The paper proposes an approach for the automated update of existing traceability relations after changes have been made to UML analysis and design models. The update is based upon predefined rules that recognize elementary change events as constituent steps of broader development activities. A prototype traceMaintainer has been developed to demonstrate the approach. Currently, traceMaintainer can be used with two commercial software development tools to maintain their traceability relations.
The prototype has been used in two experiments. The results are discussed and our ongoing work is summarized.",2008,0, 2979,Using Goal-Oriented Requirements Engineering for Improving the Quality of ISO/IEC 15504 based Compliance Assessment Frameworks,"Within the context of business process design and deployment, we introduce and illustrate the use of goal models for capturing compliance requirements applicable over business process configurations. More specifically, we explain how a goal-oriented approach can be used together with the ISO/IEC 15504 standard in order to provide a formal framework according to which the compliance of business processes against regulations and their associated requirements can be assessed and measured. The overall approach is discussed and illustrated through the handling of a real business case related to the Basel II Accords on operational risk management in the financial sector.",2008,0, 2980,Using Scenarios to Discover Requirements for Engine Control Systems,"Rolls-Royce control systems are complex, safety critical and developed in ever-compressed timescales. Scenario techniques are utilised during systems design, safety analysis and systems verification. Scenarios can be used to improve requirements quality and to ensure greater confidence in requirements coverage for both normal and exception behaviour. A study was undertaken to investigate whether the ART-SCENE process and tool could enable engineers to identify exception behaviours earlier in the system design process, thus reducing cost and improving quality. ART-SCENE provides automatic generation of scenarios and alternative course events through the Scenario Presenter. These recognition cues are used to prompt engineers to identify deviations that may otherwise be missed. This paper describes a comparative evaluation between ART-SCENE and a standard hazard identification technique to assess the effectiveness of this approach.",2008,0, 2981,Assessing the Quality of Software Requirements Specifications,"Software requirements specifications (SRS) are hard to compare due to the uniqueness of the projects they were created in. In practice this means that it is not possible to objectively determine if a project's SRS fails to reach a certain quality threshold. Therefore, a commonly agreed-on quality model is needed. Additionally, a large set of empirical data is needed to establish a correlation between project success and quality levels. As there is no such quality model, we had to define our own based on the goal-question-metric (GQM) method. Based on this we analyzed more than 40 software projects (student projects in undergraduate software engineering classes), in order to contribute to the empirical part. This paper contributes in three areas: Firstly, we outline our GQM plan and our set of metrics. They were derived from widespread literature, and hence could lead to a discussion of how to measure requirements quality. Practitioners and researchers can profit from our experience when measuring the quality of their requirements. Secondly, we present our findings. We hope that others find these valuable when comparing them to their own results. Finally, we show that the results of our quality assessment correlate with project success.
Thus, we give an empirical indication for the correlation of requirements engineering and project success.",2008,0, 2982,Using Formal Verification to Reduce Test Space of Fault-Tolerant Programs,"Testing object-oriented programs is still a hard task, despite many studies on criteria to better cover the test space. Test criteria establish requirements one wants to achieve in testing programs to help find software defects. On the other hand, program verification guarantees that a program preserves its specification, but its application is not very straightforward in many cases. Both program testing and verification are expensive tasks and could be used to complement each other. This paper presents a new approach to automate and integrate testing and program verification for fault-tolerant systems. In this approach we show how to assess information from program verification in order to reduce the test space regarding exception definition/use testing criteria. As properties on exception-handling mechanisms are checked using a model checker (Java PathFinder), programs are traced. Information from these traces can be used to determine how much of the testing criteria has been covered, reducing the remaining program test space.",2008,0, 2983,Towards supporting evolution of service-oriented architectures through quality impact prediction,"The difficulty in evolving service-oriented architectures with extra-functional requirements seriously hinders the spread of this paradigm in critical application domains. This work tries to offset this disadvantage by introducing a design-time quality impact prediction and trade-off analysis method, which allows software engineers to predict the extra-functional consequences of alternative design decisions and select the optimal architecture without costly prototyping.",2008,0, 2984,Architecting for evolvability by means of traceability and features,"The frequent changes during the development and usage of large software systems often lead to a loss of architectural quality which hampers the implementation of further changes and thus the systems' evolution. To maintain the evolvability of such software systems, their architecture has to fulfil particular quality criteria. Available metrics and rigorous approaches do not provide sufficient means to evaluate architectures regarding these criteria, and reviews require a high effort. This paper presents an approach for an evaluation of architectural models during design decisions, for early feedback and as part of architectural assessments. As the quality criteria for evolvability, model relations in terms of traceability links between the feature model, design and implementation are evaluated. Indicators are introduced to assess these model relations, similar to metrics, but accompanied by problem resolution actions. The indicators are defined formally to enable a tool-based evaluation. The approach has been developed within a large software project for an IT infrastructure.",2008,0, 2985,Design method for parameterized IP generator using structural and creational design patterns,"Parameterized intellectual property (IP) core generators, which produce synthesizable hardware designs based on predefined microarchitectural-level parameters, can improve the efficiency of IP reuse. However, the process of designing such IP core generators is usually time-consuming and error-prone because it has to couple SW with HW designs in one product.
In this paper, we propose a pattern-oriented design method for parameterized IP core generators. The paper shows that IP core generator quality can be improved by combining structural design patterns with creational design patterns. We also demonstrate the method through a soft-decision Viterbi decoder generator application.",2008,0, 2986,A General QoS Error Detection and Diagnosis Framework for Accountable SOA,"Accountability is a composite measure for different but related quality aspects. To be able to ensure accountability in practice, it is required to define specific quality attributes of accountability, and metrics for each quality attribute. In this paper, we propose a quality detection and diagnosis framework for service accountability. We first identify types of quality attributes which are essential to manage QoS in an accountability framework. We then present a detection and diagnosis model for problematic situations in service systems. In this model, we design a situation link representing dependencies among quality attributes, and provide information to detect and diagnose problems and their root causes. Based on the model, we propose an integrated model-based and case-based diagnosis method using the situation link.",2008,0, 2987,Systematic Structural Testing of Firewall Policies,"Firewalls are the mainstay of enterprise security and the most widely adopted technology for protecting private networks. As the quality of protection provided by a firewall directly depends on the quality of its policy (i.e., configuration), ensuring the correctness of security policies is important and yet difficult. To help ensure the correctness of a firewall policy, we propose a systematic structural testing approach for firewall policies. We define structural coverage (based on coverage criteria of rules, predicates, and clauses) on the policy under test. To achieve higher structural coverage effectively, we develop three automated packet generation techniques: random packet generation, one based on local constraint solving (considering individual rules locally in a policy), and the most sophisticated one based on global constraint solving (considering multiple rules globally in a policy). We have conducted an experiment on a set of real policies and a set of faulty policies to detect faults with generated packet sets. Generally, our experimental results show that a packet set with higher structural coverage has higher fault detection capability (i.e., detecting more injected faults). Our experimental results also show that a reduced packet set (maintaining the same level of structural coverage as the corresponding original packet set) maintains fault detection capability similar to that of the original set.",2008,0, 2988,Fault-Tolerant Coverage Planning in Wireless Networks,"Typically, wireless network coverage is planned with static redundancy to compensate for temporal variations in the environment. As a result, the service is still delivered but the network coverage could have entered a critical state, meaning that further changes in the environment may lead to service failure. Service failures have to be explicitly notified by the applications. Therefore, in this paper we propose a methodology for fault-tolerant coverage planning. The idea is to detect the critical state and remove it by on-line system reconfiguration and restoration of the original static redundancy.
Even in the case of a failure, the system automatically generates a new configuration to restore the service, leading to shorter repair times. We describe how this approach can be applied to wireless mesh networks, often used in industrial applications like manufacturing, automation and logistics. The evaluation results show that the underlying model used for error detection and system recovery is accurate enough to correctly identify the system state.",2008,0, 2989,Vision based pointing device with slight body movement,"It is widely acknowledged that the computer is a powerful tool for improving the quality of life of people with disabilities. One problem is how the user manipulates the computer using a suitable input device designed for him or her. This paper proposes two vision-based pointing devices for people with disabilities as input devices. These devices detect the position of the marks attached to the user's head. According to the head movements, the cursor on the computer display is moved two-dimensionally. A distinct advantage of these devices is that various strategies can be installed based on the situation of the users. The pointing device was successfully applied as a drawing tool in painting software for patients.",2008,0, 2990,Design of intelligent testing device for airplane navigation radar,"Modern airborne radar systems are very complex electronic equipment systems; high reliability is demanded, and good automatic detection functions are needed to guarantee it. In this dissertation, we have studied the detection method for a new kind of airborne radar system. With embedded machine control as its core and a modular building-block construction, the radar faults are examined one by one by exciting the fault model inputs and checking the responses, so that the faults are localized. Programmed in Visual Basic 6.0, the software can be extended and upgraded, and it provides users with an intelligent, automatic testing environment and a friendly interface. Under the guidance of the testing interface, testers can complete fault localization of the radar circuit automatically. Testing shows that this intelligent, integrated detection system has complete functions, advanced techniques and excellent capabilities. It can perform comprehensive performance tests on airplane navigation radar. So it has important significance for ensuring flight safety and increasing combat effectiveness.",2008,0, 2991,Evaluation of an efficient control-oriented coverage metric,"Dynamic verification, the use of simulation to determine design correctness, is widely used due to its tractability for large designs. A serious limitation of dynamic techniques is the difficulty in determining whether or not a test sequence is sufficient to detect all likely design errors. Coverage metrics are used to address this problem by providing a set of goals to be achieved during the simulation process; if all coverage goals are satisfied then the test sequence is assumed to be complete. Many coverage metrics have been proposed but no effort has been made to identify a correlation between existing metrics and design quality. In this paper we present a technique to evaluate a coverage metric by examining its ability to ensure the detection of real design errors.
We apply our evaluation technique to our control-oriented coverage metric to verify its ability to reveal design errors.",2008,0, 2992,Bottom up approach to enhance top level SoC verification,SoCs today rely heavily on behavioral models of analog circuits for Top Level Verification. The minimum modeling requirement is to model the functional behavior of the circuit. A lot of ongoing work is also focused on modeling analog circuits to predict the system performance of the SoC. This paper presents a methodology to enhance the quality of SoC verification by using a bottom up approach to verify the equivalence of building blocks and then work at higher levels to increase coverage. It is shown that this methodology can be used to verify functional and performance equivalence of behavioral models.,2008,0, 2993,Analyzing Performance of Web-Based Metrics for Evaluating Reliability and Maintainability of Hypermedia Applications,"This paper has been designed to identify the Web metrics for evaluating the reliability and maintainability of hypermedia applications. In the age of information and communication technology (ICT), the Web and the Internet have brought significant changes in information technology (IT) and their related scenarios. Therefore, in this paper an attempt has been made to trace out the Web-based measurements towards the creation of efficient Web centric applications. The dramatic increase in Web site development and their relative usage has led to the need for Web-based metrics. These metrics will accurately assess the efforts in Web-based applications. Here we promote simple but elegant approaches to estimate the efforts needed for designing Web-based applications with the help of the user behavior model graph (UBMG), Web page replacement algorithms, and the RS Web Application Effort Assessment (RSWAEA) method. Effort assessment of hyperdocuments is crucial for Web-based systems, where outages can result in loss of revenue and dissatisfied customers. Here we advocate a simple but elegant approach for effort estimation for Web applications from an empirical point of view. The proposed methods and models have been designed after carrying out an empirical study with the students of an advanced university class and Web designers that used various client-server based Web technologies. Our first aim was to compare the relative importance of each Web-based metric and method. Second, we also assessed the quality of the designs obtained by constructing the User Behavior Model Graphs (UBMGs) to capture the reliability of Web-based applications. Thirdly, we use Web page replacement algorithms for increasing the Web site usability index, maintainability, reliability, and ranking. The results obtained from the above Web-based metrics can help us to analytically identify the effort assessment and failure points in Web-based systems and make the evaluation of the reliability of these systems simple.",2008,0, 2994,An Efficient Forward and Backward Fault-Tolerant Mobile Agent System,"A mobile agent is a special program that can move among networks and hosts and execute the tasks given by user commands. During task execution, the mobile agent can convey the data, state and program code to another host in order to autonomously execute and continue the task in another host. If any software or hardware fault or network problem occurs while the mobile agent is executing its task, one of two conditions will arise, as listed below: 1.
Users continuously wait for a reply from the agent, but will never receive it because of faults affecting the agent in the networks or hosts. 2. Users assign a new agent to restart the former task, assuming the former agent has been lost, but the former agent was only delayed by congestion in the network or host. As a result, the two agents execute the same task. Therefore, fault detection and recovery for the mobile agent are important issues to be discussed. This paper proposes a forward and backward failure detection and recovery method in which the task agent reports its current task progress to the preceding and succeeding agents, and the agents exchange their messages for the present stage. This is more accurate for the task at hand than the previous method because it reduces the load of task fault reports caused by network congestion.",2008,0, 2995,A stochastic approach for fine grain QoS control,"We present a method for fine grain QoS control of multimedia applications. This method takes as input an application software composed of actions parameterized by quality levels. Our method allows the construction of a Quality Manager which computes adequate action quality levels, so as to meet QoS requirements (action deadlines are met and quality levels are maximal) for a given platform.",2008,0, 2996,Improving the performance of speech recognition systems using fault-tolerant techniques,"In this paper, the use of fault-tolerant techniques is studied and experimented with in speech recognition systems to make these systems robust to noise. Recognizer redundancy is implemented to utilize the strengths of several recognition methods, each of which has acceptable performance in a specific condition. Duplication-with-comparison and NMR methods are experimented with majority and plurality voting on a telephony Persian speech-enabled IVR system. Results of the evaluations present two promising outcomes: first, it improves performance considerably; second, it enables us to detect outputs with low confidence.",2008,0, 2997,Enhancement of degraded document images using hybrid thresholding,"The paper presents a hybrid thresholding approach for binarization and enhancement of degraded documents. Historical documents contain information of great cultural and scientific value. But such documents are frequently degraded over time. Digitized degraded documents require specialized processing to remove different kinds of noise and to improve readability. The approach for enhancing degraded documents uses a combination of two thresholding techniques. First, iterative global thresholding is applied to the smoothed degraded image until the stopping criterion is reached. Then a threshold selection method from the gray-level histogram is used to binarize the image. The next step is detecting areas where noise still remains and applying iterative thresholding locally. A method to improve the quality of textual information in the document is also applied as a post-processing stage, thus making the approach efficient and more suited for OCR applications.",2008,0, 2998,Multiple information fusion of aluminum alloy resistance spot welding based on principal component analysis,"The monitoring of aluminum alloy resistance spot welding (RSW) is realized by a distributed multiple-sensor synchronous collection system, and the data processing software is developed using the LABVIEW graphical language.
Statistical analysis has been applied to investigate the relationship between the extracted features and the RSW quality. The results show that the expulsion in spot welding is related to the notching curve of the voltage and electrode displacement signal. Moreover, there is a correlation between the high frequency impulse amplitude and duration of the electrode force signal and the expulsion strength, and three features simultaneously or separately occur according to the expulsion strength in spot welding. Resistance spot welding quality can be assessed by nine features of high Signal-to-Noise ratio, and these may be the basis of on-line quality classification of aluminum alloy spot welding in the future. Furthermore, principal component analysis (PCA) may be used to implement information fusion and data compression. The spot welding quality classification accuracy can reach 98%.",2008,0, 2999,A Theoretical Model of the Effects of Losses and Delays on the Performance of SIP,The session initiation protocol (SIP) is widely used for VoIP communication. Losses caused by network or server overload would cause retransmissions and delays in the session establishment and would hence reduce the perceived service quality of the users. In order to be able to take countermeasures, network and service planners require detailed models that would allow them to predict such effects in advance. This paper presents a theoretical model of SIP that can be used for determining various parameters such as the delay and the number of messages required for establishing a SIP session when taking losses and delays into account. The model is restricted to the case when SIP is transported over UDP. The theoretical results are then verified using measurements.,2008,0, 3000,"Towards """"Guardian Angels"""" and Improved Mobile User Experience","Today's mobile users expect a high-quality experience, which involves both high-quality services and high service availability. It may take only a few bad service experiences such as dropped calls, unavailable navigation service, or delayed emails, to cause a mobile customer to consider switching service providers. Although great progress has been made in the radio communication and operations optimization as well as in the customer services areas, there are some hard technical problems yet to be solved to offer a personalized quality of service assurance across both the basic phone service and advanced applications. Our position as a major telecommunication software provider has given us great insight into these issues and in this paper we present the theoretical and architectural details of two interrelated approaches that could provide feasible means for improved quality of mobile user experience through intelligent device and network-resident software components. For the end user it will seem as though a """"guardian angel"""" is on her shoulder describing, predicting and explaining disruptive events.",2008,0, 3001,Features-Pooling Blind JPEG Image Steganalysis,"In this research, we introduce a new blind steganalysis method for detecting grayscale JPEG images. A features-pooling method is employed to extract the steganalytic features, and the classification is done using a neural network.
Three different steganographic models are tested and classification results are compared to five state-of-the-art blind steganalysis methods.",2008,0, 3002,Predicting Defect Content and Quality Assurance Effectiveness by Combining Expert Judgment and Defect Data - A Case Study,"Planning quality assurance (QA) activities in a systematic way and controlling their execution are challenging tasks for companies that develop software or software-intensive systems. Both require estimation capabilities regarding the effectiveness of the applied QA techniques and the defect content of the checked artifacts. Existing approaches for these purposes need extensive measurement data from historical projects. Due to the fact that many companies do not collect enough data for applying these approaches (especially for the early project lifecycle), they typically base their QA planning and controlling solely on expert opinion. This article presents a hybrid method that combines commonly available measurement data and context-specific expert knowledge. To evaluate the method's applicability and usefulness, we conducted a case study in the context of independent verification and validation activities for critical software in the space domain. A hybrid defect content and effectiveness model was developed for the software requirements analysis phase and evaluated with available legacy data. One major result is that the hybrid model provides improved estimation accuracy when compared to applicable models based solely on data. The mean magnitude of relative error (MMRE) determined by cross-validation is 29.6% compared to 76.5% obtained by the most accurate data-based model.",2008,0, 3003,Trace Normalization,"Identifying truly distinct traces is crucial for the performance of many dynamic analysis activities. For example, given a set of traces associated with a program failure, identifying a subset of unique traces can reduce the debugging effort by producing a smaller set of candidate fault locations. The process of identifying unique traces, however, is subject to the presence of irrelevant variations in the sequence of trace events, which can make a trace appear unique when it is not. In this paper we present an approach to reduce inconsequential and potentially detrimental trace variations. The approach decomposes traces into segments on which irrelevant variations caused by event ordering or repetition can be identified, and then used to normalize the traces in the pool. The approach is investigated on two well-known client dynamic analyses by replicating the conditions under which they were originally assessed, revealing that the clients can deliver more precise results with the normalized traces.",2008,0, 3004,Automated Identification of Failure Causes in System Logs,"Log files are commonly inspected by system administrators and developers to detect suspicious behaviors and diagnose failure causes. Since the size of log files grows fast, making manual analysis impractical, different automatic techniques have been proposed to analyze log files. Unfortunately, accuracy and effectiveness of these techniques are often limited by the unstructured nature of logged messages and the variety of data that can be logged. This paper presents a technique to automatically analyze log files and retrieve important information to identify failure causes.
The technique automatically identifies dependencies between events and values in logs corresponding to legal executions, generates models of legal behaviors and compares log files collected during failing executions with the generated models to detect anomalous event sequences that are presented to users. Experimental results show the effectiveness of the technique in supporting developers and testers to identify failure causes.",2008,0, 3005,Finding Faults: Manual Testing vs. Random+ Testing vs. User Reports,"The usual way to compare testing strategies, whether theoretically or empirically, is to compare the number of faults they detect. To ascertain definitely that a testing strategy is better than another, this is a rather coarse criterion: shouldn't the nature of faults matter as well as their number? The empirical study reported here confirms this conjecture. An analysis of faults detected in Eiffel libraries through three different techniques-random tests, manual tests, and user incident reports-shows that each is good at uncovering significantly different kinds of faults. None of the techniques subsumes any of the others, but each brings distinct contributions.",2008,0, 3006,Testing of User-Configurable Software Systems Using Firewalls,"User-configurable software systems present many challenges to software testers. These systems are created to address a large number of possible uses, each of which is based on a specific configuration. As configurations are made up of groups of configurable elements and settings, a huge number of possible combinations exist. Since it is infeasible to test all configurations before release, many latent defects remain in the software once deployed. An incremental testing process is presented to address this problem, including examples of how it can be used with various user-configurable systems in the field. The proposed solution is evaluated with a set of empirical studies conducted on two separate ABB software systems using real customer configurations and changes. The three case studies analyzed failures reported by many different customers around the world and show that this incremental testing process is effective at detecting latent defects exposed by customer configuration changes in user-configurable systems.",2008,0, 3007,Cost Curve Evaluation of Fault Prediction Models,"Prediction of fault prone software components is one of the most researched problems in software engineering. Many statistical techniques have been proposed but there is no consensus on the methodology to select the """"best model"""" for the specific project. In this paper, we introduce and discuss the merits of cost curve analysis of fault prediction models. Cost curves allow software quality engineers to introduce project-specific cost of module misclassification into model evaluation. Classifying a software module as fault-prone implies the application of some verification activities, thus adding to the development cost. Misclassifying a module as fault free carries the risk of system failure, also associated with cost implications. Through the analysis of sixteen projects from public repositories, we observe that software quality does not necessarily benefit from the prediction of fault prone components. The inclusion of misclassification cost in model evaluation may indicate that even the """"best"""" models achieve performance no better than trivial classification. 
Our results support a recommendation to adopt cost curves as one of the standard methods for software quality model performance evaluation.",2008,0, 3008,Using Statistical Models to Predict Software Regressions,"Incorrect changes made to the stable parts of a software system can cause failures - software regressions. Early detection of faulty code changes can be beneficial for the quality of a software system when these errors can be fixed before the system is released. In this paper, a statistical model for predicting software regressions is proposed. The model predicts risk of regression for a code change by using software metrics: type and size of the change, number of affected components, dependency metrics, developer's experience and code metrics of the affected components. Prediction results could be used to prioritize testing of changes: the higher the risk of regression for the change, the more thorough testing it should receive.",2008,0, 3009,Reliability Assessment of Mass-Market Software: Insights from Windows Vista,"Assessing the reliability of mass-market software (MMS), such as the Windows® operating system, presents many challenges. In this paper, we share insights gained from the Windows Vista® and Windows Vista® SP1 operating systems. First, we find that the automated reliability monitoring approach, which periodically reports reliability status, provides higher quality data and requires less effort compared to other approaches available today. We describe one instance in detail: the Windows reliability analysis component, and illustrate its advantages using data from Windows Vista. Second, we show the need to account for usage scenarios during reliability assessments. For pre-release versions of Windows Vista and Vista SP1, usage scenarios differ by 2-4X for Microsoft internal and external samples; corresponding reliability assessments differ by 2-3X. Our results help motivate and guide further research in reliability assessment.",2008,0, 3010,Architecting for Reliability - Recovery Mechanisms,Telecommunications systems achieve high levels of reliability by implementing detection and recovery mechanisms with high coverage. With the trend towards the use of more COTS components in these systems, the choices available for the system's detection and recovery mechanisms are more limited. An escalating recovery model with varying coverage factors and recovery durations is developed to provide insight into high availability design alternatives for commercial products. This work extends our previous examination of escalating detection by considering recovery.,2008,0, 3011,Static Detection of Redundant Test Cases: An Initial Study,"As software systems evolve, the size of their test suites grows due to added functionality and customer-detected defects. Many of these tests may contain redundant elements with previous tests. Existing techniques to minimize test suite size generally require dynamic execution data, but this is sometimes unavailable. We present a static technique that identifies test cases with redundant instruction sequences, allowing them to be merged or eliminated. Initial results at ABB show that 7%-23% of one test suite may be redundant.",2008,0, 3012,Comparative Study of Fault-Proneness Filtering with PMD,"Fault-prone module detection is important for assurance of software quality. We have proposed a novel approach for detecting fault-prone modules using a spam filtering technique, named Fault-proneness filtering.
In order to show the effectiveness of fault-proneness filtering, we conducted a comparative study with a static code analysis tool, PMD. In the study, fault-proneness filtering obtains a higher F1 score than PMD.",2008,0, 3013,On Reliability Analysis of Open Source Software - FEDORA,"Reliability analyses of software systems often focus only on the number of faults reported against the software. Using a broader set of metrics, such as problem resolution times and field software usage levels, can provide a more comprehensive view of the product. Some of these metrics are more readily available for open source products. We analyzed a suite of FEDORA releases and obtained some interesting findings. For example, we show that traditional reliability models may be used to predict problem rates across releases. We also show that security related reports tend to have a different profile than non-security related problem reporting and repair.",2008,0, 3014,AADSS: Agent-Based Adaptive Dynamic Semantic Web Service Selection,"At present, Web Services invocation follows the “bind-once-invoke-many-times” pattern, while the Web Services run in a failure prone environment. Therefore, Web Services are not actually used on a large scale in the real business world, which calls for high robustness and trustworthiness. To solve this problem we propose an Agent-based Adaptive Dynamic Semantic Web Service Selection framework, called AADSS, in which Web Service consumers can dynamically select the “right” service, and adaptively change the bound services according to real-time conditions such as QoS property values, service reputation ranking scores and so on. We also describe a four-phase semantic web service selection strategy for this dynamic selection.",2008,0, 3015,A Practical Monitoring Framework for ESB-Based Services,"Services in service-oriented computing (SOC) are often black-box since they are typically developed by 3rd party developers and deployed only with their interface specifications. Therefore, internal details of services and implementation details of service components are not readily available. Also, services in SOC are highly evolvable since new services can be registered into repositories, and existing services may be modified for their logic and interfaces, or they may suddenly disappear. As the first and most essential step for service management, service monitoring is to acquire useful data and information about the services and to assess various quality of service (QoS). Being able to monitor services is a strong prerequisite to effective service management. In this paper, we present a service monitoring framework for ESB-based services, as a sub-system of our Open Service Management Framework (OSMaF). We first define the criteria for designing a QoS monitoring framework, and present the architecture and key components of the framework. Then, we illustrate the key techniques used to efficiently monitor services and compute QoS metrics. Finally, an implementation of the framework is presented to show its applicability.",2008,0, 3016,Challenges in Scaling Software-Based Self-Testing to Multithreaded Chip Multiprocessors,"Functional software-based self-testing (SBST) has been recently studied by leading academic research groups and applied by major microprocessor manufacturers as a complement to other classic structural testing techniques for microprocessors and processor-based SoCs.
Is the SBST paradigm scalable to testing multithreaded chip multiprocessors (CMPs), and can it effectively detect faults not only in the functional components but also in the thread-specific and core interoperability logic? We study the challenges in scaling existing software-based self-test capital (uniprocessor self-test programs and self-test generation techniques) to real, multithreaded CMPs, like Sun's OpenSPARC T1 and T2. Since this type of CMP is built around well-studied microprocessor cores of mature architecture (like SPARC v9 in the OpenSPARC case), tailoring, enhancing and scheduling of existing uniprocessor self-test programs can be an effective methodology for software-based self-test of CMPs.",2008,0, 3017,Using static analysis to improve communications infrastructure,"Static analysis is a promising technique for improving the safety and reliability of software used in avionics infrastructure. Source code analyzers are effective at locating a significant class of defects that are not detected by compilers during standard builds and often go undetected during runtime testing as well. Related to bug finders are a number of other static code improvement tasks, including automated unit test generation, programmer and software metrics tracking, and coding standards enforcement. However, adoption of these tools by everyday avionics software developers has been low. This paper will discuss the major barriers to adoption of these important tools and provide advice regarding how they can be effectively promulgated across the enterprise. Case studies of popular open source applications will be provided for illustration.",2008,0, 3018,HyperMIP: Hypervisor Controlled Mobile IP for Virtual Machine Live Migration across Networks,"Live migration provides a transparent load-balancing and fault-tolerance mechanism for applications. When a Virtual Machine migrates among hosts residing in two networks, the network attachment point of the Virtual Machine is also changed, thus the Virtual Machine will suffer from an IP mobility problem after migration. This paper proposes an approach called Hypervisor controlled Mobile IP to support live migration of Virtual Machines across networks, which enables virtual machine live migration over distributed computing resources. Since the Hypervisor is capable of predicting the exact time and destination host of a Virtual Machine migration, our approach not only can improve migration performance but also reduce the network restoration latency. Some comprehensive experiments have been conducted and the results show that HyperMIP brings negligible overhead to the network performance of Virtual Machines. The network restoration time of HyperMIP supported migration is only about 3 seconds. HyperMIP is a promising essential component for providing reliability and fault tolerance for network applications running in Virtual Machines.",2008,0, 3019,Formal Support for Quantitative Analysis of Residual Risks in Safety-Critical Systems,"With the increasing complexity of software and electronics in safety-critical systems, new challenges have emerged to lower costs and decrease time-to-market while preserving high assurance. During the safety assessment process, the goal is to minimize the risk and, in particular, the impact of probable faults on system-level safety. Every potential fault must be identified and analysed in order to determine which faults are most important to focus on.
In this paper, we extend our earlier work on formal qualitative analysis with a quantitative analysis of fault tolerance. Our analysis is based on design models of the system under construction. It further builds on formal models of faults that have been extended with estimated occurrence probabilities, allowing us to analyse the system-level failure probability. This is done with the help of the probabilistic model checker PRISM. The extension provides an improvement in the costly process of certification, in which all foreseen faults have to be evaluated with respect to their impact on safety and reliability. We demonstrate our approach using an application from the avionic industry: an Altitude Meter System.",2008,0, 3020,An Interaction-Based Test Sequence Generation Approach for Testing Web Applications,"Web applications often use dynamic pages that interact with each other by accessing shared objects, e.g., session objects. Interactions between dynamic pages need to be carefully tested, as they may give rise to subtle faults that cannot be detected by testing individual pages in isolation. Since it is impractical to test all possible interactions, a trade-off must be made between test coverage (in terms of number of interactions covered in the tests) and test effort. In this paper, we present a test sequence generation approach to cover all pairwise interactions, i.e., interactions between any two pages. Intuitively, if a page P could reach another page P', there must exist a test sequence in which both P and P' are visited in the given order. We report a test sequence generation algorithm and two case studies in which test sequences are generated to achieve pairwise interaction coverage for two Web applications. The empirical results indicate that our approach achieves good code coverage and is effective for detecting interaction faults in the subject applications.",2008,0, 3021,Detection and Diagnosis of Recurrent Faults in Software Systems by Invariant Analysis,"A correctly functioning enterprise-software system exhibits long-term, stable correlations between many of its monitoring metrics. Some of these correlations no longer hold when there is an error in the system, potentially enabling error detection and fault diagnosis. However, existing approaches are inefficient, requiring a large number of metrics to be monitored and ignoring the relative discriminative properties of different metric correlations. In enterprise-software systems, similar faults tend to reoccur. It is therefore possible to significantly improve existing correlation-analysis approaches by learning the effects of common recurrent faults on correlations. We present methods to determine the most significant correlations to track for efficient error detection, and the correlations that contribute the most to diagnosis accuracy. We apply machine learning to identify the relevant correlations, removing the need for manually configured correlation thresholds, as used in the prior approaches. We validate our work on a multi-tier enterprise-software system. We are able to detect and correctly diagnose 8 of 10 injected faults to within three possible causes, and to within two in 7 out of 8 cases. This compares favourably with the existing approaches whose diagnosis accuracy is 3 out of 10 to within 3 possible causes.
We achieve a precision of at least 95%.",2008,0, 3022,A Self-Managing Brokerage Model for Quality Assurance in Service-Oriented Systems,"Service-oriented system quality is not just a function of the quality of a provided service, but also of the interdependencies between services, the resource constraints of the runtime environment and network outages. This makes it difficult to anticipate how the consequences of these factors might influence system behaviour, which in turn makes it difficult to specify the right system environment in advance. Current quality management schemes for service-oriented systems are inadequate for assuring system quality as they rely largely on static service properties to predict system quality. Secondly, they offer the consumer only limited control over the quality of service. This paper describes a self-managing, consumer-centred approach based on a brokerage architecture that allows different monitoring, negotiation, forecasting and provider reputation schemes to be integrated into a runtime quality assurance framework for service-oriented systems. We illustrate our solution with a small service-oriented application.",2008,0, 3023,A Compression Framework for Personal Image Used in Mobile RFID System,"Radio frequency identification (RFID), a novel automatic identification technology, has been widely used in modern society. To improve the security of the RFID card, a novel idea of inserting a personal image into the card and restoring it rapidly on the mobile device is proposed in this paper. A compression framework based on facial features is proposed to solve the key problem caused by the memory limitation of the RFID card and the resource limitations of the mobile system. In this framework, the facial region is detected and extracted at first, and then compressed with a high compression ratio using the fast lifting wavelet transform. After that, the compressed data is encoded and saved in the card. Experimental results indicate that a higher compression ratio, better image quality and rapid decompression can be achieved by using this framework.",2008,0, 3024,Generating Test Cases of Object-Oriented Software Based on EDPN and Its Mutant,"In object-oriented software testing, a class is considered to be a basic unit of testing. The state of the objects may cause faults that cannot be easily revealed with traditional testing techniques. In this paper, we propose a new technique for class testing by using event-driven Petri nets (EDPN), which is an extended version of Petri Nets, one of the techniques having the ability to analyze and test the behavior of the interaction between data members and member functions in a class. We demonstrate how to specify a class specification by EDPNs and a given fault model by a mutant of EDPNs, which is a theoretical model to describe the dynamic behaviors of EDPNs. A test case generation technique is presented to detect the given faults by analyzing the differences of test scenarios in the dynamic behaviors of both EDPNs. The presented algorithm can select a test case that detects errors described in the fault models.",2008,0, 3025,Partheno-Genetic Algorithm for Test Instruction Generation,"Test case generation is the classic method for finding software defects, and test instruction generation is one of its typical applications in embedded chipset systems. In this paper, the optimized partheno-genetic algorithm (PGA) is proposed after a 0-1 integer programming model is set up for the instruction-set test case generation problem.
Based on simulation, the proposed model and algorithm achieve convincing computational performance, in most cases 50%~70%; instruction-set test cases with better error-detecting ability obtained using this algorithm can save up to 3 seconds of execution time. Besides, it also avoids the problem of using the complicated crossover and mutation operations that traditional genetic algorithms have.",2008,0, 3026,A Markov Decision Approach to Optimize Testing Profile in Software Testing,"In this paper, we demonstrate an approach to optimize software testing and minimize the expected cost for given software parameters of concern. Taking the software testing process as a Markov decision process, a Markov decision model of software testing is proposed, and by using a learning strategy based on the Cross-Entropy method to optimize the software testing, we obtain the optimal testing profile. Simulation results show that this learning strategy significantly reduces the expected cost compared with random testing; moreover, it is more feasible and significantly better than random testing at reducing the number of test cases required to detect and reveal a certain number of software defects.",2008,0, 3027,Analytical Modeling Approach to Detect Magnet Defects in Permanent-Magnet Brushless Motors,"The paper presents a novel approach to detect magnet faults such as local demagnetization in brushless permanent-magnet motors. We have developed a new form of analytical model that solves the Laplacian/quasi-Poissonian field equations in the machine's air-gap and magnet element regions. We verified the model by using finite-element software in which demagnetization faults were simulated and electromotive force was calculated as a function of rotor position. We then introduced the numerical data of electromotive force into a gradient-based algorithm that uses the analytical model to locate demagnetized regions in the magnet as simulated in the finite-element package. The fast and accurate convergence of the algorithm makes the model useful in magnet fault diagnostics.",2008,0, 3028,Vehicle Design Validation via Remote Vehicle Diagnosis: A feasibility study on battery management system,"In recent years, passenger vehicle product development faces great challenges to maintain high vehicle quality due to the proliferation of Electronics, Control and Software (ECS) features and the resultant system complexity. Quickly detecting and trouble-shooting faults of integrated vehicle systems during the validation stage is a key to enhancing vehicle quality. In this paper, we present a feasibility study of Vehicle Design Validation via Remote Vehicle Diagnosis (VDV-via-RVD) and its application in the validation of a vehicle battery management system. After the discussion of the advantages and challenges of VDV-via-RVD, some preliminary experimental results are presented to demonstrate the concept.",2008,0, 3029,A connected path approach for staff detection on a music score,"The preservation of many music works produced in the past entails their digitalization and consequent accessibility in an easy-to-manage digital format. Carrying out this task manually is very time consuming and error prone. While optical music recognition systems usually perform well on printed scores, the processing of handwritten musical scores by computers remains far from ideal. One of the fundamental stages to carry out this task is the staff line detection.
In this paper a new method for the automatic detection of music staff lines based on a connected path approach is presented. Lines affected by curvature, discontinuities, and inclination are robustly detected. Experimental results show that the proposed technique consistently outperforms well-established algorithms.",2008,0, 3030,Error resilient macroblock rate control for H.264/AVC video coding,"In this paper, an error resilient rate control scheme for the H.264/AVC standard is proposed. This scheme differs from traditional rate control schemes in that macroblock mode decisions are not made only to minimize their rate-distortion cost, but also take into account that the bitstream will have to be transmitted through an error-prone network. Since channel errors will probably occur, error propagation due to predictive coding should be mitigated by adequate Intra coding refreshes. The proposed scheme works by comparing the rate-distortion cost of coding a macroblock in Intra and Inter modes: if the cost of Intra coding is only slightly larger than the cost of Inter coding, the coding mode is changed to Intra, thus reducing error propagation. Additionally, cyclic Intra refresh is also applied to guarantee that all macroblocks are eventually refreshed. The proposed scheme outperforms the H.264/AVC reference software, for typical test sequences, for error-free transmission and several packet loss rates.",2008,0, 3031,Determination of a failure probability prognosis based on PD - diagnostics in gis,"In complex high voltage components local insulation defects can cause partial discharges (PD). Especially in highly stressed gas insulated switchgear (GIS) these PD affected defects can lead to major blackouts. Today each PD activity on important insulation components causes an intervention of an expert, who has to identify and analyze the PD source and has to decide: Are the modules concerned to be switched off or can they stay in service? To reduce these cost and time intensive expert interventions, this contribution specifies a proposal which combines an automated PD defect identification procedure with a quantifiable diagnosis confidence. A risk assessment procedure is described, which is based on measurements of phase resolved PD pulse sequence data and a subsequent PD source identification. A defect specific risk is determined and then integrated within a failure probability software using the Farmer diagram. The risks of failure are classified into three levels. The uncertainty of the PD diagnosis is assessed by applying different PD sources and comparisons with other evaluation concepts as well as considering system theoretical investigations. It is shown that the PD defect specific risk is the key aspect of this approach which depends on the so called criticality range and the main PD impact aspects PD location, time dependency, defect property and (over) voltage dependency.",2008,0, 3032,Comparative study of cognitive complexity measures,"Complexity metrics are used to predict critical information about reliability and maintainability of software systems. Cognitive complexity measure based on cognitive informatics, plays an important role in understanding the fundamental characteristics of software, therefore directly affects the understandability and maintainability of software systems. 
In this paper, we compared available cognitive complexity measures and evaluated the cognitive weight complexity measure in terms of Weyuker's properties.",2008,0, 3033,Learning software engineering principles using open source software,"Traditional lectures espousing software engineering principles hardly engage students' attention due to the fact that students often view software engineering principles as mere academic concepts without a clear understanding of how they can be used in practice. Some of the issues that contribute to this perception include lack of experience in writing and understanding large programs, and lack of opportunities for inspecting and maintaining code written by others. To address these issues, we have worked on a project whose overarching goal is to teach students a subset of basic software engineering principles using source code exploration as the primary mechanism. We attempted to espouse the following software engineering principles and concepts: role of coding conventions and coding style, programming by intention to develop readable and maintainable code, assessing code quality using software metrics, refactoring, and reverse engineering to recover design elements. Student teams have examined the following open source Java code bases: ImageJ, Apache Derby, Apache Lucene, Hibernate, and JUnit. We have used Eclipse IDE and relevant plug-ins in this project.",2008,0, 3034,Research and Assessment of the Reliability of a Fault Tolerant Model Using AADL,"In order to solve the problem of assessing the reliability of a fault tolerant system, the work in this paper analyzes a subsystem of ATC (air traffic control system) and uses AADL (architecture analysis and design language) to build its model. After describing the various software and hardware error states as well as error propagation from hardware to software, the work builds the AADL error model and converts it to a GSPN (general stochastic Petri net). Using current Petri net technology to assess the reliability of the fault tolerant system, with ATC as the background, this paper obtains good experimental results.",2008,0, 3035,Progress and Quality Modeling of Requirements Analysis Based on Chaos,"It is important and difficult for us to know the progress and quality of requirements analysis. We introduce chaos and software requirements complexity to the description of requirements decomposing, and get a method which can help us to evaluate the progress and quality. The model shows that the requirements decomposing procedure has its own regular pattern which we can describe in an equation and track in a trajectory. The requirements analysis process of a software system can be taken as normal if its trajectory coincides with the model. We may be able to predict the time we need to finish all requirements decomposition in advance based on the model. We apply the method in the requirements analysis of a home phone service management system, and the initial results show that the method is useful in the evaluation of requirements decomposition.",2008,0,3678 3036,Design Diverse-Multiple Version Connector: A Fault Tolerant Component Based Architecture,"Component based software engineering (CBSE) is a new archetype for constructing systems by using reusable components “as is”. To achieve high dependability in such systems, there must be appropriate fault tolerance mechanisms in them at the architectural level.
This paper presents a fault tolerant component based architecture that relies on the C2 architectural style and is based on design diverse and exception handling fault tolerance strategies. The proposed fault tolerant component architecture employs special-purpose connectors called design diverse-multiple version connectors (DD-MVC). These connectors allow design diverse n-versions of components to run in parallel. The proposed architecture has a fault tolerant connector (FTC), which detects and tolerates different kinds of errors. The proposed architecture adjusts the tradeoff between dependability and efficiency at run time and exhibits the ability to tolerate anticipated and unanticipated faults effectively. The applicability of the proposed architecture is demonstrated with a case study.",2008,0, 3037,Towards High-Level Parallel Programming Models for Multicore Systems,"Parallel programming represents the next turning point in how software engineers write software. Multicore processors can be found today in the heart of supercomputers, desktop computers and laptops. Consequently, applications will increasingly need to be parallelized to fully exploit the throughput gains of multicore processors now becoming available. Unfortunately, writing parallel code is more complex than writing serial code. This is where the threading building blocks (TBB) approach enters the parallel computing picture. TBB helps developers create multithreaded applications more easily by using high-level abstractions to hide much of the complexity of parallel programming. We study the programmability and performance of TBB by evaluating several practical applications. The results show very promising performance but parallel programming with TBB is still tedious and error-prone.",2008,0, 3038,Assessing Quality of Policy Properties in Verification of Access Control Policies,"Access control policies are often specified in declarative languages. In this paper, we propose a novel approach, called mutation verification, to assess the quality of properties specified for a policy and, in doing so, the quality of the verification itself. In our approach, given a policy and a set of properties, we first mutate the policy to generate various mutant policies, each with a single seeded fault. We then verify whether the properties hold for each mutant policy. If the properties still hold for a given mutant policy, then the quality of these properties is determined to be insufficient in guarding against the seeded fault, indicating that more properties are needed to augment the existing set of properties to provide higher confidence of policy correctness. We have implemented Mutaver, a mutation verification tool for XACML, and applied it to policies and properties from a real-world software system.",2008,0, 3039,An Ant Colony Optimization Algorithm Based on the Nearest Neighbor Node Choosing Rules and the Crossover Operator,"The ant colony optimization algorithm (ACO) has a powerful capacity to find solutions to combinatorial optimization problems, but it still has two defects, namely, it is slow to converge and is prone to falling into a local optimum. To address these deficiencies, in this study we propose an improved ACO based on the basic ACO algorithm, with a well-distributed initialization, nearest neighbor node choosing rules, and a crossover operator.
In the initialization of the algorithm, the convergence speed of the ACO is increased by distributing the ant colony evenly among all the cities, adopting the nearest neighbor node choosing rule when each ant chooses the next city, and performing crossover computation among the better individual ants at the end of each cycle. The experimental results indicate that the ACO proposed in this study is valid.",2008,0, 3040,A Universal Fault Diagnostic Expert System Based on Bayesian Network,"Fault diagnosis is an area of great concern for any industry seeking to reduce maintenance costs and increase profitability at the same time. But most research tends to rely on sensor data and equipment structure, which is expensive because each category of equipment differs from the others. Thus developing a universal system remains a key challenge to be solved. A universal expert system is developed in this paper making full use of experts' knowledge to diagnose the possible root causes and the corresponding probabilities for maintenance decision making support. A Bayesian network was chosen as the inference engine of the system through raw data analysis. An improved causal relationship questionnaire and the probability scale method were applied to construct the Bayesian network. The system has been applied to the production line of a chipset factory and the results show that the system can support decision making for fault diagnosis promptly and correctly.",2008,0, 3041,Probability-Based Binary Particle Swarm Optimization Algorithm and Its Application to WFGD Control,"Sulfur dioxide is an air pollutant and an acid rain precursor. Coal-fired power generating plants are major sources of sulfur dioxide, so the limestone-gypsum wet flue gas desulphurization (WFGD) technology has been widely used in thermal power plants in China nowadays to reduce the emission of sulfur dioxide and protect the environment. The absorber slurry pH value control is very important for the limestone-gypsum WFGD technique since it directly determines the desulphurization performance and the quality of the product. However, it is hard for the traditional PID controller to achieve satisfactory adjustment performance because of the complexity of the absorber slurry pH control. To tackle this problem, a novel probability-based binary particle swarm optimization (PBPSO) is proposed for tuning the PID parameters. The simulation results show that PBPSO is valid and outperforms the traditional binary PSO algorithm in terms of easy implementation and global optimal search ability. The results also show that, by constructing a proper fitness function for PID tuning, the proposed PBPSO algorithm can find the optimal PID parameters and achieve the expected control performance with the PID controller.",2008,0, 3042,Conflict Resolution within Multi-agent System in Collaborative Design,"A critical challenge to creating effective agent-based systems is allowing them to operate effectively when the design environment is complex, dynamic, and error-prone. Much research has been done on multi-agent systems (MAS) in the CSCD field, especially on conflict handling, which is not yet well-addressed due to its sophistication. Conflict resolution plays a central role in collaborative design. In this paper, a prototype system of conflict resolution is presented, adopting the solution of deadlock problems in operating systems for conflict prevention and classifying the conflicts to match solution strategies according to their features.
A video conference negotiation strategy based on the MICAD (Multimedia Intelligent CAD) platform, which was researched and implemented by our team, is adopted to deal with new conflicts for which no matching strategies exist.",2008,0, 3043,Application of Entropy-Based Markov Chains Data Fusion Technique in Fault Diagnosis,"This paper proposes an entropy-based Markov chain (EMC) fusion technique to solve the problem of incomplete sample sets in the fault diagnosis field. Firstly, the concept of the probability Petri net is defined. It can calculate the fault occurrence probability from the incidence matrix based on complementary information. Secondly, the probability Petri net diagnostic model is designed from diagnostic rules obtained by the Skowron default rule generation method after the sample set is reduced by rough set theory. In order to simplify the framework of the diagnostic model, the Petri net model is designed in a distributed form. Finally, based on the diagnosis of the distributed diagnostic model, the EMC technique is used to obtain a consensus output if the places that represent faults in the model hold several tokens. The diagnostic result is the consensus output with the maximum posterior probability after normalization. The design is described with an example of rotating machinery fault diagnosis, and its validity is demonstrated with a test sample set.",2008,0, 3044,An Immune Algorithm-Based Atmospheric Quality Assessment Model and Its Applications,"Atmospheric quality assessment is an important research subject. Focusing on the evaluation of the quality of the atmospheric environment, a model based on the immune algorithm (IM) is proposed in this paper. The model has the characteristics of a clear principle and a physical explanation. Moreover, simplicity is an important advantage of the method. Experimental results show that the proposed model is effective and feasible for assessing atmospheric quality. It could provide a new reference basis and approach in the environmental field. Therefore, it has great potential in the field of atmospheric quality assessment.",2008,0, 3045,Mining Change Patterns in AspectJ Software Evolution,"Understanding software change patterns during evolution is important for researchers concerned with alleviating change impacts. It can provide insight for understanding software evolution, predicting future changes, and developing new refactoring algorithms. However, most of the current research focuses on procedural programs like C or object-oriented programs like Java; little effort has been made for aspect-oriented software. In this paper, we propose an approach for mining change patterns in AspectJ software evolution. Our approach first decomposes the software changes into a set of atomic change representations, then employs the apriori data mining algorithm to generate the most frequent itemsets. The patterns we found reveal multiple properties of software changes, including their kind, frequency, and correlation with other changes. In our empirical evaluation on several non-trivial AspectJ benchmarks, we demonstrate that those change patterns can be used as a measurement aid and for fault prediction in AspectJ software evolution analysis.",2008,0, 3046,Evaluation of Requirements Analysis Progress Based on Chaos,"Reliable results of requirements engineering are important to the success of a software project.
Based on the research of the chaotic behavior of requirements analysis, we set up a theoretical model which can be used to guide us in analyzing requirements and evaluating the quality of the decomposition process. The model shows that the analysis trajectory may consist of three segments, i.e., an initial segment, a middle segment, and a last segment, so long as the Requirements Decomposition Rate Parameter is in its stable region. The decomposing process can be taken as normal if all segments or the last two segments exist in the trajectory. We may be able to predict the time we need to finish all requirements decomposition in advance. We apply the method in the requirements analysis of a home phone service management system, and the initial results show that the method is useful in the evaluation of requirements decomposition.",2008,0, 3047,Domain Knowledge Consistency Checking for Ontology-Based Requirement Engineering,"Domain knowledge is one of the crucial factors for achieving high-quality requirements elicitation. In ontology-based requirements engineering, ontology is used to express domain knowledge, so that the inconsistency of domain knowledge can be found by semantic checking. This paper proposes a new algorithm based on the Tableaux algorithm for detecting and resolving ontology inconsistencies. All kinds of consistency rules of domain knowledge are formally defined at first, and then the semantic checking algorithm is presented to resolve these inconsistencies. Finally, a case study is given to show the process and validate the usability of the algorithm.",2008,0, 3048,Some Metrics for Accessing Quality of Product Line Architecture,"Product line architecture is the most important core asset of a software product line. vADL, a product line architecture description language, can be used for specifying product line architecture, and also provides enough information for measuring the quality of product line architecture. In this paper, some new metrics are provided to assess the similarity, variability, reusability, and complexity of product line architecture. The main feature of our approach is to assess the quality of product line architecture by analyzing its formal vADL specification, and therefore the process of metric computation can be automated completely.",2008,0, 3049,Considering the Dependency of Fault Detection and Correction in Software Reliability Modeling,"Most existing software reliability growth models (SRGMs) focused on the fault detection process, while the fault correction process was ignored by assuming that the detected faults can be removed immediately and perfectly. However, these assumptions are not realistic. The fault correction process is a critical part in software testing. In this paper, we studied the dependency of the fault detection and correction processes in view of the number of faults. The ratio of corrected fault number to detected fault number is used to describe the dependency of the two processes, which appears S-shaped. Therefore, we adopt the logistic function to represent the ratio function. Based on this function, both fault correction and detection processes are modeled. The proposed models are evaluated by a data set of software testing. The experimental results show that the new models fit the data set of fault detection and correction processes very well.",2008,0, 3050,Fault Tree Based Prediction of Software Systems Safety,"In our modern world, software controls much of the hardware (equipment, electronics, and instruments) around us.
Sometimes hardware failure can lead to a loss of human life. When software controls, operates, or interacts with such hardware, software safety becomes a vital concern. To assure the safety of software controlling system, prediction of software safety should be done at the beginning of systempsilas design. The paper focused on safety prediction using the key node property of fault trees. This metric use parameter """"s"""" related to the fault tree to predict the safety of the software control systems. This metric allow designers to measure and the safety of software systems early in the design process. An applied example is shown in the paper.",2008,0, 3051,The SRGM Framework of Integrated Fault Detection Process and Correction Process,"In this paper, the hypothesis that the detected faults will be immediately removed is revised. An integration of fault detection process and correction process is considered. According to the statistic of the faults, the two new frameworks, which are the SRGM framework including repeated faults (CRDW) and the SRGM framework excluding repeated faults (CNRDW), are presented. The above two frameworks, not only can predict the number of cumulative detected faults, but also can predict the number of corrected faults. In this paper, as an example, two reliability models are gained from CNRDW with different detection process and different correction process. The fitting capability and prediction capability of the two models are evaluated by an open software failure data set. The experimental results show that the presented models have a fairly accurate fitting capability and prediction capability compared with other software reliability growth models.",2008,0, 3052,QoS-Satisfied Pathover Scheme in FMIPv6 Environment,"Using the application of bluck data transfer, we investigate the performance of QoS-satisfied pathover for transport layer mobility scheme such as mSCTP in FMIPv6 envirmonment. We find that existing scheme has some defects in aspect of pathover and throughput. Based on this, we make a potential change to mSCTP by adding QoS-Measurement-Chunk, which is used to take into account information about wireless link condition in reselection/handover process of FMIPv6 network, we proposed a scheme with an algorithm named congestion-oriented pathover (COPO) to detect congestion of the primary path using back-to-back RTTs and adapt change of the wireless link parameters. A demonstrate using simulation is provided showing how the proposed scheme provides better performance to pathover and throughput.",2008,0, 3053,Cluster-Based Error Messages Detecting and Processing for Wireless Sensor Networks,"Wireless sensor networks (WSNs) have emerged as a new technology about acquiring and processing messages for a variety of applications. Faults occurring to sensor nodes are common due to lack of power or environmental interference. In order to guarantee the network reliability of service, it is necessary for the WSN to be able to detect and processes the faults and take appropriate actions. In this paper, we propose a novel approach to distinguish and filter the error messages for cluter-based WSNs. 
The simulation results show that the proposed method not only can avoid frequent re-clustering but also can save the energy of sensor nodes, thus prolong the lifetime of sensor network.",2008,0, 3054,Subjective Evaluation of Sound Quality for Mobile Spatial Digital Audio,"In the past decades, technical developments have enabled the delivery of sophisticated mobile spatial audio signals to consumers, over links that range very widely in quality, requiring decisions to be made about the tradeoffs between different aspects of audio quality. It is therefore important to determine the most important spatial quality attributes of reproduced sound fields and to find ways of predicting perceived sound quality on the basis of objective measurements.This paper first briefly reviews several subjective quality measures developed for streaming realtime audio over mobile network communications and in digital audio broadcasts. Then a experimental design on the application of the subjective listening test for mobile spatial audio is described. Finally, the conclusion is analysed and some future research directions are identified.",2008,0, 3055,Reliability and Safety Modeling of Fault Tolerant Control System,"This paper proposes a generalized approach of reliability and safety modeling for fault tolerant control system based on Markov model. The reliability and safety function, computed from the transition probability of the Markov process, provides a proper quantitative measure of the fault tolerant control system because it incorporates the deadline, failure detection and fault isolation, permanent and correlated fault. State transition diagram was established based on the state transition of the system. State transition equation could be obtained by state transition diagram. Different state probability diagrams were acquired with different parameters of failure rate, recovery rate from transient fault, failure detection rate and fault isolation rate.",2008,0, 3056,A Fast Audio Digital Watermark Method Based on Counter-Propagation Neural Networks,"In this thesis, we present a novel audio digital watermark method based on counter-propagation Neural Networks. After dealing with the audio by discrete wavelet transform, we select the important coefficients which are ready to be trained in the neural networks. By making use of the capabilities of memorization and fault tolerance in CPN, watermark is memorized in the nerve cells of CPN. In addition, we adopt a kind of architecture with a adaptive number of parallel CPN to treat with each audio frame and the corresponding watermark bit. Comparing with other traditional methods by using CPN, it was largely improve the efficiency for watermark embedding and correctness for extracting, namely the speed of whole algorithm. The extensive experimental results show that, we can detect the watermark exactly under most of attacks. This method efficaciously tradeoff both the robustness and inaudibility of the audio digital watermark.",2008,0, 3057,Bayesian Network to Construct Interoperability Model of Open Source Software,"There are few topics more heated than the discussion surrounding open source software versus commercial and proprietary software. They are not only in an opposite relation, but also looking for cooperation. Moreover, there are many unresolved problems between them, in which the most typical one is the interoperability. There is a real need for a widely adopted, standardized method to assess the interoperability of open source software. 
However, few groups or researchers have given the guide up to now. This paper proposed Bayesian Network to construct the structure of interoperability and then learn the condition probability table of the structure. The structure and its condition probability table constitute the interoperability model. The model can be used not only to help user evaluate the interoperability of open source software, but also to guide the software developer to improve the quality of open source software more efficiently. An application showed how to use the model, and the result proved the validity of this model.",2008,0, 3058,The Portable PCB Fault Detector Based on ARM and Magnetic Image,"Traditional fault detector technique can hardly adapt to the modern electronic technique. This paper introduces how to set up military electronic equipmentspsila portable fault detector based on magnetic image, which can fast detect the electronic equipment in the scene. The detector has made use of ARM and uC/OS-II as the developing platform, taking MiniGUI as the figure interface.",2008,0, 3059,Damaged Mechanism Research of RS232 Interface under Electromagnetic Pulse,"RS232 interface executes the role of Transportation and Communication, which has became the important interface between MCU of embedded system and peripheral equipment. Because RS232 mainly work in bottom of communication protocol, so it is important to protect the infrastructure of RS232. In test, pulse double electromagnetic is pulled into RS232 data transmission lines by coupling clamp, simulated differential-mode pulse voltage, basing on the test data, it will get the damaged mechanism of RS232 interface, meanwhile presenting the Logistic model of injected voltage pulse and probability of damage to the port interface, and getting the performance evaluation of RS232 interface, moreover estimating the voltage pulse range of upper and lower bounds at the port in the normal and damage state.",2008,0, 3060,A Novel Hybrid Approach of KPCA and SVM for Crop Quality Classification,"Quality evaluation and classification is very important for crop market price determination. A lot of methods have been applied in the field of quality classification including principal component analysis (PCA) and artificial neural network (ANN) etc. The use of ANN has been shown to be a cost-effective technique. But their training is featured with some drawbacks such as small sample effect, black box effect and prone to overfitting. This paper proposes a novel hybrid approach of kernel principal component analysis (KPCA) with support vector machine (SVM) for developing the accuracy of quality classification. The tobacco quality data is evaluated in the experiment. Traditional PCA-SVM, SVM and ANN are investigated as comparison basis. The experimental results show that the proposed approach can achieve better performance in crop quality classification.",2008,0, 3061,Research on Grouping Strategy in Series Course Projects of Software Engineering,"In order to address such problems as """"random grouping"""" and """"low consistency"""" when our students are conducting their series course projects of software engineering, """"stability factor"""" was put forward to evaluate the stability of a group and to assess the collaboration efficiency of the members in it. A Web-based MIS also was developed to help teachers do real-time supervision during the course projects. 
When complying with this framework, the quality of practical teaching can be assured, learning outcome will be enhanced, and the scientific encouraging policy will make a positive influence upon cultivating studentspsila team-working capability as well as their collaboration consciousness.",2008,0, 3062,Color Reproduction Quality Metric on Printing Images Based on the S-CIELAB Model,"The just-perceived color difference in a pair of printing images, the reproduction and its original, has been confirmed, subjectively by a paired-comparison psychological experiment and objectively by the S-CIELAB color quality metric. For one color image, a total number of 53 pairs of test images, simulating a number of varieties in C, M, Y, and K ink amounts, were produced, and determined their corresponding color difference recognizing probabilities by visual paired-comparison. Also, the image color difference Delta Es values, presented in the S-CIELAB model, for each pairs of images were calculated and correlated with their color difference recognizing probability. The results showed that the just-perceived image color difference Delta Es, when 0.9 color difference recognizing probability being considered as the just-perceived level, was about 1.4 Delta Eab units for the experiment image, being the image color fidelity threshold parameter.",2008,0, 3063,Methodology of the Correct Plate-Making to Keep Consistence of Tone Reproduction,"The process of offset plate copy is controlled through the correct reproduction at the highest and the lowest tone. We propose that we should control the whole tone, and we need the correct control method. Based on the offset control strap, this paper proposes a new detected method of plate resolution and analytic method of the correct copy range of plate. Using these methods and through the experiment, we analyze the resolution and the correct copy range of three types of offset plates. By working in the correct copy range, we can guarantee the consistence of tone reproduction of different types of plates. This is the precondition for color consistence in color printing and convenient to adjust the press.",2008,0, 3064,Transparent and Autonomic Rollback-Recovery in Cluster Systems,"Cluster systems provide an excellent environment to run computation hungry applications. However, due to being created using commodity components they are prone to failures. To overcome these failures we propose to use rollback-recovery, which consists of the checkpointing and recovery facilities. Checkpointing facilities have been the focus of many previous studies; however, the recovery facilities have been overlooked. This paper focuses on the requirements, concept and architecture of recovery facilities. The synthesized fault tolerant system was implemented in the GENESIS system and evaluated. The results show that the synthesized system is efficient and scalable.",2008,0, 3065,A Case Retrieval Method for Knowledge-Based Software Process Tailoring Using Structural Similarity,"Reuse of the software development process and its knowledge and experiences is a critical factor for the success of the software project. On the other hand, the software development process needs to be tailored to reflect the specific characteristics of the software project. So, if we can retrieve a similar process to a new project, process tailoring will be less costly and less error-prone because the retrieved process can be tailored to the new case with fewer modifications. 
In this paper, we propose the case retrieval method based on structural similarity. The structural similarity is calculated by the degree that process elements in a past case are applicable to a new project. By measuring the structural similarity, the retrieved process is ensured to be tailored to the new case with fewer modifications. We validate the usefulness of our method through the experiments using 30 cases.",2008,0, 3066,A Design Quality Model for Service-Oriented Architecture,"Service-Oriented Architecture (SOA) is emerging as an effective solution to deal with rapid changes in the business environment. To handle fast-paced changes, organizations need to be able to assess the quality of its products prior to implementation. However, literature and industry has yet to explore the techniques for evaluating design quality of SOA artifacts. To address this need, this paper presents a hierarchical quality assessment model for early assessment of SOA system quality. By defining desirable quality attributes and tracing necessary metrics required to measure them, the approach establishes an assessment model for identification of metrics at different abstraction levels. Using the model, design problems can be detected and resolved before they work into the implemented system where they are more difficult to resolve. The model is validated against an empirical study on an existing SOA system to evaluate the quality impact from explicit and implicit changes to its requirements.",2008,0, 3067,SA@Work A Field Study of Software Architecture and Software Quality at Work,"Designing and maintaining a software architecture that strikes the right balance between conflicting quality attributes is a daunting task facing every software architect. In the SA@Work project we have conducted ethnographical field studies of practicing software architects in four Danish software companies to study architectural work in general and architectural techniques in particular. In this paper, we describe observed techniques related to architectural quality as input to the architectural body of knowledge. Second, these techniques are classified according to the quality view classification framework of Garvin. Our analysis shows that techniques for assessing and ensuring quality in software architecture predominately view quality as an intrinsic quality of the architecture itself and less view it as related to business and users. This hints at a need to extend the architectpsilas toolbox and may explain observed mismatches between architectural work and agile processes.",2008,0, 3068,Theoretical Maximum Prediction Accuracy for Analogy-Based Software Cost Estimation,"Software cost estimation is an important area of research in software engineering. Various cost estimation model evaluation criteria (such as MMRE, MdMRE etc.) have been developed for comparing prediction accuracy among cost estimation models. All of these metrics capture the residual difference between the predicted value and the actual value in the dataset, but ignore the importance of the dataset quality. What is more, they implicitly assume the prediction model to be able to predict with up to 100% accuracy at its maximum for a given dataset. Given that these prediction models only provide an estimate based on observed historical data, absolute accuracy cannot be possibly achieved. It is therefore important to realize the theoretical maximum prediction accuracy (TMPA) for the given model with a given dataset. 
In this paper, we first discuss the practical importance of this notion, and propose a novel method for the determination of TMPA in the application of analogy-based software cost estimation. Specifically, we determine the TMPA of analogy using a unique dynamic K-NN approach to simulate and optimize the prediction system. The results of an empirical experiment show that our method is practical and important for researchers seeking to develop improved prediction models, because it offers an alternative for practical comparison between different prediction models.",2008,0, 3069,Experimental Study of Discriminant Method with Application to Fault-Prone Module Detection,"Some techniques have been applied to improving software quality by classifying the software modules into fault-prone or non fault-prone categories. This can help developers focus on some high risk fault-prone modules. In this paper, a distribution-based Bayesian quadratic discriminant analysis (D-BQDA) technique is experimental investigated to identify software fault-prone modules. Experiments with software metrics data from two real projects indicate that this technique can classify software modules into a proper class with a lower misclassification rate and a higher efficiency.",2008,0, 3070,Force Feature Spaces for Visualization and Classification,"Distance-preserving dimension reduction techniques can fail to separate elements of different classes when the neighborhood structure does not carry sufficient class information. We introduce a new visual technique, K-epsilon diagrams, to analyze dataset topological structure and to assess whether intra-class and inter-class neighborhoods can be distinguished. We propose a force feature space data transform that emphasizes similarities between same-class points and enhances class separability. We show that the force feature space transform combined with distance-preserving dimension reduction produces better visualizations than dimension reduction alone. When used for classification, force feature spaces improve performance of K-nearest neighbor classifiers. Furthermore, the quality of force feature space transformations can be assessed using K-epsilon diagrams.",2008,0, 3071,Detecting Inconsistent Values Caused by Interaction Faults Using Automatically Located Implicit Redundancies,"This paper addresses the problem of detecting inconsistent values caused by interaction faults originated from an external system.This type of error occurs when a correctly formatted message that is not corrupted during transmission is generated with a field that contains incorrect data.When traditional schemes cannot be used, one alternative is resorting to receiver-based strategies that employ implicit redundancies - relations between events or data, often identified by a human expert.We propose an approach for detecting inconsistent values using implicit redundancies which are automatically located in examples of communications.We show that, even without adding any redundant information to the communication, the proposed approach can achieve a reasonable error detection coverage in fields where sequential relations exist.Other aspects, such as false alarms and latency, are also evaluated.",2008,0, 3072,Orderly Random Testing for Both Hardware and Software,"Based on random testing, this paper introduces a new concept of orderly random testing for both hardware and software systems. Random testing, having been employed for years, seems to be inefficient for its random selection of test patterns. 
Therefore, a new concept of pre-determined distance among test vectors is proposed in the paper to make it more effective in testing. The idea is based on the fact that the larger the distance between two adjacent test vectors in a test sequence, the more the faults will be detected by the test vectors. Procedure of constructing such a testing sequence is presented in detail. The new approach has shown its remarkable advantage of fitting in with both hardware and software testing. Experimental results and mathematical analysis are also given to evaluate the performances of the novel method.",2008,0, 3073,A Study of Modified Testing-Based Fault Localization Method,"In software development and maintenance, locating faults is generally a complex and time-consuming process. In order to effectively identify the locations of program faults, several approaches have been proposed. Similarity-aware fault localization (SAFL) is a testing-based fault localization method that utilizes testing information to calculate the suspicion probability of each statement. Dicing is also another method that we have used. In this paper, our proposed method focuses on predicates and their influence, instead of on statements in traditional SAFL. In our method, fuzzy theory, matrix calculating, and some probability are used. Our method detects the importance of each predicate and then provides more test data for programmers to analyze the fault locations. Furthermore, programmers will also gain some important information about the program in order to maintain their program accordingly. In order to speed up the efficiency, we also simplified the program. We performed an experimental study for several programs, together with another two testing-based fault localization (TBFL) approaches. These three methods were discussed in terms of different criteria such as line of code and suspicious code coverage. The experimental results show that the proposed method from our study can decrease the number of codes which have more probability of suspicion than real bugs.",2008,0, 3074,Bayesian Inference Approach for Probabilistic Analogy Based Software Maintenance Effort Estimation,"Software maintenance effort estimation is essential for the success of software maintenance process. In the past decades, many methods have been proposed for maintenance effort estimation. However, most existing estimation methods only produce point predictions. Due to the inherent uncertainties and complexities in the maintenance process, the accurate point estimates are often obtained with great difficulties. Therefore some prior studies have been focusing on probabilistic predictions. Analogy Based Estimation (ABE) is one popular point estimation technique. This method is widely accepted due to its conceptual simplicity and empirical competitiveness. However, there is still a lack of probabilistic framework for ABE model. In this study, we first propose a probabilistic framework of ABE (PABE). The predictive PABE is obtained by integrating over its parameter k number of nearest neighbors via Bayesian inference. In addition, PABE is validated on four maintenance datasets with comparisons against other established effort estimation techniques. The promising results show that PABE could largely improve the point estimations of ABE and achieve quality probabilistic predictions.",2008,0, 3075,PRASE: An Approach for Program Reliability Analysis with Soft Errors,"Soft errors are emerging as a new challenge in computer applications. 
Current studies about soft errors mainly focus on the circuit and architecture level. Few works discuss the impact of soft errors on programs. This paper presents a novel approach named PRASE, which can analyze the reliability of a program with the effect of soft errors. Based on the simple probability theory and the corresponding assembly code of a program, we propose two models for analyzing the probabilities about error generation and error propagation. The analytical performance is increased significantly with the help of basic block analysis. The programAs reliability is determined according to its actual execution paths. We propose a factor named PVF (program vulnerability factor), which represents the characteristic of programAs vulnerability in the presence of soft errors. The experimental results show that the reliability of a program has a connection with its structure. Comparing with the traditional fault injection techniques, PRASE has the advantage of faster speed and lower price with more general results.",2008,0, 3076,Training Security Assurance Teams Using Vulnerability Injection,"Writing secure Web applications is a complex task. In fact, a vast majority of Web applications are likely to have security vulnerabilities that can be exploited using simple tools like a common Web browser. This represents a great danger as the attacks may have disastrous consequences to organizations, harming their assets and reputation. To mitigate these vulnerabilities, security code inspections and penetration tests must be conducted by well-trained teams during the development of the application. However, effective code inspections and testing takes time and cost a lot of money, even before any business revenue. Furthermore, software quality assurance teams typically lack the knowledge required to effectively detect security problems. In this paper we propose an approach to quickly and effectively train security assurance teams in the context of web application development. The approach combines a novel vulnerability injection technique with relevant guidance information about the most common security vulnerabilities to provide a realistic training scenario. Our experimental results show that a short training period is sufficient to clearly improve the ability of security assurance teams to detect vulnerabilities during both code inspections and penetration tests.",2008,0, 3077,Considering Fault Correction Lag in Software Reliability Modeling,"The fault correction process is very important in software testing, and it has been considered into some software reliability growth models (SRGMs). In these models, the time-delay functions are often used to describe the dependency of the fault detection and correction processes. In this paper, a more direct variable """"correction lag"""", which is defined as the difference between the detected and corrected fault numbers, is addressed to characterize the dependency of the two processes. We investigate the correction lag and find that it appears Bell-shaped. Therefore, we adopt the Gamma function to describe the correction lag. Based on this function, a new SRGM which includes the fault correction process is proposed. 
And the experimental results show that the new model gives better fit and prediction than other models.",2008,0, 3078,Managing the Life-cycle of Industrial Automation Systems with Product Line Variability Models,"The current trend towards component-based software architectures has also influenced the development of industrial automation systems (IAS). Despite many advances, the life-cycle management of large-scale, component-based IAS still remains a big challenge. The knowledge required for the maintenance and runtime reconfiguration is often tacit and relies on individual stakeholders' capabilities - an error-prone and risky strategy in safety critical environments. This paper presents an approach based on product line variability models to manage the lifecycle of IAS and to automate the maintenance and reconfiguration process. We complement the standard IEC 61499 with a variability modeling approach to support both initial deployment and runtime reconfiguration. We illustrate the automated model-based life-cycle management and maintenance process using sample IAS usage scenarios.",2008,0, 3079,Software Defect Prediction Using Call Graph Based Ranking (CGBR) Framework,"Recent research on static code attribute (SCA) based defect prediction suggests that a performance ceiling has been achieved and this barrier can be exceeded by increasing the information content in data. In this research we propose static call graph based ranking (CGBR) framework, which can be applied to any defect prediction model based on SCA. In this framework, we model both intra module properties and inter module relations. Our results show that defect predictors using CGBR framework can detect the same number of defective modules, while yielding significantly lower false alarm rates. On industrial public data, we also show that using CGBR framework can improve testing efforts by 23%.",2008,0, 3080,Let The Puppets Move! Automated Testbed Generation for Service-oriented Mobile Applications,"There is a growing interest for techniques and tools facilitating the testing of mobile systems. The movement of nodes is one of the relevant factors of context change in ubiquitous systems and a key challenge in the validation of context-aware applications. An approach is proposed to generate a testbed for service-oriented systems that takes into account a mobility model of the nodes of the network in which the accessed services are deployed. This testbed allows a tester to assess off-line the QoS properties of a service under test, by considering possible variations in the response of the interacting services due to node mobility.",2008,0, 3081,Easy Recommendation Based on Probability Model,"Traditional collaborative filtering recommendation system suffers from some significant limitations, such as scalability and sparsity, which cause the speed and quality of recommendation system is unacceptable. To alleviate these problems, this paper proposes a novel algorithm based on probability model. Our algorithm can directly generate the preference prediction from database and at the same time has the ability to reflect the changes of users' interest incrementally. The effectiveness of the new algorithm is estimated by our experiments.",2008,0, 3082,An Improving Fault Detection Mechanism in Service-Oriented Applications Based on Queuing Theory,"SOA has become more and more popular, but fault tolerance is not fully supported in most existing SOA frameworks and solutions provided by various major software companies. 
SOA implementations with large number of users, services, or traffic, maintaining the necessary performance levels of applications integrated using an ESB presents a substantial challenge, both to the architects who design the infrastructure as well as to IT professionals who are responsible for administration. In this paper, we improve the performance model for analyzing and detecting faults based on the queuing theory. The performance of services of SOA applications is measuring in two categories (individual services and composite services). We improve the model of the individuals services and add the composite services performance measuring.",2008,0, 3083,FEM simulation and study on rectangular drawing process with flange,"Drawing process is important in manufacturing area. The development of tooling for rectangular drawing can be time-consuming and expensive, for the drawing process is very sensitive to some processing parameters, such as blank holder force, lubrication condition and layout of draw bead. Numerical simulation of the drawing process is a powerful tool for reducing costly trial-and-error loops and shortening development cycle. The finite element method (FEM) can simulate the stress-strain and thickness changes of the sheet, and predict the forming defects such as cracking, wrinkling, and thinning. As an example, numerical simulation of the rectangular drawing process is presented in the paper. A 3D finite element model of the drawing process is developed by the commercial FEM software, DYNAFORM, which is based on dynamic-explicit FEM procedure, LS-DYNA. The geometrical surfaces of tooling and sheet are modeled in CAD software, UG NX. The plastic stress-strain characteristics of the sheet are based on uniaxial tensile test. Simulation results show the distribution of stress, strain, and thickness. Based on the FLD and deformation results, the mould is designed. ManufacturerAs stamping experiments have shown that the agreement between simulation and experiment is good.",2008,0, 3084,Pairwise Statistical Significance of Local Sequence Alignment Using Substitution Matrices with Sequence-Pair-Specific Distance,"Pairwise sequence alignment forms the basis of numerous other applications in bioinformatics. The quality of an alignment is gauged by statistical significance rather than by alignment score alone. Therefore, accurate estimation of statistical significance of a pairwise alignment is an important problem in sequence comparison. Recently, it was shown that pairwise statistical significance does better in practice than database statistical significance, and also provides quicker individual pairwise estimates of statistical significance without having to perform time-consuming database search. Under an evolutionary model, a substitution matrix can be derived using a rate matrix and a fixed distance. Although the commonly used substitution matrices like BLOSUM62, etc. were not originally derived from a rate matrix under an evolutionary model, the corresponding rate matrices can be back calculated. Many researchers have derived different rate matrices using different methods and data. In this paper, we show that pairwise statistical significance using rate matrices with sequence-pair-specific distance performs significantly better compared to using a fixed distance. 
Pairwise statistical significance using sequence-pair-specific distanced substitution matrices also outperforms database statistical significance reported by BLAST.",2008,0, 3085,Research on PolicyBased Collaboration Models in Autonomic Computing,"Autonomic computing is expected to solve system management complexity and cost problems in IT environment by enabling systems to be self-managing. The autonomic computing collaborative work based on policy-driven is studied, including competition and collaboration. The autonomic elements should work collaboratively to implement the complex self-management tasks of autonomic computing system. The competition model is proposed based on the probability policy, and the equilibrium solutions are studied and resolved. It can improve the success rate and utility for practical applications effectively.",2008,0, 3086,Optimization Processing in Quadrilateral Meshes Generation Based on Cloud Data,"Quadrilateral meshes generation algorithm based on cloud data is proposed in the paper. Quadrilateral meshes are generated by means of dynamic edges extending without considering the parity of number of points on boundaries to adapt itself to complex boundaries. Collisions detecting of mesh boundaries and an algorithm of optimization processing are set forth. The quality of meshes is improved, and insure rate of the algorithm running. The realization of optimization processing algorithms was introduced in detail, and examples are presented to illustrate the ability of the algorithm.",2008,0, 3087,Active Authorization Rules for Enforcing RBAC with Spatial Characteristics,"The integration of the spatial dimension into RBAC-based models has been the hot topic as a consequence of the growing relevance of geo-spatial information in advanced GIS and mobile applications. Dynamically monitoring the state changes of an underlying system, detecting and reacting to changes without delay are crucial for the success of any access control enforcement mechanism. Thus, current systems or models should provide a flexible mechanism for enforcing RBAC with spatial characteristics in a seamless way, and adapt to policy or role structure changes in enterprises, which are indispensable to make RBAC with spatial characteristics usable in diverse domains. In this paper we will show how On-If-Then-Else authorization rules (or enhanced ECA rules) are used for enforcing RBAC with spatial characteristics in a seamless way. Large enterprises have hundreds of roles, which requires thousands of rules for providing access control, and generating these rules manually is error-prone and a cognitive-burden for non-computer specialists. Thus, in this paper, we will discuss briefly how these authorization rules can be automatically generated from high level specifications of enterprise access control policies.",2008,0, 3088,Reversible Data Hiding Based on Histogram Shifting of Prediction Errors,"In this paper, a reversible data hiding scheme based on histogram-shifting of prediction errors (HSPE) is proposed. Two-stage structures, the prediction stage and the error modification stage, are employed in our scheme. In the prediction stage, value of each pixel is predicted, and the error of the predicted value is obtained. In the error modification stage, histogram-shifting technique is used to prepare vacant positions for embedding data. 
The peak signal-to-noise ratio (PSNR) of the stego image produced by HSPE is guaranteed to be above 48 dB, while the embedding capacity is, in average, 4.74 times higher than that of the well known Ni et al.psilas technique with the same PSNR. Besides, the stego image quality produced by HSPE gains 7.99 dB higher than that of Ni et al.psilas method under the same embedding capacity. Experimental results indicate that the proposed data hiding scheme outperforms the prior works not only in terms of larger payload, but also in terms of stego image quality.",2008,0, 3089,An Interface Matrix Based Detecting Method for the Change of Component,"Component-based software engineering has increased the quality and the efficiency in software development. But the component adaptation is still a crucial issue in Component-based software engineering. In this paper, we focus on a dynamic analysis on the change of component and a method detecting the impact on both the correlative component and the whole system. Firstly, the component model and adaptation principle is described in formal specification. Then the connection matrix of component interface is constructed to help us analyze the relationship of interface. Finally, we propose a new dynamic detecting method based on interface matrix. According to the detecting method expressed in this paper, we have developed a tool CIDT, which is used in CBSE to analyze the impact of the component change.",2008,0, 3090,Operators for Analyzing Software Reliability with Petri Net,"Reliability is one of the most important indicators for software quality. Among the present researches of software reliability, majority focus on the appliance of probability statistics model for the whole software system. Few work based on software model for analyzing the software reliability is learned. Reliability Petri Net (RPN) is presented in this paper. In RPN, transaction means the function or module of software, and is marked with reliability gene. Based on analyzing the Petri net structure, four reliability operators are developed to perform the relationships between tractions. Reliability formulas are provided respectively for the reduction and decomposition operations of Petri net. Furthermore, priority of these reliability operators is given. With this research, more complex Petri net model could be greatly simplified and the reliability of the system could be evaluated effectively and easily. An example is provided for demonstrating the practicability of this reliability analysis method.",2008,0, 3091,Full-Reference Quality Assessment for Video Summary,"As video summarization techniques have attracted more and more attention for efficient multimedia data management, quality assessment of video summary is required. To address the lack of automatic evaluation techniques, this paper proposes a novel framework including several new algorithms to assess the quality of the video summary against a given reference. First, we partition the reference video summary and the candidate video summary into the sequences of summary unit (SU). Then, we utilize alignment based algorithm to match the SUs in the candidate summary with the SUs in the corresponding reference summary. Third, we propose a novel similarity based 4 C - assessment algorithm to evaluate the candidate video summary from the perspective of coverage, conciseness, coherence, and context, respectively. 
Finally, the individual assessment results are integrated according to userpsilas requirement by a learning based weight adaptation method. The proposed framework and techniques are experimented on a standard dataset of TRECVID 2007 and show the good performance in automatic video summary assessment.",2008,0, 3092,An End-to-End Content-Aware Congestion Control Approach for MPEG Video Transmission,"In this paper, we present a receiver-based, bandwidth estimation rate control mechanism with content-aware probability retransmission to limit the burden on multimedia transmission congested network. Like the TCP friendly rate control (TFRC) protocol, we compute the sending rate as a function of the loss event rate and round-trip time. Considering the different importance of four distinct types of frames in Standard MPEG encoders, we divide data packets into three grades coarsely and adopt adaptive probability retransmission strategy to assure video playback quality. It is an extension of TCP friendly congestion control. This paper describes the smooth rate algorithm and probability retransmission mechanism. The result of experiments with competing TFRC specification demonstrates the proposed approach reaches a higher throughput and higher PSNR than TFRC especially on the bottleneck links.",2008,0, 3093,APD-based measurement technique to estimate the impact of noise emission from electrical appliances on digital communication systems,"This paper describes a technique to measure noise emissions from electrical appliances and study their impact towards digital communication systems by using the amplitude probability distribution (APD) methodology. The APD has been proposed within CISPR for measurement of electromagnetic noise emission. We present a measurement approach which utilizes a programmable digitizer with an analysis software to evaluate the noise APD pattern and probability density function (PDF). A unique noise APD pattern is obtained from each measurement of noise emission from different appliances. The noise PDF is useful for noise modeling and simulation, from which we can estimate the degradation on digital communication performance in terms of bit error probability (BEP). This technique provides a simple platform to examine the effect of other electrical appliances noise emission towards specific digital communication services.",2008,0, 3094,"Panel discussion: What makes good research in modeling and simulation: Assessing the quality, success, and utility of M&S research","This paper presents the Aposition papersA contributed by the participants of a panel at the 2008 Winter Simulation Conference. As the paper pre-dates the actual panel, the purpose of the paper is to provide some background information about the views of the individual panelists prior to the actual panel. Each panelist was asked to submit a position paper addressing the general question of AWhat makes good Modeling and Simulation research?A This paper presents a summary of these position papers along with an introduction and conclusion aimed at identifying the common themes to setup the conference panel.",2008,0, 3095,Simulation of modular building construction,"Modular construction has the advantage of producing structures quickly and efficiently, while not requiring the resources to build a structure to be co-located with the construction site. Large modules can be produced in quality controlled environments, and then shipped to the construction site and assembled with minimal labor requirements. 
An additional advantage is that once the modules are on-site, construction can proceed extremely quickly. This is ideal for situations where compressed schedules are required in order to meet clientAs time constraints. This paper examines using software simulation, specifically Simphony.NET, in the design and analysis of the construction process. This is done both before and after project execution to predict productivity and duration and also to allow for exploration of alternate construction scenarios.",2008,0, 3096,On-the-fly software replacement in faulty remote robots,"Remote robots are too expensive to be abandoned once a fault is detected. Moreover, faults may be overcome by wireless transmission of software replacements. But improper timing and replacements could cause crash and damages rather than system recovery. We designed a dynamic software controller that tracks robot applications, builds and continuously populates relevant data structures, for posterior accurate on-the-fly component replacement. The non-invasive controller implementation and validating tests are described.",2008,0, 3097,Scientific Computing Autonomic Reliability Framework,"Large scientific computing clusters require a distributed dependability subsystem that can provide fault isolation and recovery and is capable of learning and predicting failures, to improve the reliability of scientific workflows. In this paper, we outline the key ideas in the design of a Scientific Computing Autonomic Reliability Framework (SCARF) for large computing clusters used in the Lattice Quantum Chromo Dynamics project at Fermi Lab.",2008,0, 3098,Application of Random Forest in Predicting Fault-Prone Classes,"There are available metrics for predicting fault prone classes, which may help software organizations for planning and performing testing activities. This may be possible due to proper allocation of resources on fault prone parts of the design and code of the software. Hence, importance and usefulness of such metrics is understandable, but empirical validation of these metrics is always a great challenge. Random forest (RF) algorithm has been successfully applied for solving regression and classification problems in many applications. This paper evaluates the capability of RF algorithm in predicting fault prone software classes using open source software. The results indicate that the prediction performance of random forest is good. However, similar types of studies are required to be carried out in order to establish the acceptability of the RF model.",2008,0, 3099,The Use of E-SQ to establish the internet bank service quality table,"In order to assist Internet bank to be able to reach the enterpriseAs goal, which is satisfying the customerAs demand and this goal is different from the PZB service quality model; thus, this study uses ZPM e-service quality model as the foundation to assess Web sites. The study object would be the companies that provide Internet bank services at present. Then, the factors that influence customersA quality satisfaction towards services would be generalized, and the questionnaire survey would be carried out the users, administrators, and employees of Internet bank. A service quality table that assesses Internet bank would be established through the evidence-based study result, it also verifies that information gap, design gap and fulfillment gap are significant. 
The result also finds out eight dimensions, including AefficiencyA, AreliabilityA, AprivacyA, AcompensationA, AresponsivenessA, AcontactA, Asense of beautyA and AindividualizationA, are the key factors that influence the service quality of Internet bank.",2008,0, 3100,Detecting Defects in Golden Surfaces of Flexible Printed Circuits Using Optimal Gabor Filters,"This paper studies the application of advanced computer image processing techniques for solving the problem of automated defect detection for golden surfaces of flexible printed circuits (FPC). A special defect detection scheme based on semi-supervised mechanism is proposed, which consists of an optimal Gabor filter and a smoothing filter. The aim is to automatically discriminate between """"known"""" non-defective background textures and """"unknown"""" defective textures of golden surfaces of FPC. In developing the scheme, the parameters of the optimal Gabor filter are searched with the help of the genetic algorithm based on constrained minimization of a Fisher cost function. The performance of the proposed defect detection scheme is evaluated off-line by using a set of golden images acquired from CCD. The results exhibit accurate defect detection with low false alarms, thus showing the effectiveness and robustness of the proposed scheme.",2008,0, 3101,Improving Keyphrase Extraction Using Wikipedia Semantics,"Keyphrase extraction plays a key role in various fields such as information retrieval, text classification etc. However, most traditional keyphrase extraction methods relies on word frequency and position instead of document inherent semantic information, often results in inaccurate output. In this paper, we propose a novel automatic keyphrase extraction algorithm using semantic features mined from online Wikipedia. This algorithm first identifies candidate keyphrases based on lexical methods, and then a semantic graph which connects candidate keyphrases with document topics is constructed. Afterwards, a link analysis algorithm is applied to assign semantic feature weight to the candidate keyphrases. Finally, several statistical and semantic features are assembled by a regression model to predict the quality of candidates. Encouraging results are achieved in our experiments which show the effectiveness of our method.",2008,0, 3102,Design of a video system to detect and sort the faults of cigarette package,A hardware flatform detecting and sorting the faults of cigarette package is designed by using the technology of machine vision. Algorithm and systemic software are also designed according to its characteristics. With stability and accuracy it is of some guiding value to the high speed on-line package detection.,2008,0, 3103,A new image coding quality assessment,"Based on characteristics of HVS (human visual system), a new objective image quality assessment is proposed. In this method, the information of luminance, frequency and edge is used to predict the quality of compressed images. Multi-linear regression analysis is used to integrate these information. 
Experimental results show that this new image quality assessment closely approximates human subjective tests such as MOS (mean opinion score) with high Pearson and Spearman correlation coefficients of 0.970 and 0.976, which are of significant improvement over some typical objective image quality estimations such as PSNR (peak signal-to-noise ratio).",2008,0, 3104,A novel basic unit level rate control algorithm and architecture for H.264/AVC video encoders,"Rate control (RC) techniques play an important role for interactive video coding applications, especially in video streaming applications with bandwidth constraints. Among the RC algorithms in H.264 reference software JM, the basic unit (BU)-level RC algorithm achieves better video quality than frame-level one. However, the inherent sequential processing in H.264 BU-level RC algorithm makes it difficult to be realized in a pipelined H.264 hardware encoder without increasing the processing latency. In this paper we propose a new H.264 BU-level rate control algorithm and the associated architecture by exploiting a new predictor model to predict the MAD value and target bits for hardware realization. The proposed algorithm breaks down the sequential processing dependence in the original H.264 RC algorithm and reduces up to 80.6% of internal buffer size for H.264 D1 video encoding, while maintaining good video quality.",2008,0, 3105,A cross-layer quality driven approach in Web service selection,"In order to make Web services operate in a performance optimal status, it is necessary to make an effective decision on selecting the most suitable service provider among a set Web services that provide identical functions. We argue that the network performance between the service container and service consumer can pose a significant influence to the performance of Web service that the consumer actually receive, while current researches have limited emphasis on this issue. In this paper, we propose a cross-layer approach for Web service selection which takes the network performance issue into consideration during the service selection process. A discrete representation of cross-layer performance correlation is proposed. Based on which, a qualitative reasoning method is introduced to predict the performance at the service user side. The integration of the quality driven Web service selection method to service oriented architecture is also considered. Simulation is designed and experiment results suggest that the new approach significantly improves the accuracy of Web service selection and delivers a performance elevation for Web services.",2008,0, 3106,An IT Body of Knowledge: The Key to an Emerging Profession,"The information technology body of knowledge (BOK) is reviewed for its support to IT as an emerging profession. The author discusses the IT BOK for the important roles it can fulfill in support of education, certification, professional stature, professional development, and organizational improvement. Efforts to develop and maintain the IT BOK also have beneficial side effects, such as focusing attention on global perceptions and practices and keeping the characterization of IT current. 
The author offers recommendations for IT professionals to enhance the IT BOK by participating in its development, experimenting with ways to represent it more effectively, and assessing the potential benefits of creating a new BOK oriented to IT professional practice.",2008,0, 3107,Multiphysic modeling and design of carbon nanotubes based variable capacitors for microwave applications,"This paper describes the multiphysic methodology developed to design carbon nanotubes (CNT) based variable capacitor (varactor). Instead of using classical RF-MEMS design methodologies; we take into account the real shape of the CNT, its nanoscale dimensions and its real capacitance to ground. A capacitance-based numerical algorithm has then been developed in order to predict the pull-in voltage and the RF-capacitance of the CNT-based varactor. This software, which has been validated by measurements on various devices, has been used to design varactor device for which 20V of actuation voltage has been predicted. We finally extend the numerical modeling to describe the electromagnetical behavior of the devices. The RF performances has also been efficiently predicted and the varactor (in parallel configuration) exhibits predicted losses of 0.3 dB at 5 GHz and quality factor of 22at 5 GHz, which is relevant for high quality reconfigurable circuits requirements where as the expected sub-microsecond switching time range opens the door to real time tunability.",2008,0, 3108,Reversi: Post-silicon validation system for modern microprocessors,"Verification remains an integral and crucial phase of todaypsilas microprocessor design and manufacturing process. Unfortunately, with soaring design complexities and decreasing time-to-market windows, todaypsilas verification approaches are incapable of fully validating a microprocessor before its release to the public. Increasingly, post-silicon validation is deployed to detect complex functional bugs in addition to exposing electrical and manufacturing defects. This is due to the significantly higher execution performance offered by post-silicon methods, compared to pre-silicon approaches. Validation in the post-silicon domain is predominantly carried out by executing constrained-random test instruction sequences directly on a hardware prototype. However, to identify errors, the state obtained from executing tests directly in hardware must be compared to the one produced by an architectural simulation of the designpsilas golden model. Therefore, the speed of validation is severely limited by the necessity of a costly simulation step. In this work we address this bottleneck in the traditional flow and present a novel solution for post-silicon validation that exposes its native high performance. Our framework, called Reversi, generates random programs in such a way that their correct final state is known at generation time, eliminating the need for architectural simulations. Our experiments show that Reversi generates tests exposing more bugs faster, and can speed up post-silicon validation by 20x compared to traditional flows.",2008,0, 3109,New approach to the modeling of Command and Control Information Systems,"Probability of military activities successes in great degree depends on the qualities of their tactical planning processes. A very important part of tactical planning is the planning of a command and control information systempsilas (C2IS) communication infrastructure, respectively tactical communication networks. 
There are several known software tools that assist the process of tactical planning. However, they are often useless because different armies use different information and communication technologies. This is the main reason we started to develop a simulation system that helps in the process of planning tactical communication networks. Our solution is based on the well-known OPNET modeler simulation environment, which is also used for some other solutions in this area. In addition to the simulation and modeling methodologies we have also developed helper software tools. TPGEN is a tool which enables the user-friendly entering and editing of tactical network models. It performs mapping from C2IS and tactical communication network descriptions to an OPNET simulation model's parameters. Because the simulation results obtained by an OPNET modeler are user-unfriendly and need expert knowledge when analyzing them, we have developed an expert system for the automatic analysis of simulation results. One of the outputs from this expert system is a user-readable evaluation of the tactical networks' performance with guidance on how to improve the network. Another output is formatted for use in our tactical player. This tactical player is an application which helps when visualizing simulation and expert system results. It is also designed to control an OPNET history player in combination with 3DNV visualization of a virtual terrain. The developed solution helps, in a user-friendly way, in the process of designing and optimizing tactical networks.",2008,0, 3110,SmartFISMATM,"Foreign and domestic hackers have been increasingly attacking the U.S. Government computing environments with impunity, bypassing impressively expensive defenses and threatening our capability to defend and support our nation and allies. Adversaries are now appearing as legitimate users to Department of Defense (DoD) applications and networks, while threatening the integrity and confidentiality of DoD information. Attackers are frequently exploiting hardware and software vulnerabilities before DoD can test and disseminate effective patches. The complexity of information technology (IT) management operations and security is a constant challenge for enterprises (both large and small). Balancing the workforce's need for availability and ease of use while complying with the frequent security advisories, bulletins, changes, and reporting requirements can be daunting. The continuous enhancements and upgrades combined with the requirement to react to security threats to both operating systems and applications are overwhelming the routine operational capability of the system and security administrators. Many organizations continue to treat asset management; configuration management; data protection; access control; intrusion prevention; risk analysis; compliance; vulnerability management; certification and accreditation (C&A); incident detection and response; and reporting as isolated processes that rarely, if ever, interact. The stove-piping of these critical network and system operations results in inconsistent views of IT assets and their security postures, inefficient use of resources, and the inability to accurately assess the overall security status of the organization at any given time.
Additionally, the C&A and Information Assurance Vulnerability Management (IAVM) processes, along with the annual Federal Information Security Management Act (FISMA) reporting, have become resource-intensive, complex, and sometimes unpredictable processes. These processes and procedures are particularly challenging for IT managers in establishing and maintaining the secure computing environment the naval workforce expects without sacrificing quality of service. Smartronix Inc., in conjunction with the Office of Naval Research (ONR), and security product partners Telos Corporation, IBM Internet Security Systems, Inc. and McAfee have developed a solution to address these issues.",2008,0, 3111,Improving software reliability and security with automated analysis,"Static-analysis tools that identify defects and security vulnerabilities in source and executables have advanced significantly over the last few years. A brief description of how these tools work is given. Their strengths and weaknesses in terms of the kinds of flaws they can and cannot detect are discussed. Methods for quantifying the accuracy of the analysis are described, including sources of ambiguity for such metrics. Recommendations for deployment of tools in a production setting are given.",2008,0, 3112,Automated discovery of information services in heterogeneous distributed networks,"The Global Information Grid (GIG) will be composed of collections of different service capability domains (SCDs). Each SCD offers a set of information services, such as voice over IP (VoIP), video delivery, and information translation, and is managed as a separate system. The GIG information services will include several types of communications services (e.g., VoIP and streaming video), translation services (e.g., document translation and data translation) and information services (e.g., content discovery and domain name service). These different services may be described using various methods, including specification documents and lookup tables, and will have associated service level agreements (SLAs). As SCDs become richer in their service offerings and more dynamic in their service availability, the discovery of end-to-end services meeting end-user needs becomes extremely challenging. Currently manual methods are employed to map end-user needs to the end-to-end service combinations that GIG deployments can support. Such manual methods are inefficient and error prone. Automation of end-to-end service discovery within the GIG is highly desirable, but is an exceedingly complex task. Current efforts to automate service discovery, for example, the service location protocol or the service oriented architecture, provide service discovery through registration and strict service type definitions. These require coordination across all SCDs for all possible information technology (IT) service offerings. Ideally, individual SCDs would describe their services through individual service description documents. Then mapping of end-user service requests to appropriate collections of SCD services could be performed automatically. This requires the development of semantic reasoning and ontology for service descriptions and service capability matching.
This paper describes our approach for automated discovery of information services on GIG-like network deployments.",2008,0, 3113,Towards a Requirements-Aware Common Web Engineering Metamodel,"In recent years, Web engineering development projects have grown increasingly complex and critical for the smooth running of organizations. However, recent studies reveal that, due to incorrect requirements management, a high percentage of these projects miss the quality parameters required by stakeholders. Despite this, current Web Engineering methodologies continue focusing on web design features, thus limiting the Requirements Engineering tasks to the elicitation of high-level functional requirements. This fact has caused a requirements-support gap in the recently proposed Common Web Engineering Metamodel. This paper tries to cover this gap and proposes a requirements extension for this Common Web Engineering metamodel that supports measurable requirements. The reinforcement of the role that requirements play in current Web engineering methodologies, and their explicit connection with quantitative measures that contribute to their fulfilment, is a necessary step in order to reduce some of the quality failures detected in Web engineering development projects, thus increasing the satisfaction of their users.",2008,0, 3114,Scalable and Accurate Application Signature Discovery,"Newly emerged applications are producing a large amount of traffic and connections on the Internet, and they are becoming increasingly difficult to detect. Signature-based methods are currently the main approaches for discovering and detecting application patterns. However, these methods may have difficulty validating the efficiency and quality of signatures for unknown applications. Therefore, how to generate more accurate and representative patterns and validate the quality of signatures is a critical issue. In this paper, a new method has been proposed with a new structure to generate high quality signatures. Different from traditional methods, this one employs a signature learning mechanism that is designed to refine the signatures by merging similar patterns to improve the signature quality. The experiments indicate that this method is efficient in generating accurate and robust signatures, and the quality of the signatures is improved by signature learning.",2008,0, 3115,The Application of Improved BP Neural Network Algorithm in Urban Air Quality Prediction: Evidence from China,"Given the limitations of the traditional BP neural network algorithm, the method of adding a momentum factor and changing the learning rate is used to improve the traditional BP neural network algorithm and establish a new BP neural network model which is applied to urban air quality prediction. Practical application shows that the improved BP neural network algorithm overcomes shortcomings such as slow convergence speed, poor generation ability and easily falling into local minimum values. The model established for urban air quality prediction is representative and has good predictive ability, so it has broad application prospects in future urban air quality assessment.",2008,0, 3116,An Efficient Local Bandwidth Management System for Supporting Video Streaming,"In order to guarantee continuous delivery of video streaming over a best-effort (BE) forwarding network, some quality-of-service (QoS) strategies such as RSVP and DiffServ must be used to improve the transmission performance.
However, these methods are too difficult to employ in practical applications due to their technical complexity. In this paper, we design and implement an efficient local bandwidth management system to tackle this problem in an IPv6 environment. The system monitors the local access network and provides assured forwarding (AF) service by controlling video streaming requests based on the available network bandwidth. To assess the benefit of this system, we perform tests to compare its performance with that of the conventional BE service. Our test results indicate convincingly that AF offers substantially better performance than BE.",2008,0, 3117,Virtualization in Grid,"In a grid environment, where resources are generally owned by different people, communities or organizations with varied administration policies and capabilities, managing the grid resources is not a simple task. Resource brokers simplify this process by providing an abstraction layer for users to access heterogeneous resources transparently. However, discovery of grid resources that suit the user's job requirements is always difficult, as the probability of grid resources satisfying the user's requirements is very low. Conventionally, in order to run a job on the grid a user has to identify a set of platforms capable of running that job by virtue of having the required installation of operating system, libraries, tools, and the configuration of environment variables, etc. In practice, the availability of such software environments will either be limited to a very narrow set, or the job has to be made compatible with an environment supported by a large resource provider, such as TeraGrid. Further, if we could identify such an environment, it is hard to guarantee that the resource will be available when needed, for as long as needed, and that the user will get his or her fair share of that resource. This difficulty can be overcome by incorporating the concept of virtualization in the grid environment, which enables the creation of dynamic user-defined execution environments in a grid resource. Virtualization is a methodology of dividing the resources of a computer into multiple execution environments, by applying one or more concepts or technologies such as hardware and software partitioning, time-sharing, partial or complete machine simulation, emulation, quality of service, and many others. Recognizing the importance of virtualization in the grid environment, our Centre for Advanced Computing Research and Education (CARE) attempts to develop a virtualization framework that facilitates job submission to the virtualized infrastructure by creating and managing virtual workspaces, and also monitors job execution on those workspaces. This tutorial presents the proposed CRB (CARE resource broker), capable of supporting virtualization of resources, co-allocation and trust based resources for job execution in the grid environment. CRB supports on-demand job scheduling and SLA-based resource allocation.",2008,0, 3118,An Efficient Event Based Approach for Verification of UML Statechart Model for Reactive Systems,"This paper describes an efficient method to detect safety specification violations in the dynamic behavior model of concurrent/reactive systems. The dynamic behavior of each concurrent object in a reactive system is assumed to be represented using a UML (Unified Modeling Language) statechart diagram.
The verification process involves building a global state space graph from these independent statechart diagrams and traversing a large number of states in the global state space graph to detect a safety violation. In our approach, a safety property to be verified is read first and a set of events, which could violate this property, is computed from the model description. We call them ""relevant events"". The global state space graph is constructed considering only state transitions caused by the occurrence of these relevant events. This method reduces the number of states to be traversed for finding a property violation. Hence, this technique scales well for complex reactive systems. As a case study, the proposed technique is applied to the verification of the Generalized Railroad Crossing (GRC) system and the safety property ""When train is at railroad crossing, the gate always remain closed"" is checked. We could detect a flaw in the infant UML model and eventually a correct model was built with the help of the generated counterexample. The result of the study shows that this technique reduces the search space by 59% for the GRC example.",2008,0, 3119,A new clustering approach based on graph partitioning for navigation patterns mining,"We present a study of Web-based user navigation pattern mining and propose a novel approach for clustering of user navigation patterns. The approach is based on graph partitioning for modeling user navigation patterns. For the clustering of user navigation patterns we create an undirected graph based on connectivity between each pair of Web pages and we propose a novel formula for assigning weights to edges in such a graph. The experimental results show that the approach can improve the quality of clustering for user navigation patterns in Web usage mining systems. These results can be used for predicting a user's next request in huge Web sites.",2008,0, 3120,Cost-efficient Automated Visual Inspection system for small manufacturing industries based on SIFT,"This paper presents a cost efficient automated visual inspection (AVI) system for small industries' quality control systems. The complex hardware and software make current AVI systems too expensive for small-size manufacturing industries to afford. The proposed approach to AVI systems is based on an ordinary PC with a medium resolution camera without any other extra hardware. The scale invariant feature transform (SIFT) is used to acquire good accuracy and make it applicable for different situations with different sample sizes, positions, and illuminations. The proposed method can detect three different defect types as well as locate and measure the defect percentage for more specialized utilization. To evaluate the performance of this system, different samples with different sizes, shapes, and complexities are used and the results show that the proposed system is highly applicable to different applications and is invariant to noise, illumination changes, rotation, and transformation.",2008,0, 3121,Impact of metrics based refactoring on the software quality: a case study,"As the software system changes, the design of the software deteriorates, hence reducing the quality of the system. This paper presents a case study in which an inventory application is considered and efforts are made to improve the quality of the system by refactoring. The code is an open source application, namely ""inventor deluxe v 1.03"", which was first assessed using the tool Metrics 1.3.6 (an Eclipse plug-in).
The code was then refactored and three more versions were built. At the end of the creation of every version the code was assessed to find the improvement in quality. The results obtained after measuring various metrics helped in tracing the spots in the code which require further improvement and hence can be refactored. Most of the refactoring was done manually with little tool support. Finally, a trend was found which shows that the average complexity and size of the code reduce with refactoring-based development, which helps to make the software more maintainable. Thus, although refactoring is time-consuming and labor-intensive work, it has a positive impact on software quality.",2008,0, 3122,StegoHunter: Passive audio steganalysis using Audio Quality Metrics and its realization through genetic search and X-means approach,"Steganography is used to hide the occurrence of communication. This creates a potential problem when this technology is misused for planning criminal activities. Differentiating an anomalous audio document (stego audio) from a pure audio document (cover audio) is difficult and tedious. This paper investigates the use of a Genetic-X-means classifier, which distinguishes a pure audio document from the adulterated one. The basic idea is that the various audio quality metrics (AQMs) calculated on cover audio signals and on stego-audio signals vis-a-vis their denoised versions are statistically different. Our model employs these AQMs to steganalyse the audio data. The genetic paradigm is exploited to select the AQMs that are sensitive to various embedding techniques. The classifier between cover and stego-files is built using X-means clustering on the selected feature set. The presented method can not only detect the presence of a hidden message but also identify the hiding domains. The experimental results show that the combination strategy (Genetic-X-means) can improve the classification precision even with a lesser payload compared to the traditional ANN (Back Propagation Network).",2008,0, 3123,Discrete wavelet transform and probabilistic neural network based algorithm for classification of fault on transmission systems,"This paper presents the development of an algorithm based on the discrete wavelet transform (DWT) and a probabilistic neural network (PNN) for classifying power system faults. The proposed technique consists of a preprocessing unit based on the discrete wavelet transform in combination with a PNN. The DWT acts as an extractor of distinctive features in the input current signal, which is collected at the source end. The information is then fed into the PNN for classifying the faults. It can be used for off-line processing using the data stored in the digital recording apparatus. Extensive simulation studies carried out using MATLAB show that the proposed algorithm not only provides an accepted degree of accuracy in fault classification under different fault conditions but is also a reliable, fast and computationally efficient tool.",2008,0, 3124,Intelligent Fault Diagnosis System in Large Industrial Networks,"Traditional fault diagnosis systems in large industrial networks are not intelligent enough and cannot predict faults. They are also too expensive for industrial corporations. This paper brings forward an intelligent fault diagnosis system, IFDS, which uses new types of intelligent database technology and has the ability to effectively solve the fault diagnosis and prediction issues of current industrial Ethernet networks.
In addition, this paper discusses some methods which can be implemented in the IBM DB2 database.",2008,0, 3125,BP-Neural Network Used to Choose the Style of Graphic-Interface of Application Program,"With interfaces moving from text to graphics, the quality of an application program depends more and more on the degree to which the graphical interface fits the taste of the users. So if we can predict users' taste in the style of an application program, our work will be more popular. Now, with the help of a BP neural network, we can do this because of its strong predictive capacity.",2008,0, 3126,Modeling Software Contention Using Colored Petri Nets,"Commercial servers, such as database or application servers, often attempt to improve performance via multi-threading. Improper multi-threading architectures can incur contention, limiting performance improvements. Contention occurs primarily at two levels: (1) blocking on locks shared between threads at the software level and (2) contending for physical resources (such as the cpu or disk) at the hardware level. Given a set of hardware resources and an application design, there is an optimal number of threads that maximizes performance. This paper describes a novel technique we developed to select the optimal number of threads of a target-tracking application using a simulation-based colored Petri nets (CPNs) model. This paper makes two contributions to the performance analysis of multi-threaded applications. First, the paper presents an approach for calibrating a simulation model using training set data to reflect actual performance parameters accurately. Second, the model predictions are validated empirically against the actual application performance and the predicted data is used to compute the optimal configuration of threads in an application to achieve the desired performance. Our results show that predicting performance of application thread characteristics is possible and can be used to optimize performance.",2008,0, 3127,Optimized multipinhole design for mouse imaging,"To enhance high-sensitivity focused mouse imaging using multipinhole SPECT on a dual head camera, a fast analytical method was used to predict the contrast-to-noise ratio (CNR) in many points of a homogeneous cylinder for a large number of pinhole collimator designs with modest overlap. The design providing the best overall CNR, a configuration with 7 pinholes, was selected. Next, the pinhole pattern was made slightly irregular to reduce multiplexing artifacts. Two identical, but mirrored 7-pinhole plates were manufactured. In addition, the calibration procedure was refined to cope with small deviations of the camera from circular motion. First, the new plates were tested by reconstructing a simulated homogeneous cylinder measurement. Second, a Jaszczak phantom filled with 37 MBq 99mTc was imaged on a dual head gamma camera, equipped with the new pinhole collimators. The image quality before and after refined calibration was compared for both heads, reconstructed separately and together. Next, 20 short scans of the same phantom were performed with single and multipinhole collimation to investigate the noise improvement of the new design. Finally, two normal mice were scanned using the new multipinhole designs to illustrate the reachable image quality of abdomen and thyroid imaging. The simulation study indicated that the irregular patterns suppress most multiplexing artifacts. Using body support information strongly reduces the remaining multiplexing artifacts.
Refined calibration improved the spatial resolution. Depending on the location in the phantom, the CNR increased by a factor of 1 to 2.5 using the new design instead of a single pinhole design. The first proof of principle scans and reconstructions were successful, allowing the release of the new plates and software for preclinical studies in mice.",2008,0,3596 3128,Towards the application of a model based design methodology for reliable control systems on HEP experiments,"The software development process of user interfaces for complex control systems can face constantly changing requirements. In those systems changes are costly (time consuming) and error prone, since we must guarantee that the resulting system implementation will still be robust and reliable. A way to tackle this problem is to adopt a software model based approach for specification while providing at the same time rapid prototyping capabilities (to speed up design) and simulation/verification capabilities (to assure quality). We propose a full model-based methodology to guide designers through specification changes.",2008,0, 3129,Data quality monitor of the muon spectrometer tracking detectors of the ATLAS experiment at the Large Hadron Collider: First experience with cosmic rays,"The Muon Spectrometer of the ATLAS experiment at the CERN Large Hadron Collider is completely installed and many data have been collected with cosmic rays in different trigger configurations. In the barrel part of the spectrometer, cosmic ray muons are triggered with Resistive Plate Chambers, RPC, and tracks are obtained joining segments reconstructed in three measurement stations equipped with arrays of high-pressure drift tubes, MDT. The data are used to validate the software tools for the data extraction, to assess the quality of the drift tubes response and to test the performance of the tracking programs. We present a first survey of the MDT data quality based on large samples of cosmic ray data selected by the second level processors for the calibration stream. This data stream was set up to provide the high statistics needed for the continuous monitoring and calibration of the drift tubes response. Track segments in each measurement station are used to define quality criteria and to assess the overall performance of the MDT detectors. Though these data were taken in non-optimized conditions, when the gas temperature and pressure were not stabilized, the analysis of track segments shows that the MDT detector system works properly and indicates that the efficiency and space resolution are in line with the results obtained with previous tests with a high energy muon beam.",2008,0, 3130,Detecting of water shortage information in crops with acoustic emission technology and automatic irrigation system,"The automatic and real time irrigation system based on detecting water shortage information in crops with acoustic emission (AE) technology was studied and developed, and an experimental study in a greenhouse with tomato as the target crop was carried out. A PCI-2 AE board-card, R15 sensor, electronic balance, temperature sensor, humidity sensor, CO2 sensor, illumination sensor and PCI-8333 DAQ were adopted to compose the hardware detection system; virtual instrument technology was used to construct the software system; the three factors of soil, crops and atmosphere in the SPAC system were effectively integrated; and the real time acquisition and detection system of information between crop acoustic emission and each environmental factor was established.
The results show that, to some extent, the frequency counts of AE of crops under water stress increase gradually with the extent of the water stress and are related to the transpiration rate of the crops; in order to avoid the influence of water stress on crops, it has the potential to realize automatic irrigation and regulation of crops based on the information acquired from crops with the AE sensor, so that the transpiration amount and irrigation amount of crops achieve a balanced regulation, aiming to make the crops grow in an optimum soil water environment, to increase the utilization of water, and to improve the quality of crop fruit.",2008,0, 3131,Software quality prediction techniques: A comparative analysis,"There are many software quality prediction techniques available in the literature to predict software quality. However, the literature lacks a comprehensive study to evaluate and compare various prediction methodologies so that quality professionals may select an appropriate predictor. To find a technique which performs better in general is an undecidable problem because the behavior of a predictor also depends on many other specific factors like the problem domain, the nature of the dataset, uncertainty in the available data, etc. We have conducted an empirical survey of various software quality prediction techniques and compared their performance in terms of various evaluation metrics. In this paper, we have presented a comparison of 30 techniques on two standard datasets.",2008,0, 3132,A meta-measurement approach for software test processes,"Existing test process assessment and improvement models intend to raise the maturity of an organization with reference to testing activities. Such process assessments are based on ""what"" testing activities are being carried out, and thus implicitly evaluate process quality. Other test process measurement techniques attempt to directly assess some partial quality attribute such as efficiency or effectiveness using some test metrics. There is a need for a formalized method of test process quality evaluation that addresses both the implicitness and the partiality of these current evaluations. This paper describes a conceptual framework to specify and explicitly evaluate test process quality aspects. The framework enables provision of evaluation results in the form of objective assessments, and problem-area identification to improve the software testing processes.",2008,0, 3133,Global Sensitivity Analysis (GSA) Measures the Quality of Parameter Estimation. Case of Soil Parameter Estimation with a Crop Model,"The behavior of crops can be accurately predicted when all the parameters of the crop model are well known, and assimilating data observed on crop status in the model is one way of estimating parameters. Nevertheless, the quality of the estimation depends on the sensitivity of model output variables to the parameters. In this paper, we quantify the link between the global sensitivity analysis (GSA) of the soil parameters of the mechanistic crop model STICS, and the ability to retrieve the true values of these parameters. The global sensitivity indices were computed by a variance based method (Extended FAST) and the quality of parameter estimation (RRMSE) was computed with an importance sampling method based on Bayes theory (GLUE). Criteria based on GSA were built to link GSA indices with the quality of parameter estimation.
The results show that the higher the criteria, the better the quality of parameter estimation, and GSA appeared to be useful for interpreting and predicting the performance of the parameter estimation process.",2008,0, 3134,Online Optimization in Application Admission Control for Service Oriented Systems,"In a service oriented environment, an application is created through the composition of different service components. In this paper, we investigate the problem of application admission control in a service oriented environment. We propose an admission control system that makes admission decisions using an online optimization approach. The core part of the proposed system is an online optimization algorithm that solves a binary integer programming problem which we formulate in this paper. An online optimizer maximizes the system revenue given the system's available resources as well as the system's previous commitments. Another part of the proposed system carries out a feasibility evaluation that is intended to guarantee an agreed level of probability of success for each admitted application instance. We use simulations and performance comparisons to show that the proposed application admission control system can improve the system revenue while guaranteeing the required level of quality of service.",2008,0, 3135,A Stochastic Performance Model Supporting Time and Non-time QoS Matrices for Web Service Composition,"In recent years, Web service composition has become a new approach to overcoming many difficult problems confronted by B2B e-commerce, inter-organization workflow management, enterprise application integration, etc. Due to the uncertainty of the Internet and various Web services, the performance of the composed Web service cannot be ensured. How to model and predict the performance of the composed Web service is a difficult problem in Web service composition. A novel simulation model that can model and simulate time and non-time performance characteristics, called STPM+, is presented in this paper. Based on Petri nets, the STPM+ model can simulate and predict multiple performance characteristics, such as the cost, the reliability and the reputation of the composed Web service, etc. To examine the validity of the STPM+ model, a visual performance evaluation tool, called VisualWSCPE, has been implemented. Besides, some simulation experiments have been carried out based on VisualWSCPE. The experimental results demonstrate the feasibility and efficiency of the STPM+ model.",2008,0, 3136,Anomaly Detection Support Vector Machine and Its Application to Fault Diagnosis,"We address the issue of classification problems in the following situation: test data include data belonging to unlearned classes. To address this issue, most previous works have taken two-stage strategies where unclear data are detected using an anomaly detection algorithm in the first stage while the rest of the data are classified into learned classes using a classification algorithm in the second stage. In this study, we propose the anomaly detection support vector machine (ADSVM), which unifies classification and anomaly detection. ADSVM is unique in comparison with the previous work in that it addresses the two problems simultaneously. We also propose a multiclass extension of ADSVM that uses a pairwise voting strategy.
We empirically show that ADSVM outperforms two-stage algorithms in application to a real automobile fault dataset, as well as to UCI benchmark datasets.",2008,0, 3137,Time Sensitive Ranking with Application to Publication Search,"Link-based ranking has contributed significantly to the success of Web search. PageRank and HITS are the best known link-based ranking algorithms. These algorithms do not consider an important dimension, the temporal dimension. They favor older pages because these pages have many in-links accumulated over time. Bringing new and quality pages to the users is important because most users want the latest information. Existing remedies to PageRank are mostly heuristic approaches. This paper investigates the temporal aspect of ranking with application to publication search, and proposes a principled method based on the stationary probability distribution of the Markov chain. The proposed techniques are evaluated empirically using a large collection of high energy particle physics publications. The results show that the proposed methods are highly effective.",2008,0, 3138,Towards Process Rebuilding for Composite Web Services in Pervasive Computing,"The emerging paradigm of pervasive computing and web services needs a flexible service discovery and composition infrastructure. A composite Web service is essentially a process in a loosely-coupled service-oriented architecture. It is usually a black box for service requestors and only its interfaces can be seen externally. In some scenarios, to conduct performance debugging and analysis, a workflow representation of the underlying process is required. This paper describes a method to discover such underlying processes from execution logs. Based on a probabilistic assumption model, the algorithm can discover sequential, parallel, exclusive choice and iterative structures. Some examples are given to illustrate the algorithm.",2008,0, 3139,Post-mold cure process simulation of IC packaging,"Epoxy molding compound (EMC) is a common material used in IC packaging. One of its defects is warpage. Warpage could be a serious issue for some IC encapsulation processes. To alleviate the warpage problem during encapsulation, the post mold cure (PMC) process is the most common strategy used. However, there are still no adequate tools or models to simulate the post mold cure process. Since EMC behaves like a viscoelastic material during the post mold cure process, a viscoelastic model must be considered. The objective of this paper was to construct a correct viscoelastic model, and then to input this model into a software package. This study adopted a dualistic shift factor Maxwell model to simulate the post mold cure process. With this model, the amount of warpage after the PMC process could be predicted. With dynamic mechanical analyzer (DMA) testing, the Generalized Maxwell model and Williams-Landel-Ferry (WLF) equation of fully cured EMC under different temperatures could be derived. Then, the partially cured EMC was tested by DMA. The viscoelastic properties of partially cured EMC were considered to have similar behavior to temperature. Thus, a new model considering partially cured EMC as a cure induced shift factor similar to the temperature shift factor could be derived. This model then became a dualistic shift factor Maxwell model. With some modification of a structural analysis tool such as ANSYS, this dualistic shift factor Maxwell model could be applied to predict the post mold cure behavior of EMC.
The results of the calculation showed reasonable agreement between experiments and simulation.",2008,0, 3140,A strategy for Grid based t-way test data generation,"Although desirable as an important activity for ensuring quality assurance and enhancing reliability, complete and exhaustive software testing is next to impossible due to resource as well as timing constraints. While earlier work has indicated that pairwise testing (i.e. based on 2-way interaction of variables) can be effective in detecting most faults in a typical software system, a counter argument suggests that such a conclusion cannot be generalized to all software system faults. In some systems, faults may also be caused by more than two parameters. As the parameter interaction coverage (i.e. the strength) increases, the number of t-way test sets also increases exponentially. As such, for a large system with many parameters, considering higher-order t-way test sets can lead to a combinatorial explosion problem (i.e. too many test data sets to consider). We consider this problem for t-way generation of test sets using the Grid strategy. Building on and complementing earlier work on In-Parameter-Order-General (or IPOG) and its modification (or MIPOG), we present the Grid MIPOG strategy (G_MIPOG). Experimental results demonstrate that G_MIPOG scales well against the sequential strategies IPOG and MIPOG as the number of computers used as computational nodes increases.",2008,0, 3141,The Design and Fabrication of a Full Field Quantitative Mammographic Phantom,"Breast cancer is among the leading causes of death in women worldwide. Screen-film mammography (SFM) is still the standard method used to detect early breast cancer, thus leading to early treatment. Digital mammography (DM) has recently been designated as the imaging technology with the greatest potential for improving the diagnosis of breast cancer. For successful mammography, high quality images must be achieved and maintained, and reproducible quantitative quality control (QC) testing is thus required. Assessing images of known reference phantoms is one accepted method of doing QC testing. Quantitative QC techniques are useful for the long-term follow-up of mammographic quality. Following a comprehensive critical evaluation of available mammography phantoms, it was concluded that a more suitable phantom for DM could be designed. A new relatively inexpensive Applied Physics Group (APG) phantom was designed to be fast and easy to use, to provide the user with quantitative and qualitative measures of high and low contrast resolution over the full field of view and to demonstrate any geometric distortions. It was designed to cover the entire image receptor so as to assess the heel effect, and to be suitable for both SFM and DM. The APG phantom was designed and fabricated with embedded test objects and software routines were developed to provide a complete toolkit for SFM and DM QC. The test objects were investigated before embedding them.",2008,0, 3142,In-service monitoring with Centralized Failure Detection System (CFDS) in FTTH access network,"This paper focuses on developing a simple, attractive and user-friendly graphical user interface (GUI) for a centralized failure detection system (CFDS) by using MATLAB software.
The developed program will be installed with the optical line terminal (OLT) at the central office (CO) to centrally monitor each optical fiber line's status and detect the failure location that occurs in the drop region of a fiber to the home (FTTH) access network, downwardly from the CO towards the customer premises. Conventionally, the faulty fiber and failure location can be detected by using an optical time domain reflectometer (OTDR) upwardly from the customer premises towards the CO. However, an OTDR can only display the result of a single line at a time, which also wastes time and cost. CFDS is interfaced with the OTDR to accumulate every network testing result to be displayed on a single computer screen for further analysis. The program will identify and present the parameters of the optical line such as the line's status (either working or non-working), the magnitude of the decrease, the failure location and other details as shown on the OTDR's screen. The analysis result will be sent to field engineers or service providers for prompt action.",2008,0, 3143,Incorporating varying requirement priorities and costs in test case prioritization for new and regression testing,"Test case prioritization schedules the test cases in an order that increases the effectiveness in achieving some performance goals. One of the most important performance goals is the rate of fault detection. Test cases should run in an order that increases the possibility of fault detection and also detects the most severe faults at the earliest stage of the testing life cycle. Test case prioritization techniques have proved to be beneficial for improving regression testing activities. While code-coverage-based prioritization techniques have been adopted by most researchers, test case prioritization based on requirements in a cost-effective manner has not been studied so far. Hence, in this paper, we put forth a model for system-level test case prioritization (TCP) from the software requirement specification to improve user satisfaction with quality software that can also be cost effective and to improve the rate of severe fault detection. The proposed model prioritizes the system test cases based on six factors: customer priority, changes in requirement, implementation complexity, usability, application flow and fault impact. The proposed prioritization technique is evaluated experimentally in three phases with student projects and two sets of industrial projects, and the results show convincingly that the proposed prioritization technique improves the rate of severe fault detection.",2008,0, 3144,Detecting spurious features using parity space,"Detection of spurious features is instrumental in many computer vision applications. The standard approach is feature based, where extracted features are matched between the image frames. This approach requires only vision, but is computationally intensive and not yet suitable for real-time applications. We propose an alternative based on algorithms from the statistical fault detection literature. It is based on image data and an inertial measurement unit (IMU). The principle of analytical redundancy is applied to batches of measurements from a sliding time window. The resulting algorithm is fast and scalable, and requires only feature positions as inputs from the computer vision system. It is also pointed out that the algorithm can be extended to also detect non-stationary features (moving targets for instance).
The algorithm is applied to real data from an unmanned aerial vehicle in a navigation application.",2008,0, 3145,Generating Requirements Analysis Models from Textual Requirements,"Use case modeling is a commonly used technique to describe functional requirements in requirements engineering. Typically, use cases are captured from textual requirements documents describing the functionalities the system should meet. Requirements elicitation, analysis and modeling is a time-consuming and error-prone activity, which is not usually supported by automated tools. This paper tackles this problem by taking free-form textual requirements and offering a semi-automatic process for the generation of domain models, such as use cases. Our goal is twofold: (i) reduce the time spent to produce requirements artifacts; and (ii) enable future application of model-driven engineering techniques to maintain traceability information and consistency between textual and visual requirements model artifacts.",2008,0, 3146,Rapid 3D Transesophageal Echocardiography using a fast-rotating multiplane transducer,"3D transesophageal echocardiography (3D TEE) with acquisition gating for electrocardiogram (ECG) and respiration is slow, cumbersome for the patient and prone to motion artifacts. We realized a rapid 3D TEE solution based on a standard multiplane TEE probe, extended with a fast-rotating transducer array (FR-TEE). The fast left-right rotation allows acquisition of sufficient image data from the entire rotation range for the full heart cycle within one breath-hold. No ECG- or respiration-gating is applied. In normal mode, the probe has uncompromised optimal 2D quality. 10 seconds of image data with ECG and angle values are recorded and post-processed with specially developed 4D reconstruction software based on normalized convolution interpolation. High quality 3D images of phantoms were acquired, accurately depicting the imaged objects. Sequences of reconstructed 3D volumes of a cyclically moving (4D) balloon phantom show only minimal temporal artifacts. Preliminary results on 5 open-chest pigs and 3 humans showed the overall anatomy as well as valvular details with good diagnostic accuracy and high temporal and spatial resolution. A bicuspid aortic valve was diagnosed from the 3D reconstructions and confirmed by a separate 2D exam, proving the 3D diagnostic capabilities of FR-TEE.",2008,0, 3147,Comparison of the acoustic response of attached and unattached BiSphereTM microbubbles,"Two systems that independently allow the investigation of the response of individual unattached and attached microbubbles have previously been described. Both offered methods of studying the acoustic response of single microbubbles in well defined acoustic fields. The aim of the work described here was to investigate the responses of single attached microbubbles for a range of acoustic pressures and to compare these to the backscatter from unattached single microbubbles subjected to the same acoustic fields. Single attached BiSphereTM (Point Biomedical) microbubbles were attached to polyester with poly-L-lysine. Individual attached microbubbles were insonated at 1.6 MHz for acoustic pressures ranging from 300 to 1000 kPa using a Sonos5500 (Philips Medical Systems) research ultrasound scanner. Each microbubble was aligned to 6-cycle pulse, M-mode ultrasound beams, and unprocessed backscattered RF data were captured using proprietary hardware and software.
The backscatter from these microbubbles was compared to that of single unattached microbubbles subjected to the same acoustic parameters; microbubbles were insonated several times to determine possible differences in the rate of decrease of backscatter between attached and unattached microbubbles. In total, over 100 single attached microbubbles were insonated. At 550 kPa an acoustic signal was detected for 20% of the attached microbubbles and at 1000 kPa for 63%. At an acoustic pressure of 300 kPa no signal was detected. The mean RMS fundamental pressure from attached and unattached microbubbles insonated at 800 kPa was 9.7 Pa and 8.7 Pa respectively. The ratio between the first two backscattered pulses decreased with increasing pressure. However, for unattached microbubbles the magnitude of the ratio was less than that of attached microbubbles (at 550 kPa, mean ratio attached: 0.92 ± 0.1, unattached: 0.28 ± 0.2). There was no significant difference in the peak amplitude of the backscattered signal for unattached and attached microbubbles. BiSphereTM microbubbles comprise an internal polymer shell with an albumin coating, resulting in a stiff shell. BiSphereTM microbubbles do not oscillate in the same manner as a softer shelled microbubble, but allow gas leakage which then performs free bubble oscillations. The results here agree with previous acoustic and optical microscopy measurements which show that a proportion of microbubbles will scatter and this number increases with acoustic pressure. The lack of difference in scatter between the unattached and attached microbubbles may be attributed to the free microbubble oscillation being in the vicinity of the stiff shell, which may provide the same motion damping as a wall. Second pulse exposure shows that the wall becomes important in the survival of the free gas. These high quality measurements can be further improved by incorporating microbubble sizing to increase the specificity of the comparisons between unattached and attached microbubbles.",2008,0, 3148,3-D laparoscopic imaging,"This paper describes a clinical study conducted on a porcine animal model to assess the 3-D imaging capability of a laparoscopic imaging probe. A commercially available 128 element 7 MHz laparoscopic imaging probe was modified by the addition of a motor for rotation and a position sensor to locate each image plane in the 3-D volume. Images were acquired on a number of organs, including the gallbladder, liver, kidney, and urinary bladder. Because the system uses a conventional one dimensional array for the acquisition, the image quality is high. Volumetric surface renderings of the organs with additional 2-D cross sections demonstrate the features of the software for a number of viewing modes.",2008,0, 3149,Cardiac monitoring using transducers attached directly to the heart,"Cardiac ultrasound systems deliver excellent information about the heart, but are constructed for intermittent imaging and interpretation by a skilled operator. This paper presents a dedicated ultrasound system to monitor cardiac function continuously during and after cardiac surgery. The system uses miniature 10 MHz transducers sutured directly to the heart surface. M-mode images give a visual interpretation of the contraction pattern, while tissue velocity curves give detailed quantitative information. The ultrasound measurements are supported by synchronous ECG and pressure recordings. The system has been tested on pigs, demonstrating M-mode and tissue velocity measurements of good quality.
When occluding the LAD coronary artery, the system detected changes in contraction pattern that correspond with known markers of ischemia. The system uses dedicated analog electronics and a PC with digitizers and LabVIEW software, and may also be useful in other experimental ultrasound applications.",2008,0, 3150,An Agent-Oriented System for Workflow Enactment Tracking,"The notion of workflows has evolved from a means to describe the flow of paperwork through an organization to a more abstract and general technique used to express best practices and lessons learned in many application domains. When putting workflow definitions into practice, it is important to stay informed about which tasks are currently being performed, as this allows detecting slipping schedules or unwanted deviations. In this paper, an agent-based approach for automatically tracking the set of active tasks by observing the data produced during enactment is presented. This enables short-term planning and quality control without obliging team members to explicitly document progress.",2008,0, 3151,Quality Monitoring on Chinese Automatic Word-segmentation Software,"Many kinds of Chinese word-segmentation software have taken shape in recent years. It is difficult to assess which is good and which is bad, owing to their various calculation methods and manual decisions. The article proposes the quality monitoring content, methods and models of Chinese automatic word-segmentation software according to its characteristics, in order to monitor and control the software quality in the different stages of its development.",2008,0, 3152,Artificial immune system based on normal model and immune learning,"Inspired by the natural immune system, a new artificial immune system was proposed to detect, recognize and eliminate non-selfs such as computer worms and software faults. Because unknown non-selfs are very difficult to detect only by recognizing the features of the non-selfs, a normal model was built to provide an easy and effective tool for completely detecting the unknown non-selfs by detecting the known selfs. The probability of detecting unknown non-selfs with traditional approaches depends on the complexity of the features of the unknown non-selfs, and the usage of the selfs for detecting the non-selfs in some systems has been neglected. After the normal model is built with the space-time properties of the selfs of the systems, the probability of detecting the unknown non-selfs can be improved with the normal model. To overcome the bottleneck of finding what to recognize and how to learn the unknown worms, an adaptive immune learning model was proposed against the unknown worms, by searching in the multi-dimensional feature space of worms with random evolutionary computation. The goal of the adaptive immune learning was to find the most similar known worm to the unknown worm or establish a new class for the unknown worm. The normal model and the innate immune tier on the normal model provided a better source of unknown non-selfs so that the probability of recognizing the unknown worms was increased. To recognize and learn the unknown non-selfs with uncertainty in the artificial immune system, the evolutionary immune algorithm was used to search and reason with uncertainty.
Finally, a prototype applying the artificial immune system in anti-worm and fault diagnosis applications validated the models.",2008,0, 3153,An Efficient Probability-Based t out of n Secret Image Sharing Scheme,"Naor and Shamir presented the concept of visual cryptography. Much research following it goes down the same track: to expand the secret pixels to blocks. As a result, the size of the secret image becomes larger, and the quality of the expanded secret image becomes worse. In order to prevent the pixels from expanding, Yang has proposed his probability-based visual secret sharing scheme, where the concept of probability is employed to pick out pixels from the black or white sets. In this paper, we propose a new scheme that is a modified version of Yang's scheme. Our experimental results show that we can obtain better recovered image quality with high contrast.",2008,0, 3154,Fault Management for Self-Healing in Ubiquitous Sensor Network,"This work concerns the development of a sensor fault model for detecting and isolating sensor, actuator, and various other faults in USNs (Ubiquitous Sensor Networks). USNs are developed to create relationships between humans, objects and computers in various fields. Research on the management of sensor nodes is very important because a ubiquitous sensor network has numerous sensor nodes. However, self-healing technologies are insufficient for restoration when an error event occurs in a sensor node in a USN environment. A layered healing architecture for each node layer (3-tier) is needed, because most sensor devices in a USN have different capacities. In this paper, we design a fault model and architecture of the sensor and sensor node separately for self-healing in USNs. In order to evaluate our approach, we implement a prototype of the USN fault management system. We compare the resource use of self-healing components in general distributed computing (wired networks) and in the USN.",2008,0, 3155,ARTOO,"Intuition is often not a good guide to know which testing strategies will work best. There is no substitute for experimental analysis based on objective criteria: how many faults a strategy finds, and how fast. ""Random"" testing is an example of an idea that intuitively seems simplistic or even dumb, but when assessed through such criteria can yield better results than seemingly smarter strategies. The efficiency of random testing is improved if the generated inputs are evenly spread across the input domain. This is the idea of adaptive random testing (ART). ART was initially proposed for numerical inputs, on which a notion of distance is immediately available. To extend the ideas to the testing of object-oriented software, we have developed a notion of distance between objects and a new testing strategy called ARTOO, which selects as inputs objects that have the highest average distance to those already used as test inputs. ARTOO has been implemented as part of a tool for automated testing of object-oriented software. We present the ARTOO concepts, their implementation, and a set of experimental results of its application. Analysis of the results shows in particular that, compared to a directed random strategy, ARTOO reduces the number of tests generated until the first fault is found, in some cases by as much as two orders of magnitude.
ARTOO also uncovers faults that the random strategy does not find in the time allotted, and its performance is more predictable.",2008,0, 3156,Early prediction of software component reliability,"The ability to predict the reliability of a software system early in its development, e.g., during architectural design, can help to improve the system's quality in a cost-effective manner. Existing architecture-level reliability prediction approaches focus on system-level reliability and assume that the reliabilities of individual components are known. In general, this assumption is unreasonable, making component reliability prediction an important missing ingredient in the current literature. Early prediction of component reliability is a challenging problem because of many uncertainties associated with components under development. In this paper we address these challenges in developing a software component reliability prediction framework. We do this by exploiting architectural models and associated analysis techniques, stochastic modeling approaches, and information sources available early in the development lifecycle. We extensively evaluate our framework to illustrate its utility as an early reliability prediction approach.",2008,0, 3157,Predicting accurate and actionable static analysis warnings,"Static analysis tools report software defects that may or may not be detected by other verification methods. Two challenges complicating the adoption of these tools are spurious false positive warnings and legitimate warnings that are not acted on. This paper reports automated support to help address these challenges using logistic regression models that predict the foregoing types of warnings from signals in the warnings and implicated code. Because examining many potential signaling factors in large software development settings can be expensive, we use a screening methodology to quickly discard factors with low predictive power and cost-effectively build predictive models. Our empirical evaluation indicates that these models can achieve high accuracy in predicting accurate and actionable static analysis warnings, and suggests that the models are competitive with alternative models built without screening.",2008,0, 3158,Automatic modularity conformance checking,"According to Parnas's information hiding principle and Baldwin and Clark's design rule theory, the key step to decomposing a system into modules is to determine the design rules (or in Parnas's terms, interfaces) that decouple otherwise coupled design decisions and to hide decisions that are likely to change in independent modules. Given a modular design, it is often difficult to determine whether and how its implementation realizes the designed modularity. Manually comparing code with abstract design is tedious and error-prone. We present an automated approach to check the conformance of implemented modularity to designed modularity, using design structure matrices as a uniform representation for both. Our experiments suggest that our approach has the potential to manifest the decoupling effects of design rules in code, and to detect modularity deviation caused by implementation faults. 
We also show that design and implementation models together provide a comprehensive view of modular structure that makes certain implicit dependencies within code explicit.",2008,0, 3159,An approach to detecting duplicate bug reports using natural language and execution information,"An open source project typically maintains an open bug repository so that bug reports from all over the world can be gathered. When a new bug report is submitted to the repository, a person, called a triager, examines whether it is a duplicate of an existing bug report. If it is, the triager marks it as duplicate and the bug report is removed from consideration for further work. In the literature, there are approaches exploiting only natural language information to detect duplicate bug reports. In this paper we present a new approach that further involves execution information. In our approach, when a new bug report arrives, its natural language information and execution information are compared with those of the existing bug reports. Then, a small number of existing bug reports are suggested to the triager as the most similar bug reports to the new bug report. Finally, the triager examines the suggested bug reports to determine whether the new bug report duplicates an existing bug report. We calibrated our approach on a subset of the Eclipse bug repository and evaluated our approach on a subset of the Firefox bug repository. The experimental results show that our approach can detect 67%-93% of duplicate bug reports in the Firefox bug repository, compared to 43%-72% using natural language information alone.",2008,0, 3160,Mining framework usage changes from instantiation code,"Framework evolution may break existing users, which need to be migrated to the new framework version. This is a tedious and error-prone process that benefits from automation. Existing approaches compare two versions of the framework code in order to find changes caused by refactorings. However, other kinds of changes exist, which are relevant for the migration. In this paper, we propose to mine framework usage change rules from already ported instantiations, the latter being applications built on top of the framework, or test cases maintained by the framework developers. Our evaluation shows that our approach finds usage changes not only caused by refactorings, but also by conceptual changes within the framework. Further, it copes well with some issues that plague tools focusing on finding refactorings, such as deprecated program elements or multiple changes applied to a single program element.",2008,0, 3161,The influence of organizational structure on software quality,"Often software systems are developed by organizations consisting of many teams of individuals working together. Brooks states in the Mythical Man Month book that product quality is strongly affected by organization structure. Unfortunately there has been little empirical evidence to date to substantiate this assertion. In this paper we present a metric scheme to quantify organizational complexity, in relation to the product development process to identify if the metrics impact failure-proneness. In our case study, the organizational metrics when applied to data from Windows Vista were statistically significant predictors of failure-proneness.
The precision and recall measures for identifying failure-prone binaries, using the organizational metrics, was significantly higher than using traditional metrics like churn, complexity, coverage, dependencies, and pre-release bug measures that have been used to date to predict failure-proneness. Our results provide empirical evidence that the organizational metrics are related to, and are effective predictors of failure-proneness.",2008,0, 3162,Predicting defects using network analysis on dependency graphs,"In software development, resources for quality assurance are limited by time and by cost. In order to allocate resources effectively, managers need to rely on their experience backed by code complexity metrics. But often dependencies exist between various pieces of code over which managers may have little knowledge. These dependencies can be construed as a low level graph of the entire system. In this paper, we propose to use network analysis on these dependency graphs. This allows managers to identify central program units that are more likely to face defects. In our evaluation on Windows Server 2003, we found that the recall for models built from network measures is by 10% points higher than for models built from complexity metrics. In addition, network measures could identify 60% of the binaries that the Windows developers considered as critical-twice as many as identified by complexity metrics.",2008,0, 3163,Analyzing medical processes,"This paper shows how software engineering technologies used to define and analyze complex software systems can also be effective in detecting defects in human-intensive processes used to administer healthcare. The work described here builds upon earlier work demonstrating that healthcare processes can be defined precisely. This paper describes how finite-state verification can be used to help find defects in such processes as well as find errors in the process definitions and property specifications. The paper includes a detailed example, based upon a real-world process for transfusing blood, where the process defects that were found led to improvements in the process.",2008,0, 3164,Interval Quality: Relating Customer-Perceived Quality to Process Quality,"We investigate relationships among software quality measures commonly used to assess the value of a technology, and several aspects of customer perceived quality measured by interval quality (IQ): a novel measure of the probability that a customer will observe a failure within a certain interval after software release. We integrate information from development and customer support systems to compare defect density measures and IQ for six releases of a major telecommunications system. We find a surprising negative relationship between the traditional defect density and IQ. The four years of use in several large telecommunication products demonstrates how a software organization can control customer perceived quality not just during development and verification, but also during deployment by changing the release rate strategy and by increasing the resources to correct field problems rapidly. Such adaptive behavior can compensate for the variations in defect density between major and minor releases.",2008,0, 3165,A business process explorer,"A business process is composed of a set of interrelated tasks which are joined together by control flow elements. E-commerce systems implement business processes to automate the daily operations of an organization. 
Organizations must continuously modify their e-commerce systems to accommodate changes to business processes. However, modifying e-commerce systems is a time consuming and error prone task. To correctly perform this task, developers require an in-depth understanding of multi-tiered e-commerce systems and the business processes that they implement. In this paper, we present a business process explorer tool which automatically recovers business processes from three tier e-commerce systems. Developers can explore the recovered business processes and browse the corresponding source code. We integrate our tool with IBM WebSphere Business Modeler (WBM), a leading commercial tool for business process management and modeling. Business analysts could then visualize and analyze the recovered processes using WBM. The business process explorer eases the co-evolution of business processes and their e-commerce system implementation.",2008,0, 3166,Modeling Quality of Service Adaptability,"Quality of service adaptability refers to the ability of services (or components) to adapt the quality exhibited during run-time, or to the faculty of architectural models to show that several alternatives concerning quality could be implemented. Enclosing quality properties with architectural models has been typically used to improve system understanding. Nevertheless, these properties can also be used to compose subsystems whose quality can be adapted or/and to predict the behavior of the run-time adaptability. Existing software modeling languages lack enough mechanisms to cope with adaptability, e.g. to describe software elements that may offer/require several quality levels. This paper presents concepts that such a language needs to include to model quality-adaptable systems, and how we use those concepts to compose and analyze software architectures.",2008,0, 3167,A framework for interoperability analysis on the semantic web using architecture models,IT decision making requires analysis of possible future scenarios. The quality of the decisions can be enhanced by the use of architecture models that increase the understanding of the components of the system scenario. It is desirable that the created models support the needed analysis effectively since creation of architecture models often is a demanding and time consuming task. This paper suggests a framework for assessing interoperability on the systems communicating over the semantic web as well as a metamodel suitable for this assessment. Extended influence diagrams are used in the framework to capture the relations between various interoperability factors and enable aggregation of these into a holistic interoperability measure. The paper is concluded with an example using the framework and metamodel to create models and perform interoperability analysis.,2008,0, 3168,Improvement model for collaborative networked organizations,Small and medium enterprises (SMEs) have huge improvement potential both in domain and in collaboration/interoperability capabilities. Before implementing respective improvement measures it's necessary to assess the performance in specific process areas which we divide in domain (e.g. tourism) and collaboration oriented ones. 
Both in enterprise collaboration (EC) and in enterprise interoperability (EI), the behavior of organizations regarding interoperability must be improved.,2008,0, 3169,A randomness test based on T-codes,"In this paper, a new randomness test is proposed based on the T-complexity of a T-code, which is a variable-length self-synchronizing code introduced by Titchener in 1984. The proposed test can be used instead of the Lempel-Ziv compression test, which was removed from the NIST statistical test suite because the LZ-complexity has a defect such that its distribution of P-values is strictly discrete for random sequences of length 10^6. We show that T-complexity has an almost ideal continuous distribution of P-values for the same sequences. In order to calculate T-complexity, a new T-decomposition algorithm is also proposed to realize forward parsing for a given sequence although the original T-decomposition uses backward parsing. Furthermore, it is shown that the proposed randomness test can detect undesirable pseudorandom numbers that the NIST statistical test suite cannot detect.",2008,0, 3170,Identifying factors influencing reliability of professional systems,"Modern product development strategies call for a more proactive approach to fight intense global competition in terms of technological innovation, shorter time to market, quality and reliability and accommodative price. From a reliability engineering perspective, development managers would like to estimate as early as possible how reliably the product is going to behave in the field, so they can then focus on system reliability improvement. To steer such a reliability driven development process, one of the important aspects in predicting the reliability behavior of a new product is to know the factors that may influence its field performance. In this paper, two methods are proposed for identifying reliability factors and their significance in influencing the reliability of the product.",2008,0, 3171,Software tools for PRA,"Probabilistic Risk Assessment (PRA) is performed to assess the probability of failure or success of a system's operation. Results provided by the risk assessment methodology are used to make decisions concerning choice of improvements to the design. PRA has been applied or recommended to NASA space applications to identify and mitigate their risks. The complexity of these tasks and varied information sources required for space applications makes solving them manually infeasible. Software tools are mandated. To date, numerous software tools have been developed and claimed as PRA solutions. It is always a concern which software best fits a particular PRA. The authors conducted a limited scope PRA on a NASA application using four different Reliability/PRA software tools (Relex, QRAS, SAPHIRE, GoldSim), which were readily available. The strengths and weaknesses of each tool are identified and discussed. Recommendations on how to improve each tool to better satisfy NASA PRA needs are discussed.",2008,0, 3172,Probabilistic analysis of safety-critical adaptive systems with temporal dependences,"Dynamic adaptation means that components are reconfigured at run time. Consequently, the degree to which a system fulfils its functional and safety requirements depends on the current system configuration at run time. The probability of a violation of functional requirements in combination with an importance factor for each requirement gives us a measure for reliability.
In the same way, the degree of violation of safety requirements can be a measure for safety. These measures can easily be derived based on the probabilities of possible system configurations. For this purpose, we are introducing a new probabilistic analysis technique that determines configuration probabilities based on Fault trees, Binary Decision Diagrams (BDDs) and Markov chains. Through our recent work we have been able to determine configuration probabilities of systems but we neglected timing aspects . Timing delays have impact on the adaptation behavior and are necessary to handle cyclic dependences. The contribution of the present article is to extend analysis towards models with timing delays. This technique builds upon the Methodologies and Architectures for Runtime Adaptive Systems (MARS) , a modeling concept we use for specifying the adaptation behavior of a system at design time. The results of this paper determine configuration probabilities, that are necessary to quantify the fulfillment of functional and safety requirements by adaptive systems.",2008,0, 3173,Intermittent faults and effects on reliability of integrated circuits,"A significant amount of research has been aimed at analyzing the effects of high energy particles on semiconductor devices. However, less attention has been given to the intermittent faults. Field collected data and failure analysis results presented in this paper clearly show intermittent faults are a major source of errors in modern integrated circuits. The root cause for these faults ranges from manufacturing residuals to oxide breakdown. Burstiness and high error rates are specific manifestations of the intermittent faults. They may be activated and deactivated by voltage, frequency, and operating temperature variations. The aggressive scaling of semiconductor devices and the higher circuit complexity are expected to increase the likelihood of occurrence of the intermittent faults, despite the extensive use of fault avoidance techniques. Herein we discuss the effectiveness of several fault tolerant approaches, taking into consideration the specifics of the errors generated by intermittent faults. Several solutions, previously proposed for handling particle induced soft errors, are exclusively based on software and too slow for handling large bursts of errors. As a result, hardware implemented fault tolerant techniques, such as error detecting and correcting codes, self checking, and hardware implemented instruction retry, are necessary for mitigating the impact of the intermittent faults, both in the case of microprocessors, and other complex integrated circuits.",2008,0, 3174,A methodology for quantitative evaluation of software reliability using static analysis,"This paper proposes a methodology for quantitative evaluation of software reliability in updated COTS or Open Source components. The model combines static analysis of existing source code modules, limited testing with execution path capture, and a series of Bayesian Belief Networks. Static analysis is used to detect faults within the source code which may lead to failure. Code coverage is used to determine which paths within the source code are executed as well as their execution rate. A series of Bayesian Belief Networks is then used to combine these parameters and estimate the reliability for each method. A second series of Bayesian Belief Networks then combines the module reliabilities to estimate the net software reliability. 
A proof of concept for the model is provided, as the model is applied to five different open-source applications and the results are compared with reliability estimates using the STREW (Software Testing and Early Warning) metrics. The model is shown to be highly effective and the results are within the confidence interval for the STREW reliability calculations, and typically the results differed by less than 2%. This model offers many benefits to practicing software engineers. Through the usage of this model, it is possible to quickly assess the reliability of a given release of a software module supplied by an external vendor to determine whether it is more or less reliable than a previous release. The determination can be made independent of any knowledge of the developer's software development process and without any development metrics.",2008,0, 3175,A New Methodology for the Test of SoCs and for Analyzing Elusive Failures,The increasing complexity of SoCs in form of more complex architecture designs and smaller structures qualifies test procedures and failure analysis as one of the key skills in the semiconductor industry. In this contribution we show a new SoC test methodology which significantly increases the test coverage of a SoC. The integrated system integrity control functionality of the hid ICE approach observes a SoC core and is able to detect irregularities in comparison to a reference system. In case of a failure detection an exhaustive trace is available which helps to identify the root cause. Especially in multi-core systems the interference between different subsystems can be tested under real operating conditions. Also the effort to identify and analyze elusive failures will be reduced.,2008,0, 3176,A Deterministic Methodology for Identifying Functionally Untestable Path-Delay Faults in Microprocessor Cores,"Delay testing is crucial for most microprocessors. Software-based self-test (SBST) methodologies are appealing, but devising effective test programs addressing the true functionally testable paths and assessing their actual coverage are complex tasks. In this paper, we propose a deterministic methodology, based on the analysis of the processor instruction set architecture, for determining rules arbitrating the functional testability of path-delay faults in the data path and control unit of processor cores. Moreover, the performed analysis gives guidelines for generating test programs. A case study on a widely used 8-bit microprocessor is provided.",2008,0, 3177,Evaluation of Test Criteria for Space Application Software Modeling in Statecharts,"Several papers have addressed the problem of knowing which software test criteria are better than others with respect to parameters such as cost, efficiency and strength. This paper presents an empirical evaluation in terms of cost and efficiency for one test method for finite state machines, switch cover, and two test criteria of the statechart coverage criteria family, all-transitions and all-simple-paths, for a reactive system of a space application. Mutation analysis was used to evaluate efficiency in terms of killed mutants. The results show that the two criteria and the method presented the same efficiency but all-simple-paths presented a better cost because its test suite is smaller than the one generated by switch cover. 
Besides, the test suite due to the all-simple-paths criterion killed the mutants faster than the other test suites, meaning that it might be able to detect faults in the software more quickly than the other criteria.",2008,0, 3178,Connector-Driven Gradual and Dynamic Software Assembly Evolution,"Complex and long-lived software needs to be upgraded at runtime. Replacing a software component with a newer version is the basic evolution operation that has to be supported. It is error-prone as it is difficult to guarantee the preservation of functionalities and quality. Few existing works on ADLs fully support a component replacement process from its description to its test and validation. The main idea of this work is to have software architecture evolution dynamically driven by connectors (the software glue between components). It proposes a connector model which autonomically handles the reconfiguration of connections in architectures in order to support the versioning of components in a gradual, transparent and testable manner. Hence, the system has the choice to commit the evolution after a successful test phase of the software or roll back to the previous state.",2008,0, 3179,Automatic Detection of Shared Objects in Multithreaded Java Programs,"This paper presents a simple and efficient automated tool called DoSSO that detects shared objects in multithreaded Java programs. Our main goal is to help programmers see all potentially shared objects that may cause some complications at runtime. This way programmers can implement concurrent software without considering synchronization issues and then use an appropriate locking mechanism based on the DoSSO results. To illustrate the effectiveness of our tool, we have performed an experiment on a multithreaded system with graphical user interfaces and remote method invocations and achieved promising results.",2008,0, 3180,An efficient parallel approach for identifying protein families in large-scale metagenomic data sets,"Metagenomics is the study of environmental microbial communities using state-of-the-art genomic tools. Recent advancements in high-throughput technologies have enabled the accumulation of large volumes of metagenomic data that until a couple of years back was deemed impractical for generation. A primary bottleneck, however, is in the lack of scalable algorithms and open source software for large-scale data processing. In this paper, we present the design and implementation of a novel parallel approach to identify protein families from large-scale metagenomic data. Given a set of peptide sequences we reduce the problem to one of detecting arbitrarily-sized dense subgraphs from bipartite graphs. Our approach efficiently parallelizes this task on a distributed memory machine through a combination of divide-and-conquer and combinatorial pattern matching heuristic techniques. We present performance and quality results of extensively testing our implementation on 160 K randomly sampled sequences from the CAMERA environmental sequence database using 512 nodes of a BlueGene/L supercomputer.",2008,0, 3181,Nimrod/K: Towards massively parallel dynamic Grid workflows,"A challenge for Grid computing is the difficulty in developing software that is parallel, distributed and highly dynamic. Whilst there have been many general purpose mechanisms developed over the years, Grid programming still remains a low level, error prone task.
Scientific workflow engines can double as programming environments, and allow a user to compose 'virtual' Grid applications from pre-existing components. Whilst existing workflow engines can specify arbitrary parallel programs (where components use message passing), they are typically not effective with large and variable parallelism. Here we discuss dynamic dataflow, originally developed for parallel tagged dataflow architectures (TDAs), and show that these can be used for implementing Grid workflows. TDAs spawn parallel threads dynamically without additional programming. We have added TDAs to Kepler, and show that the system can orchestrate workflows that have large amounts of variable parallelism. We demonstrate the system using case studies in chemistry and in cardiac modelling.",2008,0, 3182,Analysis of ROV video imagery for krill identification and counting under Antarctic sea ice,"An off-the-shelf SeaBotix ROV was deployed under Antarctic sea ice near Palmer Station Antarctica during the September-October 2007 Sea Ice Mass Balance in the Antarctic (SIMBA) project from the research vessel NB Palmer. Video imagery taken showed significant numbers of Antarctic krill (sp. Euphasia superba and/or Euphasia crystallorophis) under the sea ice at the two stations deployed. The goal of this image analysis is to estimate the krill population densities, as well as to identify other life forms. The relative motion between the krill and vehicle complicates the video analysis process. The avoidance behavior of the krill adds to this challenge along with the changing lighting conditions under the ice. We discuss these challenges and the algorithms that are under development in this paper. The ROV videos were converted into a string of images, which were used to simulate a running speed of 3 frames/second (fps). A 5 second clip (15 frames) was selected as an initial test for the vision software. The LabVIEW Vision Builder AI software has been selected as an image processing platform, which allows us to rapidly prototype algorithms. We have applied noise reduction techniques to reduce some of the noise. Edge detection filters (such as, but not limited to, the Roberts Filter) have been applied to further reduce the image's noise level and to increase the contrast between the krill and the water. Next, we applied thresholds to detect the object, which was subsequently used to identify, count and log the number of distinct objects in the image. Once all of the parameters were set, the 15 images were cycled through the configured inspection in chronological order to simulate an actual inspection of the ROV video. A comparison of our automated krill population estimate to that of humans counting the krill manually using the same video was conducted. We found that illumination and image quality allowed the most prominent individuals in proximity of the camera to be counted. However, because of background noise and low scattering of some individuals, the filtering removed suspected individuals, which would suggest this krill density is significantly underestimated. The technique however appears practical. In considering the motion of the krill relative to the vehicle, tracking becomes paramount.
Plans for implementing this technique in ongoing development are discussed.",2008,0, 3183,On the Relation between External Software Quality and Static Code Analysis,"Only a few studies exist that try to investigate whether there is a significant correlation between external software quality and the data provided by static code analysis tools. A clarification on this issue could pave the way for more precise prediction models on the probability of defects based on the violation of programming rules. We therefore initiated a study where the defect data of selected versions of the open source development environment Eclipse SDK is correlated with the data provided by the static code analysis tools PMD and FindBugs applied to the source code of Eclipse. The results from this study are promising as especially some PMD rulesets show a good correlation with the defect data and could therefore serve as a basis for measurement, control and prediction of software quality.",2008,0, 3184,Database Mediation Using Multi-agent Systems,"This paper first proposes a multi-agent architecture to mediate access to data sources. The mediator follows the classical approach to process user queries. However, in the background, it post-processes query results to gradually construct matchings between the export schemas and the mediated schema. The central theme of the paper is an extensional schema matching strategy based on similarity functions. The paper concludes with experimental results that assess the quality of the matching strategy.",2008,0, 3185,SeaWASP: A small waterplane area twin hull autonomous platform for shallow water mapping,"Students with Santa Clara University (SCU) and the Monterey Bay Aquarium Research Institute (MBARI) are developing an innovative platform for shallow water bathymetry. Bathymetry data is used to analyze the geography, ecosystem, and health of marine habitats. However, current methods for shallow water measurements typically involve large, manned vessels. These vessels may pose a danger to themselves and the environment in shallow, semi-navigable waters. Small vessels, however, are prone to disturbance by the waves, tides, and currents of shallow water. The SCU / MBARI autonomous surface vessel (ASV) is designed to operate safely and stably in waters > 1 m and without significant manned support. Final deployment will be at NOAA's Kasitsna Bay Laboratory in Alaska. The ASV utilizes several key design components to provide stability, shallow draft, and long-duration unmanned operations. Bathymetry is measured with a multibeam sonar in concert with DVL and GPS sensors. Pitch, roll, and heave are minimized by a Small Waterplane Area Twin Hull (SWATH) design. The SWATH has a submerged hull, small water-plane area, and high mass to damping ratio, making it less prone to disturbance and ideal for accurate data collection. Precision sensing and actuation is controlled by onboard autonomous algorithms. Autonomous navigation increases the quality of the data collection and reduces the necessity for continuous manning. The vessel has been operated successfully in several open water test environments, including Elkhorn Slough, CA, Steven's Creek, CA, and Lake Tahoe, NV. It is currently in the final stages of integration and test for its first major science mission at Orcas Island, San Juan Islands, WA, in August, 2008.
The Orcas Island deployment will feature design upgrades implemented in Summer 2008, including additional batteries for all-day power (minimum eight hours), active ballast, real-time data monitoring, updated autonomous control electronics and software, and data editing using in-house bathymetry mapping software, MB-System. This paper will present the results of the Orcas Island mission and evaluate possible design changes for Alaska. Also, we will include a discussion of our shallow water bathymetry design considerations and a technical overview of the subsystems and previous test results. The ASV has been developed in partnership with Santa Clara University, the Monterey Bay Aquarium Research Institute, the University of Alaska Fairbanks, and NOAA's West Coast and Polar Regions Undersea Research Center.",2008,0, 3186,Optimization of economizer tubing system renewal decisions,"The economizer is a critical component in coal fired power stations. An optimal renewal strategy is needed for minimizing the lifetime cost of this component. Here we present an effective optimization approach which considers economizer tubing failure probabilities, repair and renewal costs, potential production losses, and fluctuations in electricity market prices.",2008,0, 3187,Test suite consistency verification,"Test cases are themselves prone to errors, thus techniques and tools to validate tests are needed. In this paper, we suggest a method to check mutual consistency of tests in a test suite.",2008,0, 3188,Regression via Classification applied on software defect estimation,"In this paper we apply Regression via Classification (RvC) to the problem of estimating the number of software defects. This approach, apart from an estimate of the number of faults, also outputs an associated interval of values, within which this estimate lies with a certain confidence. RvC also allows the production of comprehensible models of software defects exploiting symbolic learning algorithms. To evaluate this approach we perform an extensive comparative experimental study of the effectiveness of several machine learning algorithms in two software data sets. RvC manages to get better regression error than the standard regression approaches on both datasets.",2008,1, 3189,Mining software repositories for comprehensible software fault prediction models,"Software managers are routinely confronted with software projects that contain errors or inconsistencies and exceed budget and time limits. By mining software repositories with comprehensible data mining techniques, predictive models can be induced that offer software managers the insights they need to tackle these quality and budgeting problems in an efficient way. This paper deals with the role that the Ant Colony Optimization (ACO)-based classification technique AntMiner+ can play as a comprehensible data mining technique to predict erroneous software modules. In an empirical comparison on three real-world public datasets, the rule-based models produced by AntMiner+ are shown to achieve a predictive accuracy that is competitive to that of the models induced by several other included classification techniques, such as C4.5, logistic regression and support vector machines.
In addition, we will argue that the intuitiveness and comprehensibility of the AntMiner+ models can be considered superior to the latter models.",2008,1, 3190,A systematic review of software fault prediction studies,"This paper provides a systematic review of previous software fault prediction studies with a specific focus on metrics, methods, and datasets. The review uses 74 software fault prediction papers in 11 journals and several conference proceedings. According to the review results, the usage percentage of public datasets increased significantly and the usage percentage of machine learning algorithms increased slightly since 2005. In addition, method-level metrics are still the most dominant metrics in fault prediction research area and machine learning algorithms are still the most popular methods for fault prediction. Researchers working on software fault prediction area should continue to use public datasets and machine learning algorithms to build better fault predictors. The usage percentage of class-level is beyond acceptable levels and they should be used much more than they are now in order to predict the faults earlier in design phase of software life cycle.",2008,1, 3191,Applying machine learning to software fault-proneness prediction,"The importance of software testing to quality assurance cannot be overemphasized. The estimation of a module's fault-proneness is important for minimizing cost and improving the effectiveness of the software testing process. Unfortunately, no general technique for estimating software fault-proneness is available. The observed correlation between some software metrics and fault-proneness has resulted in a variety of predictive models based on multiple metrics. Much work has concentrated on how to select the software metrics that are most likely to indicate fault-proneness. In this paper, we propose the use of machine learning for this purpose. Specifically, given historical data on software metric values and number of reported errors, an Artificial Neural Network (ANN) is trained. Then, in order to determine the importance of each software metric in predicting fault-proneness, a sensitivity analysis is performed on the trained ANN. The software metrics that are deemed to be the most critical are then used as the basis of an ANN-based predictive model of a continuous measure of fault-proneness. We also view fault-proneness prediction as a binary classification task (i.e., a module can either contain errors or be error-free) and use Support Vector Machines (SVM) as a state-of-the-art classification method. We perform a comparative experimental study of the effectiveness of ANNs and SVMs on a data set obtained from NASA's Metrics Data Program data repository.",2008,1, 3192,Predicting defect-prone software modules using support vector machines,"Effective prediction of defect-prone software modules can enable software developers to focus quality assurance activities and allocate effort and resources more efficiently. Support vector machines (SVM) have been successfully applied for solving both classification and regression problems in many applications. This paper evaluates the capability of SVM in predicting defect-prone software modules and compares its prediction performance against eight statistical and machine learning models in the context of four NASA datasets. 
The results indicate that the prediction performance of SVM is generally better than, or at least, is competitive against the compared models.",2008,1, 3193,A Positive Detecting Code and Its Decoding Algorithm for DNA Library Screening,"The study of gene functions requires high-quality DNA libraries. However, a large number of tests and screenings are necessary for compiling such libraries. We describe an algorithm for extracting as much information as possible from pooling experiments for library screening. Collections of clones are called pools, and a pooling experiment is a group test for detecting all positive clones. The probability of positiveness for each clone is estimated according to the outcomes of the pooling experiments. Clones with high chance of positiveness are subjected to confirmatory testing. In this paper, we introduce a new positive clone detecting algorithm, called the Bayesian network pool result decoder (BNPD). The performance of BNPD is compared, by simulation, with that of the Markov chain pool result decoder (MCPD) proposed by Knill et al. in 1996. Moreover, the combinatorial properties of pooling designs suitable for the proposed algorithm are discussed in conjunction with combinatorial designs and d-disjunct matrices. We also show the advantage of utilizing packing designs or BIB designs for the BNPD algorithm.",2009,0, 3194,Deterministic Priority Channel Access Scheme for QoS Support in IEEE 802.11e Wireless LANs,"The enhanced distributed channel access (EDCA) of IEEE 802.11e has been standardized to support quality of service (QoS) in wireless local area networks (LANs). The EDCA statistically supports the QoS by differentiating the probability of channel access among different priority traffic and does not provide the deterministically prioritized channel access for high-priority traffic, such as voice or real-time video. Therefore, lower priority traffic still affects the performance of higher priority traffic. In this paper, we propose a simple and effective scheme called deterministic priority channel access (DPCA) to improve the QoS performance of the EDCA mechanism. To provide guaranteed channel access to multimedia applications, the proposed scheme uses a busy tone to limit the transmissions of lower priority traffic when higher priority traffic has packets to send. Performance evaluation is conducted using both numerical analysis and simulation and shows that the proposed scheme significantly outperforms the EDCA in terms of throughput, delay, delay jitter, and packet drop ratio under a wide range of contention level.",2009,0, 3195,Toward a Fuzzy Domain Ontology Extraction Method for Adaptive e-Learning,"With the widespread applications of electronic learning (e-Learning) technologies to education at all levels, increasing number of online educational resources and messages are generated from the corresponding e-Learning environments. Nevertheless, it is quite difficult, if not totally impossible, for instructors to read through and analyze the online messages to predict the progress of their students on the fly. The main contribution of this paper is the illustration of a novel concept map generation mechanism which is underpinned by a fuzzy domain ontology extraction algorithm. The proposed mechanism can automatically construct concept maps based on the messages posted to online discussion forums. By browsing the concept maps, instructors can quickly identify the progress of their students and adjust the pedagogical sequence on the fly. 
Our initial experimental results reveal that the accuracy and the quality of the automatically generated concept maps are promising. Our research work opens the door to the development and application of intelligent software tools to enhance e-Learning.",2009,0, 3196,Beyond Output Voting: Detecting Compromised Replicas Using HMM-Based Behavioral Distance,"Many host-based anomaly detection techniques have been proposed to detect code-injection attacks on servers. The vast majority, however, are susceptible to """"mimicry"""" attacks in which the injected code masquerades as the original server software, including returning the correct service responses, while conducting its attack. """"Behavioral distance,"""" by which two diverse replicas processing the same inputs are continually monitored to detect divergence in their low-level (system-call) behaviors and hence potentially the compromise of one of them, has been proposed for detecting mimicry attacks. In this paper, we present a novel approach to behavioral distance measurement using a new type of hidden Markov model, and present an architecture realizing this new approach. We evaluate the detection capability of this approach using synthetic workloads and recorded workloads of production Web and game servers, and show that it detects intrusions with substantially greater accuracy than a prior proposal on measuring behavioral distance. We also detail the design and implementation of a new architecture, which takes advantage of virtualization to measure behavioral distance. We apply our architecture to implement intrusion-tolerant Web and game servers, and through trace-driven simulations demonstrate that it experiences moderate performance costs even when thresholds are set to detect stealthy mimicry attacks.",2009,0, 3197,What Types of Defects Are Really Discovered in Code Reviews?,"Research on code reviews has often focused on defect counts instead of defect types, which offers an imperfect view of code review benefits. In this paper, we classified the defects of nine industrial (C/C++) and 23 student (Java) code reviews, detecting 388 and 371 defects, respectively. First, we discovered that 75 percent of defects found during the review do not affect the visible functionality of the software. Instead, these defects improved software evolvability by making it easier to understand and modify. Second, we created a defect classification consisting of functional and evolvability defects. The evolvability defect classification is based on the defect types found in this study, but, for the functional defects, we studied and compared existing functional defect classifications. The classification can be useful for assigning code review roles, creating checklists, assessing software evolvability, and building software engineering tools. We conclude that, in addition to functional defects, code reviews find many evolvability defects and, thus, offer additional benefits over execution-based quality assurance methods that cannot detect evolvability defects. We suggest that code reviews may be most valuable for software products with long life cycles as the value of discovering evolvability defects in them is greater than for short life cycle systems.",2009,0, 3198,A Novel Bicriteria Scheduling Heuristics Providing a Guaranteed Global System Failure Rate,"We propose a new framework for the (length and reliability) bicriteria static multiprocessor scheduling problem. 
Our first criterion remains the schedule's length, which is crucial to assess the system's real-time property. For our second criterion, we consider the global system failure rate, seen as if the whole system were a single task scheduled onto a single processor, instead of the usual reliability, because it does not depend on the schedule length like the reliability does (due to its computation in the classical exponential distribution model). Therefore, we control better the replication factor of each individual task of the dependency task graph given as a specification, with respect to the desired failure rate. To solve this bicriteria optimization problem, we take the failure rate as a constraint, and we minimize the schedule length. We are thus able to produce, for a given dependency task graph and multiprocessor architecture, a Pareto curve of nondominated solutions, among which the user can choose the compromise that fits his or her requirements best. Compared to the other bicriteria (length and reliability) scheduling algorithms found in the literature, the algorithm we present here is the first able to improve significantly the reliability, by several orders of magnitude, making it suitable to safety-critical systems.",2009,0, 3199,CoMoM: Efficient Class-Oriented Evaluation of Multiclass Performance Models,"We introduce the class-oriented method of moments (CoMoM), a new exact algorithm to compute performance indexes in closed multiclass queuing networks. Closed models are important for performance evaluation of multitier applications, but when the number of service classes is large, they become too expensive to solve with exact methods such as mean value analysis (MVA). CoMoM addresses this limitation by a new recursion that scales efficiently with the number of classes. Compared to the MVA algorithm, which recursively computes mean queue lengths, CoMoM also carries on in the recursion information on higher-order moments of queue lengths. We show that this additional information greatly reduces the number of operations needed to solve the model and makes CoMoM the best-available algorithm for networks with several classes. We conclude the paper by generalizing CoMoM to the efficient computation of marginal queue-length probabilities, which finds application in the evaluation of state-dependent attributes such as quality-of-service metrics.",2009,0, 3200,Linking Model-Driven Development and Software Architecture: A Case Study,"A basic premise of model driven development (MDD) is to capture all important design information in a set of formal or semi-formal models which are then automatically kept consistent by tools. The concept however is still relatively immature and there is little by way of empirically validated guidelines. In this paper we report on the use of MDD on a significant real-world project over several years. Our research found the MDD approach to be deficient in terms of modelling architectural design rules. Furthermore, the current body of literature does not offer a satisfactory solution as to how architectural design rules should be modelled. As a result developers have to rely on time-consuming and error-prone manual practices to keep a system consistent with its architecture. To realise the full benefits of MDD it is important to find ways of formalizing architectural design rules which then allow automatic enforcement of the architecture on the system model. 
Without this, architectural enforcement will remain a bottleneck in large MDD projects.",2009,0, 3201,Quasi-Renewal Time-Delay Fault-Removal Consideration in Software Reliability Modeling,"Software reliability growth models based on a nonhomogeneous Poisson process (NHPP) have been considered as one of the most effective among various models since they integrate the information regarding testing and debugging activities observed in the testing phase into the software reliability model. Although most of the existing NHPP models have progressed successfully in their estimation/prediction accuracies by modifying the assumptions with regard to the testing process, these models were developed based on the instantaneous fault-removal assumption. In this paper, we develop a generalized NHPP software reliability model considering quasi-renewal time-delay fault removal. The quasi-renewal process is employed to estimate the time delay due to identifying and prioritizing the detected faults before actual code change in the software reliability assessment. Model formulation based on the quasi-renewal time-delay assumption is provided, and the generalized mean value function (MVF) for the proposed model is derived by using the method of steps. The general solution of the MVFs for the proposed model is also obtained for some specific existing models. The numerical examples, based on a software failure data set, show that the consideration of quasi-renewal time-delay fault-removal assumption improves the descriptive properties of the model, which means that the length of time delay is getting decreased since testers and programmers adapt themselves to the working environment as testing and debugging activities are in progress.",2009,0, 3202,Compositional Dependability Evaluation for STATEMATE,"Software and system dependability is getting ever more important in embedded system design. Current industrial practice of model-based analysis is supported by state-transition diagrammatic notations such as Statecharts. State-of-the-art modelling tools like STATEMATE support safety and failure-effect analysis at design time, but restricted to qualitative properties. This paper reports on a (plug-in) extension of STATEMATE enabling the evaluation of quantitative dependability properties at design time. The extension is compositional in the way the model is augmented with probabilistic timing information. This fact is exploited in the construction of the underlying mathematical model, a uniform continuous-time Markov decision process, on which we are able to check requirements of the form: """"The probability to hit a safety-critical system configuration within a mission time of 3 hours is at most 0.01."""" We give a detailed explanation of the construction and evaluation steps making this possible, and report on a nontrivial case study of a high-speed train signalling system where the tool has been applied successfully.",2009,0, 3203,Carving and Replaying Differential Unit Test Cases from System Test Cases,"Unit test cases are focused and efficient. System tests are effective at exercising complex usage patterns. Differential unit tests (DUT) are a hybrid of unit and system tests that exploits their strengths. They are generated by carving the system components, while executing a system test case, that influence the behavior of the target unit, and then re-assembling those components so that the unit can be exercised as it was by the system test. 
In this paper we show that DUTs retain some of the advantages of unit tests, can be automatically generated, and have the potential for revealing faults related to intricate system executions. We present a framework for carving and replaying DUTs that accounts for a wide variety of strategies and tradeoffs, we implement an automated instance of the framework with several techniques to mitigate test cost and enhance flexibility and robustness, and we empirically assess the efficacy of carving and replaying DUTs on three software artifacts.",2009,0, 3204,Power-Law Distributions of Component Size in General Software Systems,"This paper begins by modeling general software systems using concepts from statistical mechanics which provide a framework for linking microscopic and macroscopic features of any complex system. This analysis provides a way of linking two features of particular interest in software systems: first the microscopic distribution of defects within components and second the macroscopic distribution of component sizes in a typical system. The former has been studied extensively, but the latter much less so. This paper shows that subject to an external constraint that the total number of defects is fixed in an equilibrium system, commonly used defect models for individual components directly imply that the distribution of component sizes in such a system will obey a power-law Pareto distribution. The paper continues by analyzing a large number of mature systems of different total sizes, different implementation languages, and very different application areas, and demonstrates that the component sizes do indeed appear to obey the predicted power-law distribution. Some possible implications of this are explored.",2009,0, 3205,Reproducible Research in Computational Harmonic Analysis,"Scientific computation is emerging as absolutely central to the scientific method. Unfortunately, it's error-prone and currently immature-traditional scientific publication is incapable of finding and rooting out errors in scientific computation-which must be recognized as a crisis. An important recent development and a necessary response to the crisis is reproducible computational research in which researchers publish the article along with the full computational environment that produces the results. In this article, the authors review their approach and how it has evolved over time, discussing the arguments for and against working reproducibly.",2009,0, 3206,Mining Software History to Improve Software Maintenance Quality: A Case Study,Errors in software updates can cause regressions failures in stable parts of the system. The Binary Change Tracer collects data on software projects and helps predict regressions in software projects.,2009,0, 3207,Analytics-Driven Dashboards Enable Leading Indicators for Requirements and Designs of Large-Scale Systems,"Mining software repositories using analytics-driven dashboards provides a unifying mechanism for understanding, evaluating, and predicting the development, management, and economics of large-scale systems and processes. Dashboards enable measurement and interactive graphical displays of complex information and support flexible analytic capabilities for user customizability and extensibility. Dashboards commonly include system requirements and design metrics because they provide leading indicators for project size, growth, and volatility. 
This article focuses on dashboards that have been used on actual large-scale software projects as well as example empirical relationships revealed by the dashboards. The empirical results focus on leading indicators for requirements and designs of large-scale software systems based on insights from two sets of software projects containing 14 systems and 23 systems.",2009,0, 3208,Prediction of Metallic Conductor Voltage Owing to Electromagnetic Coupling Using Neuro Fuzzy Modeling,"Electromagnetic interference effects of transmission lines on nearby metallic structures such as pipelines, communication lines, or railroads are a real problem, which can place both operator safety and structure integrity at risk. The level of these voltages can be reduced to a safe value in accordance with the IEEE standard 80 by designing a proper mitigation system. This paper presents a Fuzzy algorithm that can predict the level of the metallic conductor voltage. The model outlined in this paper is both fast and accurate and can accurately predict the voltage magnitude even with changing system parameters (soil resistivity, fault current, separation distance, mitigated or unmitigated system). Simulation results for three different scenarios, confirm the capability of the proposed Fuzzy system model in modeling and predicting the total voltage and are found to be in good agreement with data obtained from the CDEGS software.",2009,0, 3209,A Novel Sustained Vector Technique for the Detection of Hardware Trojans,"Intentional tampering in the internal circuit structure by implanting Trojans can result in disastrous operational consequences. While a faulty manufacturing leads to a nonfunctional device, effect of an external implant can be far more detrimental. Therefore, effective detection and diagnosis of such maligned ICs in the post silicon testing phase is imperative, if the parts are intended to be used in mission critical applications. We propose a novel sustained vector methodology that proves to be very effective in detecting the presence of a Trojan in an IC. Each vector is repeated multiple times at the input of both the genuine and the Trojan circuits that ensures the reduction of extraneous toggles within the genuine circuit. Regions showing wide variations in the power behavior are analyzed to isolate the infected gate(s). Experimental results on ISCAS benchmark circuits show that this approach can magnify the behavioral difference between a genuine and infected IC up to thirty times as compared to the previous approaches.",2009,0, 3210,An Evidential-Reasoning-Interval-Based Method for New Product Design Assessment,"A key issue in successful new product development is how to determine the best product design among a lot of feasible alternatives. In this paper, the authors present a novel rigorous assessment methodology to improve the decision-making analysis in the complex multiple-attribute environment of new product design (NPD) assessment in early product design stage, where several performance measures, like product functions and features, manufacturability and cost, quality and reliability, maintainability and serviceability, etc., must be accounted for, but no concrete and reliable data are available, in which conventional approaches cannot be applied with confidence. 
The developed evidential reasoning (ER) interval methodology is able to deal with uncertain and incomplete data and information in forms of both qualitative and quantitative nature, data expressed in interval and range, judgment with probability functions, judgment in a comparative basis, unknown embedded, etc. An NPD assessment model, incorporated with the ER-based methodology, is then developed and a software system is built accordingly for validation. An industrial case study of electrical appliances is used to illustrate the application of the developed ER methodology and the product design assessment system.",2009,0, 3211,Applying eMM in a 3D Approach to e-Learning Quality Improvement,"The e-learning maturity model (eMM) is a framework for quality improvement, by which institutions can assess and compare their capability to sustainably develop, deploy and support e-learning. This paper presents a three-dimensional (3D) approach to e-learning quality improvement. In the approach the eMM is applied in ""diagnosis"" phase as an assessment tool for e-learning process improvement in institutional context where the key elements necessary for improvement in e-learning activities are identified. The ""development"" phase of the 3D approach concentrates on putting together improvement or change packages to target areas of deficiency. In strategic point of views, the packages are translated into implementation plans in a short term, a mid term, and a long term. In ""delivery"" phase of the approach, the main focus is the human resource and marketing efforts for implementing the change packages in operational point of views. The 3D approach described can be beneficial in guiding individual institution's understanding of their e-learning capability and providing educational institutions with a roadmap for e-learning process improvement as well as providing a framework for strategic and operational planning and investment.",2009,0, 3212,Software-Intensive Equipment Fault Diagnosis Research Based on D-S Evidential Theory,"Aiming at limitations of the current fault diagnosis of the software-intensive equipment (SIE), considering the advantages of D-S evidential theory at dealing with multi-information, this paper presents a method of fault diagnosis at the decision level based on D-S evidential theory. The method establishes system structure of fault diagnosis, constructs the reasonable basic probability assignment algorithm, carries out multi-criteria fusion using D-S fusion model and method. The method is proved to be effective for fault location by instance. It makes diagnosed information more definite and improves the accuracy of diagnosis.",2009,0, 3213,Application of an Improved Particle Swarm Optimization for Fault Diagnosis,"In this paper, the feasibility of using probabilistic causal-effect model is studied and we apply it in particle swarm optimization algorithm (PSO) to classify the faults of mine hoist. In order to enhance the PSO performance, we propose the probability function to nonlinearly map the data into a feature space in probabilistic causal-effect model, and with it, fault diagnosis is simplified into optimization problem from the original complex feature set. The proposed approach is applied to fault diagnosis, and our implementation has the advantages of being general, robust, and scalable. The raw datasets obtained from mine hoist system are preprocessed and used to generate networks fault diagnosis for the system.
We studied the performance of the improved PSO algorithm and generated a Probabilistic Causal-effect network that can detect faults in the test data successfully. It can get >90% minimal diagnosis with cardinal number of fault symptom sets greater than 25.",2009,0, 3214,Towards the Validation of Plagiarism Detection Tools by Means of Grammar Evolution,"Student plagiarism is a major problem in universities worldwide. In this paper, we focus on plagiarism in answers to computer programming assignments, where students mix and/or modify one or more original solutions to obtain counterfeits. Although several software tools have been developed to help the tedious and time consuming task of detecting plagiarism, little has been done to assess their quality, because determining the real authorship of the whole submission corpus is practically impossible for markers. In this paper, we present a grammar evolution technique which generates benchmarks for testing plagiarism detection tools. Given a programming language, our technique generates a set of original solutions to an assignment, together with a set of plagiarisms of the former set which mimic the basic plagiarism techniques performed by students. The authorship of the submission corpus is predefined by the user, providing a base for the assessment and further comparison of copy-catching tools. We give empirical evidence of the suitability of our approach by studying the behavior of one advanced plagiarism detection tool (AC) on four benchmarks coded in APL2, generated with our technique.",2009,0, 3215,Rough Set Based Ensemble Prediction for Topic Specific Web Crawling,"The rapid growth of the World Wide Web had made the problem of useful resource discovery an important one in recent years. Several techniques such as focused crawling and intelligent crawling have recently been proposed for topic specific resource discovery. All these crawlers use the hypertext features behavior in order to perform topic specific resource discovery. A focused crawler uses the relevance score of the crawled page to score the unvisited URLs extracted from it. The scored URLs are then added to the frontier. Then it picks up the best URL to crawl next. Focused crawlers rely on different types of features of the crawled pages to keep the crawling scope within the desired domain and they are obtained from URL, anchor text, link structure and text contents of the parent and ancestor pages. Different focused crawling algorithms use these different set of features to predict the relevance and quality of the unvisited Web pages. In this article a combined method based on rough set theory has been proposed. It combines the available predictions using decision rules and can build much larger domain-specific collections with less noise. Our experiment in this regard has provided better Harvest rate and better target recall for focused crawling.",2009,0, 3216,Implementing a Software-Based 802.11 MAC on a Customized Platform,"Wireless network (WLAN) platforms today are based on complex and inflexible hardware solutions to meet performance and deadline constraints for media access control (MAC). Contrary to that, physical layer solutions, such as software defined radio, become more and more flexible and support several competing standards. A flexible MAC counterpart is needed for system solutions that can keep up with the rising variety of WLAN protocols. 
We revisit the case for programmable MAC implementations looking at recent WLAN standards and show the feasibility of a software-based MAC. Our exploration uses IEEE 802.11a-e as design driver, which makes great demands on Quality-of-Service, security, and throughput. We apply our SystemC framework for efficiently assessing the quality of design points and reveal trade-offs between memory requirements, core performance, application-specific optimizations, and responsiveness of MAC implementations. In the end, two embedded cores at moderate speed (< 200 MHz) with lightweight support for message passing using small buffers are sufficient for sustaining worst-case scenarios in software.",2009,0, 3217,CPS: A Cooperative-Probe Based Failure Detection Scheme for Application Layer Multicast,"The failure of non-leaf nodes in application layer multicast (ALM) tree partitions the multicast tree, which may significantly degrade the quality of multicast service. Accurate and timely failure recovery from node failures in ALM tree, which is based on the premise of an efficient failure detection scheme, is critical to minimize the disruption of service. In fact, failure detection has hardly been studied in the context of ALM. Firstly, this paper analyzes the failure detection issues in ALM and then proposes a model which is based on the relationship between parent and child nodes for it. Secondly, the cooperative-probe based failure detection scheme (CPS), in which the information about probe loss is shared among monitor nodes, is designed. Thirdly, the performance of CPS is analyzed. The numerical results show that our proposed scheme can simultaneously reduce the detection time and the probability of false positives at the cost of slightly increased control overhead compared with the basic-probe based failure detection scheme (BPS). Finally, some possible directions for the future work are discussed.",2009,0, 3218,Preprocessor of Intrusion Alerts Correlation Based on Ontology,"Intrusion detection systems (IDS) often provide a large number of poor quality alerts, which are insufficient to support rapid identification of ongoing attacks or predict an intruder's next likely goal. Several alert correlation techniques have been proposed to facilitate the analysis of intrusion alerts. However, many works operate directly upon the alerts; they do not distinguish between alerts and intruders' attack actions. In addition, many works are not grounded on any standard taxonomy; their associated classification schemes are ad hoc and localized. This paper focuses on reducing alerts to attack actions with IDMEF and CVE standards in the preprocessor of our intrusion alerts correlation system which is based on ontology. At first, we introduce our intrusion alerts correlation system. Then we present each module of the preprocessor: local preprocessor, IDMEF parser, alert to attack module and attack to ontology module.",2009,0, 3219,Usability Measurement Using a Fuzzy Simulation Approach,"Usability is a three-dimensional quality of a software product, encompassing efficiency, effectiveness and satisfaction. Satisfaction always relies on subjective judgment, but to some degree, the measurement of efficiency and effectiveness is able to be objectively calculated by some approaches, which can help designers to compare different prototype design candidates in the early stage of product development.
To address the limitation of state-redundancy and inter-uncertainty of FSM, this paper introduces an existing approach to simplify a general FSM by reducing the redundant states, and proposes a fuzzy mathematical formula for calculating the probability of state-transition. Then, an algorithm for measuring efficiency and effectiveness data on simulated FSM is designed based on the fuzzy function. This measurement approach is very appropriate to be applied on the small-medium types of embedded software in practice.",2009,0, 3220,A Method to Detect the Microshock Risk During a Surgical Procedure,"During a surgical procedure, the patient is exposed to a risk that is inherent in the use of medical electric equipment. The situations involved in this study presume that, in open thorax surgery, 60-Hz currents with milliampere units can pass through the myocardium. A current as small as 50 or 70 muA crossing the heart can cause cardiac arrest. This risk exists due to the electrical supply in the building and medical electric equipment. Even while following established standards and technical recommendations, the equipment use electricity and can cause problems, even with a few milliamperes. This study simulates by software an electrical fault possibility of this type and shows the efficiency of the proposed method for detecting microshock risks. In addition to the simulation, a laboratory experiment is conducted with an electric circuit that is designed to produce leakage currents of known values that are detected by equipment named Protegemed. This equipment, as well as the simulation, also proves the efficiency of the proposed method. The developed method and the applied equipment for this method are covered in the Brazilian Invention Patent (PI number 9701995).",2009,0, 3221,Dependable Service-Oriented Computing,"Distributed computing, in which an application runs over multiple independent computing nodes, has a higher risk of one or more nodes failing than a centralized, single-node environment. On the other hand, distributed computing can also make an overall system more dependable by detecting those faulty nodes - whether they're due to an underlying hardware or software failure or to compromised security through malicious attacks and then redistributing application components or coordinating them via predefined protocols to avoid such problems. So, traditional dependability studies focus on fault detection, protocols for redistributing application components and coordinating them across nodes, and even failure estimation using system and component characterization.",2009,0, 3222,Eliminating microarchitectural dependency from Architectural Vulnerability,"The architectural vulnerability factor (AVF) of a hardware structure is the probability that a fault in the structure will affect the output of a program. AVF captures both microarchitectural and architectural fault masking effects; therefore, AVF measurements cannot generate insight into the vulnerability of software independent of hardware. To evaluate the behavior of software in the presence of hardware faults, we must isolate the software-dependent (architecture-level masking) portion of AVF from the hardware-dependent (microarchitecture-level masking) portion, providing a quantitative basis to make reliability decisions about software independent of hardware. 
In this work, we demonstrate that the new program vulnerability factor (PVF) metric provides such a basis: PVF captures the architecture-level fault masking inherent in a program, allowing software designers to make quantitative statements about a program's tolerance to soft errors. PVF can also explain the AVF behavior of a program when executed on hardware; PVF captures the workload-driven changes in AVF for all structures. Finally, we demonstrate two practical uses for PVF: choosing algorithms and compiler optimizations to reduce a program's failure rate.",2009,0, 3223,Anomaly Detection System Using Resource Pattern Learning,"In this paper, Anomaly Detection by Resource Monitoring (Ayaka), a novel lightweight anomaly and fault detection infrastructure, is presented for Information Appliances. Ayaka provides a general monitoring method for detecting anomalies using only resource usage information on systems independent of its domain, target application and programming languages. Ayaka modifies the kernel to detect faults and uses a completely application black-box approach based on machine learning methods. It uses the clustering method to quantize the resource usage vector data and learn the normal patterns with Hidden Markov Model. In the running phase, Ayaka finds anomalies by comparing the application resource usage with learned model. The evaluation experiment indicates that our prototype system is able to detect anomalies, such as SQL injection and buffer overrun, without significant overheads.",2009,0, 3224,M-MAC: Mobility-Based Link Management Protocol for Mobile Sensor Networks,"In wireless sensor networks with mobile sensors, frequent link failures caused by node mobility generate wasteful retransmissions, resulting in increased energy consumption and decreased network performance. In this paper we propose a new link management protocol called M-MAC that can dynamically measure and predict the link quality. Based on the projected link status information each node may drop, relay, or selectively forward a packet, avoiding unnecessary retransmissions. Our simulation results show that M-MAC can effectively reduce the per-node energy consumption by as much as 25.8% while improving the network performance compared to a traditional sensor network MAC protocol in the case of both low and high mobility scenarios.",2009,0, 3225,An application of infrared sensors for electronic white stick,"Presently, blind people use a white stick as a tool for directing them when they move or walk. Although, the white stick is useful, it cannot give a high guarantee that it can protect blind people away from all level of obstacles. Many researchers have been interested in developing electronic devices to protect blind people away from obstacles with a higher guarantee. This paper introduces an obstacles avoidance alternative by using an electronic stick that serves as a tool for blind people in walking. It employs an infrared sensor for detecting obstacles along the pathway. With all level of obstacles, the infrared stick enables to detect all type of materials available in the course such as concrete, wood, metal, glass, and human being. The result also shows that the stick detects obstacles in range of 80 cm which is the same as the length of white stick. 
The stick is designed to be small and light, so that blind people can carry it comfortably.",2009,0, 3226,Mobility prediction for wireless network resource management,"User mobility prediction has been studied for various applications under diverse scenarios to improve network performance. Examples include driving habits in cell stations, roaming habits of cell phone users, student movement in campuses, etc. Use of such information enables use to better manage and adapt resources to provide improved Quality Of Service (QoS). In particular, advance reservation of resources at future destinations can provide better service to a mobile user. However, there is a certain cost associated with each of these kinds of prediction schemes. We concentrate on a campus network to predict user movement and demonstrate a better movement prediction which can be used to design the network architecture differently. This user prediction scheme uses a second order Markov chain to capture user movement history to make predictions. This is highly suitable for a campus environment because of its simplicity and can also be extended to several other network architectures.",2009,0, 3227,A Hardware Accelerated Semi Analytic Approach for Fault Trees with Repairable Components,"Fault tree analysis of complex systems with repairable components can easily be quite complicated and usually requires significant computer time and power despite significant simplifications. Invariably, software-based solutions, particularly those involving Monte Carlo simulation methods, have been used in practice to compute the top event probability. However, these methods require significant computer power and time. In this paper, a hardware-based solution is presented for solving fault trees. The methodology developed uses a new semi analytic approach embedded in a Field Programmable Gate Array (FPGA) using accelerators. Unlike previous attempts, the methodology developed properly handles repairable components in fault trees. Results from a specially written software-based simulation program confirm the accuracy and validate the efficacy of the hardware-oriented approach.",2009,0, 3228,SAFE: Scalable Autonomous Fault-tolerant Ethernet,"In this paper, we present a new fault-tolerant Ethernet scheme called SAFE (scalable autonomous fault-tolerant ethernet). SAFE scheme is based on software approach which takes place in layer 2 and layer 3 of the OSIRM. The goal of SAFE is to provide scalability, autonomous fault detection and recovery. SAFE divides a network into several subnets and limits the number of nodes in a subnet. Network can be extended by adding additional subnets. All nodes in a subnet automatically detect faults and perform fail-over by sending and receiving Ethernet based heartbeat each other. For inter-subnet fault recovery, SAFE manages master nodes in each subnet. Master nodes communicate each other using IP packets to exchange the subnet status. We also propose a master election algorithm to recover faults of master node automatically. Proposed SAFE performs efficiently for large scale network and provides fast and autonomous fault recovery.",2009,0, 3229,"A general piece-wise nonlinear library modeling format and size reduction technique for gate-level timing, SI, power, and variation analysis","Standard cell libraries are used extensively in CMOS digital circuit designs. In the past ten years, standard cell library size has increased by more than 10X. Reducing the library size is becoming a must. 
In this paper, we present an efficient piece-wise nonlinear library modeling format and library size reduction technique. Instead of using tables and vectors, this format uses base templates (curve or surface templates) to model the shape of curves or surfaces. It works very well for standard cells that exhibit similar behaviors. It is also efficient because the shape can be modeled by a single id. This technique can be applied to timing, SI, power, and variation modeling. It can be directly applied to the traditional nonlinear delay model (NLDM) and the recently developed current source models. This paper also presents a fast method for efficiently selecting the optimal template and detecting bad library data. Our technique uses standardized base templates and allows using any curves or shape functions as base templates. Base templates can be created as needed, and the optimal templates are selected during the model creation process. The fitting and modeling process becomes much simpler and more efficient. It avoids many difficult issues such as nonlinear fitting or over-fitting. The modeling accuracy can be easily controlled. Because this format is simpler, using this format could also help speed up the calculation. Several examples are provided in this paper to show this technique is generic, robust, and efficient. Results show .lib library size can be reduced by 5~10X. Experiments using an industry-leading STA tool show the impact on accuracy is negligible. This technique also helps reduce STA tool memory usage and improves runtime.",2009,0, 3230,Real-time Volumetric Reconstruction and Tracking of Hands and Face as a User Interface for Virtual Environments,"Enhancing desk-based computer environments with virtual reality technology requires natural interaction support, in particular head and hand tracking. Today's motion capture systems instrument users with intrusive hardware like optical markers or data gloves which limit the perceived realism of interactions with a virtual environment and constrain the free moving space of operators. Our work therefore focuses on the development of fault-tolerant techniques for fast, non-contact 3D hand motion capture, targeted to the application in standard office environments. This paper presents a table-top setup which utilizes vision based volumetric reconstruction and tracking of skin colored objects to integrate the user's hands and face into virtual environments. The system is based on off-the-shelf hardware components and satisfies real-time constraints.",2009,0, 3231,On Efficient Recommendations for Online Exchange Markets,"Presently several marketplace applications over online social networks are gaining popularity. An important class of applications is online market exchange of items. Examples include peerflix.com and readitswapit.co.uk. We model this problem as a social network where each user has two associated lists. The item list consists of items the user is willing to give away to other users. The wish list consists of items the user is interested in receiving. A transaction involves a user giving an item to another user. Users are motivated to transact in expectation of realizing their wishes. Wishes may be realized by a pair of users swapping items corresponding to each other's wishes, but more generally by means of users exchanging items through a cycle, where each user gives an item to the next user in the cycle, in accordance with the receiving user's wishes.
The problem we consider is how to efficiently generate recommendations for item exchange cycles, for users in a social network. Each cycle has a value which is determined by the number of items exchanged through the cycle. We focus on the problem of generating recommendations under two models. In the deterministic model, the value of a recommendation is the total number of items exchanged through cycles. In the probabilistic model, there is a probability associated with a user transacting with another user and a user being willing to trade an item for another. The value of a recommendation then is the expected number of items exchanged. We show that under both models, the problem of determining an optimal recommendation is NP-complete and develop efficient approximation algorithms for both. We show that our algorithms have guaranteed approximation factors of 2k (for greedy), 2k -1 (for local search), and (2k + 1)/3 (for combination of greedy and local search) where k is the max cycle length. We also develop a so-called maximal algorithm, which does not have an approximation guarantee but is more efficient. We conduct a comprehensive set of experiments. Our experiments show that in practice, the approximation quality achieved by maximal is competitive w.r.t. that of the other algorithms. On the other hand, maximal outperforms all other algorithms on scalability w.r.t. network size.",2009,0, 3232,Tracking High Quality Clusters over Uncertain Data Streams,"Recently, data mining over uncertain data streams has attracted a lot of attentions because of the widely existed imprecise data generated from a variety of streaming applications. In this paper, we try to resolve the problem of clustering over uncertain data streams. Facing uncertain tuples with different probability distributions, the clustering algorithm should not only consider the tuple value but also emphasis on its uncertainty. To fulfill these dual purposes, a metric named tuple uncertainty will be integrated into the overall procedure of clustering. Firstly, we survey uncertain data model and propose our uncertainty measurement and corresponding properties. Secondly, based on such uncertainty quantification method, we provide a two phase stream clustering algorithm and elaborate implementation detail. Finally, performance experiments over a number of real and synthetic data sets demonstrate the effectiveness and efficiency of our method.",2009,0, 3233,Towards Composition as a Service - A Quality of Service Driven Approach,"Software as a Service (SaaS) and the possibility to compose Web services provisioned over the Internet are important assets for a service-oriented architecture (SOA). However, the complexity and time for developing and provisioning a composite service is very high and it is generally an error-prone task. In this paper we address these issues by describing a semi-automated ""Composition as a Service"" (CaaS) approach combined with a domain-specific language called VCL (Vienna composition language). The proposed approach facilitates rapid development and provisioning of composite services by specifying what to compose in a constraint-hierarchy based way using VCL. Invoking the composition service triggers the composition process and upon success the newly composed service is immediately deployed and available.
This solution requires no client-side composition infrastructure because it is transparently encapsulated in the CaaS infrastructure.",2009,0, 3234,Using Quality Audits to Assess Software Course Projects,"Assessing software engineering course projects should evaluate the achievement of proposed goals, as well as the compliance with mandated standards and processes. This might require the examination of a significant volume of materials, following a consistent, repetitive and effective procedure, even for small projects. Quality assurance must be performed according to similar requirements. This paper discusses the use of quality audits, a comprehensive quality assurance procedure, as a tool for the assessment of course projects.",2009,0, 3235,Automatic Failure Diagnosis Support in Distributed Large-Scale Software Systems Based on Timing Behavior Anomaly Correlation,"Manual failure diagnosis in large-scale software systems is time-consuming and error-prone. Automatic failure diagnosis support mechanisms can potentially narrow down, or even localize faults within a very short time which both helps to preserve system availability. A large class of automatic failure diagnosis approaches consists of two steps: 1) computation of component anomaly scores; 2) global correlation of the anomaly scores for fault localization. In this paper, we present an architecture-centric approach for the second step. In our approach, component anomaly scores are correlated based on architectural dependency graphs of the software system and a rule set to address error propagation. Moreover, the results are graphically visualized in order to support fault localization and to enhance maintainability. The visualization combines architectural diagrams automatically derived from monitoring data with failure diagnosis results. In a case study, the approach is applied to a distributed sample Web application which is subject to fault injection.",2009,0, 3236,3rd International Workshop on Software Quality and Maintainability,"Software is playing a crucial role in modern societies. Not only do people rely on it for their daily operations or business, but for their lives as well. For this reason correct and consistent behavior of software systems is a fundamental part of end user expectations. Additionally, businesses require cost- effective production, maintenance, and operation of their systems. Thus, the demand for software quality is increasing and is setting it as a differentiator for the success or failure of a software product. In fact, high quality software is becoming not just a competitive advantage but a necessary factor for companies to be successful. The main question that arises now is how quality is measured. What, where and when we assess and assure quality, are still open issues. Many views have been expressed about software quality attributes, including maintainability, evolvability, portability, robustness, reliability, usability, and efficiency. These have been formulated in standards such as ISO/IEC-9126 and CMMI. However, the debate about quality and maintainability between software producers, vendors and users is ongoing, while organizations need the ability to evaluate from multiple angles the software systems that they use or develop. So, is Software quality in the eye of the beholder? 
This workshop session aims at feeding into this debate by establishing what the state of the practice and the way forward is.",2009,0, 3237,A Tool for Enterprise Architecture Analysis of Maintainability,"A tool for enterprise architecture analysis using a probabilistic mathematical framework is demonstrated. The model-view-controller tool architecture is outlined, before the use of the tool is considered. A sample abstract maintainability model is created, showing the dependence of system maintainability on documentation quality, developer expertise, etc. Finally, a concrete model of an ERP system is discussed.",2009,0, 3238,An Image Based Auto-Focusing Algorithm forDigital Fundus Photography,"In fundus photography, the task of fine focusing the image is demanding and lack of focus is quite often the cause of suboptimal photographs. The introduction of digital cameras has provided an opportunity to automate the task of focusing. We have developed a software algorithm capable of identifying best focus. The auto-focus (AF) method is based on an algorithm we developed to assess the sharpness of an image. The AF algorithm was tested in the prototype of a semi-automated nonmydriatic fundus camera designed to screen in the primary care environment for major eye diseases. A series of images was acquired in volunteers while focusing the camera on the fundus. The image with the best focus was determined by the AF algorithm and compared to the assessment of two masked readers. A set of fundus images was obtained in 26 eyes of 20 normal subjects and 42 eyes of 28 glaucoma patients. The 95% limits of agreement between the readers and the AF algorithm were -2.56 to 2.93 and -3.7 to 3.84 diopter and the bias was 0.09 and 0.71 diopter, for the two readers respectively. On average, the readers agreed with the AF algorithm on the best correction within less than 3/4 diopter. The intraobserver repeatability was 0.94 and 1.87 diopter, for the two readers respectively, indicating that the limit of agreement with the AF algorithm was determined predominantly by the repeatability of each reader. An auto-focus algorithm for digital fundus photography can identify the best focus reliably and objectively. It may improve the quality of fundus images by easing the task of the photographer.",2009,0, 3239,Generating non-uniform distributions for fault injection to emulate real network behavior in test campaigns,"Fault injection is an efficient technique to evaluate the robustness of computer systems and their fault tolerance strategies. In order to obtain accurate results from fault injection based tests, it is important to mimic real conditions during a test campaign. When testing dependability attributes of network applications the real faulty behavior of networks must be closely emulated. We show how probability distributions can be used to inject communication faults that closely resemble the behavior observed in real network environments. To demonstrate the strengths of this strategy we develop a reusable and extensible entity called FIEND, integrate it to a fault injector and use the resulting tool to run test experiments injecting non-uniform distributed faults in a network application taken as example.",2009,0, 3240,Using software invariants for dynamic detection of transient errors,"Software based error detection techniques usually imply modification of the algorithms to be hardened, and almost certainly also demand a huge memory footprint and/or execution time overhead. 
In the software engineering field, program invariants have been proposed as a means to check program correctness during the development cycle. In this work we discuss the use of software invariants verification as a low cost alternative to detect soft errors after the execution of a given algorithm. A clear advantage is that this approach does not require any change in the algorithm to be hardened, and in case its computational cost and memory overhead are proven to be much smaller than duplication for a given algorithm, it may become a feasible option for hardening that algorithm against soft errors. The results of fault injection experiments performed with different algorithms are analyzed and some guidelines for future research concerning this technique are proposed.",2009,0, 3241,Recovery scheme for hardening system on programmable chips,"The checkpoint and rollback recovery techniques enable a system to survive failures by periodically saving a known good snapshot of the system's state, and rolling back to it in case a failure is detected. The approach is particularly interesting for developing critical systems on programmable chips that today offers multiple embedded processor cores, as well as configurable fabric that can be used to implement error detection and correction mechanisms. This paper presents an approach that aims at developing a safety- or mission-critical systems on programmable chip able to tolerate soft errors by exploiting processor duplication to implement error detection, as well as checkpoint and rollback recovery to correct errors in a cost-efficient manner. We developed a prototypical implementation of the proposed approach targeting the Leon processor core, and we collected preliminary results that outline the capability of the technique to tolerate soft errors affecting the processor's internal registers. This paper is the first step toward the definition of an automatic design flow for hardening processor cores (either hard of soft) embedded in programmable chips, like for example SRAM-based FPGAs.",2009,0, 3242,A Flexible Software-Based Framework for Online Detection of Hardware Defects,"This work proposes a new, software-based, defect detection and diagnosis technique. We introduce a novel set of instructions, called access-control extensions (ACE), that can access and control the microprocessor's internal state. Special firmware periodically suspends microprocessor execution and uses the ACE instructions to run directed tests on the hardware. When a hardware defect is present, these tests can diagnose and locate it, and then activate system repair through resource reconfiguration. The software nature of our framework makes it flexible: testing techniques can be modified/upgraded in the field to trade-off performance with reliability without requiring any change to the hardware. We describe and evaluate different execution models for using the ACE framework. We also describe how the proposed ACE framework can be extended and utilized to improve the quality of post-silicon debugging and manufacturing testing of modern processors. We evaluated our technique on a commercial chip-multiprocessor based on Sun's Niagara and found that it can provide very high coverage, with 99.22 percent of all silicon defects detected. Moreover, our results show that the average performance overhead of software-based testing is only 5.5 percent. 
Based on a detailed register transfer level (RTL) implementation of our technique, we find its area and power consumption overheads to be modest, with a 5.8 percent increase in total chip area and a 4 percent increase in the chip's overall power consumption.",2009,0, 3243,An Initial Characterization of Industrial Graphical User Interface Systems,"To date we have developed and applied numerous model-based GUI testing techniques; however, we are unable to provide definitive improvement schemes to real-world GUI test planners, as our data was derived from open source applications, small compared to industrial systems. This paper presents a study of three industrial GUI-based software systems developed at ABB, including data on classified defects detected during late-phase testing and customer usage, test suites, and source code change metrics. The results show that (1) 50% of the defects found through the GUI are categorized as data access and handling, control flow and sequencing, correctness, and processing defects, (2) system crashes exposed defects 12-19% of the time, and (3) GUI and non-GUI components are constructed differently, in terms of source code metrics.",2009,0, 3244,"A Flexible Framework for Quality Assurance of Software Artefacts with Applications to Java, UML, and TTCN-3 Test Specifications","Manual reviews and inspections of software artefacts are time consuming and thus, automated analysis tools have been developed to support the quality assurance of software artefacts. Usually, software analysis tools are implemented for analysing only one specific language as target and for performing only one class of analyses. Furthermore, most software analysis tools support only common programming languages, but not those domain-specific languages that are used in a test process. As a solution, a framework for software analysis is presented that is based on a flexible, yet high-level facade layer that mediates between analysis rules and the underlying target software artefact; the analysis rules are specified using high-level XQuery expressions. Hence, further rules can be quickly added and new types of software artefacts can be analysed without needing to adapt the existing analysis rules. The applicability of this approach is demonstrated by examples from using this framework to calculate metrics and detect bad smells in Java source code, in UML models, and in test specifications written using the testing and test control notations (TTCN-3).",2009,0, 3245,Quality Assurance of Software Applications Using the In Vivo Testing Approach,"Software products released into the field typically have some number of residual defects that either were not detected or could not have been detected during testing. This may be the result of flaws in the test cases themselves, incorrect assumptions made during the creation of test cases, or the infeasibility of testing the sheer number of possible configurations for a complex system; these defects may also be due to application states that were not considered during lab testing, or corrupted states that could arise due to a security violation. One approach to this problem is to continue to test these applications even after deployment, in hopes of finding any remaining flaws. In this paper, we present a testing methodology we call in vivo testing, in which tests are continuously executed in the deployment environment. 
We also describe a type of test we call in vivo tests that are specifically designed for use with such an approach: these tests execute within the current state of the program (rather than by creating a clean slate) without affecting or altering that state from the perspective of the end-user. We discuss the approach and the prototype testing framework for Java applications called Invite. We also provide the results of case studies that demonstrate Invite's effectiveness and efficiency.",2009,0, 3246,The Effectiveness of Automated Static Analysis Tools for Fault Detection and Refactoring Prediction,"Many automated static analysis (ASA) tools have been developed in recent years for detecting software anomalies. The aim of these tools is to help developers to eliminate software defects at early stages and produce more reliable software at a lower cost. Determining the effectiveness of ASA tools requires empirical evaluation. This study evaluates coding concerns reported by three ASA tools on two open source software (OSS) projects with respect to two types of modifications performed in the studied software CVS repositories: corrections of faults that caused failures, and refactoring modifications. The results show that fewer than 3% of the detected faults correspond to the coding concerns reported by the ASA tools. ASA tools were more effective in identifying refactoring modifications and corresponded to about 71% of them. More than 96% of the coding concerns were false positives that do not relate to any fault or refactoring modification.",2009,0, 3247,Transforming and Selecting Functional Test Cases for Security Policy Testing,"In this paper, we consider typical applications in which the business logic is separated from the access control logic, implemented in an independent component, called the Policy Decision Point (PDP). The execution of functions in the business logic should thus include calls to the PDP, which grants or denies the access to the protected resources/functionalities of the system, depending on the way the PDP has been configured. The task of testing the correctness of the implementation of the security policy is tedious and costly. In this paper, we propose a new approach to reuse and automatically transform existing functional test cases for specifically testing the security mechanisms. The method includes a three-step technique based on mutation applied to security policies (RBAC, XACML, OrBAC) and AOP for transforming automatically functional test cases into security policy test cases. The method is applied to Java programs and provides tools for performing the steps from the dynamic analyses of impacted test cases to their transformation. Three empirical case studies provide fruitful results and a first proof of concepts for this approach, e.g. by comparing its efficiency to an error-prone manual adaptation task.",2009,0, 3248,Predicting Attack-prone Components,"Limited resources preclude software engineers from finding and fixing all vulnerabilities in a software system. This limitation necessitates security risk management where security efforts are prioritized to the highest risk vulnerabilities that cause the most damage to the end user. We created a predictive model that identifies the software components that pose the highest security risk in order to prioritize security fortification efforts. 
The input variables to our model are available early in the software life cycle and include security-related static analysis tool warnings, code churn and size, and faults identified by manual inspections. These metrics are validated against vulnerabilities reported by testing and those found in the field. We evaluated our model on a large Cisco software system and found that 75.6% of the system's vulnerable components are in the top 18.6% of the components predicted to be vulnerable. The model's false positive rate is 47.4% of this top 18.6% or 9.1% of the total system components. We quantified the goodness of fit of our model to the Cisco data set using a receiver operating characteristic curve that shows 94.4% of the area is under the curve.",2009,0, 3249,Dynamic Regression Test Selection Based on a File Cache An Industrial Evaluation,"This paper presents a simple method that computes test case coverage information from information on what files were updated to fix a fault found by the test case. It uses a cache to monitor fault-prone files and recommends test cases to rerun to cover updated files. We present an evaluation of the method during two months of development of a large, industrial, embedded, real-time software system. Our results show that the method is effective, reaching weekly cache hit rates in the range 50-80%.",2009,0, 3250,Clustering and Tailoring User Session Data for Testing Web Applications,"Web applications have become major driving forces for world business. Effective and efficient testing of evolving Web applications is essential for providing reliable services. In this paper, we present a user session based testing technique that clusters user sessions based on the service profile and selects a set of representative user sessions from each cluster. Then each selected user session is tailored by augmentation with additional requests to cover the dependence relationships between Web pages. The created test suite not only can significantly reduce the size of the collected user sessions, but is also viable to exercise fault sensitive paths. We conducted two empirical studies to investigate the effectiveness of our approach- one was in a controlled environment using seeded faults, and the other was conducted on an industrial system with real faults. The results demonstrate that our approach consistently detected the majority of the known faults by using a relatively small number of test cases in both studies.",2009,0, 3251,Using Logic Criterion Feasibility to Reduce Test Set Size While Guaranteeing Fault Detection,"Some software testing logic coverage criteria demand inputs that guarantee detection of a large set of fault types. One powerful such criterion, MUMCUT, is composed of three criteria, where each constituent criterion ensures the detection of specific fault types. In practice, the criteria may overlap in terms of fault types detected, thereby leading to numerous redundant tests, but due to the unfortunate fact that infeasible test requirements don't result in tests, all the constituent criteria are needed. The key insight of this paper is that analysis of the feasibility of the constituent criteria can be used to reduce test set size without sacrificing fault detection. In other words, expensive criteria can be reserved for use only when they are actually necessary. This paper introduces a new logic criterion, Minimal-MUMCUT, based on this insight. 
Given a predicate in minimal DNF, a determination is made of which constituent criteria are feasible at the level of individual literals and terms. This in turn determines which criteria are necessary, again at the level of individual literals and terms. This paper presents an empirical study using predicates in avionics software. The study found that Minimal-MUMCUT reduces test set size -- without sacrificing fault detection -- to as little as a few percent of the test set size needed if feasibility is not considered.",2009,0, 3252,A Simple Coverage-Based Locator for Multiple Faults,"Fault localization helps spotting faults in source code by exploiting automatically collected data. Deviating from other fault locators relying on hit spectra or test coverage information, we do not compute the likelihood of each possible fault location by evaluating its participation in failed and passed test cases, but rather search for each failed test case the set of possible fault locations explaining its failure. Assuming a probability distribution of the number of faults as the only other input, we can compute the probability of faultiness for each possible fault location in presence of arbitrarily many faults. As the main threat to the viability of our approach we identify its inherent complexity, for which we present two simple bypasses. First experiments show that while leaving room for improvement, our approach is already feasible in practical cases.",2009,0, 3253,A Test-Driven Approach to Developing Pointcut Descriptors in AspectJ,"Aspect-oriented programming (AOP) languages introduce new constructs that can lead to new types of faults, which must be targeted by testing techniques. In particular, AOP languages such as AspectJ use a pointcut descriptor (PCD) that provides a convenient way to declaratively specify a set of joinpoints in the program where the aspect should be woven. However, a major difficulty when testing that the PCD matches the intended set of joinpoints is the lack of precise specification for this set other than the PCD itself. In this paper, we propose a test-driven approach for the development and validation of the PCD. We developed a tool, AdviceTracer, which enriches the JUnit API with new types of assertions that can be used to specify the expected joinpoints. In order to validate our approach, we also developed a mutation tool that systematically injects faults into PCDs. Using these two tools, we perform experiments to validate that our approach can be applied for specifying expected joinpoints and for detecting faults in the PCD.",2009,0, 3254,Using a Fault Hierarchy to Improve the Efficiency of DNF Logic Mutation Testing,"Mutation testing is a technique for generating high quality test data. However, logic mutation testing is currently inefficient for three reasons. One, the same mutant is generated more than once. Two, mutants are generated that are guaranteed to be killed by a test that kills some other generated mutant. Three, mutants that when killed are guaranteed to kill many other mutants are not generated as valuable mutation operators are missing. 
This paper improves logic mutation testing by 1) extending a logic fault hierarchy to include existing logic mutation operators, 2) introducing new logic mutation operators based on existing faults in the hierarchy, 3) introducing new logic mutation operators having no corresponding faults in the hierarchy and extending the hierarchy to include them, and 4) addressing the precise effects of equivalent mutants on the fault hierarchy. An empirical study using minimal DNF predicates in avionics software showed that a new logic mutation testing approach generates fewer mutants, detects more faults, and outperforms an existing logic criterion.",2009,0, 3255,Assertion-Based Validation of Modified Programs,"Assertions are used to detect incorrect program behavior during testing and debugging. Assertions when combined with automated test data generation may increase the confidence that certain types of faults are not present in the program. If the test data generation process is not able to violate an assertion, a developer may have confidence that the fault ""captured"" by the assertion is not present. We refer to this process as an assertion-based validation. Assertion-based validation may be very expensive especially when a large number of assertions are present in a program. During maintenance, after a modification is made to the program, all unchanged assertions need to be revalidated to make sure that certain types of faults are present. In this paper we present an approach that may reduce the cost of assertion-based revalidation after modifications are made to the program by identifying assertions that need to be revalidated or only partially revalidated. The presented approach is based on program dependence analysis and testability transformation. The results of a small case study indicate that the presented approach may significantly reduce the effort during the process of assertion-based revalidation.",2009,0, 3256,Using JML Runtime Assertion Checking to Automate Metamorphic Testing in Applications without Test Oracles,"It is challenging to test applications and functions for which the correct output for arbitrary input cannot be known in advance, e.g. some computational science or machine learning applications. In the absence of a test oracle, one approach to testing these applications is to use metamorphic testing: existing test case input is modified to produce new test cases in such a manner that, when given the new input, the application should produce an output that can easily be computed based on the original output. That is, if input x produces output f(x), then we create input x' such that we can predict f(x') based on f(x); if the application or function does not produce the expected output, then a defect must exist, and either f(x) or f(x') (or both) is wrong. By using metamorphic testing, we are able to provide built-in ""pseudo-oracles"" for these so-called ""nontestable programs"" that have no test oracles. In this paper, we describe an approach in which a function's metamorphic properties are specified using an extension to the Java modeling language (JML), a behavioral interface specification language that is used to support the ""design by contract"" paradigm in Java applications. Our implementation, called Corduroy, pre-processes these specifications and generates test code that can be executed using JML runtime assertion checking, for ensuring that the specifications hold during program execution.
In addition to presenting our approach and implementation, we also describe our findings from case studies in which we apply our technique to applications without test oracles.",2009,0, 3257,Finding All Small Error-Prone Substructures in LDPC Codes,"It is proven in this work that it is NP-complete to exhaustively enumerate small error-prone substructures in arbitrary, finite-length low-density parity-check (LDPC) codes. Two error-prone patterns of interest include stopping sets for binary erasure channels (BECs) and trapping sets for general memoryless symmetric channels. Despite the provable hardness of the problem, this work provides an exhaustive enumeration algorithm that is computationally affordable when applied to codes of practical short lengths n ≈ 500. By exploiting the sparse connectivity of LDPC codes, the stopping sets of size ≤ 13 and the trapping sets of size ≤ 11 can be exhaustively enumerated. The central theorem behind the proposed algorithm is a new provably tight upper bound on the error rates of iterative decoding over BECs. Based on a tree-pruning technique, this upper bound can be iteratively sharpened until its asymptotic order equals that of the error floor. This feature distinguishes the proposed algorithm from existing non-exhaustive ones that correspond to finding lower bounds of the error floor. The upper bound also provides a worst case performance guarantee that is crucial to optimizing LDPC codes when the target error rate is beyond the reach of Monte Carlo simulation. Numerical experiments on both randomly and algebraically constructed LDPC codes demonstrate the efficiency of the search algorithm and its significant value for finite-length code optimization.",2009,0, 3258,Uplink array concept demonstration with the EPOXI spacecraft,"Uplink array technology is currently being developed for NASA's deep space network (DSN), to provide greater range and data throughput for future NASA missions, including manned missions to Mars and exploratory missions to the outer planets, the Kuiper belt, and beyond. The DSN uplink arrays employ N microwave antennas transmitting at X-band to produce signals that add coherently at the spacecraft, thereby providing a power gain of N² over a single antenna. This gain can be traded off directly for N² higher data rate at a given distance such as Mars, providing for example HD quality video broadcast from earth to a future manned mission, or it can provide a given data-rate for commands and software uploads at a distance N times greater than possible with a single antenna. The uplink arraying concept has been recently demonstrated using the three operational 34-meter antennas of the Apollo complex at Goldstone, CA, which transmitted arrayed signals to the EPOXI spacecraft. Both two-element and three-element uplink arrays were configured, and the theoretical array gains of 6 dB and 9.5 dB, respectively, were demonstrated experimentally. This required initial phasing of the array elements, the generation of accurate frequency predicts to maintain phase from each antenna despite relative velocity components due to earth-rotation and spacecraft trajectory, and monitoring of the ground system phase for possible drifts caused by thermal effects over the 16 km fiber-optic signal distribution network. This paper provides a description of the equipment and techniques used to demonstrate the uplink arraying concept in a relevant operational environment.
Data collected from the EPOXI spacecraft was analyzed to verify array calibration, array gain, and system stability over the entire 5 hour duration of this experiment.",2009,0,4161 3259,Hop-by-hop transport for satellite networks,"This paper research the transport control scheme for satellite networks. The special characteristics make TCP protocols incapable of providing satisfying service for satellite networks. We analyze the performance of hop-by-hop and end-to-end transport under Bernoulli loss model. Then we propose a novel protocol based on hop-by-hop acknowledgment against the traditional end-to-end TCP for satellite networks, which uses ACK associated with NACK together, allows out-of-sequence packet transmission and has the capability of re-routing in arbitrary node. Both theoretical and simulation results showed that hop-by-hop communication scheme outperforms TCP under highly error prone, multi-hop and long delay conditions.",2009,0, 3260,Defining requirements for advanced PHM technologies for optimal reliability centered maintenance,"Condition based maintenance plus (CBM+) can be described as optimal condition based maintenance (CBM) procedures defined by applying the principles and process of reliability centered maintenance. This approach offers a rigorous and disciplined method, based on the system FMECA, to determine the least cost maintenance policy and procedures that are consistent with acceptable levels of safety and readiness, applying available prognosis and health management tools. It is argued that the same process is the preferred method to define requirements for advanced PHM technologies based on RCM derived capability gaps, preferably accounting for synergies with concurrent continuous (maintenance) process improvement. There may be synergies in coupling this process with Continuous Process Improvement programs, such as NAVAIR's AIRSPEED. In discussing this proposed approach, several issues are addressed. The first is the question of interdependence between incommensurable safety, affordability and readiness objectives and metrics. The second is the problem of uncertainty in the FMECA failure modes and probabilities until the system and equipment has accumulated considerable service history, while still subject to the emergence or aggravation of failure modes by mission exposure, component deterioration, quality escapes and intentional configuration change. In practice it may be necessary to fall back on less rigorous (semi)qualitative methods to target innovation. In any case, more adaptable PHM architectures are needed to mitigate inevitable uncertainty in requirements.. Note: the terms equipment health management (also, more specifically, engine health management) [EHM] and prognostic health management (or prognosis and health management) [PHM] are used with little distinction in this paper, but in general PHM is restricted to methods generating estimates of remaining useful life.",2009,0, 3261,System effectiveness evaluation of software quality of REA (radio-electronic apparatus),"The concept of design, production and exploitation quality systems probabilistic estimation efficacy is proposed. The probabilistic optimization models of such systems with probable control contours are proposed.",2009,0, 3262,A Novel Dual-Core Architecture for the Analysis of DNA Microarray Images,"A deoxyribonucleic acid (DNA) microarray is a collection of microscopic DNA spots attached to a solid surface, such as a glass, plastic, or silicon chip forming an array. 
DNA microarray technologies are an essential part of modern biomedical research. DNA microarray allows compressing hundreds of thousands of different DNA nucleotide sequences in a little microscope glass and permits having all this information on a single image. The analysis of DNA microarray images allows the identification of gene expressions to draw biological conclusions for applications ranging from genetic profiling to diagnosis of cancer. Unfortunately, DNA microarray technology has a high variation of data quality. Therefore, to obtain reliable results, complex and extensive image analysis algorithms should be applied before the actual DNA microarray information can be used for biomedical purposes. In this paper, we present a novel hardware architecture that is specifically designed to analyze DNA microarray images. The architecture is based on a dual-core system that implements several units working in a single-instruction/multiple-data fashion. A field-programmable-gate-array (FPGA)-based prototypal implementation of the proposed architecture is presented. The effectiveness of the novel dual-core architecture is demonstrated by several analyses performed on original DNA microarray images, showing that the capability of detecting DNA spots increases by more than 30% with respect to that of previously developed software techniques.",2009,0, 3263,Analysis and enhancement of software dynamic defect models,"Dynamic defect models are used to estimate the number of defects in a software project, predict the release date and required effort of maintenance, and measure the progress and quality of development. The literature suggests that defects projection over time follows a Rayleigh distribution. In this paper, data concerning defects are collected from several software projects and products. Data projection showed that the previous assumption of the Rayleigh distribution is not valid for current projects which are much more complex. Empirical data collected showed that defect distribution in even simpler software projects cannot be represented by the Rayleigh curves due to the adoption of several types of testing on different phases in the project lifecycle. The findings of this paper enhance the well known Putnam's defect model and propose new performance criteria to support the changes occurred during the project. Results of fitting and predicting the collected data show the superiority of the new enhanced defect model over the original defect model.",2009,0, 3264,ESoftCheck: Removal of Non-vital Checks for Fault Tolerance,"As semiconductor technology scales into the deep submicron regime the occurrence of transient or soft errors will increase. This will require new approaches to error detection. Software checking approaches are attractive because they require little hardware modification and can be easily adjusted to fit different reliability and performance requirements. Unfortunately, software checking adds a significant performance overhead. In this paper we present ESoftCheck, a set of compiler optimization techniques to determine which are the vital checks, that is, the minimum number of checks that are necessary to detect an error and roll back to a correct program state. ESoftCheck identifies the vital checks on platforms where registers are hardware-protected with parity or ECC, when there are redundant checks and when checks appear in loops.
ESoftCheck also provides knobs to trade reliability for performance based on the support for recovery and the degree of trustiness of the operations. Our experimental results on a Pentium 4 show that ESoftCheck can obtain 27.1% performance improvement without losing fault coverage.",2009,0, 3265,Fault-Tolerant Algorithm for Distributed Primary Detection in Cognitive Radio Networks,"This paper attempts to identify the reliability of detection of licensed primary transmission based on cooperative sensing in cognitive radio networks. With a parallel fusion network model, the correlation issue of the received signals between the nodes in the worst case is derived. Leveraging the property of false sensing data due to malfunctioning or malicious software, the optimizing strategy, namely fault-tolerant algorithm for distributed detection (FTDD) is proposed, and quantitative analysis of false alarm reliability and detection probability under the scheme is presented. In particular, the tradeoff between licensed transmissions and user cooperation among nodes is discussed. Simulation experiments are also used to evaluate the fusion performance under practical settings. The model and analytic results provide useful tools for reliability analysis for other wireless decentralization-based applications (e.g., those involving robust spectrum sensing).",2009,0, 3266,Allocation of extra components to ki-out-of-mi subsystems using the NPI method,"The allocation of components to systems remains a challenge due to the components success and failure rate which is unpredictable to design engineers. Optimal algorithms often assume a restricted class for the allocation and yet still require a high-degree polynomial time complexity. Heuristic methods may be time-efficient but they do not guarantee optimality of the allocation. This paper introduces a new and efficient model of a system consisting of ki-out-of-mi subsystems for allocation of extra components. This model is more general than the traditional k-out-of-n one. This system which consists of subsystem i (i = 1, 2, ..., x) is working if at least ki(out of mi) components are working. All subsystems are independent and the components within subsystem i (i = 1, 2, ..., x) are exchangeable. Components exchangeable with those of each subsystem have been tested. For subsystem i, ni components have been tested for faults and none were discovered in si of these ni components. We assume zero-failure testing, that is, we are assuming that none of the components tested is faulty so si = ni, i = 1, 2, ..., x. We are using lower and upper probability that a system consisting of x independent ki-out-of-mi subsystems works. This allocation problem dealt with in this paper can be categorised as either to which subsystem the expected number of extra components should be allocated subject to achieving maximum reliability (Lower probability) of the system consisting subsystems, so si = ni, i = 1, 2, ..., x. The resulting component allocation problems are too complicated to be solved by traditional approaches; therefore, the nonparametric predictive inference (NPI) method is used to solve them. These results show that NPI is a powerful tool for solving these kinds of problems which are helpful for design engineers to make optimal decisi- ons. 
The paper also includes suggestions for further research.",2009,0, 3267,A Layout- and Data-Driven Generic Simulation Model for Semiconductor Fabs,"Simulation has drawn much attention as an analysis tool because it is often the only tool that has the capability of modeling the details of the semiconductor lines. However, building a simulation model of a semiconductor line is time-consuming and error-prone because of the complexity of the line. This paper proposes a generic simulation modeling framework to reduce the simulation model build time. The framework consists of a layout modeling software called AutoLay and a data-driven generic simulation model called AutoLogic. It can be used to develop an integrated simulation model of production processes and material handling processes in a short period of time. Early users of our framework reported that the initial model building time was reduced from two weeks to a half day.",2009,0, 3268,"REDFLAG a Run-timE, Distributed, Flexible, Lightweight, And Generic fault detection service for data-driven wireless sensor applications","Increased interest in Wireless Sensor Networks (WSNs) by scientists and engineers is forcing WSN research to focus on application requirements. Data is available as never before in many fields of study; practitioners are now burdened with the challenge of doing data-rich research rather than being data-starved. In-situ sensors can be prone to errors, links between nodes are often unreliable, and nodes may become unresponsive in harsh environments, leaving to researchers the onerous task of deciphering often anomalous data. Presented here is the REDFLAG fault detection service for WSN applications, a Run-timE, Distributed, Flexible, detector of faults, that is also Lightweight And Generic. REDFLAG addresses the two most worrisome issues in data-driven wireless sensor applications: abnormal data and missing data. REDFLAG exposes faults as they occur by using distributed algorithms in order to conserve energy. Simulation results show that REDFLAG is lightweight both in terms of footprint and required power resources while ensuring satisfactory detection and diagnosis accuracy. Because REDFLAG is unrestrictive, it is generically available to a myriad of applications and scenarios.",2009,0, 3269,BitTorrent Worm Sensor Network : P2P Worms Detection and Containment,"Peer-to-peer (p2p) networking technology has gained popularity as an efficient mechanism for users to obtain free services without the need for centralized servers. Protecting these networks from intruders and attackers is a real challenge. One of the constant threats on P2P networks is the propagation of active worms. In 2007, Worms have caused damages worth the amount of 8,391,800 USD in the United States alone. Nowadays, BitTorrent is becoming more and more popular, mainly due to its fair load distribution mechanism. Unfortunately, BitTorrent is particularly vulnerable to active worms. In this paper, we propose a novel worm detection system in BitTorrent and evaluate it. We show that our solution can detect various worm scans before 1% of the vulnerable hosts are infected in worst case scenarios. 
Our solution, the BitTorrent worm sensor network, is built over a network of immunized agents, which their main job is to efficiently stop worm spread in BitTorrent.",2009,0, 3270,The Perk Station: Systems design for percutaneous intervention training suite,"Image-guided percutaneous needle-based surgery has become part of routine clinical practice in performing procedures such as biopsies, injections and therapeutic implants. A novice physician typically performs needle interventions under the supervision of a senior physician; a slow and inherently subjective training process that lacks objective, quantitative assessment of the surgical skill and performance. Current evaluations of needle-based surgery are also rather simplistic: usually only needle tip accuracy and procedure time are recorded, the latter being used as an indicator of economical feasibility. Shortening the learning curve and increasing procedural consistency are critically important factors in assuring high-quality medical care for all segments of society. This paper describes the design and development of a laboratory validation system for measuring operator performance under different assistance techniques for needle-based surgical guidance systems - The perk station. The initial focus of the perk station is to assess and compare three different techniques: the image overlay, bi-plane laser guide, and conventional freehand. The integrated system comprises of a flat display with semi-transparent mirror (image overlay), bi-plane laser guide, a magnetic tracking system, a tracked needle, a phantom, and a stand-alone laptop computer running the planning and guidance software. The prototype Perk Station has been successfully developed, the associated needle insertion phantoms have been built, and the graphic surgical interface has been implemented.",2009,0, 3271,The R. A. Evans - P. K. McElroy Award for the 2008 Best Paper,"Warranty data is a valuable source of information for analyzing the failure characteristics of a product. Knowing the failure trends of a product provides an array of benefits, including the ability to predict future returns, to estimate warranty-related claim costs, and to monitor product quality. Typically, the failure distribution and its parameters are determined using product manufacturing data for each month of production and the corresponding monthly failure counts derived from the warranty claims. If the data is collected systematically, the product ages at the times of failure can be derived. Classical methods are then used to determine the failure time distribution and parameters. However, in many cases it may not be possible to know the failure ages of components. The information available each month might be limited to the volume of shipments and total claims or product returns. In such cases, the data hides the component age at the time of failure. In our paper, we show that when the failure history information is incomplete, the failure distribution of the product can be determined using Bayesian analysis techniques applicable for handling incomplete data. We apply the popular Expectation-Maximization (EM) algorithm to find the Maximum Likelihood Estimates (MLE) of the failure distribution parameters using incomplete data. The effectiveness of the EM algorithm is evaluated against classical methodology by using several sets of simulated complete/incomplete warranty data. 
We observed that the EM algorithm is powerful in capturing the hidden failure patterns from the incomplete warranty data.",2009,0, 3272,Software reliability modeling of fault detection and correction processes,"In this paper, both fault detection and correction processes are considered in software reliability growth modeling. The dependency of the two processes is first studied from the viewpoint of the fault number in two ways. One is the ratio of corrected fault number to detected fault number, which appears S-shaped. And the other is the difference between the detected fault number and corrected fault number, which appears Bell-shaped. Then based on the ratio and difference functions, two software reliability models are proposed for both fault detection and correction processes. The proposed models are evaluated by a data set of software testing. The experimental results show that the new models fit the data set of fault detection and correction processes very well.",2009,0, 3273,A Compound Scheme of Islanding Detection According to Inverter,"Nowadays, with the rapid development of distributed generation, it has an increasing rate of permeation, the islanding phenomenon in grid-connected run mode brings hazards to network, electrical equipment and life safety. So, it is necessary to effectively detect the islanding condition and swiftly stop the run mode of grid-connect. In this paper, three-phase inverter and network systems was taken as an example, several existing methods were introduced and the advantages and disadvantages were compared of several existing methods by using MATLAB simulation software for the realization of the simulation, the proposed combination of a variety of detection methods for the detection program.",2009,0, 3274,An On-Line Monitoring System for Gases Dissolved in Transformer Oil Using Wireless Data Acquisition,"Dissolved gas analysis (DGA) is a certain method to diagnose incipient fault of transformers through the correlation between the content of gases dissolved in transformer oil and a particular malfunction. This paper developed an on-line monitoring system to detect the concentrations of H2 and CO dissolved in transformer oil. The system mounts polyperfluoro ethylene-propylene membrane, electrochemical gas sensors, a wireless communication terminal based on RF transceiver, and data management software recording the concentration of H2 and CO. Compared with off-line gas chromatograph results of concentrations of H2 and CO, the concentration trend of on-line detecting results is in agreement with off-line results approximately through 8 months. For hydrogen detection, the biggest error of the system is 4 ppm when the on-line results range from 10 ppm to 21 ppm; for CO detection, the biggest error of the system is 26 ppm when on-line results range from 136 ppm to 189 ppm. The whole system provides a low cost, simple way to on-line monitor H2 and CO dissolved in transformer oil. And the system which incorporated wireless data acquisition will definitely shorten a time gap between the changes of gas concentrations and dissolved gas analysis.",2009,0, 3275,Transforming traditional iris recognition systems to work on non-ideal situations,"Non-ideal iris images can significantly affect the accuracy of iris recognition systems for two reasons: 1) they cannot be properly preprocessed by the system; and/or 2) they have poor image quality. 
However, many traditional iris recognition systems have been deployed in law enforcement, military, or many other important locations. It will be expensive to replace all these systems. It will be desirable if the traditional systems can be transformed to perform in non-ideal situations without an expensive update. In this paper, we propose a method that can help traditional iris recognition systems to work on the non-ideal situation using a video image approach. The proposed method will quickly identify and eliminate the bad quality images from iris videos for further processing. The segmentation accuracy is critical in recognition and would be challenging for traditional systems. The segmentation evaluation is designed to evaluate if the segmentation is valid. The information distance based quality measure is used to evaluate if the image has enough quality for recognition. The segmentation evaluation score and quality score are combined to predict the recognition performance. The research results show that the proposed methods can work effectively and objectively. The combination of segmentation and quality scores is highly correlated with the recognition accuracy and can be used to improve the performance of iris recognition systems in a non-ideal situation. The deployment of such a system would not cost much since the core parts of the traditional systems are not changed and we only need to add software modules. It will be very practical to transform the traditional system using the proposed method.",2009,0, 3276,Bounded diameter overlay construction: A self organized approach,"This paper describes a distributed algorithm to construct and maintain a peer-to-peer network overlay with bounded diameter. The proposed approach merges a bio-inspired self-organized behavior with a pure peer-to-peer approach, in order to adapt the overlay to underlying changes in the network topology. Ant colonies are used to collect and spread information across all peers, whereas pheromone trails help detecting crashed nodes. Construction of the network favors balanced distribution of links across all peers, so that the resulting topology does not exhibit large hubs. Fault resilience and recovery mechanisms have also been implemented to prevent network partition in the event of node crashes. Validation has been conducted through simulations of different network scenarios.",2009,0, 3277,[Front matter],The following topics are dealt with: ultra-low false alarm rate support vector classifier; collaborative filtering; self-organizing feature maps; probabilistic methods; fault tolerant peer-to-peer distributed EM algorithm; Bayesian network learning; E-mails; patient care patterns; electronic medical records mining; e-commerce; object-oriented software; medical diagnosis; fuzzy decision trees; data mining; data clustering algorithm; ant colony classification algorithms; Web mining; spatial-temporal data; and data streams.,2009,0, 3278,Quality Assessment of Computational Techniques and Software Tools for Planar-Antenna Analysis,"The goal of this paper is a thorough investigation of the quality of the software tools widely used nowadays in the field of planar-antenna analysis and synthesis. Six simulation tools - five well-known commercial tools and one developed in-house - are compared with each other for four different planar antennas. 
It is crucial to point out that all possible efforts have been made to guarantee the most optimal use of each of the software packages, to study in detail any discrepancies between the solvers, and to assess the remaining simulation challenges. The study clearly highlights the importance of understanding EM simulation principles and their inherent limitations for antenna designers. Finally, some designing guidelines are provided that also can simplify the initial selection of EM solvers.",2009,0, 3279,Magnetic Sensor for the Defect Detection of Steam Generator Tube With Outside Ferrite Sludge,"Steam generator tube in nuclear power plant is a boundary between primary side and secondary side. Nondestructive test for the steam generator tube, eddy current testing (ECT) method has been carried out. In case of bobbin-type ECT probe, transverse defect, defect with ferro-phase, outside of tube having ferrite sludge were very difficult to detect. To overcome these problems, a motorized rotating probe coil (MRPC) probe was developed but scan speed is slow and not effective method in time. In this work we have developed a new sensor having U-shape of yoke to measure permeability variation of test specimen, which can detect ferro-phase generated as well as normal defects in the tube material of Inconel 600. Electronics for signal processing, 2-phase lock-in amplifier per sensor, ADC, and embedded controller were employed in one probe and measured digital data were transmitted to the PC data acquisition software using RS232C interface. Using the developed sensors we have applied to detect defects in the tube outside with ferrite sludge which is very difficult to detect conventional bobbin-type ECT probe. Developed sensor could measure defect size of 0.2 mm × 10 mm × 0.44 mm (width × length × depth) for the normal defect and defect outside of tube having ferrite sludge. We expect the developed sensor could detect defects like as MRPC probe and scan sensor speed like as bobbin-type ECT probe.",2009,0, 3280,A Tool for the Application of Software Metrics to UML Class Diagram,"How to improve software quality are the important directions in software engineering research field. The complexity has a close relationship with the developing cost, time spending and the number of defects which a program may exist. OOA and OOD have been widely used, so the requirement of measuring software complexity written in object-oriented language is emerging. UML class diagrams describe the static view of a system in terms of classes and relationships among the classes. In order to objectively assess UML class diagrams, this paper presents a suite of metrics based on UML class diagram that is adapted to Java to assess the complexity of UML class diagrams in various aspects, and verifies them with a suite of evaluation rules suggested by Weyuker.",2009,0, 3281,Resource usage prediction for groups of dynamic image-processing tasks using Markov modeling,"With the introduction of dynamic image processing, such as in image analysis, the computational complexity has become data dependent and memory usage irregular. Therefore, the possibility of runtime estimation of resource usage would be highly attractive and would enable quality-of-service (QoS) control for dynamic image-processing applications with shared resources. A possible solution to this problem is to characterize the application execution using model descriptions of the resource usage.
In this paper, we attempt to predict resource usage for groups of dynamic image-processing tasks based on Markov-chain modeling. As a typical application, we explore a medical imaging application to enhance a wire mesh tube (stent) under X-ray fluoroscopy imaging during angioplasty. Simulations show that Markov modeling can be successfully applied to describe the resource usage function even if the flow graph dynamically switches between groups of tasks. For the evaluated sequences, an average prediction accuracy of 97% is reached with sporadic excursions of the prediction error up to 20-30%.",2009,0, 3282,A study of pronunciation verification in a speech therapy application,Techniques are presented for detecting phoneme level mispronunciations in utterances obtained from a population of impaired children speakers. The intended application of these approaches is to use the resulting confidence measures to provide feedback to patients concerning the quality of pronunciations in utterances arising within interactive speech therapy sessions. The pronunciation verification scenario involves presenting utterances of known words to a phonetic decoder and generating confusion networks from the resulting phone lattices. Confidence measures are derived from the posterior probabilities obtained from the confusion networks. Phoneme level mispronunciation detection performance was significantly improved with respect to a baseline system by optimizing acoustic models and pronunciation models in the phonetic decoder and applying a nonlinear mapping to the confusion network posteriors.,2009,0, 3283,A Method for Optimum Test Point Selection and Fault Diagnosis Strategy for BIT of Avionic System,"A method for optimum test point selection and the fault diagnosis strategy which is based on the fault message matrix and features of BIT is proposed. The fault message matrix is divided based on the weight of the test points The diagnosis strategy is determined using dividing the fault message matrix and the thought of detecting first and isolating next. Result shows that the optimum method is suitable for BIT to select the appropriate test points and fault diagnosis procedure. Besides, average numbers of test steps were reduced.",2009,0, 3284,PDF Research on IEEE802.11 DCF in Wireless Distributed Measurement System,"This paper analyzes the basic and RTS/CTS and fragment MAC access mechanism of IEEE802.11 DCF (distribution coordination function), and puts forward the model of wireless distributed measurement system (WDMS). It analyzes the elements that affect system real-time communication. It sets up the simulation network scenario of real-time communication through software, then simulates to research the MAC PDF (probability distribution function) performance in wireless distributed measurement system, which gives theory argument and reference data for further system design.",2009,0, 3285,Combining Perceptions and Prescriptions in Requirements Engineering Process Assessment: An Industrial Case Study,"Requirements engineering (RE) is a key discipline in software development and several methods are available to help assess and improve RE processes. However, these methods rely on prescriptive models of RE; they do not, like other disciplines within software engineering, draw directly on stakeholder perceptions and subjective judgments. Given this backdrop, we present an empirical study in RE process assessment. 
Our aim was to investigate how stakeholder perceptions and process prescriptions can be combined during assessments to effectively inform RE process improvement. We first describe existing methods for RE process assessment and the role played by stakeholder perceptions and subjective judgments in the software engineering and management literature. We then present a method that combines perceptions and prescriptions in RE assessments together with an industrial case study in which the method was applied and evaluated over a three-year period at TelSoft. The data suggest that the combined method led to a comprehensive and rich assessment and it helped TelSoft consider RE as an important and integral part of the broader engineering context. This, in turn, led to improvements that combined plan-driven and adaptive principles for RE. Overall, the combined method helped TelSoft move from Level 1 to Level 2 in RE maturity, and the employees perceived the resulting engineering practices to be improved. Based on these results, we suggest that software managers and researchers combine stakeholder perceptions and process prescriptions as one way to effectively balance the specificity, comparability, and accuracy of software process assessments.",2009,0, 3286,Staffing Level and Cost Analyses for Software Debugging Activities Through Rate-Based Simulation Approaches,"Research in the field of software reliability, dedicated to the analysis of software failure processes, is quite diverse. In recent years, several attractive rate-based simulation approaches have been proposed. Thus far, it appears that most existing simulation approaches do not take into account the number of available debuggers (or developers). In practice, the number of debuggers will be carefully controlled. If all debuggers are busy, they may not address newly detected faults for some time. Furthermore, practical experience shows that fault-removal time is not negligible, and the number of removed faults generally lags behind the total number of detected faults, because fault detection activities continue as faults are being removed. Given these facts, we apply the queueing theory to describe and explain possible debugging behavior during software development. Two simulation procedures are developed based on G/G/infin, and G/G/m queueing models, respectively. The proposed methods will be illustrated using real software failure data. The analysis conducted through the proposed framework can help project managers assess the appropriate staffing level for the debugging team from the standpoint of performance, and cost-effectiveness.",2009,0, 3287,Evolutionary Sampling and Software Quality Modeling of High-Assurance Systems,"Software quality modeling for high-assurance systems, such as safety-critical systems, is adversely affected by the skewed distribution of fault-prone program modules. This sparsity of defect occurrence within the software system impedes training and performance of software quality estimation models. Data sampling approaches presented in data mining and machine learning literature can be used to address the imbalance problem. We present a novel genetic algorithm-based data sampling method, named evolutionary sampling, as a solution to improving software quality modeling for high-assurance systems. 
The proposed solution is compared with multiple existing data sampling techniques, including random undersampling, one-sided selection, Wilson's editing, random oversampling, cluster-based oversampling, synthetic minority oversampling technique (SMOTE), and borderline-SMOTE. This paper involves case studies of two real-world software systems and builds C4.5- and RIPPER-based software quality models both before and after applying a given data sampling technique. It is empirically shown that evolutionary sampling improves performance of software quality models for high-assurance systems and is significantly better than most existing data sampling techniques.",2009,0, 3288,A conflict resolution methodology for collective ubiquitous context-aware applications,"The context-aware computing is a research field that defines systems capable of adapting their behavior according to any relevant information about entities (e.g.,people, places and objects) of interest. The ubiquitous computing is closely related to the use of contexts, since it aims to provide personalized, transparent and on-demand services. Ubiquitous systems are frequently shared among multiple users, once they are designed to be embedded into everyday objects and environments such as houses, cars and offices. In scenarios where more than one user shares the same ubiquitous context-aware application, conflicts may occur during adaptation actions due to individual profiles divergences and/or environment resources incompatibility. In such situations it is interesting to use computer supported collaborative work techniques in order to detect and solve those conflicts, considering what is better for the group but also being fair enough with each individual demand, whenever possible. This work presents the important concepts on the collective ubiquitous context-aware applications field. Furthermore, it proposes a new methodology for conflicts detection and resolution that considers the trade-off between quality of services and resources consumption.",2009,0, 3289,LA1 testBed: Evaluation testbed to assess the impact of network impairments on video quality,"Currently, a complete system for analyzing the effect of packet loss on a viewer's perception is not available. Given the popularity of digital video and the growing interest in live video streams where channel coding errors cannot be corrected, such a system would give great insight into the problem of video corruption through transmission errors and how they are perceived by the user. In this paper we introduce such a system, where digital video can be corrupted according to established loss patterns and the effect is measured automatically. The corrupted video is then used as input for user tests. Their results are analyzed and compared with the automatically generated. Within this paper we present the complete testing system that makes use of existing software as well as introducing new modules and extensions. With the current configuration the system can test packet loss in H.264 coded video streams and produce a statistical analysis detailing the results. The system is fully modular allowing for future developments such as other types of statistical analysis, different video measurements and new video codecs.",2009,0, 3290,Collaborative defense as a pervasive service Architectural insights and validation methodologies of a trial deployment,"Network defense is an elusive art. 
The arsenal to defend our devices from attack is constantly lagging behind the latest methods used by attackers to break into them and subsequently into our networks. To counteract this trend, we developed a distributed, scalable approach that harnesses the power of collaborative end-host detectors or sensors. Simulation results reveal order of magnitude improvements over stand-alone detectors in the accuracy of detection (fewer false alarms) and in the quality of detection (the ability to capture stealthy anomalies that would otherwise go undetected). Although these results arise out of a proof of concept in the arena of botnet detection in an enterprise network, they have broader applicability to the area of network self-manageability of pervasive computing devices. To test the efficacy of these ideas further, Intel Corporation partnered with British Telecommunications plc to launch a trial deployment. In this paper, we report on results and insights gleaned from the development of a testbed infrastructure and phased experiments; (1) the design of a re-usable measurement-inference architecture into which 3rd party sensor developers can integrate a wide variety of “anomaly detection” algorithms to derive the same correlation-related performance benefits; (2) the development of a series of validation methodologies necessitated by the lack of mature tools and approaches to attest to the security of distributed networked systems; (3) the critical role of learning and adaptation algorithms to calibrate a fully-distributed architecture of varied devices in varied contexts, and (4) the utility of large-scale data collections to assess what's normal behavior for Enterprise end-host background traffic as well as malware command-and-control protocols. Finally, we propose collaborative defense as a blueprint for emergent collaborative systems and its measurement-everywhere approach as the adaptive underpinnings needed for pervasive services.",2009,0, 3291,"Assessing - Learning - Improving, an Integrated Approach for Self Assessment and Process Improvement Systems","Delivering successful projects and system in a sustaining way becomes more and more the focus of systems and software developing organizations. New approaches in the field of assessment and standardization application led to an increase of assessment and self assessment systems. But these systems are only the first step on a long way. If the assessment system itself is not supported by a learning and improvement approach, the organization will have a system to identify the status but does not have any support for improvement. This gap can be closed by an approach combining assessment tools, wiki-based knowledge platforms and self-learning expert systems (based on ontologies and semantic wikis). Result is a system environment which provides status assessment, learning and continuous improvement services based on different standards and approaches. This approach is already being implemented for the field of project management. In this article we explained the basics and show the application of a combined system.",2009,0, 3292,Runtime Diversity against Quasirandom Faults,"Complex software based systems that have to be highly reliable, are increasingly confronted with fault types whose corresponding failures appear to be random, although they have a systematic cause. This paper introduces and defines these ""quasirandom"" faults.
They have certain inconvenient common properties such as their difficulty to be reproduced, their strong state dependence and their likelihood to be found in operational systems after testing. However, these faults are also likely to be detected or tolerated with the help of diversity in software, and even low level diversity which can be achieved during runtime is a promising means against them. The result suggests, that runtime diversity can improve software reliability in complex systems.",2009,0, 3293,Test Case Generation Using Model Checking for Software Components Deployed into New Environments,"In this paper, we show how to generate test cases for a component deployed into a new software environment. This problem is important for software engineers who need to deploy a component into a new environment. Most existing model based testing approaches generate models from high level specifications. This leaves a semantic gap between the high level specification and the actual implementation. Furthermore, the high level specification often needs to be manually translated into a model, which is a time consuming and error prone process. We propose generating the model automatically by abstracting the source code of the component using an under-approximating predicate abstraction scheme and leaving the environment concrete. Test cases are generated by iteratively executing the entire system and storing the border states between the component and the environment. A model checker is used in the component to explore non-deterministic behaviors of the component due to the concurrency or data abstraction. The environment is symbolically simulated to detect refinement conditions. Assuming the run time environment is able to do symbolic execution and that the run time environment has a single unique response to a given input, we prove that our approach can generate test cases that have complete coverage of the component when the proposed algorithm terminates. When the algorithm does not terminate, the abstract-concrete model can be refined iteratively to generate additional test cases. Test cases generated from this abstract-concrete model can be used to check whether a new environment is compatible with the existing component.",2009,0, 3294,Experimental Comparison of Code-Based and Model-Based Test Prioritization,"During regression testing, a modified system needs to beretested using the existing test suite. Since test suites may be very large, developers are interested in detecting faults in the system as early as possible. Test prioritization orders test cases for execution to increase potentially the chances of early fault detection during retesting. Code-based test prioritization methods are based on the source code of the system, whereas model-based test prioritization methods are based on system models. System modeling is a widely used technique to model state-based systems. Models can be used not only during software development but also during testing. In this paper, we briefly overview codebased and model-based test prioritization. In addition, we present an experimental study in which the code based test prioritization and the model-based test prioritization are compared.",2009,0, 3295,Signal Generation for Search-Based Testing of Continuous Systems,"Test case generation constitutes a critical activity in software testing that is cost-intensive, time-consuming and error-prone when done manually. Hence, an automation of this process is required. 
One automation approach is search-based testing for which the task of generating test data is transformed into an optimization problem which is solved using metaheuristic search techniques. However, only little work has been done so far applying search-based testing techniques to systems that depend on continuous input signals. This paper proposes two novel approaches to generating input signals from within search-based testing techniques for continuous systems. These approaches are then shown to be very effective when experimentally applied to the problem of approximating a set of realistic signals.",2009,0, 3296,Evolving the Quality of a Model Based Test Suite,"Redundant test cases in newly generated test suites often remain undetected until execution and waste scarce project resources. In model-based testing, the testing process starts early on in the developmental phases and enables early fault detection. The redundancy in the test suites generated from models can be detected earlier as well and removed prior to its execution. The article presents a novel model-based test suite optimization technique involving UML activity diagrams by formulating the test suite optimization problem as an Equality Knapsack Problem. The aim here is the development of a test suite optimization framework that could optimize the model-based test suites by removing the redundant test cases. An evolution-based algorithm is incorporated into the framework and is compared with the performances of two other algorithms. An empirical study is conducted with four synthetic and industrial scale Activity Diagram models and results are presented.",2009,0, 3297,Temporal White-Box Testing Using Evolutionary Algorithms,Embedded computer systems should fulfill real-time requirements which need to be checked in order to assure system quality. This paper stands to propose some ideas for testing the temporal behavior of real-time systems. The goal is to achieve white-box temporal testing using evolutionary techniques to detect system failures in reasonable time and little effort.,2009,0, 3298,Using Logic Criterion Feasibility to Reduce Test Set Size While Guaranteeing Double Fault Detection,"Logic criteria are used in software testing to find inputs that guarantee detecting certain faults. Thus, satisfying a logic criterion guarantees killing certain mutants. Some logic criteria are composed of other criteria. Determining component criterion feasibility can be used as a means to reduce test set size without sacrificing fault detection. This paper introduces a new logic criterion based on component criterion feasibility. Given a predicate in minimal DNF, a determination is made of which component criteria are feasible for individual literals and terms. This in turn provides determination of which criteria are necessary to detect double faults and kill second-order mutants. A test set satisfying this new criterion guarantees detecting the same double faults as a larger test set satisfying another criterion. An empirical study using predicates in avionics software showed that tests sets satisfying the new criterion detected all but one double fault type. 
For this one double fault type, 99.91% of the double faults were detected and combining equivalent single faults nearly always yielded an equivalent double fault.",2009,0, 3299,Mutation Analysis of Parameterized Unit Tests,"Recently parameterized unit testing has emerged as a promising and effective methodology to allow the separation of (1) specifying external, black-box behavior (e.g., assumptions and assertions) by developers and (2) generating and selecting internal, white-box test inputs (i.e., high-code-covering test inputs) by tools. A parameterized unit test (PUT) is simply a test method that takes parameters, specifies assumptions on the parameters, calls the code under test, and specifies assertions. The test effectiveness of PUTs highly depends on the way that they are written by developers. For example, if stronger assumptions are specified, only a smaller scope of test inputs than intended are generated by tools, leading to false negatives in terms of fault detection. If weaker assertions are specified, erroneous states induced by the test execution do not necessarily cause assertion violations, leading to false negatives. Detecting these false negatives is challenging since the insufficiently written PUTs would just pass. In this paper, we propose a novel mutation analysis approach for analyzing PUTs written by developers and identifying likely locations in PUTs for improvement. The proposed approach is a first step towards helping developers write better PUTs in practice.",2009,0, 3300,Assertion-Driven Development: Assessing the Quality of Contracts Using Meta-Mutations,"Agile development methods have gained momentum in the last few years and, as a consequence, test-driven development has become more prevalent in practice. However, test cases are not sufficient for producing dependable software and we rather advocate approaches that emphasize the use of assertions or contracts over that of test cases. Yet, writing self-checks in code has been shown to be difficult and is itself prone to errors. A standard technique to specify runtime properties is design-by-contract (DbC). But how can one test if the contracts themselves are sensible and sufficient? We propose a measure to quantify the goodness of contracts (or assertions in a broader sense). We introduce meta-mutations at the source code level to simulate common programmer errors that the self-checks are supposed to detect. We then use random mutation testing to determine a lower and upper bound on the detectable mutations and compare these bounds with the number of mutants detected by the contracts. Contracts are considered “good” if they detect a certain percentage of the detectable mutations. We have evaluated our tools on Java classes with contracts specified using the Java Modeling Language (JML). We have additionally tested the contract quality of 19 implementations, written independently by students, based on the same specification.",2009,0, 3301,"An Experimental Comparison of Four Unit Test Criteria: Mutation, Edge-Pair, All-Uses and Prime Path Coverage","With recent increased expectations for quality, and the growth of agile processes and test driven development, developers are expected to do more and more effective unit testing. Yet, our knowledge of when to use the various unit level test criteria is incomplete. The paper presents results from a comparison of four unit level software testing criteria.
Mutation testing, prime path coverage, edge pair coverage, and all-uses testing were compared on two bases: the number of seeded faults found and the number of tests needed to satisfy the criteria. The comparison used a collection of Java classes taken from various sources and hand-seeded faults. Tests were designed and generated mostly by hand with help from tools that compute test requirements and muJava. The findings are that mutation tests detected more faults and the other three criteria were very similar. The paper also presents a secondary measure, a cost benefit ratio, computed as the number of tests needed to detect each fault. Surprisingly, mutation required the fewest number of tests. The paper also discusses some specific faults that were not found and presents analysis for why not.",2009,0, 3302,Using Common Criteria to Assess Quality of Web Services,"Nowadays, Web services are one of the most fashionable technology. Their simplicity of use and interoperability make them used in several fields such as web sites,widgets, classical applications and so on. There exists many technologies linked to this paradigm: SOAP (a communication protocol), WSDL (a description language) and UDDI (a yellow pages system) are among the most known. Some works proposed enhanced UDDI servers which only publish fully reliable Web service. This reliability is generally evaluated using tests. In those works, only Web services for which every tests are positive are published. In this paper, we introduce why we consider that this kind of approach is too binary. We also present examples of Web services that could be used by customers even if some tests fails. Then, we introduce a new approach for Web services publication. This approach relies on test categorization based on the common criteria norm. Then, we present how this marking system is implemented in iTaC-QoS, our validation framework based on UDDI.",2009,0, 3303,A Workflow Framework for Intelligent Service Composition,"Generally, service composition and its evaluation are initiated by web services' functional and non-functional attributes. To select qualified services and compose them into a service composition framework manually is time-consuming and error-prone. In practice, it is a challenging endeavor to timely discover qualified services and develop a service composition schema. In view of this challenge, a workflow framework is presented in this paper for intelligently navigating service composition. The workflow framework consists of two primary processing modules: planning module and CSP (constraint satisfaction problems) solving module. Planning module aims at producing composite plans taking advantage of services' functional attributes. Moreover, CSP solving module aims at selecting an appropriate service, taking advantaging of services' non-functional attributes, from a group of qualified services that own the same functionality. This group of qualified services is instantiated from a service class predefined. Finally, a case study is presented to demonstrate the framework.",2009,0, 3304,Work in Progress: Building a Distributed Generic Stress Tool for Server Performance and Behavior Analysis,"One of the primary tools for performance analysis of multi-tier systems are standardized benchmarks. They are used to evaluate system behavior under different circumstances to assess whether a system can handle real workloads in a production environment. 
Such benchmarks are also helpful to resolve situations when a system has an unacceptable performance or even crashes. System administrators and developers use these tools for reproducing and analyzing circumstances which provoke the errors or performance degradation. However, standardized benchmarks are usually constrained to simulating a set of pre-fixed workload distributions. We present a benchmarking framework which overcomes this limitation by generating real workloads from pre-recorded system traces. This distributed tool allows more realistic testing scenarios, and thus exposes the behavior and limits of a tested system with more details. Further advantage of our framework is its flexibility. For example, it can be used to extend standardized benchmarks like TPC-W thus allowing them to incorporate workload distributions derived from real workloads.",2009,0, 3305,Designing for Feel: Contrasts between Human and Automated Parametric Capture of Knob Physics,"We examine a crucial aspect of a tool intended to support designing for feel: the ability of an objective physical-model identification method to capture perceptually relevant parameters, relative to human identification performance. The feel of manual controls, such as knobs, sliders, and buttons, becomes critical when these controls are used in certain settings. Appropriate feel enables designers to create consistent control behaviors that lead to improved usability and safety. For example, a heavy knob with stiff detents for a power plant boiler setting may afford better feedback and safer operations, whereas subtle detents in an automobile radio volume knob may afford improved ergonomics and driver attention to the road. To assess the quality of our identification method, we compared previously reported automated model captures for five real mechanical reference knobs with captures by novice and expert human participants who were asked to adjust four parameters of a rendered knob model to match the feel of each reference knob. Participants indicated their satisfaction with the matches their renderings produced. We observed similar relative inertia, friction, detent strength, and detent spacing parameterizations by human experts and our automatic estimation methods. Qualitative results provided insight on users' strategies and confidence. While experts (but not novices) were better able to ascertain an underlying model in the presence of unmodeled dynamics, the objective algorithm outperformed all humans when an appropriate physical model was used. Our studies demonstrate that automated model identification can capture knob dynamics as perceived by a human, and they also establish limits to that ability; they comprise a step towards pragmatic design guidelines for embedded physical interfaces in which methodological expedience is informed by human perceptual requirements.",2009,0, 3306,Image quality enhancement method based on scene category classification and its evaluation,"This paper presents a method of enhancing image quality based on the analysis of scene categories, and its practical software implementation. The proposed method analyzes the scene of input images and calculates the probabilities for five predetermined scene categories; i.e. 
“Portraits,” “Landscapes,” “Night scenes,” “Flowers (close-up)” and “Others.” The quality of the input image is improved by using multiple image-processing functions with correction parameters, which take the probabilities of the scene categories into consideration. Subjective experiments on image quality show the reliability of the proposed method.",2009,0, 3307,Design of the Autonomous Fault Manager for learning and estimating home network faults,"This paper proposes a design of a software autonomous fault manager (AFM) for learning and estimating faults generated in home networks. Most of the existing research employs a rule-based fault processing mechanism, but those works depend on the static characteristics of rules for a specific home environment. Therefore, we focus on a fault estimating and learning mechanism that autonomously produces a fault diagnosis rule and predicts an expected fault pattern in mutually different home environments. For this, the proposed AFM extracts the home network information with a set of training data using the 5W1H (Who, What, When, Where, Why, How) based contexts to autonomously produce a new fault diagnosis rule. The fault pattern with high correlations can then be predicted for the current home network operation pattern.",2009,0, 3308,Evolution and Search Based Metrics to Improve Defects Prediction,"Testing activity is the most widely adopted practice to ensure software quality. Testing effort should be focused on defect prone and critical resources, i.e., on resources highly coupled with other entities of the software application. In this paper, we used search based techniques to define software metrics accounting for the role a class plays in the class diagram and for its evolution over time. We applied Chidamber and Kemerer and the newly defined metrics to Rhino, a Java ECMA script interpreter, to predict version 1.6R5 defect prone classes. Preliminary results show that the new metrics favorably compare with traditional object oriented metrics.",2009,0, 3309,Distributed detection of jamming and defense in wireless sensor networks,"We consider in this paper a single-channel wireless sensor network (WSN) where communication among sensor nodes is subject to jamming by an attacker. In particular, we address the detection of a jamming event, and investigate the optimal network defense strategy to mitigate jamming effects. A multiple-monitor distributed detection structure is considered. The optimal detection scheme is developed under the Bayesian criterion, with the goal of minimizing the error probability of detection at a fusion center. For the optimal network defense strategy, we formulate and solve the design problem using a constrained maximization problem by jointly considering Quality-of-Service and resource constraints of WSNs such as communication throughput, energy, and delay.",2009,0, 3310,OneClick: A Framework for Measuring Network Quality of Experience,"As the service requirements of network applications shift from high throughput to high media quality, interactivity, and responsiveness, the definition of QoE (Quality of Experience) has become multidimensional. Although it may not be difficult to measure individual dimensions of the QoE, how to capture users' overall perceptions when they are using network applications remains an open question. In this paper, we propose a framework called OneClick to capture users' perceptions when they are using network applications.
The framework only requires a subject to click a dedicated key whenever he/she feels dissatisfied with the quality of the application in use. OneClick is particularly effective because it is intuitive, lightweight, efficient, time-aware, and application-independent. We use two objective quality assessment methods, PESQ and VQM, to validate OneClick's ability to evaluate the quality of audio and video clips. To demonstrate the proposed framework's efficiency and effectiveness in assessing user experiences, we implement it on two applications, one for instant messaging applications, and the other for first- person shooter games. A Flash implementation of the proposed framework is also presented.",2009,0, 3311,Quantifying the Importance of Vantage Points Distribution in Internet Topology Measurements,"The topology of the Internet has been extensively studied in recent years, driving a need for increasingly complex measurement infrastructures. These measurements have produced detailed topologies with steadily increasing temporal resolution, but concerns exist about the ability of active measurement to measure the true Internet topology. Difficulties in ensuring the accuracy of every individual measurement when millions of measurements are made daily, and concerns about the bias that might result from measurement along the tree of routes from each vantage point to the wider reaches of the Internet must be addressed. However, early discussions of these concerns were based mostly on synthetic data, oversimplified models or data with limited or biased observer distributions. In this paper, we show the importance that extensive sampling from a broad distribution of vantage points has on the resulting topology and bias. We present two methods for designing and analyzing the topology coverage by vantage points: one, when system-wide knowledge exists, provides a near-optimal assignment of measurements to vantage points; while the second one is suitable for an oblivious system and is purely probabilistic. The majority of the paper is devoted to a first look at the importance of the distribution's quality. We show that diversity in the locations and types of vantage points is required for obtaining an unbiased topology. We analyze the effect that broad distribution has over the convergence of various autonomous systems topology characteristics. We show that although diverse and broad distribution is not required for all inspected properties, it is required for some. Finally, some recent bias claims that were made against active traceroute sampling are revisited, and we empirically show that diverse and broad distribution can question their conclusions.",2009,0, 3312,On Equilibrium Distribution Properties in Software Reliability Modeling,"The non-homogeneous Poisson processes (NHPPs) have gained much popularity in actual software testing phases to assess the software reliability, the number of remaining faults in the software, the software release schedule, etc. In this paper, we propose a novel modeling approach for the NHPP-based software reliability models (SRMs) to describe the stochastic behavior of software fault-detection processes. The fundamental idea is to apply the equilibrium distribution to the fault-detection time distribution. 
We study the equilibrium distribution properties in software reliability modeling and compare the resulting NHPP-based SRMs with the existing ones.",2009,0, 3313,Generating AMF Configurations from Software Vendor Constraints and User Requirements,"The service availability forum (SAF) has defined a set of service API specifications addressing the growing need of commercial-off-the-shelf high availability solutions. Among these services, the availability management framework (AMF) is the service responsible for managing the high availability of the application services by coordinating redundant application components. To achieve this task, an AMF implementation requires a specific logical view of the organization of the application's services and components known as an AMF configuration. Developing manually such a configuration is a complex, error prone, and time consuming task. In this paper, we present an approach for automatic generation of AMF configurations from a set of requirements given by the configuration designer and the description of the software as provided by the vendor. Our approach alleviates the need of configuration designers dealing with a large number of AMF entities and their relations.",2009,0, 3314,Static Code Analysis to Detect Software Security Vulnerabilities - Does Experience Matter?,Code reviews with static analysis tools are today recommended by several security development processes. Developers are expected to use the tools' output to detect the security threats they themselves have introduced in the source code. This approach assumes that all developers can correctly identify a warning from a static analysis tool (SAT) as a security threat that needs to be corrected. We have conducted an industry experiment with a state of the art static analysis tool and real vulnerabilities. We have found that average developers do not correctly identify the security warnings and only developers with specific experiences are better than chance in detecting the security vulnerabilities. Specific SAT experience more than doubled the number of correct answers and a combination of security experience and SAT experience almost tripled the number of correct security answers.,2009,0, 3315,Keynote: Event Driven Software Quality,"Summary form only given. Event-driven programming has found pervasive acceptance, from high-performance servers to embedded systems, as an efficient method for interacting with a complex world. The fastest research Web servers are event- driven, as is the most common operating system for sensor nodes. An event-driven program handles concurrent logical tasks using a cooperative, application-level scheduler. The application developer separates each logical task into event handlers; the scheduler runs multiple handlers in an interleaved fashion. Unfortunately, the loose coupling of the event handlers obscures the program's control flow and makes dependencies hard to express and detect, leading to subtle bugs. As a result, event-driven programs can be difficult to understand, making them hard to debug, maintain, extend, and validate. This talk presents recent approaches to event-driven software quality based on static analysis and testing, along with some open problems. We will discuss progress on how to avoid buffer overflow in TCP servers, stack overflow and missed deadlines in microcontrollers, and rapid battery drain in sensor networks. 
Our work is part of the Event Driven Software Quality project at UCLA, which is aimed at building the next generation of language and tool support for event-driven programming.",2009,0, 3316,Model-Based Design of Embedded Control Systems by Means of a Synchronous Intermediate Model,"Model-based design (MBD) involves designing a model of a control system, simulating and debugging it with dedicated tools, and finally generating automatically code corresponding to this model. In the domain of embedded systems, it offers the huge advantage of avoiding the time-consuming and error-prone final coding phase. The main issue raised by MBD is the faithfulness of the generated code with respect to the initial model, the latter being defined by the simulation semantics. To bridge the gap between the high-level model and the low-level implementation, we use the synchronous programming language Lustre as an intermediate formal model. Concretely, starting from a high-level model specified in the de-facto standard Simulink, we first generate Lustre code along with some structured """"glue code"""", and then we generate embedded real-time code for the Xenomai RTOS. Thanks to Lustre's clean mathematical semantics, we are able to guarantee the faithfulness of the generated multi-tasked real-time code.",2009,0, 3317,Checkpoint Interval and System's Overall Quality for Message Logging-Based Rollback and Recovery in Distributed and Embedded Computing,"In distributed environment, message logging based checkpointing and rollback recovery is a commonly used approach for providing distributed systems with fault tolerance and synchronized global states. Clearly, taking more frequent checkpointing reduces system recovery time in the presence of faults, and hence improves the system availability; however, more frequent checkpointing may also increase the probability for a task to miss its deadlines or prolong its execution time in fault free scenarios. Hence, in distributed and real-time computing, the systempsilas overall quality must be measured by a set of aggregated criteria, such as availability, task execution time, and task deadline miss probability. In this paper, we take into account state synchronization costs in the checkpointing and rollback recovery scheme and quantitatively analyze the relationships between checkpoint intervals and these criteria. Based on the analytical results, we present an algorithm for finding an optimal checkpoint interval that maximizes systempsilas overall quality.",2009,0, 3318,Making Expert Knowledge Explicit to Facilitate Tool Support for Integrating Complex Information Systems in the ATM Domain,"The capability to provide a platform for flexible business services in the air traffic management (ATM) domain is both a major success factor for the ATM industry and a challenge to integrate a large number of complex and heterogeneous information systems. Most of the system knowledge needed for integration is not available explicitly in machine-understandable form, resulting in time-consuming and error-prone human tasks. In this paper we propose a knowledge-based approach, """"semantically-enabled externalization of knowledge"""" for the ATM domain (SEEK-ATM), which explicitly models a) expert knowledge on specific heterogeneous systems and integration requirements; and b) allows mapping of the specific knowledge to the general ATM problem domain knowledge for semantic integration. 
The domain-specific modeling enables a) to verify the integration knowledge base as requirements specification for later design of technical systems integration and b) to provide an API to the problem space knowledge to facilitate tool support for efficient and effective systems integration. Based on an industry case study, we evaluate effects of the proposed SEEK-ATM approach in comparison to traditional system integration approaches in the ATM domain.",2009,0, 3319,Link Structure Ranking Algorithm for Trading Networks,"Ranking algorithms based on the link structure of the network are well-known methods in Web search engines to improve the quality of the searches. The most famous ones are PageRank and HITS. PageRank uses the probability of a random surfer visiting a page as the score of that page, while HITS, instead of producing one score, proposes using two scores, authority and hub scores. In this paper, we introduce a new link structure ranking algorithm for trading networks based on the differences between trading networks and the WWW network in the link addition process, a process that is known to be the foundation of the PageRank and HITS formulations. In the last section, we describe the use of the proposed algorithm as a tool for network clustering in addition to its original function as a ranking method.",2009,0, 3320,A Wide Area Surveillance Video System by Combination of Omni-Directional and Network Controlled Cameras,"In recent years, surveillance systems that observe the behavior of human intrusion into buildings or indoor spaces have been required not only to capture high quality, wide area images, but also to automatically track a specific suspicious person in real-time in order to reduce the number of required surveillance cameras. While these installations have included a number of video streams, they have also been placed in contexts with limited personnel for monitoring. Using the suggested system, the location of moving target objects in a wide area with a 360-degree surround view can be detected and tracked by capturing high quality images in real-time.",2009,0, 3321,Performance Evaluation of Link Quality Extension in Multihop Wireless Mobile Ad-hoc Networks,"Recently, mobile ad-hoc networks (MANET) have continued to attract attention for their potential use in several fields. Most of the work has been done in simulation, because a simulator can give a quick and inexpensive understanding of protocols and algorithms. However, experimentation in the real world is very important to verify the simulation results and to revise the models implemented in the simulator. In this paper, we present the implementation and analysis of our testbed considering the link quality window size (LQWS) parameter for the optimized link state routing (OLSR) protocol. We investigate the effect of mobility on the throughput of a MANET. The mobile nodes move toward the destination at a regular speed. When the mobile nodes arrive at the corner, they stop for about three seconds. In our experiments, we consider two cases: only one node is moving (mobile node) and two nodes (intermediate nodes) are moving at the same time. We assess the performance of our testbed in terms of throughput, round trip time, jitter and packet loss.
From our experiments, we found that throughput of TCP was improved by reducing LQWS.",2009,0, 3322,Delay Compensation Scheme for Transparency over Haptic-Based Networked Virtual Environments,"Haptic-based NVEs (networked virtual environments) with CS (client/server) communication architectures support better consistency but induce larger end-to-end delays than those with P2P (peer-to-peer) communication architectures. Unfortunately, large delay severely deteriorates the transparency (i.e., reality) of haptic interaction. To improve the haptic interaction quality for haptic-based NVEs with CS communication architectures, in this paper the degradation of haptic interaction quality is analyzed according to network delays. Based on the analysis, the maximum allowable delay bound is predicted and unrealistic force feedback caused by the network delay is compensated. Experimental results confirm the proposed delay compensation scheme effectively improves haptic interaction quality with respect to network delays.",2009,0, 3323,A Unitary-Optimized Operation for Wireless Live Streaming,"High error rate is critical to the wireless video transmission. Our paper tackles the problem of robust video streaming over error-prone channels. Basing on an autoresilient multiple-description coding method and a multi-path transmission strategy, further study is done on the computational complexity of rate-distortion optimization for video coding and transmission. A unitary-optimized operation of the whole system is then proposed after the R-D calculation. Experiment results show that video streaming with our unitary-optimized operation gains better playback quality especially when transmitted through error-prone mobile channel.",2009,0, 3324,Automatic modulation classification for cognitive radios using cyclic feature detection,"Cognitive radios have become a key research area in communications over the past few years. Automatic modulation classification (AMC) is an important component that improves the overall performance of the cognitive radio. Most modulated signals exhibit the property of cyclostationarity that can be exploited for the purpose of classification. In this paper, AMCs that are based on exploiting the cyclostationarity property of the modulated signals are discussed. Inherent advantages of using cyclostationarity based AMC are also addressed. When the cognitive radio is in a network, distributed sensing methods have the potential to increase the spectral sensing reliability, and decrease the probability of interference to existing radio systems. The use of cyclostationarity based methods for distributed signal detection and classification are presented. Examples are given to illustrate the concepts. The Matlab codes for some of the algorithms described in the paper are available for free download at http://filebox.vt.edu/user/bramkum.",2009,0, 3325,SWAP: Mitigating XSS attacks using a reverse proxy,"Due to the increasing amount of Web sites offering features to contribute rich content, and the frequent failure of Web developers to properly sanitize user input, cross-site scripting prevails as the most significant security threat to Web applications. Using cross-site scripting techniques, miscreants can hijack Web sessions, and craft credible phishing sites. 
Previous work towards protecting against cross-site scripting attacks suffers from various drawbacks, such as practical infeasibility of deployment due to the need for client-side modifications, inability to reliably detect all injected scripts, and complex, error-prone parameterization. In this paper, we introduce SWAP (secure Web application proxy), a server-side solution for detecting and preventing cross-site scripting attacks. SWAP comprises a reverse proxy that intercepts all HTML responses, as well as a modified Web browser which is utilized to detect script content. SWAP can be deployed transparently for the client, and requires only a simple automated transformation of the original Web application. Using SWAP, we were able to correctly detect exploits on several authentic vulnerabilities in popular Web applications.",2009,0, 3326,Toward automatic transformation of enterprise business model to service model,"One of the key activities needed to construct a quality service-oriented solution is the specification of the architectural elements. The selection of an appropriate and proven method for specification of the elements consisting services, flows and components is thus quite crucial to the success of any service-based solution. Existing methods for service specification ignore the automation capability while focusing on human-based and error-prone processes. This paper proposes a novel method called ASSM (Automated Service Specification Method) that automatically specifies the architecturally significant elements of service model artifact. The proposed automated process helps to improve productivity, enforce architectural integrity and improve the quality of the solution when specifying the service model. Model transformations such as ASSM automate the labor and cost intensive activities and lead the architect to focus on more important activities, which need human intelligence, and eventually enable efficient development of service-based solutions.",2009,0, 3327,Experiments on the test case length in specification based test case generation,"Many different techniques have been proposed to address the problem of automated test case generation, varying in a range of properties and resulting in very different test cases. In this paper we investigate the effects of the test case length on resulting test suites: Intuitively, longer test cases should serve to find more difficult faults but will reduce the number of test cases necessary to achieve the test objectives. On the other hand longer test cases have disadvantages such as higher computational costs and they are more difficult to interpret manually. Consequently, should one aim to generate many short test cases or fewer but longer test cases? We present the results of a set of experiments performed in a scenario of specification based testing for reactive systems. As expected, a long test case can achieve higher coverage and fault detecting capability than a short one, while giving preference to longer test cases in general can help reduce the size of test suites but can also have the opposite effect, for example, if minimization is applied.",2009,0, 3328,Towards a practical and effective method for Web services test case generation,"This paper proposes a method for Web services test case generation, which is centered on a practical test data generation framework that has higher probability to penetrate the service implementation logic. 
This presented framework leverages both the information contained in the WSDL/XSD files and the information provided by testers as well as enables testers to customize the fields to be tested, field constraints and data generation rules. The proposed method can generate test cases that are effective in detecting defects and efficient in reducing test time. This paper also presents preliminary empirical evidence to illustrate the value of the method.",2009,0, 3329,Calculating BPEL test coverage through instrumentation,"Assessing the quality of tests for BPEL processes is a difficult task in projects following SOA principles. Since insufficient testing can lead to unforeseen defects that can be extremely costly in complex and mission critical environments, this problem needs to be addressed. By using formally defined test metrics that can be evaluated automatically by using an extension to the BPELUnit testing framework, testers are able to assess whether their white box tests cover all important areas of a BPEL process. This leads to better tests and thus to better BPEL processes because testers can improve their test cases by knowing which important areas of the BPEL process have not been tested yet.",2009,0, 3330,GUI savvy end-to-end testing with smart monkeys,"In this article we report on the development of a graphical user interface-savvy test monkey and its successful application to the Windows calculator. Our novel test monkey allows for a pragmatic approach in providing an abstract model of the GUI relevant behavior of the application under test and relies on a readily available GUI automation tool. Besides of outlining the employed test oracles we explain our novel decision-based state machine model, the associated language and the random test algorithm. Moreover we outline the pragmatic model creation concept and report on its concrete application in an end-to-end test setting with a Windows Vista front-end. Notably in this specific scenario, our novel monkey was able to identify a misbehavior in a well-established application and provided valuable insight for reproducing the detected fault.",2009,0, 3331,Evaluating the effectiveness of the Rainbow self-adaptive system,"Rainbow is a framework for engineering a system with run-time, self-adaptive capabilities to monitor, detect, decide, and act on opportunities for system improvement. We applied Rainbow to a system, Znn.com, and evaluated its effectiveness to self-adapt on three levels: its effectiveness to maintain quality attribute in the face of changing conditions, run-time overheads of adaptation, and the engineering effort to use it to add self-adaptive capabilities to Znn.com. We make Znn.com and the associated evaluation tools available to the community so that other researchers can use it to evaluate their own systems and the community can compare different systems. In this paper, we report on our evaluation experience, reflect on some principles for benchmarking self-adaptive systems, and discuss the suitability of our evaluation tools for this purpose.",2009,0, 3332,Testing for trustworthiness in scientific software,"Two factors contribute to the difficulty of testing scientific software. One is the lack of testing oracles - a means of comparing software output to expected and correct results. The second is the large number of tests required when following any standard testing technique described in the software engineering literature. 
Due to the lack of oracles, scientists use judgment based on experience to assess trustworthiness, rather than correctness, of their software. This is an approach well established for assessing scientific models. However, the problem of assessing software is more complex, exacerbated by the problem of code faults. This highlights the need for effective and efficient testing for code faults in scientific software. Our current research suggests that a small number of well chosen tests may reveal a high percentage of code faults in scientific software and allow scientists to increase their trust.",2009,0, 3333,Software reliability prediction using multi-objective genetic algorithm,"Software reliability models are very useful for estimating the probability of software failure over time. Several different models have been proposed to predict software reliability growth (SRGM); however, none of them has proven to perform well across different project characteristics. The ability to predict the number of faults in the software during the development and testing processes is therefore important. In this paper, we explore Genetic Algorithms (GA) as an alternative approach to derive these models. GA is a powerful machine learning and optimization technique for estimating the parameters of well-known reliability growth models. Moreover, the proposed solution overcomes uncertainties in the modeling by combining multiple models using a multi-objective function to achieve the best generalization performance; the objectives are conflicting, and no single design can be considered best with respect to all objectives. In this paper, experiments were conducted to confirm these hypotheses, and the predictive capability of the ensemble of models optimized using the multi-objective GA was evaluated. Finally, the results were compared with traditional models.",2009,0, 3334,Consider of fault propagation in architecture-based software reliability analysis,"Software reliability models are used for the estimation and prediction of software reliability. Existing models either use a black-box approach based on test data from the software test phase or a white-box approach based on software architecture and individual component reliability, which is better suited to assessing the reliability of modern software systems. However, most of the architecture-based reliability models assume that a failure occurring within one component will not cause any other component to fail, which is inconsistent with the facts. This paper introduces a reliability model and a reliability analysis technique for architecture-based reliability evaluation. Our approach extends existing reliability models by considering fault propagation. We believe that this model can be used to effectively improve software quality.",2009,0, 3335,A platform for software engineering research,"Research in the fields of software quality, maintainability and evolution requires the analysis of large quantities of data, which often originate from open source software projects. Collecting and preprocessing data, calculating metrics, and synthesizing composite results from a large corpus of project artifacts is a tedious and error prone task lacking direct scientific value.
The Alitheia Core tool is an extensible platform for software quality analysis that is designed specifically to facilitate software engineering research on large and diverse data sources, by integrating data collection and preprocessing phases with an array of analysis services, and presenting the researcher with an easy to use extension mechanism. Alitheia Core aims to be the basis of an ecosystem of shared tools and research data that will enable researchers to focus on their research questions at hand, rather than spend time on re-implementing analysis tools. In this paper, we present the Alitheia Core platform in detail and demonstrate its usefulness in mining software repositories by guiding the reader through the steps required to execute a simple experiment.",2009,0, 3336,Evaluating the relation between coding standard violations and faultswithin and across software versions,"In spite of the widespread use of coding standards and tools enforcing their rules, there is little empirical evidence supporting the intuition that they prevent the introduction of faults in software. In previous work, we performed a pilot study to assess the relation between rule violations and actual faults, using the MISRA C 2004 standard on an industrial case. In this paper, we investigate three different aspects of the relation between violations and faults on a larger case study, and compare the results across the two projects. We find that 10 rules in the standard are significant predictors of fault location.",2009,0, 3337,Does calling structure information improve the accuracy of fault prediction?,"Previous studies have shown that software code attributes, such as lines of source code, and history information, such as the number of code changes and the number of faults in prior releases of software, are useful for predicting where faults will occur. In this study of an industrial software system, we investigate the effectiveness of adding information about calling structure to fault prediction models. The addition of calling structure information to a model based solely on non-calling structure code attributes provided noticeable improvement in prediction accuracy, but only marginally improved the best model based on history and non-calling structure code attributes. The best model based on history and non-calling structure code attributes outperformed the best model based on calling and non-calling structure code attributes.",2009,0, 3338,Using association rules to study the co-evolution of production & test code,"Unit tests are generally acknowledged as an important aid to produce high quality code, as they provide quick feedback to developers on the correctness of their code. In order to achieve high quality, well-maintained tests are needed. Ideally, tests co-evolve with the production code to test changes as soon as possible. In this paper, we explore an approach based on association rule mining to determine whether production and test code co-evolve synchronously. Through two case studies, one with an open source and another one with an industrial software system, we show that our association rule mining approach allows one to assess the co-evolution of product and test code in a software project and, moreover, to uncover the distribution of programmer effort over pure coding, pure testing, or a more test-driven-like practice.",2009,0, 3339,Relationship-based change propagation: A case study,"Software development is an evolutionary process. 
Requirements of a system are often incomplete or inconsistent, and hence need to be extended or modified over time. Customers may demand new services or goals that often lead to changes in the design and implementation of the system. These changes are typically very expensive. Even if only local modifications are needed, manually applying them is time-consuming and error-prone. Thus, it is essential to assist users in propagating changes across requirements, design, and implementation artifacts. In this paper, we take a model-based approach and provide an automated algorithm for propagating changes between requirements and design models. The key feature of our work is explicating relationships between models at the requirements and design levels. We provide conditions for checking validity of these relationships both syntactically and semantically. We show how our algorithm utilizes the relationships between models at different levels to localize the regions that should be modified. We use the IBM Trade 6 case study to demonstrate our approach.",2009,0, 3340,Raising the level of abstraction in the development of GMF-based graphical model editors,The Eclipse graphical modeling framework (GMF) provides substantial infrastructure and tooling for developing diagram-based editors for modelling languages atop the Eclipse platform. It is widely accepted that implementing a visual editor using the built-in GMF facilities is a particularly complex and error-prone task and requires a steep learning curve. We present an approach that raises the level of abstraction at which a visual editor is specified. The approach uses annotations at the metamodel level. Annotations are used for producing the required low-level intermediate GMF models necessary for generating an editor via model-to-model transformations.,2009,0, 3341,Predicting build failures using social network analysis on developer communication,"A critical factor in work group coordination, communication has been studied extensively. Yet, we are missing objective evidence of the relationship between successful coordination outcome and communication structures. Using data from IBM's Jazz™ project, we study communication structures of development teams with high coordination needs. We conceptualize coordination outcome by the result of their code integration build processes (successful or failed) and study team communication structures with social network measures. Our results indicate that developer communication plays an important role in the quality of software integrations. Although we found that no individual measure could indicate whether a build will fail or succeed, we leveraged the combination of communication structure measures into a predictive model that indicates whether an integration will fail. When used for five project teams, our predictive model yielded recall values between 55% and 75%, and precision values between 50% and 76%.",2009,0, 3342,Taming coincidental correctness: Coverage refinement with context patterns to improve fault localization,"Recent techniques for fault localization leverage code coverage to address the high cost problem of debugging. These techniques exploit the correlations between program failures and the coverage of program entities as the clue in locating faults. Experimental evidence shows that the effectiveness of these techniques can be affected adversely by coincidental correctness, which occurs when a fault is executed but no failure is detected.
In this paper, we propose an approach to address this problem. We refine code coverage of test runs using control- and data-flow patterns prescribed by different fault types. We conjecture that this extra information, which we call context patterns, can strengthen the correlations between program failures and the coverage of faulty program entities, making it easier for fault localization techniques to locate the faults. To evaluate the proposed approach, we have conducted a mutation analysis on three real world programs and cross-validated the results with real faults. The experimental results consistently show that coverage refinement is effective in easing the coincidental correctness problem in fault localization techniques.",2009,0, 3343,Predicting faults using the complexity of code changes,"Predicting the incidence of faults in code has been commonly associated with measuring complexity. In this paper, we propose complexity metrics that are based on the code change process instead of on the code. We conjecture that a complex code change process negatively affects its product, i.e., the software system. We validate our hypothesis empirically through a case study using data derived from the change history for six large open source projects. Our case study shows that our change complexity metrics are better predictors of fault potential in comparison to other well-known historical predictors of faults, i.e., prior modifications and prior faults.",2009,0, 3344,Using quantitative analysis to implement autonomic IT systems,"The software underpinning today's IT systems needs to adapt dynamically and predictably to rapid changes in system workload, environment and objectives. We describe a software framework that achieves such adaptiveness for IT systems whose components can be modelled as Markov chains. The framework comprises (i) an autonomic architecture that uses Markov-chain quantitative analysis to dynamically adjust the parameters of an IT system in line with its state, environment and objectives; and (ii) a method for developing instances of this architecture for real-world systems. Two case studies are presented that use the framework successfully for the dynamic power management of disk drives, and for the adaptive management of cluster availability within data centres, respectively.",2009,0, 3345,Taming Dynamically Adaptive Systems using models and aspects,"Since software systems need to be continuously available under varying conditions, their ability to evolve at runtime is increasingly seen as one key issue. Modern programming frameworks already provide support for dynamic adaptations. However the high-variability of features in Dynamic Adaptive Systems (DAS) introduces an explosion of possible runtime system configurations (often called modes) and mode transitions. Designing these configurations and their transitions is tedious and error-prone, making the system feature evolution difficult. While Aspect-Oriented Modeling (AOM) was introduced to improve the modularity of software, this paper presents how an AOM approach can be used to tame the combinatorial explosion of DAS modes. Using AOM techniques, we derive a wide range of modes by weaving aspects into an explicit model reflecting the runtime system. We use these generated modes to automatically adapt the system. 
We validate our approach on an adaptive middleware for home-automation currently deployed in Rennes metropolis.",2009,0, 3346,Accurate Interprocedural Null-Dereference Analysis for Java,"Null dereference is a commonly occurring defect in Java programs, and many static-analysis tools identify such defects. However, most of the existing tools perform a limited interprocedural analysis. In this paper, we present an interprocedural path-sensitive and context-sensitive analysis for identifying null dereferences. Starting at a dereference statement, our approach performs a backward demand-driven analysis to identify precisely paths along which null values may flow to the dereference. The demand-driven analysis avoids an exhaustive program exploration, which lets it scale to large programs. We present the results of empirical studies conducted using large open-source and commercial products. Our results show that: (1) our approach detects fewer false positives, and significantly more interprocedural true positives, than other commonly used tools; (2) the analysis scales to large subjects; and (3) the identified defects are often deleted in subsequent releases, which indicates that the reported defects are important.",2009,0, 3347,Invariant-based automatic testing of AJAX user interfaces,"AJAX-based Web 2.0 applications rely on stateful asynchronous client/server communication, and client-side runtime manipulation of the DOM tree. This not only makes them fundamentally different from traditional web applications, but also more error-prone and harder to test. We propose a method for testing AJAX applications automatically, based on a crawler to infer a flow graph for all (client-side) user interface states. We identify AJAX-specific faults that can occur in such states (related to DOM validity, error messages, discoverability, back-button compatibility, etc.) as well as DOM-tree invariants that can serve as oracle to detect such faults. We implemented our approach in ATUSA, a tool offering generic invariant checking components, a plugin-mechanism to add application-specific state validators, and generation of a test suite covering the paths obtained during crawling. We describe two case studies evaluating the fault revealing capabilities, scalability, required manual effort and level of automation of our approach.",2009,0, 3348,Refactoring sequential Java code for concurrency via concurrent libraries,"Parallelizing existing sequential programs to run efficiently on multicores is hard. The Java 5 package java.util.concurrent (j.u.c.) supports writing concurrent programs: much of the complexity of writing thread-safe and scalable programs is hidden in the library. To use this package, programmers still need to reengineer existing code. This is tedious because it requires changing many lines of code, is error-prone because programmers can use the wrong APIs, and is omission-prone because programmers can miss opportunities to use the enhanced APIs. This paper presents our tool, Concurrencer, that enables programmers to refactor sequential code into parallel code that uses three j.u.c. concurrent utilities. Concurrencer does not require any program annotations. Its transformations span multiple, non-adjacent, program statements. A find-and-replace tool can not perform such transformations, which require program analysis. 
Empirical evaluation shows that concurrencer refactors code effectively: concurrencer correctly identifies and applies transformations that some open-source developers overlooked, and the converted code exhibits good speedup.",2009,0, 3349,Do code clones matter?,"Code cloning is not only assumed to inflate maintenance costs but also considered defect-prone as inconsistent changes to code duplicates can lead to unexpected behavior. Consequently, the identification of duplicated code, clone detection, has been a very active area of research in recent years. Up to now, however, no substantial investigation of the consequences of code cloning on program correctness has been carried out. To remedy this shortcoming, this paper presents the results of a large-scale case study that was undertaken to find out if inconsistent changes to cloned code can indicate faults. For the analyzed commercial and open source systems we not only found that inconsistent changes to clones are very frequent but also identified a significant number of faults induced by such changes. The clone detection tool used in the case study implements a novel algorithm for the detection of inconsistent clones. It is available as open source to enable other researchers to use it as basis for further investigations.",2009,0, 3350,Mining exception-handling rules as sequence association rules,"Programming languages such as Java and C++ provide exception-handling constructs to handle exception conditions. Applications are expected to handle these exception conditions and take necessary recovery actions such as releasing opened database connections. However, exception-handling rules that describe these necessary recovery actions are often not available in practice. To address this issue, we develop a novel approach that mines exception-handling rules as sequence association rules of the form “(FCc1 ... FCcn) ∧ FCa ⇒ (FCe1 ... FCem)”. This rule describes that function call FCa should be followed by a sequence of function calls (FCe1 ... FCem) when FCa is preceded by a sequence of function calls (FCc1 ... FCcn). Such form of rules is required to characterize common exception-handling rules. We show the usefulness of these mined rules by applying them on five real-world applications (including 285 KLOC) to detect violations in our evaluation. Our empirical results show that our approach mines 294 real exception-handling rules in these five applications and also detects 160 defects, where 87 defects are new defects that are not found by a previous related approach.",2009,0, 3351,Alitheia Core: An extensible software quality monitoring platform,"Research in the fields of software quality and maintainability requires the analysis of large quantities of data, which often originate from open source software projects. Pre-processing data, calculating metrics, and synthesizing composite results from a large corpus of project artefacts is a tedious and error prone task lacking direct scientific value. The Alitheia Core tool is an extensible platform for software quality analysis that is designed specifically to facilitate software engineering research on large and diverse data sources, by integrating data collection and preprocessing phases with an array of analysis services, and presenting the researcher with an easy to use extension mechanism.
The system has been used to process several projects successfully, forming the basis of an emerging ecosystem of quality analysis tools.",2009,0, 3352,Clustering and Metrics Thresholds Based Software Fault Prediction of Unlabeled Program Modules,"Predicting the fault-proneness of program modules when the fault labels for modules are unavailable is a practical problem frequently encountered in the software industry. Because fault data belonging to previous software version is not available, supervised learning approaches can not be applied, leading to the need for new methods, tools, or techniques. In this study, we propose a clustering and metrics thresholds based software fault prediction approach for this challenging problem and explore it on three datasets, collected from a Turkish white-goods manufacturer developing embedded controller software. Experiments reveal that unsupervised software fault prediction can be automated and reasonable results can be produced with techniques based on metrics thresholds and clustering. The results of this study demonstrate the effectiveness of metrics thresholds and show that the standalone application of metrics thresholds (one-stage) is currently easier than the clustering and metrics thresholds based (two-stage) approach because the selection of cluster number is performed heuristically in this clustering based method.",2009,0, 3353,Testing SQL Server Integration Services Runtime Engine Using Model and Mock Objects,"Software testing is complex and costly. It has become increasingly difficult to assess the quality of software and evaluate its correctness due to the ever increasing complexity of the software implementations as well as their dynamic nature in terms of the requirements changes and functionality updates. It is practically not possible to test a software system for all possible combinations of inputs, interactions between modules and usage environmental conditions. Several approaches have been identified to maximize results of testing with limited investments. Model based testing and using mock objects are promising techniques for carrying out behavioral testing and are rapidly gaining popularity among the software testing community. In this paper, we present our approach in testing SQL server integration services runtime engine using model based test methodology to dynamically generate test cases and mock objects to control and observe the test system behavior.",2009,0, 3354,The Development of a Multi-Agent Based Middleware for RFID Asset Management System Using the PASSI Methodology,"Radio frequency identification (RFID) technology enables information to be remotely stored and retrieved by means of electromagnetic radiation. Compared to other automatic identification technologies, RFID provides an efficient, flexible and inexpensive way of identifying and tracking objects. Asset management is one of the potential applications for RFID technology. Asset management using RFID reduces the workload on asset audit administrators while eliminating the error prone manual audit processes. Successful implementation of RFID asset management system requires an intelligent use of the data harvested from the RFID system. This work describes the development of a multi-agent based middleware solution for processing and managing the data produced by RFID system for asset management applications. 
The middleware is developed using the agent-oriented software engineering (AOSE) methodology PASSI (Process for Agent Societies Specification and Implementation).",2009,0, 3355,"Improving quality, one process change at a time","We report on one organization's experience making process changes in a suite of projects. The changes were motivated by clients' requests for better time estimates, better quality, better stability and more reliable test scheduling resulting from the high number of bug reports and constant delivery delays. The teams embarked on a series of top-down process changes inspired by the IBM Rational Unified Process. Changes included adopting the Rational Tools, introducing iterative development, and later the hiring of a formal manual testing team and support for refactoring activities. To assess the impact of these changes we have collected fault data from 23 releases of the systems including releases from before and after these changes were introduced. In this report we discuss the challenges and impact of these process changes, and how the development teams leveraged these successes to gradually introduce other process improvements in a bottom-up fashion.",2009,0, 3356,Predicting defects in SAP Java code: An experience report,"Which components of a large software system are the most defect-prone? In a study on a large SAP Java system, we evaluated and compared a number of defect predictors, based on code features such as complexity metrics, static error detectors, change frequency, or component imports, thus replicating a number of earlier case studies in an industrial context. We found the overall predictive power to be lower than expected; still, the resulting regression models successfully predicted 50-60% of the 20% most defect-prone components.",2009,0, 3357,Automated substring hole analysis,"Code coverage is a common measure for quantitatively assessing the quality of software testing. Code coverage indicates the fraction of code that is actually executed by tests in a test suite. While code coverage has been around since the 60's there has been little work on how to effectively analyze code coverage data measured in system tests. Raw data of this magnitude, containing millions of data records, is often impossible for a human user to comprehend and analyze. Even drill-down capabilities that enable looking at different granularities starting with directories and going through files to lines of source code are not enough. Substring hole analysis is a novel method for viewing the coverage of huge data sets. We have implemented a tool that enables automatic substring hole analysis. We used this tool to analyze coverage data of several large and complex IBM software systems. The tool identified coverage holes that suggested interesting scenarios that were untested.",2009,0, 3358,Guided path exploration for regression test generation,"Regression test generation aims at generating a test suite that can detect behavioral differences between the original and the modified versions of a program. Regression test generation can be automated by using dynamic symbolic execution (DSE), a state-of-the-art test generation technique, to generate a test suite achieving high structural coverage. DSE explores paths in the program to achieve high structural coverage, and exploration of all these paths can often be expensive. 
However, if our aim is to detect behavioral differences between two versions of a program, we do not need to explore all paths in the program as not all these paths are relevant for detecting behavioral differences. In this paper, we propose a guided path exploration approach that avoids exploring irrelevant paths and gives priority to more promising paths (in terms of detecting behavioral differences) such that behavioral differences are more likely to be detected earlier in path exploration. Preliminary results show that our approach requires about 12.9% fewer runs on average (maximum 25%) to cause the execution of a changed statement and 11.8% fewer runs on average (maximum 31.2%) to cause program-state differences after its execution than the search strategies without guidance.",2009,0, 3359,High-level multicore programming with XJava,"Multicore chips are becoming mainstream, but programming them is difficult because the prevalent thread-based programming model is error-prone and does not scale well. To address this problem, we designed XJava, an extension of Java that permits the direct expression of producer/consumer, pipeline, master/slave, and data parallelism. The central concept of the extension is the task, a parallel activity similar to a filter in Unix. Tasks can be combined with new operators to create arbitrary nestings of parallel activities. Preliminary experience with XJava and its compiler suggests that the extensions lead to code savings and reduce the potential for synchronization defects, while preserving the advantages of object-orientation and type-safety. The proposed extensions provide intuitive “what-you-see-is-what-you-get” parallelism. They also enable other software tools, such as auto-tuning and accurate static analysis for race detection.",2009,0, 3360,Search-based testing of complex simulink models containing stateflow diagrams,"Model-based software design is constantly becoming more important and thus requiring systematic model testing. Test case generation constitutes a critical activity that is cost-intensive, time-consuming and error-prone when done manually. Hence, an automation of this process is required. One automation approach is search-based testing for which the task of generating test data is transformed into an optimization problem which is solved using metaheuristic search techniques. However, only little work has been done so far applying search-based testing techniques to continuous functional models, such as SIMULINK STATEFLOW models. This paper presents the current state of my thesis developing a new approach for automatically generating continuous test data sets achieving high structural model coverage for SIMULINK models containing STATEFLOW diagrams using search-based testing. The expected contribution of this work is to demonstrate how search-based testing techniques can be applied successfully to continuous functional models and how to cope with the arising problems such as generating and optimizing continuous signals, covering structural model elements and dealing with the complexity of the models.",2009,0, 3361,Concurrencer: A tool for retrofitting concurrency into sequential java applications via concurrent libraries,"Parallelizing existing sequential programs to run efficiently on multicores is hard. The Java 5 package java.util.concurrent (j.u.c.) supports writing concurrent programs. To use this package, programmers still need to refactor existing code. This is tedious, error-prone, and omission-prone.
This demo presents our tool, CONCURRENCER, which enables programmers to refactor sequential code into parallel code that uses j.u.c. concurrent utilities. CONCURRENCER does not require any program annotations, although the transformations span several, non-adjacent, program statements and use custom program analysis. A find-and-replace tool can not perform such transformations. Empirical evaluation shows that CONCURRENCER refactors code effectively: CONCURRENCER correctly identifies and applies transformations that some open-source developers overlooked, and the converted code exhibits good speedup.",2009,0, 3362,LuMiNous: model-driven assertion generation for runtime failure detection,"Well designed assertions improve overall software quality, ease debugging and maintenance, and support the construction of autonomic software systems. Although widely used both in academia and industry, manually defining code assertions is hard and error-prone. In this summary we present LuMiNous, a prototype that implements a technique to automatically generate code assertions from model annotations.",2009,0, 3363,Slede: Framework for automatic verification of sensor network security protocol implementations,"Verifying security properties of protocols requires developers to manually create protocol-specific intruder models, which could be tedious and error prone. We present Slede, a verification framework for sensor network applications. Key features include automation of: extraction of models, generation and composition of intrusion models, and verification of security properties.",2009,0, 3364,Reducing search space of auto-tuners using parallel patterns,"Auto-tuning is indispensable to achieve best performance of parallel applications, as manual tuning is extremely labor intensive and error-prone. Search-based auto-tuners offer a systematic way to find performance optimums, and existing approaches provide promising results. However, they suffer from large search spaces. In this paper we propose the idea to reduce the search space using parameterized parallel patterns. We introduce an approach to exploit context information from Master/Worker and Pipeline patterns before applying common search algorithms. The approach enables a more efficient search and is suitable for parallel applications in general. In addition, we present an implementation concept and a corresponding prototype for pattern-based tuning. The approach and the prototype have been successfully evaluated in two large case studies. Due to the significantly reduced search space a common hill climbing algorithm and a random sampling strategy require on average 54% less tuning iterations, while even achieving a better accuracy in most cases.",2009,0, 3365,Stage: Python with Actors,"Programmers hoping to exploit multi-core processors must split their applications into threads suitable for independent, concurrent execution. The lock-based concurrency of many existing languages is clumsy and error prone - a barrier to writing fast and correct concurrent code. The Actor model exudes concurrency - each entity in the model (an Actor) executes concurrently. Interaction is restricted to message passing which prevents many of the errors associated with shared mutable state and locking, the common alternative. By favouring message passing over method calling the Actor model makes distribution straightforward. Early Actor-based languages enjoyed only moderate success, probably because they were before their time.
More recent Actor languages have enjoyed greater success, the most successful being ERLANG, but the language is functional; a paradigm unfamiliar to many programmers. There is a need for a language that presents a familiar and fast encoding of the Actor model. In this paper we present STAGE, our mobile Actor language based on PYTHON.",2009,0, 3366,Discovering determinants of high volatility software,"This topic paper presents a line of research that we are proposing that incorporates, in a very explicit and intentional way, human and organizational aspects in the prediction of troublesome (defect-prone or change-prone or volatile, depending on the environment) software modules. Much previous research in this area tries to identify these modules by looking at their structural characteristics, so that increased effort can be concentrated on those modules in order to reduce future maintenance costs. The outcome of this work will be a set of models that describe the relationships between software change characteristics (in particular human, organizational, and process characteristics), changes in the structural properties of software modules (e.g. complexity), and the future volatility of those modules. The impact of such models is two-fold. First, maintainers will have an improved technique for estimation of maintenance effort based on recent change characteristics. Also, the models will help identify management practices that are associated with high future volatility.",2009,0, 3367,Inspection effectiveness for different quality attributes of software requirement specifications: An industrial case study,"Early inspections of software requirements specifications (SRS) are known to be an effective and cost-efficient quality assurance technique. However, inspections are often applied with the underlying assumption that they work equally well to assess all kinds of quality attributes of SRS. Little work has yet been done to validate this assumption. At Capgemini sd&m, we set up an inspection technique to assess SRS, the so called “specification quality gate” (QG-Spec). The QG-Spec has been applied to a series of large scale commercial projects. In this paper we present our lessons learned and discuss, which quality attributes are effectively assessed by means of the QG-Spec - and which are not. We argue that our results can be generalized to other existing inspection techniques. We came to the conclusion that inspections have to be carefully balanced with techniques for constructive quality assurance in order to economically arrive at high quality SRS.",2009,0, 3368,Software aging assessment through a specialization of the SQuaRE quality model,"In the last years the software application portfolio has become a key asset for almost all companies. During their lives, applications undergo lots of changes to add new functionalities or to refactor older ones; these changes tend to reduce the quality of the applications themselves, causing the phenomenon known as software aging. Monitoring of software aging is very important for companies, but up to now there are no standard approaches to perform this task. In addition many of the suggested models assess software aging basing on few software features, whereas this phenomenon affects all of the software aspects. In 2005 ISO/IEC released the SQuaRE quality model which covers several elements of software quality assessment, but some issues make SQuaRE usage quite difficult.
The purpose of this paper is to suggest an approach to software aging monitoring that considers the software product in its wholeness and to provide a specialization of the SQuaRE quality model which allows to perform this task.",2009,0, 3369,Existing model metrics and relations to model quality,"This paper presents quality goals for models and provides a state-of-the-art analysis regarding model metrics. While model-based software development often requires assessing the quality of models at different abstraction and precision levels and developed for multiple purposes, existing work on model metrics do not reflect this need. Model size metrics are descriptive and may be used for comparing models but their relation to model quality is not well-defined. Code metrics are proposed to be applied on models for evaluating design quality while metrics related to other quality goals are few. Models often consist of a significant amount of elements, which allows a large amount of metrics to be defined on them. However, identifying useful model metrics, linking them to model quality goals, providing some baseline for interpretation of data, and combining metrics with other evaluation models such as inspections requires more theoretical and empirical work.",2009,0, 3370,Operation-based versioning of metamodels with COPE,"Model-based development promises to increase productivity by offering modeling languages tailored to a specific domain. Such modeling languages are typically defined by a metamodel. In response to changing requirements and technological progress, the domains and thus the metamodels are subject to change. Manually migrating existing models to a new version of their metamodel is tedious and error-prone. Hence, adequate tool support is required to support the maintenance of modeling languages. COPE provides adequate tool support by specifying the coupled evolution of metamodels and models. In this paper, we present the tool support to record the operations carried out on the metamodel directly through an editor. These operations can be enriched by instructions on how to migrate corresponding models. To further reduce migration effort, COPE provides high-level operations which have built-in meaning in terms of the migration of models.",2009,0, 3371,Assessing Quality of Derived Non Atomic Data by Considering Conflict Resolution Function,We present a data quality manager (DQM) prototype providing information regarding the elements of derived non-atomic data values. Users are able to make effective decisions by trusting data according to the description of the conflict resolution function that was utilized for fusing data along with the quality properties of data ancestor. The assessment and ranking of non-atomic data is possible by the specification of quality properties and priorities from users at any level of experience.,2009,0, 3372,Cesar-FD: An Effective Stateful Fault Detection Mechanism in Drug Discovery Grid,"Workflow management system is widely accepted and used in the wide area network environment, especially in the e-science application scenarios, to coordinate the operation of different functional components and to provide more powerful functions. The error-prone nature of the wide area network environment makes the fault-tolerance requirements of workflow management become more and more urgent. 
In this paper, we propose Cesar-FD, a stateful fault detection mechanism, which builds up states related to the runtime and external environments of workflow management system by aggregating multiple messages and provides more accurate notifications asynchronously. We demonstrate the use of this mechanism in the drug discovery grid environment by two use cases. We also show that it can be used to detect faulty situations more accurately.",2009,0, 3373,A Modified BCE Algorithm for Fault-Tolerance Scheduling of Periodic Tasks in Hard Real-Time Systems,"Fault tolerance is an important aspect of real-time control systems, due to unavoidable timing constraints. In this paper, the timing problem of a set of concurrent periodic tasks is considered where each task has primary and alternate versions. In the literature, probability of fault in the alternate version of a task is assumed to be zero. Here, a fault probability with uniform distribution has been used. In addition, to cover the situations in which both versions are scheduled with some time overlapping, a criterion is defined for prioritizing primary version against the alternate version. A new scheduling algorithm is proposed based on the defined criterion. Simulation results show that an increase in the number of executed primary tasks which improves the efficiency of processor utilization, hence prove the efficiency of the proposed algorithm.",2009,0, 3374,CMDB Implementation Approaches and Considerations in SME/SITU's Companies,"ITIL (information technology infrastructure library) is the most widely used IT framework in organizations. This de-facto standard is a service-based IT-framework that aims to improve the quality of organizational services. The core of this framework is configuration management that includes configuration management data base (CMDB) to record, update and trace all assessed activities and information in the organization. However, ITIL implementation faces some challenges especially when it effects on culture, attitudes and processes in the organization. One major part of this implementation is related to CMDB implementation. There are several major approaches to implement this repository in organizations. This paper, at first describes ITIL infrastructure and then highlights major approaches and its challenges to implement CMDB in the SME companies.",2009,0, 3375,SPEWS: A Framework for the Performance Analysis of Web Services Orchestrated with BPEL4WS,"This paper addresses quality of service aspects of Web services (WS) orchestrations created using the business process execution language for Web services (BPEL4WS). BPEL4WS is a promising language describing the WS orchestrations in form of business processes, but it lacks of a sound formal semantic, which hinders the formal analysis and verification of business processes specified in it. Formal methods, like Petri nets (PN), may provide a means to analyse BPEL4WS processes, evaluating its performance, detecting weaknesses and errors in the process model already at design-time. A framework for transformation of BPEL4WS into generalized stochastic Petri nets (GSPN) is proposed to analyse the performance and throughput of WS, based on the execution of orchestrated processes.",2009,0, 3376,A BBN-Based Approach for Fault Localization,"Fault localization techniques help programmers find out the locations and the causes of the faults and accelerate the debugging process. 
The relation between the fault and the failure is usually complicated, making it hard to deduce how a fault causes the failure. Analysis of variance is broadly used in many correlative researches. In this paper, a Bayesian belief network (BBN) for fault reasoning was constructed based on the suspicious pattern, whose nodes consist of the suspicious pattern and the callers of the methods that constitute the suspicious pattern. The constructing algorithm of the BBN, the correlative probabilities, and the formula for the conditional probabilities of each arc of the BBN were defined. A reasoning algorithm based on the BBN was proposed, through which the faulty module can be found and the probability for each module containing the fault can be calculated. An evaluation method was proposed. Experiments were executed to evaluate this fault localization technique. The data demonstrated that this technique could achieve an average accuracy of 0.761 and an average recall of 0.737. This fault localization technique is very effective and has high practical value.",2009,0, 3377,A State Transition Model for Anti-Interference on the Embedded System,"Microprogrammed control unit (MCU) and embedded systems are widely used in all kinds of industrial, communication devices and household appliances. However, for the embedded system, even with well hardware protection, the stability are often impacted by the complicated electro magnetic interference (EMI) and the others interference. To get a better system stability ratio, this paper introduces a potent state transition model for the anti-interference intention instead of the empiristic methods. The states of the embedded system are classified into three types: I/O, computing, and error. By the state transition diagram, we deduced a set of anti-interference probability model, which guide the software measures for the anti-interference intention and reduce the unstable probability of the embedded system.",2009,0, 3378,Are real-world power systems really safe?,"With the advent of new power system analysis software, a more detailed arc flash (AF) analysis can be performed under various load conditions. These new tools can also evaluate equipment damage, design systems with lower AF, and predict electrical fire locations based on high AF levels. This article demonstrates how AF levels change with available utility mega volt amperes (MVA), additions in connected load, and selection of system components. This article summarizes a detailed analysis of several power systems to illustrate the possible misuses of the “Risk Category Classification Tables” in the Standard for Electrical Safety Requirements for Employee Workplaces, 2004 (NFPA 70E), while pointing toward future improvements of such standards.",2009,0, 3379,"DAVO: A Domain-Adaptable, Visual BPEL4WS Orchestrator","The Business Process Execution Language for Web Services (BPEL4WS) is the de facto standard for the composition of Web services into complex, value-added workflows in both industry and academia. Since the composition of Web services into a workflow is challenging and error-prone, several graphical BPEL4WS workflow editors have been developed. These tools focus on the composition process and the visualization of workflows and mainly address the needs of web service experts. To increase the acceptance of BPEL4WS in new application domains, it is mandatory that non Web service experts are also empowered to easily compose Web services into a workflow.
This paper presents the domain-adaptable visual orchestrator (DAVO), a graphical BPEL4WS workflow editor which offers a domain-adaptable data model and user interface. DAVO can be easily customized to domain needs and thus is suitable for non Web service experts.",2009,0, 3380,QoS Enhancements and Performance Analysis for Delay Sensitive Applications,This paper presents a comprehensive system modeling and analysis approach for both predicting and controlling queuing delay at an expected value under multi-class traffic in a single buffer. This approach could effectively enhance QoS delivery for delay sensitive applications. Six major contributions are given in the paper: (1) a discrete-time analytical model is developed for capturing multi-class traffic with binomial distribution; (2) a control strategy with dynamic queue thresholds is used in simulation experiments to control the delay at a specified value within the buffer; (3) the feasibility of the system is validated by comparing theoretical analysis with simulation scenarios; (4) the arrival rate can be adjusted for each forthcoming time window during the simulation with multi-packet sources; (5) statistical evaluation is performed to show both efficiency and accuracy of the analytical and simulation results; (6) a graphical user interface is developed that can provide flexible configuration for the simulation and validate input values.,2009,0, 3381,Packet Scheduling Algorithm with QoS Provision in HSDPA,"The 3rd generation WCDMA standard has been enhanced to offer significantly increased performance for packet data. But coming application like multimedia on desiring data rates will spur the UMTS can support. To support for such high data rates, high speed downlink access (HSDPA), labeled as a 3.5G wireless system, has been published in UMTS Release05. Under the HSDPA system, have to support high speed transmission and promises a peak data rate of up to 10 Mb/s. To achieve such high speed rate, system provides the channel quality indicator (CQI) information to detect the air interface condition. Then, there are many channel condition related scheduling schemes have been proposed to attend achievement of high system performance and guarantee the quality-of-service (QoS) requirements. However, there is no solution of packet scheduling algorithm can consider differences of data-types priority management under the hybrid automatic repeat (H-ARQ) scenario. In this paper, we propose the weight combination packet scheduling algorithm to target to enhance the system efficiency and balanced QoS requirements. The proposed schemes is simulated with OPNET simulator and compared with the Max CIR and PF algorithms in fast fading channel. Simulation result shows that the proposed algorithm can both effectively increase the cell throughput and meet user's satisfaction base on QoS requirements.",2009,0, 3382,Hermes: A Tool for Testing Mobile Device Applications,"Smart mobile devices are ubiquitous in today's society. Such devices are being used to host increasingly complex applications and users continue to have high expectations concerning the quality of mobile application software. Testing is an established means of identifying defects and ultimately promotes confidence in the quality of a software application. However, testing of mobile device applications is challenging due to their interactive nature, the inherent heterogeneity in underlying mobile devices, and devices' limited resources.
To address these difficulties, we have developed Hermes, a framework for writing tests plus a distributed run-time for automating test execution and reporting. Hermes offers support for multi-faceted tests that allow developers to verify an application's behaviour with respect to its function, aesthetics, and operating environment. In addition, Hermes has been designed to be extensible and is application independent. A partial prototype of Hermes' design has been evaluated and the results give evidence in support of the claim that use of Hermes is more effective in detecting defects than using manual testing techniques. While Hermes is more expensive to employ than manual testing, we expect that further anticipated development will lead to an improved cost/benefit ratio.",2009,0, 3383,A Cross-layer Relay Selection Algorithm for Infrastructure-Based Two-hop Relay Networks,"Relay selection is a key issue in multihop relay networks. This paper proposes a cross-layer design relay selection algorithm for the infrastructure-based two-hop relay networks by introducing a novel utility function as the relay selection criterion. The proposed algorithm considers both channel state information (CSI) at physical layer and queue state information (QSI) at data link layer, the goal is to guarantee the diverse QoS requirements for different services. Simulation results show that the proposed algorithm can decrease packet transmission time delay and packet dropping probability significantly while make a slight penalty on system spectrum efficiency. Moreover, it can exploit multiuser diversity gain.",2009,0, 3384,Delay and Throughput Trade-Off in WiMAX Mesh Networks,"We investigate WiMAX mesh networks in terms of delay and throughput trade-off. For a given topology, using our proposed analytical model we study how slot allocation policy and forwarding probability affects per-node and network performance.",2009,0, 3385,Predicting Quality of Object-Oriented Systems through a Quality Model Based on Design Metrics and Data Mining Techniques,"Most of the existing object-oriented design metrics and data mining techniques capture similar dimensions in the data sets, thus reflecting the fact that many of the metrics are based on similar hypotheses, properties, and principles. Accurate quality models can be built to predict the quality of object-oriented systems by using a subset of the existing object-oriented design metrics and data mining techniques. We propose a software quality model, namely QUAMO (QUAlity MOdel) which is based on divide-and-conquer strategy to measure the quality of object-oriented systems through a set of object-oriented design metrics and data mining techniques. The primary objective of the model is to make similar studies on software quality more comparable and repeatable. The proposed model is augmented from five quality models, namely McCall Model, Boehm Model, FURPS/FURPS+ (i.e. functionality, usability, reliability, performance, and supportability), ISO 9126, and Dromey Model. We empirically evaluated the proposed model on several versions of JUnit releases. We also used linear regression to formulate a prediction equation.
The technique is useful to help us interpret the results and to facilitate comparisons of results from future similar studies.",2009,0, 3386,Detection of Demagnetization Faults in Permanent-Magnet Synchronous Motors Under Nonstationary Conditions,"This paper presents the results of our study of the permanent-magnet synchronous motor (PMSM) running under demagnetization. We examined the effect of demagnetization on the current spectrum of PMSMs with the aim of developing an effective condition-monitoring scheme. Harmonics of the stator currents induced by the fault conditions are examined. Simulation by means of a two-dimensional finite-element analysis (FEA) software package and experimental results are presented to substantiate the successful application of the proposed method over a wide range of motor operation conditions. Methods based on continuous wavelet transform (CWT) and discrete wavelet transform (DWT) have been successfully applied to detect and to discriminate demagnetization faults in PMSM motors under nonstationary conditions. Additionally, a reduced set of easy-to-compute discriminating features for both CWT and DWT methods has been defined. We have shown the effectiveness of the proposed method by means of experimental results.",2009,0, 3387,Automatically identifying changes that impact code-to-design traceability,"An approach is presented that automatically determines if a given source code change impacts the design (i.e., UML class diagram) of the system. This allows code-to-design traceability to be consistently maintained as the source code evolves. The approach uses lightweight analysis and syntactic differencing of the source code changes to determine if the change alters the class diagram in the context of abstract design. The intent is to support both the simultaneous updating of design documents with code changes and bringing old design documents up to date with current code given the change history. An efficient tool was developed to support the approach and is applied to an open source system (i.e., HippoDraw). The results are evaluated and compared against manual inspection by human experts. The tool performs better than (error prone) manual inspection.",2009,0, 3388,Dn-based architecture assessment of Java Open Source software systems,"Since their introduction in 1994 the Martin's metrics became popular in assessing object-oriented software architectures. While one of the Martin metrics, normalised distance from the main sequence Dn, has been originally designed with assessing individual packages, it has also been applied to assess quality of entire software architectures. The approach itself, however, has never been studied. In this paper we take the first step to formalising the Dn-based architecture assessment of Java open source software. We present two aggregate measures: average normalised distance from the main sequence D̄n, and parameter of the fitted statistical model λ. Applying these measures to a carefully selected collection of benchmarks we obtain a set of reference values that can be used to assess quality of a system architecture. Furthermore, we show that applying the same measures to different versions of the same system provides valuable insights in system architecture evolution.",2009,0, 3389,Dynamic admission control and path allocation for SLAs in DiffServ networks,"Today's converged networks are mainly characterized by their support of real-time and high priority traffic requiring a certain level of quality of service (QoS).
In this context, traffic classification and prioritization are key features in providing preferential treatments of the traffic in the core of the network. In this paper, we address the joint problem of path allocation and admission control (JPAC) of new Service Level Agreements (SLA) in a DiffServ domain. In order to maximize the resources utilization and the number of admitted SLAs in the network, we consider statistical bandwidth constraints allowing for a certain overbooking over the network's links. SLAs' admissibility decisions are based on solving to optimality an integer linear programming (ILP) model. When tested by simulations, numerical results confirm that the proposed model can be solved to optimality for real-sized instances within acceptable computation times and substantially reduces the SLAs blocking probability, compared to the Greedy mechanism proposed in the literature.",2009,0, 3390,Verification of Replication Architectures in AADL,"An established approach to achieve fault tolerance is to deploy multiple copies of the same functionality on multiple processors to ensure that if one processor fails another can provide the same functionality. This approach is known as replication. In spite of the number of studies on the topic, designing a replication pattern is still error prone. This is due to the fact that its final behavior is the result of the combination of design decisions that involves reasoning about a collection of non-deterministic events such as hardware failures and parallel computations. In this paper we present an approach to model replication patterns in the architecture analysis and design language (AADL) and analyze potentially unintended behaviors. Such an approach takes advantage of the strong semantics of AADL to model replication patterns at the architecture level. The approach involves developing two AADL models. The first one defines the intended behavior in synchronous call sequences. And the second model describes the replication architecture. These two models are then compared using a differential model in Alloy where the requirements of the first model and the concurrency and potential failure of the second are combined. The additional behaviors discovered in this model are presented to the designer as potential errors in the design. The designer then has the opportunity to modify the replication architecture to correct these behaviors or qualify them as valid behaviors. Finally, we validated our approach by recreating the verification experiment presented in but limiting ourselves to the AADL syntax.",2009,0, 3391,A UML frontend for IP-XACT-based IP management,"IP-XACT is a well accepted standard for the exchange of IP components at Electronic System and Register Transfer Level. Still, the creation and manipulation of these descriptions at the XML level can be time-consuming and error-prone. In this paper, we show that the UML can be consistently applied as an efficient and comprehensible frontend for IP-XACT-based IP description and integration. For this, we present an IP-XACT UML profile that enables UML-based descriptions covering the same information as a corresponding IP-XACT description. This enables the automated generation of IP-XACT component and design descriptions from respective UML models. In particular, it also allows the integration of existing IPs with UML.
To illustrate our approach, we present an application example based on the IBM PowerPC Evaluation Kit.",2009,0, 3392,Fault-tolerant average execution time optimization for general-purpose multi-processor system-on-chips,"Fault-tolerance is due to the semiconductor technology development important, not only for safety-critical systems but also for general-purpose (non-safety critical) systems. However, instead of guaranteeing that deadlines always are met, it is for general-purpose systems important to minimize the average execution time (AET) while ensuring fault-tolerance. For a given job and a soft (transient) error probability, we define mathematical formulas for AET that includes bus communication overhead for both voting (active replication) and rollback-recovery with checkpointing (RRC). And, for a given multi-processor system-on-chip (MPSoC), we define integer linear programming (ILP) models that minimize AET including bus communication overhead when: (1) selecting the number of checkpoints when using RRC, (2) finding the number of processors and job-to-processor assignment when using voting, and (3) defining fault-tolerance scheme (voting or RRC) per job and defining its usage for each job. Experiments demonstrate significant savings in AET.",2009,0, 3393,Analysis and optimization of fault-tolerant embedded systems with hardened processors,"In this paper we propose an approach to the design optimization of fault-tolerant hard real-time embedded systems, which combines hardware and software fault tolerance techniques. We trade-off between selective hardening in hardware and process re-execution in software to provide the required levels of fault tolerance against transient faults with the lowest-possible system costs. We propose a system failure probability (SFP) analysis that connects the hardening level with the maximum number of re-executions in software. We present design optimization heuristics, to select the fault-tolerant architecture and decide process mapping such that the system cost is minimized, deadlines are satisfied, and the reliability requirements are fulfilled.",2009,0, 3394,pTest: An adaptive testing tool for concurrent software on embedded multicore processors,"More and more processor manufacturers have launched embedded multicore processors for consumer electronics products because such processors provide high performance and low power consumption to meet the requirements of mobile computing and multimedia applications. To effectively utilize computing power of multicore processors, software designers interest in using concurrent processing for such architecture. The master-slave model is one of the popular programming models for concurrent processing. Even if it is a simple model, the potential concurrency faults and unreliable slave systems still lead to anomalies of entire system. In this paper, we present an adaptive testing tool called pTest to stress test a slave system and to detect the synchronization anomalies of concurrent software in the master-slave systems on embedded multicore processors. We use a probabilistic finite-state automaton(PFA) to model the test patterns for stress testing and shows how a PFA can be applied to pTest in practice.",2009,0, 3395,Generation of compact test sets with high defect coverage,"Multi-detect (N-detect) testing suffers from the drawback that its test length grows linearly with N. We present a new method to generate compact test sets that provide high defect coverage. 
The proposed technique makes judicious use of a new pattern-quality metric based on the concept of output deviations. We select the most effective patterns from a large N-detect pattern repository, and guarantee a small test set as well as complete stuck-at coverage. Simulation results for benchmark circuits show that with a compact, 1-detect stuck-at test set, the proposed method provides considerably higher transition-fault coverage and coverage ramp-up compared to another recently-published method. Moreover, in all cases, the proposed method either outperforms or is as effective as the competing approach in terms of bridging-fault coverage and the surrogate BCE+ metric. In many cases, higher transition-fault coverage is obtained than much larger N-detect test sets for several values of N. Finally, our results provide the insight that, instead of using N-detect testing with as large N as possible, it is more efficient to combine the output deviations metric with multi-detect testing to get high-quality, compact test sets.",2009,0, 3396,System-level hardware-based protection of memories against soft-errors,"We present a hardware-based approach to improve the resilience of a computer system against the errors occurred in the main memory with the help of error detecting and correcting (EDAC) codes. Checksums are placed in the same type of memory locations and addressed in the same way as normal data. Consequently, the checksums are accessible from the exterior of the main memory just as normal data and this enables implicit fault-tolerance for interconnection and solid-state secondary storage sub-systems. A small hardware module is used to manage the sequential retrieval of checksums each time the integrity of the data accessed by the processor sub-system needs to be verified. The proposed approach has the following properties: (a) it is cost efficient since it can be used with simple storage and interconnection sub-systems that do not possess any inherent EDAC mechanism, (b) it allows on-line modifications of the memory protection levels, and (c) no modification of the application software is required.",2009,0, 3397,Networked vehicles for automated fault detection,"Creating fault detection software for complex mechatronic systems (e.g. modern vehicles) is costly both in terms of engineer time and hardware resources. With the availability of wireless communication in vehicles, information can be transmitted from vehicles to allow historical or fleet comparisons. New networked applications can be created that, e.g., monitor if the behavior of a certain system in a vehicle deviates compared to the system behavior observed in a fleet. This allows a new approach to fault detection that can help reduce development costs of fault detection software and create vehicle individual service planning. The COSMO (consensus self-organized modeling) methodology described in this paper creates a compact representation of the data observed for a subsystem or component in a vehicle. A representation that can be sent to a server in a backoffice and compared to similar representations for other vehicles. The backoffice server can collect representations from a single vehicle over time or from a fleet of vehicles to define a norm of the vehicle condition. The vehicle condition can then be monitored, looking for deviations from the norm. The method is demonstrated for measurements made on a real truck driven in varied conditions with ten different generated faults. 
The proposed method is able to detect all cases without prior information on what a fault looks like or which signals to use.",2009,0, 3398,Fast mode decision and motion estimation for AVS,"AVS video coding standard can achieve considerably high coding efficiency. Unfortunately this comes at a cost of considerably increased complexity at the encoder due to mode decision and motion estimation. In this paper, we propose fast inter-prediction mode decision based on probability distribution algorithm and improved small diamond-shaped pattern search algorithm to optimize motion estimation and mode decision. The experimental results show that the proposed methods provide considerable reduction in computational complexity while maintaining coding rate and quality.",2009,0, 3399,Risk-Aware SLA Brokering Using WS-Agreement,"Service level agreements (SLAs) are facilitators for widening the commercial uptake of grid technology. They provide explicit statements of expectation and obligation between service consumers and providers. However, without the ability to assess the probability that an SLA might fail, commercial uptake will be restricted, since neither party will be willing to agree. Therefore, risk assessment mechanisms are critical to increase confidence in grid technology usage within the commercial sector. This paper presents an SLA brokering mechanism including risk assessment techniques which evaluate the probability of SLA failure. WS-agreement and risk metrics are used to facilitate SLA creation between service consumers and providers within a typical grid resource usage scenario.",2009,0, 3400,Street CORNERS: Real-Time Contextual Representation of Sensor Network Data for Environmental Trend Identification,"Sensor networks have been deployed for a range of rural and environmental applications. Well-regarded for the volume and range of data which can be obtained, wireless sensor network applications capable of using the data gathered have not been fully realized, particularly in urban settings. Street CORNERS is a wireless sensor network application which supports the contextual presentation of data gathered from an urban setting. The Street CORNERS application offers real-time data display, and provides support for predictive algorithms suitable for anticipating, detecting, and defending urban communities, among others, from environmental threats such as declining air quality and urban flash floods. Street CORNERS is presented in two parts. The network design and deployment is outlined, followed by a discussion of the design of the network application, which is involved in data pre-processing and the contextual presentation of the data gathered for trend identification.",2009,0, 3401,A Trial of a Worker's Motion Trace System Using Terrestrial Magnetism and Acceleration Sensors,"In quality control of industrial products, it is important to control not only materials, parts, and design but also production processes themselves. Generally, most production lines have processes to confirm quality of works in former processes in everywhere. However, there are some processes of which quality cannot be confirmed in later processes. For example, to fix a part using some screws, procedure to move the part onto correct place and order to fasten screws is defined in most process. However, detecting violation of the procedure by visual or another way is hard in later processes.
Therefore, we have been developing a system that guarantees quality of a process by tracing motions of worker's arms and hands by using terrestrial magnetism sensors and acceleration sensors. As a prototype system, we developed a system that traces worker's motion and judges his/her work is correct or not at a process to attach a fuel tank in automobile assembly factory. In this paper, based on days long examination of the prototype system we will describe a method to judge worker's motion and its evaluation.",2009,0, 3402,Quantifying software reliability and readiness,"As the industry moves to more mature software processes (e.g., CMMI) there is increased need to adopt more rigorous, sophisticated (i.e., quantitative) metrics. While quantitative product readiness criteria are often used for business cases and related areas, software readiness is often assessed more subjectively & qualitatively. Quite often there is no explicit linkage to original performance and reliability requirements for the software. The criteria are primarily process-oriented (versus product oriented) and/or subjective. Such an approach to deciding software readiness increases the risk of poor field performance and unhappy customers. Unfortunately, creating meaningful and useful quantitative in-process metrics for software development has been notoriously difficult. This paper describes novel and quantitative software readiness criteria to support objective and effective decision-making at product shipment. The method organizes and streamlines existing quality and reliability data into a simple metric and visualizations that are applicable across products and releases. The methodology amalgamates two schools of thoughts in quantitative terms: product and process parameters that have been adequately represented to formalize the software readiness index. Parameters from all aspects of software development life cycle (e.g., requirements, project management & resources, development & testing, audits & assessments, stability and reliability, and technical documentation) that could impact the readiness index are considered.",2009,0, 3403,Accuracy improvement of multi-stage change-point detection scheme by weighting alerts based on false-positive rate,"One promising approach for large-scale simultaneous events (e.g., DDoS attacks and worm epidemics) is to use a multi-stage change-point detection scheme. The scheme adopts two-stage detection. In the first stage, local detectors (LDs), which are deployed on each monitored subnet, detects a change point in a monitored metric such as outgoing traffic rate. If an LD detects a change-point, it sends an alert to global detector (GD). In the second stage, GD checks whether the proportion of LDs that send alerts simultaneously is greater than or equal to a threshold value. If so, it judges that large-scale simultaneous events are occurring. In previous studies for the multi-stage change-point detection scheme, it is assumed that weight of each alert is identical. Under this assumption, false-positive rate of the scheme tends to be high when some LDs sends false-positive alerts frequently. In this paper, we weight alerts based on false-positive rate of each LD in order to decrease false-positive rate of the multi-stage change-point detection scheme. In our scheme, GD infers false-positive rate of each LD and gives lower weight to LDs with higher false-positive rate.
Simulation results show that our proposed scheme can achieve lower false-positive rate than the scheme without alert weighting under the constraint that detection rate must be 1.0.",2009,0, 3404,Static Detection of Un-Trusted Variables in PHP Web Applications,"Web applications support more and more our daily activities, it's important to improve their reliability and security. The content which users input to Web applications' server-side is named un-trusted content. Un-trusted content has a significant impact on the reliability and security of Web applications, so detecting the un-trusted variables in server-side program is important for improving the quality of Web applications. The previous methods have poor performance on weak typed and none typed server-side programs. To address this issue, this paper proposed a new technique for detecting un-trusted variables in PHP web applications (PHP is a weak typed server-side language). The technique is based upon a two phases static analysis algorithm. In the first phase, we extract modules from the Web application. Then un-trusted variables are detected from modules in the second phase. An implementation of the proposed techniques DUVP was also presented in the paper and it's successfully applied to detect un-trusted variables in large-scale PHP web application.",2009,0, 3405,Voltage flicker compensation using STATCOM,"Voltage flicker is considered as one of the most severe power quality problems (especially in loads like electrical arc furnaces) and much attention has been paid to it lately. Due to the latest achievements in the semiconductors industry and consequently the emergence of the compensators based on voltage source converters, FACTS devices have been gradually noticed to be used for voltage flicker compensation. This paper covers the contrasting approaches; dealing with the voltage flicker mitigation in three stages and assessing the related results in details. Initially, the voltage flicker mitigation, using FCTCR (fixed capacitor thyristor controlled reactor), was simulated. Secondly, the compensation for the Static Synchronous Compensator (STATCOM) has been performed. In this case, injection of harmonics into the system caused some problems which were later overcome by using 12-pulse assignment of STATCOM and RLC filters. The obtained results show that STATCOM is very efficient and effective for the flicker compensation. All the simulations have been performed on the MATLAB Software.",2009,0, 3406,Wavelet packet analysis applied in detection of low-voltage DC arc fault,The randomness and instantaneity of low-voltage DC arc fault make it difficult to be detected by methods in time or frequency domain. This paper proposed a method based on wavelet packet analysis which has the localization characteristics to detect low-voltage DC arc fault. And the effectiveness of this method has been proved by the simulation analysis with the MATLAB software and the arc simulation experiments.,2009,0, 3407,An efficient parallel approach to Random Sample Matching (pRANSAM),"This paper introduces a parallelized variant of the Random Sample Matching (RANSAM) approach, which is a very time and memory efficient enhancement of the common Random Sample Consensus (RANSAC). The RANSAM technique can be applied to various fields of application such as mobile robotics, computer vision, and medical robotics.
Since standard computers feature multi-core processors nowadays, a considerable speedup can be obtained by distributing selected subtasks of RANSAM among the available cores. First of all this paper addresses the parallelization of the RANSAM approach. Several important characteristics are derived from a probabilistic point of view. Moreover, we apply a fuzzy criterion to compute the matching quality, which is an important step towards real-time capability. The algorithm has been implemented for Windows and for the QNX RTOS. In an experimental section the performance of both implementations is compared and our theoretical results are validated.",2009,0, 3408,Study of reliability and accelerated life test of electric drive system,"Reliability is defined as the probability that an item can perform its intended function for a specified interval under stated conditions. This paper researches on the reliability and accelerated life test of electric drive system of Electric Vehicle. It provides three different cases of accelerated life testing, and builds the fault tree through analyzing the construction and failure mechanism of the electric drive system. It presents the reliable model of weakness factors in electric drive system through reliability assessment using simulation software. And it confirms the influence factors about the reliability for electric drive system.",2009,0, 3409,Segment based X-Filling for low power and high defect coverage,"Many X-Filling strategies are proposed to reduce test power during scan based testing. Because their main motivation is to reduce the switching activities of test patterns in the test process, some of them are prone to reduce the test ability of test patterns, which may lead to low defect coverage. In this paper, we propose a segment based X-filling (SBF) technique to reduce test power using multiple scan chains, with minimal impact on defect coverage. Different from the previous filling methods, our X-filling technique is segment based and defect coverage aware. The method can be easily incorporated into traditional ATPG flow to keep capture power below a certain limit and keep the defect coverage at a high level.",2009,0, 3410,A Proposal for Stable Semantic Metrics Based on Evolving Ontologies,"In this paper, we propose a set of semantic cohesion metrics for ontology measurement, which can be used to assess the quality of evolving ontologies in the context of dynamic and changing Web. We argue that these metrics are stable and can be used to measure ontology semantics rather than ontology structures. Measuring ontology quality with semantic inconsistencies caused by ontology evolutions, is mainly considered in this paper. The proposed semantic cohesion metrics are theoretically and empirically validated.",2009,0, 3411,Research on Workflow QoS,"In business processes, buyers and suppliers define a contract between the two parties such as quality of service (QOS) and quality of products. Organizations operating in modern markets require an excellent service management. When services or products are created or managed using workflow processes, the workflow management system should predict, monitor and control the QOS rendered to customers according to specifications of the contract.
To achieve these objectives, an appropriate QOS model for workflow processes and methods to compute QOS are proposed.",2009,0, 3412,Optimized Design of Injection Mould for Mobile Phone Front Shell Based on CAE Technology,"There is a lot of limitations in the traditional process of mould design and manufacturing. With the development of the science and technology, especially in the field of the computer, CAE technology begins to be applied widely in the process of modern mould design and manufacturing. The results of CAE simulation analysis of injection molding can provide reliable and optimized reference data for mould design and manufacturing. Applying CAE simulation analysis technology of injection molding can not only increase the probability of success in mould test but also improve greatly the quality of mould design and manufacturing. In this paper, the injection molding process of mobile phone front shell is analyzed. The best position of gate is discussed by using the Moldflow software. The optimized design scheme of feed system is determined in view of the particular structure of the plastic part. Then the simulation flow analysis of injection molding is carried into execution. On the basis of CAE analysis of injection molding, the whole injection mould structure is designed and the working procedure of injection mould is stated.",2009,0, 3413,Angle Domain Average and Autoregressive Spectrum Analysis Based Gear Faults Diagnosis,"In order to process the non-stationary vibration signals during run-up of gearbox, the method based on angle domain average and autoregressive spectrum analysis is presented. This new method combines angle domain average with angle domain average technique. Firstly, the vibration signal is sampled at constant time increments and then uses software to resample the data at constant angle increments.Secondly, the angle domain signal is preprocessed using angle domain average technique in order to eliminate the unrelated noise. In the end, the averaged signals are processed by autoregressive spectrum analysis. The experimental results show that the proposed method can effectively detect the gear crack faults.",2009,0, 3414,Resilient computing: An engineering discipline,"The term resiliency has been used in many fields like child psychology, ecology, business, and several others, with the common meaning of expressing the ability to successfully accommodate unforeseen environmental perturbations or disturbances. The adjective resilient has been in use for decades in the field of dependable computing systems however essentially as a synonym of fault-tolerant, thus generally ignoring the unexpected aspect of the phenomena the systems may have to face. These phenomena become of primary relevance when moving to systems like the future large, networked, evolving systems constituting complex information infrastructures - perhaps involving everything from super-computers and huge server ldquofarmsrdquo to myriads of small mobile computers and tiny embedded devices, with humans being central part of the operation of such systems. Such systems are in fact the dawning of the ubiquitous systems that will support Ambient Intelligence. With such ubiquitous systems, what is at stake is to maintain dependability, i.e., the ability to deliver service that can justifiably be trusted, in spite of continuous changes. 
Therefore the term resilience and resilient computing can be applied to the design of ubiquitous systems and defined as the search for the following property: the persistence of service delivery that can justifiably be trusted, when facing changes. Changes may be of different nature, with different prospect and different timing. Therefore the design of ubiquitous systems requires the mastering of many, often separated, engineering disciplines that span from advanced probability to logic, from human factors to cryptology and information security and to management of large projects. From an educational point of view, very few, if any, Universities are offering a comprehensive and methodical track that is able to provide students with a sufficient preparation that makes them able to cope with the challenges posed by the design of ubiquitous systems. In Europe an activity has started towards the identification of a MSc curriculum in Resilient Computing as properly providing a timely and necessary answer to requirements posed by the design of ubiquitous systems. To this aim, a Network of Excellence ReSIST - Resilience for Survivability in IST - was run from January 2006 to March 2009 (see http://www.resist-noe.org). In this presentation the results of ReSIST will be presented as well as the identified MSc curriculum in Resilient Computing, to share its experience and to involve a much larger, open and qualified community in the discussion of the proposed curriculum.",2009,0, 3415,A cross-input adaptive framework for GPU program optimizations,"Recent years have seen a trend in using graphic processing units (GPU) as accelerators for general-purpose computing. The inexpensive, single-chip, massively parallel architecture of GPU has evidentially brought factors of speedup to many numerical applications. However, the development of a high-quality GPU application is challenging, due to the large optimization space and complex unpredictable effects of optimizations on GPU program performance. Recently, several studies have attempted to use empirical search to help the optimization. Although those studies have shown promising results, one important factor-program inputs-in the optimization has remained unexplored. In this work, we initiate the exploration in this new dimension. By conducting a series of measurement, we find that the ability to adapt to program inputs is important for some applications to achieve their best performance on GPU. In light of the findings, we develop an input-adaptive optimization framework, namely G-ADAPT, to address the influence by constructing cross-input predictive models for automatically predicting the (near-)optimal configurations for an arbitrary input to a GPU program. The results demonstrate the promise of the framework in serving as a tool to alleviate the productivity bottleneck in GPU programming.",2009,0, 3416,Evaluating the performance and intrusiveness of virtual machines for desktop grid computing,"We experimentally evaluate the performance overhead of the virtual environments VMware Player, QEMU, VirtualPC and VirtualBox on a dual-core machine. Firstly, we assess the performance of a Linux guest OS running on a virtual machine by separately benchmarking the CPU, file I/O and the network bandwidth. These values are compared to the performance achieved when applications are run on a Linux OS directly over the physical machine. Secondly, we measure the impact that a virtual machine running a volunteer @home project worker causes on a host OS.
Results show that performance attainable on virtual machines depends simultaneously on the virtual machine software and on the application type, with CPU-bound applications much less impacted than IO-bound ones. Additionally, the performance impact on the host OS caused by a virtual machine using all the virtual CPU, ranges from 10% to 35%, depending on the virtual environment.",2009,0, 3417,Predicting cache needs and cache sensitivity for applications in cloud computing on CMP servers with configurable caches,"QoS criteria in cloud computing require guarantees about application runtimes, even if CMP servers are shared among multiple parallel or serial applications. Performance of computation-intensive application depends significantly on memory performance and especially cache performance. Recent trends are toward configurable caches that can dynamically partition the cache among cores. Then, proper cache partitioning should consider the applications' different cache needs and their sensitivity towards insufficient cache space. We present a simple, yet effective and therefore practically feasible black-box model that describes application performance in dependence on allocated cache size and only needs three descriptive parameters. Learning these parameters can therefore be done with very few sample points. We demonstrate with the SPEC benchmarks that the model adequately describes application behavior and that curve fitting can accomplish very high accuracy, with mean relative error of 2.8% and maximum relative error of 17%.",2009,0, 3418,Generalized Tardiness Quantile Metric: Distributed DVS for Soft Real-Time Web Clusters,"Performing QoS (Quality of Service) control in large computing systems requires an on line metric that is representative of the real state of the system. The Tardiness Quantile Metric (TQM) introduced earlier allows control of QoS by measuring efficiently how close to the specified QoS the system is, assuming specific distributions. In this paper we generalize this idea and propose the Generalized Tardiness Quantile Metric (GTQM). By using an online convergent sequential process, defined from a Markov chain, we derive quantile estimations that do not depend on the shape of the workload probability distribution. We then use GTQM to keep QoS controlled in a fine grain manner, saving energy in soft real-time Web clusters. To evaluate the new metric, we show practical results in a real Web cluster running Linux, Apache, and MySQL, with our QoS control and for both a deterministic workload and an e-commerce workload. The results show that the GTQM method has excellent workload prediction capabilities, which immediately translates in more accurate QoS control, allowing for slower speeds and larger energy savings than the state-of-the-art in soft real-time Web cluster systems.",2009,0, 3419,FTTH Network Management Software Tool: SANTAD ver 2.0,"This paper focused on developing a fiber-to-the-home (FTTH) network management software tool named Smart Access Network Testing, Analyzing and Database (SANTAD) ver 2.0 based on Visual Basic for transmission surveillance and fiber fault identification. SANTAD will be installed with optical line terminal (OLT) at central office (CO) to allow the network service providers and field engineers to detect a fiber fault, locate the failure location, and monitor the network performance downwardly from CO towards customer residential locations. 
SANTAD is able to display the status of each optical network unit (ONU) connected line on a single computer screen with capability to configure the attenuation and detect the failure simultaneously. The failure information will be delivered to the field engineers for promptly actions, meanwhile the failure line will be diverted to protection line to ensure the traffic flow continuously. The database system enable the history of network scanning process be analyzed and studied by the engineer.",2009,0, 3420,Ad Hoc Solution of the Multicommodity-Flow-Over-Time Problem,"Ad hoc shared-ride systems built upon intelligent-transportation-system (ITS) technology represent a promising scenario for investigating the multicommodity-flow-over-time problem. This type of problem is known to be strongly NP-hard. Furthermore, capacity assignment in this shared-ride system is a problem to be solved in highly dynamic transportation and communication networks. So far, the known heuristics to this problem are centralized and require global knowledge about the environment. This paper develops a decentralized ad hoc capacity-assignment approach. Based on a spatial decomposition of the global optimization problem, the solution provides effective agent decisions using only local knowledge. The effectiveness is assessed by the trip quality for ride clients and by the required communication effort.",2009,0, 3421,Using Declarative Meta Programming for Design Flaws Detection in Object-Oriented Software,"Nowadays, many software developers and maintainers encounter with incomprehensible, unexpandable and unchangeable program structures that consequently reduce software quality. Such problems come from poor design and poor programming called design flaws. Design flaws are program properties that indicate a potentially deficient design of a software system. It can increase the software maintenance cost drastically. Therefore detection of these flaws is necessary. This paper proposes a declarative-based approach in which the design flaws of an object-oriented system can be detected at the meta-level in the declarative meta programming. We apply our approach to detect some well-known design flaws, and the results show that the proposed approach is able to detect those flaws.",2009,0, 3422,Notice of Violation of IEEE Publication Principles
Evaluation of GP Model for Software Reliability,"Notice of Violation of IEEE Publication Principles

""""Evaluation of GP Model for Software Reliability,""""
by S. Paramasivam, and M. Kumaran,
in the 2009 International Conference on Signal Processing Systems, May 2009

After careful and considered review of the content and authorship of this paper by a duly constituted expert committee, this paper has been found to be in violation of IEEE's Publication Principles.

This paper contains significant portions of original text from the paper cited below. The original text was copied without attribution (including appropriate references to the original author(s) and/or paper title) and without permission.

Due to the nature of this violation, reasonable effort should be made to remove all past references to this paper, and future references should be made to the following article:

""""A Comparative Evaluation of Using Genetic Programming for Predicting Fault Count Data,""""
by W. Afzal, R. Torkar,
in the Third International Conference on Software Engineering Advances, 2008. ICSEA '08, pp.407-414, October 2008

There has been a number of software reliability growth models (SRGMs) proposed in literature. Due to several reasons, such as violation of models' assumptions and complexity of models, the practitioners face difficulties in knowing which models to apply in practice. This paper presents a comparative evaluation of traditional models and use of genetic programming (GP) for modeling software reliability growth based on weekly fault count data of three different industrial projects. The motivation of using a GP approach is its ability to evolve a model based entirely on prior data without the need of making underlying assumptions. The results show the strengths of using GP for predicting fault count.",2009,0, 3423,Estimation of Defect Proneness Using Design Complexity Measurements in Object-Oriented Software,"Software engineering is continuously facing the challenges of growing complexity of software packages and increased level of data on defects and drawbacks from software production process. This makes a clarion call for inventions and methods which can enable a more reusable, reliable, easily maintainable and high quality software systems with deeper control on software generation process. Quality and productivity are indeed the two most important parameters for controlling any industrial process. Implementation of a successful control system requires some means of measurement. Software metrics play an important role in the management aspects of the software development process such as better planning, assessment of improvements, resource allocation and reduction of unpredictability. The process involving early detection of potential problems, productivity evaluation and evaluating external quality factors such as reusability, maintainability, defect proneness and complexity are of utmost importance. Here we discuss the application of CK metrics and estimation model to predict the external quality parameters for optimizing the design process and production process for desired levels of quality. Estimation of defect-proneness in object-oriented system at design level is developed using a novel methodology where models of relationship between CK metrics and defect-proneness index is achieved. A multifunctional estimation approach captures the correlation between CK metrics and defect proneness level of software modules.",2009,0, 3424,Graph-Based Task Replication for Workflow Applications,"The Grid is an heterogeneous and dynamic environment which enables distributed computation. This makes it a technology prone to failures. Some related work uses replication to overcome failures in a set of independent tasks, and in workflow applications, but they do not consider possible resource limitations when scheduling the replicas. In this paper, we focus on the use of task replication techniques for workflow applications, trying to achieve not only tolerance to the possible failures in an execution, but also to speed up the computation without demanding the user to implement an application-level checkpoint, which may be a difficult task depending on the application. Moreover, we also study what to do when there are not enough resources for replicating all running tasks. We establish different priorities of replication depending on the graph of the workflow application, giving more priority to tasks with a higher output degree. We have implemented our proposed policy in the GRID superscalar system, and we have run the fastDNAml as an experiment to prove our objectives are reached.
Finally, we have identified and studied a problem which may arise due to the use of replication in workflow applications: the replication wait time.",2009,0, 3425,Evaluating Provider Reliability in Grid Resource Brokering,"If Grid computing is to experience widespread commercial adoption, then incorporating risk assessment and management techniques is essential,both during negotiation between service provider and service requester and during run-time. This paper focuses on the role of a resource broker in this context. Specifically, an approach to evaluating the reliability of risk information received from resource providers is presented, using historical data to provide a statistical estimate of the average integrity of their risk assessments, with respect to systematic overestimation or underestimation of the probability of failure. Simulation results are presented, indicating the effectiveness of this approach.",2009,0, 3426,Remote power quality monitoring and analysis system using LabVIEW software,This paper presents the development of a computer based data acquisition system that provides real-time monitoring of voltage and current at the customer's point of common coupling (PCC). Any power quality disturbances sustained by the user throughout the monitoring period were detected and recorded on remote PC. Post acquisition analysis was performed on the data collected. The power quality system was put to several tests and experiments during the development stage of the system. Actual continuous real-time monitoring was carried out for one-week duration using the developed system and results were analyzed and reported.,2009,0, 3427,Indoor monitoring of respiratory distress triggering factors using a wireless sensing network and a smart phone,"A wireless sensing network Bluetooth enabled was designed and implemented for continuous monitoring of indoor humidity and temperature conditions as well as to detect pollutant gases and vapors. The novelty of the work is related to the development of an embedded software using Java2ME technology for a smart phone that materializes a user friendly HMI. Two mobile software modules assure sensor nodes data reading through Bluetooth connection, primary data processing, data storage and alarm generation according with imposed thresholds for air quality parameters. Additional .NET developed software for a Notebook PC platform permits to remotely configure the mobile application and to receive the data logged in the mobile phone. Using the implemented distributed measurement system, including the smart phone, an intelligent assessment of air conditions for risk factor reduction of asthma or chronic obstructive pulmonary disease is carried out. Several experimental results are also included.",2009,0, 3428,Analytic Scoring of Landscape CAD Course Based on BP Neural Network,"Course scoring is the major criteria for measuring the teaching effect and a basis for improving the education quality. Analytic scoring of landscape CAD (computer-aided design) course is influenced by various factors. Relationships among these factors are complex, some are nonlinear, even some are random and fuzzy. It is difficult to explain their internal relationships with traditional method. 
This research combines back-propagation neural network and DPS software to establish a three-layer BP neural network model, which took 60 examination papers of landscape CAD course as samples and made predictions on the score in accordance with five factors, including landscape design standard, landscape design innovation, computer cartography standard, drawing effect and workload. The results show that BP neural network model has strong nonlinear approximation ability, could truly reflects the nonlinear relationships between global score of landscape CAD course and main controlling factors of analytic scoring, with small error between predicted values and the measured values, relative error lower than 5%. In the future, when analytic scoring of the landscape CAD course obtains from the teachers, the global scoring can be calculated by BPNN model automatically. This method showed wide application prospect to the courses need analytic scoring.",2009,0, 3429,Detecting Software Faults in Distrubted Systems,"We are concerned with the problem of detecting faults in distributed software, rapidly and accurately. We assume that the software is characterized by events or attributes, which determine operational modes; some of these modes may be identified as failures. We assume that these events are known and that their probabilistic structure, in their chronological evolution, is also known, for a finite set of different operational modes. We propose and analyze a sequential algorithm that detects changes in operational modes rapidly and reliably. Further more, a threshold operational parameter of the algorithm controls effectively the induced speed versus correct detection versus false detection tradeoff.",2009,0, 3430,Real-Time Designing Software for the Sewing Path Design on the Broidery Industry,The software with the user graphic interface has been developed for the sewing path design in the embroidery industry. This new version of software is an advanced version of the last one and is incorporated many advanced new functions. The main purpose of the software is to create an easy and painless environment for the designer to create their own artistic sewing pattern with suitable sewing path. The resulting sawing path will be automatically transformed into a file with the absolutely or relatively coordinate value of each stitch point on the sewing path. The coordinate file will feed into the embroidery machine for the recreation of the design pattern on the surface of the product. Commercial graphic software does not provide the suitable function to transform each stitch point into their respective coordinate values. This drawback is resulted in the transformation process to be done by hand and by means of measuring each stitch point by designer - one point by one point. Most of the designed sewing pattern will involve more than hundreds of stitch points. It is a tedious and error-prone process for the designer and will deprive most of the design time away from the designer. This software will provide an immediately help for the designer to do the painful work of the process of the coordinate transformation. 
This will leave the designer to focus only on the artistic pattern design.",2009,0, 3431,Using the Number of Faults to Improve Fault-Proneness Prediction of the Probability Models,"The existing fault-proneness prediction methods are based on unsampling and the training dataset does not contain the information on the number of faults of each module and the fault distributions among these modules. In this paper, we propose an oversampling method using the number of faults to improve fault-proneness prediction. Our method uses the information on the number of faults in the training dataset to support better prediction of fault-proneness. Our test illustrates that the difference between the predictions of oversampling and unsampling is statistically significant and our method can improve the prediction of two probability models, i.e. logistic regression and naive Bayes with kernel estimators.",2009,0, 3432,Reliability Analysis in the Early Development of Real-Time Reactive Systems,"The increasing trend toward complex software systems has highlighted the need to incorporate quality requirements earlier in the development process. Reliability is one of the important quality indicators of such systems. This paper proposes a reliability analysis approach to measure reliability in the early development of real-time reactive systems (RTRS). The goal is to provide decision support and detect the first signs of low or decreasing reliability as the system design evolves. The analysis is conducted in a formal development environment for RTRS, formalized mathematically and illustrated using a train-gate-controller case study.",2009,0, 3433,A Rotatable Placement Algorithm and GA to the Nesting Problem,"The objective of two-dimensional optimal nesting problem is to place the same or different pieces of the fixed quantity on the sheet in this paper. What we want to do is increase the rate of utility and decrease the waste of panel. Generally, determine the quality of the nesting results; it can briefly be divided into two factors: placement algorithm and permutation. Placement algorithm means to decide the positions where the pieces place into the sheet. And permutation is the placing sequence order of pieces. If the sequence of permutation is available and the placement rule also meets the demand of packing, then the exact or optimal solutions could be found. This research provides a new placement algorithm rule """"area-decomposition"""" method. This combines the rotation function for each piece and genetic algorithm. A comparison of nesting with literature and commercial software shows the results. This research can really acquire good results of nesting according to the demand of different situations in interest.",2009,0, 3434,Applications of Regression Kriging and GIS in Detecting the Variation in Leaf Nitrogen and Phosphorus of Spruce in Europe,"The leaf nitrogen and phosphorus are considered the major limitations of the photosynthetic process, reflecting the quality and suitability of habitation. The multidimensional factors of ecosystem have been a great barrier to discover the biochemical reaction of plants to their environment. In this research, a novel approach integrated with regression kriging and GIS is applied to explore the pattern of these leaf minerals in relation to spatial variability in climate and landscape (urban radiation and forest shield). Europe was chosen as the study area owing to the availability of spruce leaf data and ancillary grids.
The advantage of this method is based on the fact that a map-based orthogonal space and universal kriging improve the accuracy and resolution in mapping the spatial distribution of leaf minerals.",2009,0, 3435,Integrated Platform for Autonomic Computing,"IPAC (integrated platform for autonomic computing) aims at delivering a middleware and service creation environment for developing embedded, intelligent, collaborative, context-aware services in mobile nodes. IPAC relies on short range communications for the ad hoc realization of dialogs among collaborating nodes. Advanced sensing components leverage the context-awareness attributes of IPAC, thus rendering it capable of delivering highly innovative applications for mobile and pervasive computing. IPAC networking capabilities are based on rumour spreading techniques, a stateless and resilient approach, and information dissemination among embedded nodes. Spreading of information is subject to certain rules (e.g., space, time, price). IPAC nodes may receive, store, assesses and possibly relay the incoming content to other nodes. The same distribution channel is followed for the dissemination of new applications and application components that """"join the IPAC world"""".",2009,0, 3436,Multiobjective Evolutionary Optimization Algorithm for Cognitive Radio Networks,"Under Cognitive radio (CR), the Quality of Service (QoS) suffers from many dimensions or metrics of communication quality for improving spectrum utilization. To investigate this issue, this paper develops a methodology based on the multiobjective optimization model with genetic algorithms (GAs). The influence of evolving a radio defined by a chromosome is identified. The Multiobjective Cognitive Radio (MOCR) algorithm from genetically manipulating the chromosomes is proposed. Using adaptive component as an example, the bounds for the maximum benefit is predicted by a proposed model that considers Pareto front. To find a set of parameters that optimize the radio for userpsilas current needs, several solutions are presented. Simulation results show that MOCR is able to find a comparatively better spread of compromise solutions.",2009,0, 3437,Improving Web Services Robustness,"Developing robust web services is a difficult task. Field studies show that a large number of web services are deployed with robustness problems (i.e., presenting unexpected behaviors in the presence of invalid inputs). Several techniques for the identification of robustness problems have been proposed in the past. This paper proposes a mechanism that automatically fixes the problems detected. The approach consists of using robustness testing to detect robustness issues and then mitigate those issues by applying inputs verification based on well-defined parameter domains, including domain dependencies between different parameters. This integrated and fully automatable methodology has been used to improve three different implementations of the TPC-App web services. Results show that this tool can be easily used by developers to improve the robustness of web services implementations.",2009,0, 3438,An Extensible Abstract Service Orchestration Framework,"Service composition is complex. It has to reach a set of pre-defined non-functional qualities, like security for instance, which requires the production of complicated code.This code, often distributed between client and server sides, is highly error-prone and difficult to maintain. 
In this paper, we present a generative environment for the orchestration of abstract services and the separate specification of non-functional properties. This environment has been built within the European SODA project and validated on several industrial use cases. In this paper, we focus on an alarm management scenario with stringent security requirements.",2009,0, 3439,SRGMs Based on Stochastic Differential Equations,"This paper presents a software reliability growth model based on Ito type stochastic differential equation. As the size of a software system is large, the number of faults detected during the testing phase becomes large ; the change of the number of faults, which are detected and removed through each debugging, becomes sufficiently small compared with the initial fault content at the beginning of the testing phase. In such a situation, we can model the software fault detection process as a stochastic process with continuous state space. In this paper, two new software reliability growth model based on Ito type of stochastic differential equation has been proposed. In software reliability growth model 1 stochastic differential equation based generalized Erlang model and in software reliability growth model 2 stochastic differential equation based generalized Erlang model with logistic error detection function is being considered.",2009,0, 3440,Using Virtualization to Improve Software Rejuvenation,"In this paper, we present an approach for software rejuvenation based on automated self-healing techniques that can be easily applied to off-the-shelf application servers. Software aging and transient failures are detected through continuous monitoring of system data and performability metrics of the application server. If some anomalous behavior is identified, the system triggers an automatic rejuvenation action. This self-healing scheme is meant to disrupt the running service for a minimal amount of time, achieving zero downtime in most cases. In our scheme, we exploit the usage of virtualization to optimize the self-recovery actions. The techniques described in this paper have been tested with a set of open-source Linux tools and the XEN virtualization middleware. We conducted an experimental study with two application benchmarks (Tomcat/Axis and TPC-W). Our results demonstrate that virtualization can be extremely helpful for failover and software rejuvenation in the occurrence of transient failures and software aging.",2009,0, 3441,Probabilistic fault diagnosis for IT services in noisy and dynamic environments,"The modern society has come to rely heavily on IT services. To improve the quality of IT services it is important to quickly and accurately detect and diagnose their faults which are usually detected as disruption of a set of dependent logical services affected by the failed IT resources. The task, depending on observed symptoms and knowledge about IT services, is always disturbed by noises and dynamic changing in the managed environments. We present a tool for analysis of IT services faults which, given a set of failed end-to-end services, discovers the underlying resources of faulty state. We demonstrate empirically that it applies in noisy and dynamic changing environments with bounded errors and high efficiency. We compare our algorithm with two prior approaches, Shrink and Max coverage, in two well-known types of network topologies. 
Experimental results show that our algorithm improves the overall performance.",2009,0, 3442,Heteroscedastic models to track relationships between management metrics,"Modern software systems expose management metrics to help track their health. Recently, it was demonstrated that correlations among these metrics allow faults to be detected and their causes localized. In particular, linear regression models have been used to capture metric correlations. We show that for many pairs of correlated metrics in software systems, such as those based on Java Enterprise Edition (JavaEE), the variance of the predicted variable is not constant. This behaviour violates the assumptions of linear regression, and we show that these models may produce inaccurate results. In this paper, leveraging insight from the system behaviour, we employ an efficient variant of linear regression to capture the non-constant variance. We show that this variant captures metric correlations, while taking the changing residual variance into consideration. We explore potential causes underlying this behaviour, and we construct and validate our models using a realistic multi-tier enterprise application. Using a set of 50 fault-injection experiments, we show that we can detect all faults without any false alarm.",2009,0, 3443,Monitoring probabilistic SLAs in Web service orchestrations,"Web services are software applications that are published over the Web, and can be searched and invoked by other programs. New Web services can be formed by composing elementary services, such composite services are called Web service orchestrations. Quality of service (QoS) issues for Web service orchestrations deeply differ from corresponding QoS issues in network management. In an open world of Web services, service level agreements (SLAs) play an important role. They are contracts defining the obligations and rights between the provider of a Web service and a client with respect to the services' function and quality. In a previous work we have advocated using soft contracts of probabilistic nature, for the QoS part of contracts. Soft contracts have no hard bounds on QoS parameters, but rather probability distributions for them. An essential component of SLA management is the continuous monitoring of the performance of called Web services, to check for violation of the agreed SLA. In this paper we propose a statistical technique for QoS contract run time monitoring. Our technique is compatible with the use of soft probabilistic contracts.",2009,0, 3444,Modeling remote desktop systems in utility environments with application to QoS management,"A remote desktop utility system is an emerging client/server networked model for enterprise desktops. In this model, a shared pool of consolidated compute and storage servers host users' desktop applications and data respectively. End-users are allocated resources for a desktop session from the shared pool on-demand, and they interact with their applications over the network using remote display technologies. Understanding the detailed behavior of applications in these remote desktop utilities is crucial for more effective QoS management. However, there are challenges due to hard-to-predict workloads, complexity, and scale. In this paper, we present a detailed modeling of a remote desktop system through case study of an Office application - email. The characterization provides insights into workload and user model, the effect of remote display technology, and implications of shared infrastructure. 
We then apply these learnings and modeling results for improved QoS resource management decisions - achieving over 90% improvement compared to state of the art allocation mechanisms. We also present discussion on generalizing a methodology for a broader applicability of model-driven resource management.",2009,0, 3445,Traffic studies for DSA policies in a simple cellular context with packet services,"DSA (dynamic spectrum allocation) techniques are very challenging when the quality of service has to be guaranteed in a flexible spectrum situation. In this paper, we present and analyze DSA policies for packet services in cellular context. A centralized model, where a meta-operator shares a common spectrum among different operators, is considered. We focus on two criteria for the policies design: the total welfare (sum of operators' rewards), and the blocking probability. We go through two steps to pass from the actual FSA (fixed spectrum allocation) situation into DSA. First, DSA algorithms depend on the arrival rates. Second, DSA algorithms depend on both the arrival rates as well as the number of active users. Targeting the reward maximization shows to be inefficient when the blocking probability has to be guaranteed. However policies targeting a blocking probability threshold, achieve greater rewards then FSA rewards. We also present a heuristic DSA algorithm that takes into consideration: the arrival rates, the number of active users and the blocking probability. The algorithm gives a very close blocking probability to the one achieved using FSA, while the obtained reward significantly exceeds the FSA reward.",2009,0, 3446,Providing Security in Intelligent Agent-Based Grids by Means of Error Correction,"The security of the existing ambient intelligence model presents a particular challenge because this architecture combines different networks, being prone to vulnerabilities and attacks. Specific techniques are required to be studied in order to provide a fault-tolerant system. One way to address this goal is to apply different techniques in order to correct the errors introduced in the systems by a spectrum of sources. This paper is a rational continuity of the previous work by offering a method of detecting and correcting errors.",2009,0, 3447,A Case for Meta-Triggers in Wireless Sensor Networks,"This work addresses the problem of managing the reactive behavior in Wireless Sensor Networks (WSN). We consider settings in which the occurrence of a particular event, detected in a state that satisfies a given condition, should fire the execution of an action. We observe that in WSN settings, both the event and condition may pertain to some continuous phenomena that are monitored by distinct groups of nodes and, in addition, their respective detection may impose an extra communication overhead, if a correct executional behavior is desired in terms of firing the action. Towards that end, we propose the concept of a meta trigger, which essentially translates a particular request, so that the communication overhead among the entities participating in its processing is minimized. We discuss a proof-of-concept implementation which demonstrates the benefits of the proposed methodology on an actual small-size network, and we present a detailed simulation-based experimental evaluation in large-scale networks. 
Our experiments indicate that the meta-triggers can yield substantial savings in the energy (and bandwidth) expenditures of the network, while preserving the intended executional correctness.",2009,0, 3448,Maintaining Network QoS Across NIC Device Driver Failures Using Virtualization,"Device driver failures have been shown to be a major cause of system failures. Network services stress NIC device drivers, increasing the probability of NIC driver bugs being manifested as server failures. System virtualization is increasingly used for server consolidation and management. The isolated driver domain (IDD) architecture used by several virtual machine monitors, such as Xen, forms a natural foundation for making systems resilient to NIC driver failures. In order to realize this potential, recovery must be fast enough to maintain QoS for network services across NIC driver failures. We show that the standard Xen configuration, enhanced with simple detection and recovery mechanisms, cannot provide such QoS. However, with NIC drivers isolated in two virtual machines, in a primary/warm-spare configuration, the system can recover from an overwhelming majority of NIC driver failures in under 10 ms.",2009,0, 3449,Dynamically Changing Workflows of Web Services,"Workflow reconfiguration traditionally modifies only workflow definitions. Incorporating dynamism in Web service workflows should also adapt instance execution as services change availability. Commercial workflow engines lack mechanisms to adapt instances except where instances deploy with all possible workflow paths, to achieve pseudo-dynamism. This error prone method has the potential for unsound specifications and still does not allow runtime modifications. We perform workflow reconfiguration through an inspection-feedback loop that decouples services specifications and priorities that can change BPEL workflows from their actual execution. When a change occurs, such as service unavailability, immediate adaptation of the workflow instance takes place. To guarantee proper reconfiguration, we formally specify the architecture, interactions, and change directives, according to a natural separation of reconfiguration concerns. We prove the workflow instance will correctly adapt to an alternative service when certain conditions are met.",2009,0, 3450,Reasoning on Scientific Workflows,"Scientific workflows describe the scientific process from experimental design, data capture, integration, processing, and analysis that leads to scientific discovery. Laboratory Information Management Systems (LIMS) coordinate the management of wet lab tasks, samples, and instruments and allow reasoning on business-like parameters such as ordering (e.g., invoicing) and organization (automation and optimization) whereas workflow systems support the design of workflows in-silico for their execution. We present an approach that supports reasoning on scientific workflows that mix wet and digital tasks. Indeed, experiments are often first designed and simulated with digital resources in order to predict the quality of the result or to identify the parameters suitable for the expected outcome. 
ProtocolDB allows the design of scientific workflows that may combine wet and digital tasks and provides the framework for prediction and reasoning on performance, quality, and cost.",2009,0, 3451,Queuing Theoretic and Evolutionary Deployment Optimization with Probabilistic SLAs for Service Oriented Clouds,"This paper focuses on service deployment optimization in cloud computing environments. In a cloud, each service in an application is deployed as one or more service instances. Different service instances operate at different quality of service (QoS) levels. In order to satisfy given service level agreements (SLAs) as end-to-end QoS requirements of an application, the application is required to optimize its deployment configuration of service instances. E3/Q is a multiobjective genetic algorithm to solve this problem. By leveraging queuing theory, E3/Q estimates the performance of an application and allows for defining SLAs in a probabilistic manner. Simulation results demonstrate that E3/Q efficiently obtains deployment configurations that satisfy given SLAs.",2009,0, 3452,WS-Certificate,"Assessing the correct operation of individual web services or of entire business processes hosted on a Service Oriented Architecture (SOA) is one of the major challenges of SOA research. The unique features of WS/SOA require new quality assessment approaches, including novel testing and monitoring techniques. In this paper, we present a framework for assessing the correct functioning of WS/SOA systems by introducing a third party certifier as a trusted authority that checks and certifies WS/SOA systems. Our certifications are based on signed test cases and their respective results and operate at different level of granularity, providing a sound basis for run-time service selection and process orchestration decisions.",2009,0, 3453,Gossip-Based Workload Prediction and Process Model for Composite Workflow Service,"In this paper, we propose to predict the workloads of the service components within the composite workflow based on the communication of the queue condition of service nodes. With this information, we actively discard the requests that has high probability to be dropped in the later stages of the workflow. The benefit of our approach is the efficient saving of limited system resource as well as the SLA-satisfied system performance. We present mechanisms for four basic workflow patterns and evaluate our mechanism through simulation. The experiment results show that our mechanism can help to successfully serve more requests than the normal mechanism. In addition, our mechanism can maintain more stable response time than the normal one.",2009,0, 3454,Model-Based Monitoring and Policy Enforcement of Services,"Runtime monitoring is necessary for continuous quality assurance of Web services. In a monitoring system, sensors with policies are widely used to collect runtime execution data, detect behavior anomalies and generate alerts. Hard-coded sensors and policies are expensive to develop and maintain. They are hard to accommodate the flexible changes of the service-based system to be monitored. The paper proposes a model-driven approach to facilitate automatic sensor generation and policy enforcement. The sensors and policies are decoupled from the software and are defined at the abstraction model level, including structure and behavior models. WSDL and OWL-S are used for modeling the service-base software, and automatic generating sensors based on dependency and coverage strategies. 
The policy model is constructed following the WS-Policy framework with a 3-tuple policy definition and a correlation matrix identifying the associations between policies and sensors. Policies are enforced by the policy engine that interoperates with service execution engine to communicate runtime behavior information and verification results. These features have been implemented and experimented with data.",2009,0, 3455,Application on job-shop scheduling with Genetic Algorithm based on the mixed strategy,"Adaptive genetic algorithm for solving job-shop scheduling problems has the defects of the slow convergence speed on the early stage and it is easy to trap into local optimal solutions, this paper introduces a time operator depending on the time evolution to solve this problem. Its purpose is to overcome the defect of adaptive genetic algorithm whose crossover and mutation probability can not make a corresponding adjustment with evolutionary process. Algorithm's structure is hierarchical, scheduling problems can be fully demonstrated the characteristics by using this strategy, not only improve the convergence rate but also maintain the diversity of the population, furthermore avoid premature. The population in the same layer evolve with two goals-time optimal and cost optimal at the same time, the basic genetic algorithm is applied between layers. The improved algorithm was tested by Muth and Thompson benchmarks, the results show that the optimized algorithm is highly efficient and improves both the quality of solutions and speed of convergence.",2009,0, 3456,The implementation of artificial neural networks applying to software reliability modeling,"In current software reliability modeling research, the main concern is how to develop general prediction models. In this paper, we propose several improvements on the conventional software reliability growth models (SRGMs) to describe actual software development process by eliminating some unrealistic assumptions. Most of these models have focused on the failure detection process and not given equal priority to modeling the fault correction process. But, most latent software errors may remain uncorrected for a long time even after they are detected, which increases their impact. The remaining software faults are often one of the most unreliable reasons for software quality. Therefore, we develop a general framework of the modeling of the failure detection and fault correction processes. Furthermore, we apply neural network with back-propagation to match the histories of software failure data. We will also illustrate how to construct the neural networks from the mathematical viewpoints of software reliability modeling in detail. Finally, numerical examples are shown to illustrate the results of the integration of the detection and correction process in terms of predictive ability and some other standard criteria.",2009,0, 3457,A novel QoS modeling approach for soft real-time systems with performance guarantees,"This paper introduces a systematic approach for modeling QoS requirements of soft real-time systems with stochastic responsiveness guarantees. While deadline miss ratio and its proposed extensions have been considered for evaluating firm real-time systems, this work brings out its limitations for assessing the performance of emerging computer services operating over communication infrastructures with non-deterministic dynamics. 
This work explains how delay frequencies and delay lengths can be both represented into a single quantitative meaningful measure for performance evaluation of soft real-time constrained computer applications. It also explores the presented approach in the design of scheduling strategies which can ground novel business models for QoS-enabled service-oriented systems.",2009,0, 3458,Dynamic Causality Diagram in Fault Diagnosis,"In order to overcomes some shortages of Belief Network dynamic causality diagram is put forward. Its knowledge expression, reasoning, probability computing and also the model of causality diagram used for system fault diagnosis, the model constructing method and reasoning algorithm are proposed. At last, an application example in the fault diagnosis of the nuclear power plant is given which shows that the method is effective.",2009,0, 3459,A Novel Hybrid PSO-BP Algorithm for Neural Network Training,"In order to search better solution in the high dimension space, the novel hybrid PSO-BP algorithm which combines the PSO mechanism with the Levenberg-Marquardt algorithm or the conjugate gradient algorithm is proposed. The main idea employs BP algorithm with numeric technology to find the local optimum, and takes the weights and biases trained as particles, and harnesses swarm motion to search the optimum. Finally, the hybrid algorithm selects some good particles from the local optimum set to predict the new samples. Simulation results show that the hybrid PSO-BP algorithm is better than the basic BP algorithm and the adaptive PSO algorithm in the stability, correct recognition rate and training time.",2009,0, 3460,Usability Evaluation Driven by Cooperative Software Description Framework,"The usability problems of CSCW system mainly come from two aspects of limitation. On the higher level, it lacks sufficient simulation of the use-context influenced by social, political and organizational features; on the lower level, it lacks suitable measurement and description of basic collaborative behavior happened in common cooperative activities. However, the traditional task-based user interface models and discount usability inspection methods can not address the above problems. Therefore, an overall framework for describing CSCW system was proposed in this paper from the perspective of usability evaluation. This framework covers all aspects in understanding the use-context, so that it is very helpful to better instruct low-cost usability walkthrough technique in the early stage of software development cycle to detect usability problems.",2009,0, 3461,A Practical Coder of Fully Scalable Video over Error-Prone Network,"The paper is mainly devoted to explore practical scalable video coding technologies over error-prone wireless and mobile network with fluctuant bandwidth and various terminals. The new coder supports full scalability of spatial resolution, SNR quality and temporal in a flexible and simple method. Instead of common-used but complex MCTF-based scheme the new coder integrates smartly several optimized and improved technologies, i.e. so-called Hierarchical-B-Picture-Like, DWT, successive multi-level quantization, and SPIHT. The proposed coder can produce embedded bitstream satisfying various requirements of bandwidth, resolution and SNR of terminals over error-prone network. 
The experiments show that the proposed coder is practical and flexible and works well over error-prone network.",2009,0, 3462,Optimizing a highly fault tolerant software RAID for many core systems,We present a parallel software driver for a RAID architecture to detect and correct corrupted disk blocks in addition to tolerate disk failures. The necessary computations demand parallel execution to avoid the processor being the bottleneck for a RAID with high bandwidth. The driver employs the processing power of multicore and manycore systems. We report on the performance of a prototype implementation on a quadcore processor that indicates linear speedup and promises good scalability on larger machines. We use reordering of I/O orders to ensure balance between CPU and disk load.,2009,0, 3463,Vertical quench furnace Hammerstein fault predicting model based on least squares support vector machine and its application,"Since large-scale vertical quench furnace is voluminous, whose working condition is a typically complex process with distributed parameter, nonlinear, multi-inputs/multi-outputs, close coupled variables, etc, Hammerstein model of the furnace is presented. Firstly, the nonlinear function of Hammerstein model is constructed by least squares support vector machines regression. A numerical algorithm for subspace system (singular value decomposition, SVD) is utilized to identify the Hammerstein model. Finally, the model is used to predict the furnace temperature. The simulation research shows this model provides accurate prediction and is with desirable application value.",2009,0, 3464,Invariant checkers: An efficient low cost technique for run-time transient errors detection,"Semiconductor technology evolution brings along higher soft error rates and long duration transients, which require new low cost system level approaches for error detection and mitigation. Known software based error detection techniques imply a high overhead in terms of memory usage and execution times. In this work, the use of software invariants as a means to detect transient errors affecting a system at run-time is proposed. The technique is based on the use of a publicly available tool to automate the invariant detection process, and the decomposition of complex algorithms into simpler ones, which are checked through the verification of their invariants during the execution of the program. A sample program is used as a case study, and fault injection campaigns are performed to verify the error detection capability of the proposed technique. The experimental results show that the proposed technique provides high error detection capability, with low execution time overhead.",2009,0, 3465,A fast error correction technique for matrix multiplication algorithms,"Temporal redundancy techniques will no longer be able to cope with radiation induced soft errors in technologies beyond the 45 nm node, because transients will last longer than the cycle time of circuits. The use of spatial redundancy techniques will also be precluded, due to their intrinsic high power and area overheads. The use of algorithm level techniques to detect and correct errors with low cost has been proposed in previous works, using a matrix multiplication algorithm as the case study. 
In this paper, a new approach to deal with this problem is proposed, in which the time required to recompute the erroneous element when an error is detected is minimized.",2009,0, 3466,"Quality Indicators on Global Software Development Projects: Does """"Getting to Know You"""" Really Matter?","In Spring 2008, five student teams were put into competition to develop software for a Cambodian client. Each extended team comprised students distributed across a minimum of three locations, drawn from the US, India, Thailand and Cambodia. This paper describes a couple of exercises conducted with students to examine their basic awareness of the countries of their collaborators and competitors, and to assess their knowledge of their own extended team members during the course of the project. The results from these exercises are examined in conjunction with the high-level communication patterns exhibited by the participating teams and provisional findings are drawn with respect to quality, as measured through a final product selection process. Initial implications for practice are discussed.",2009,0, 3467,Requirements Reasoning for Distributed Requirements Analysis Using Semantic Wiki,"In large-scale collaborative software projects, thousands of requirements with complex interdependencies and different granularity spreading in different levels are elicited, documented, and evolved during the project lifecycle. Non-technical stakeholders involved in requirements engineering activities rarely apply formal techniques; therefore it is infeasible to automatically detect problems in requirements. This situation becomes even worse in a distributed context when all sites are responsible to maintain their own requirements list using various requirements models and management tools, and the detection of requirements problems across multiple sites is error-prone, and un-affordable if performed manually. This paper proposes an integrated approach of basing distributed requirements analysis on semantic Wiki by requirements reasoning. First, the functions concerning reasoning support provided by semantic Wiki for requirements analysis are proposed. Second, the underlying requirements rationale model for requirements reasoning is presented with sample reasoning rules. Third, our rationale model is mapped to the WinWin requirements negotiation model which further adds to its credibility.",2009,0, 3468,Detecting Functional Dependence Program Invariants Based on Data Mining,"With the development of computer science and technology, software has been widely applied in all kinds of business. It has been a very popular and important application system. So the quality of software causes more serious attention than before. Design by program invariant is a very important method which is used to improve quality of software. In this paper, a theory model of dynamically detecting technology of program invariant was built. And a new method of dynamically generating technology of program invariant of functional dependence based on the theory of database was showed. In this way, program invariant of functional dependence can be detected in a nimble way. 
Experiments have been done and the result demonstrates that the method is obviously reliable and efficient.",2009,0, 3469,A Simplified and Fast Fully Scalable Video Coding Scheme with Hierarchical-B-Picture-Like and DWT,"The paper proposes a simplified and fast fully scalable video coding (SVC) scheme for video communication over error prone channels and surveillance systems, such as wireless and mobile network characterized with vibrating bandwidth. The proposed scheme supports three types of scalability of temporal, resolution and SNR and any combination scalability of them. It uses a novel Hierarchical-B-Picture-Like technology to gain temporal scalability, DWT for spatial scalability, successive multi-level quantization and SPIHT together for quality scalability. The proposed scheme provides an embedded bitstream that can be easily adapted (reordered) to a given bandwidth, resolution and SNR requirements. Simulation results show the scheme has the advantages of being simple, flexible, and acceptable quality.",2009,0, 3470,8Kbit/s LD-aCELP Speech Coding with Backward Pitch Detection,"This paper presents an 8 Kbit/s speech coding algorithm whose delay is 2.5 ms on the base of G.728 algorithm to reduce coding rate. Adaptive codebook structure is introduced in the proposed algorithm, which is composed of the recent excitation. The algorithm uses double codebook structure which contains adaptive codebook and normalized fixed codebook. In codebook searching, the algorithm detects backward pitch, and then searches the adaptive codebook subtly round the pitch period T. After searching 64 adaptive code words, 8 gains, 256 fixed code words and 8 gains, the best excitation is obtained. Testing with the average segment SNR and perceptual evaluation of speech quality (PESQ), the improved algorithm is close to G.728 in speech coding quality. And it achieves desirable balance in coding rate, delay time and coding quality.",2009,0, 3471,Combinatorial Software Testing,"Combinatorial testing can detect hard-to-find software faults more efficiently than manual test case selection methods. While the most basic form of combinatorial testing-pairwise-is well established, and adoption by software testing practitioners continues to increase, industry usage of these methods remains patchy at best. However, the additional training required is well worth the effort.",2009,0, 3472,Interpreting a Successful Testing Process: Risk and Actual Coverage,"Testing is inherently incomplete; no test suite will ever be able to test all possible usage scenarios of a system. It is therefore vital to assess the implication of a system passing a test suite. This paper quantifies that implication by means of two distinct, but related, measures: the risk quantifies the confidence in a system after it passes a test suite, i.e., the number of faults still expected to be present (weighted by their severity); the actual coverage quantifies the extent to which faults have been shown absent, i.e., the fraction of possible faults that has been covered. We provide evaluation algorithms that calculate these metrics for a given test suite, as well as optimisation algorithms that yield the best test suite for a given optimisation criterion.",2009,0, 3473,Exploring Topological Structure of Boolean Expressions for Test Data Selection,"Several test strategies have emerged to detect faults associated Boolean expressions. 
Current approaches lack a proper model to give an overall picture of the Boolean expressions and comprehensive exploration of test data space. This paper proposes a topological model (T-model) to systematically represent Boolean expressions and test data space, and theoretically analyzes the capability and limitations of existing test strategies. We explicitly identify the sufficient and necessary conditions to detect the faults of interest, introduce new Boolean expression related faults and reform the fault hierarchy, where a family of test strategies is defined to detect the corresponding faults.",2009,0, 3474,Modeling Fault Tolerant Services in Service-Oriented Architecture,"Nowadays, using of Service-Oriented Architectures (SOA) is spreading as a flexible architecture for developing dynamic enterprise systems. In this paper we investigate fault tolerance mechanisms for modeling services in service-oriented architecture. We propose a metamodel (formalized by a type graph) with graph rules for monitoring services and their communications to detect existing faults.",2009,0, 3475,The Establishment and Application of Urban Environment Information System Based on MAPGIS: A Case Study on Jiaozuo City,"In order to keep dynamic monitoring on the resources and environment on major environmental problems in Jiaozuo city and based on MAPGIS, an urban environment information system (UEIS) was established and applied in this paper. The UEIS is a type of spatial environment information system, based on regional urban ecosystem, it input such relative data as population, economy, resources and environment into computer by its spatial location or geographic coordinates, then save, refresh, inquiry and search, model analysis, display, printing and drawing output subsequently. This provided an effective method to the regional urban environmental management. In the UEIS, the design and development of model base and its management system hold the main part for the integration of the whole system and the decision analysis depend on the relative environmental models. These models displayed important role in the analysis, analogue, assessment, prediction and optimization of raw information. The design and development of model base mainly include environment assessment, prediction, planning management models and corresponding model base management system. This UEIS included following subsystems-the resources and environment basic information management system, the resources and environment dynamic monitoring information management system, the ambient air pollution control system and the water environmental pollution control system. In order to input and output mass storage of data related to society, economy, resources and environment hold in this system conveniently and rapidly, and doing decision and simulate work supported by multi-source of information, this UEIS makes full use of the tool software MAPGIS and the existing poly environmental software, thus it perfects the function of environment management and control, import and output, spatial analysis and decision support. This UEIS achieved the following targets, first, it can do the daily jobs such as the save, refresh, inquiry and search, statistic and analysis, mapping and tabling of data needed by environmental planning, management, decision and scientific research.
Secondly, by using the data of routine environmental monitoring and investigation, it finish the work aimed to the total amount control of pollution discharge such as the assess and prediction of pollution sources and environmental quality, the simulating and planning, and thirdly, using the remote data, finish the work related to monitoring and assess of the change of urban ecosystem such as map drawing of urban ecosystem, land utilization structure, spatial quality and social environment assess.",2009,0, 3476,Uncovering global icebergs in distributed monitors,"Security is becoming an increasingly important QoS parameter for which network providers should provision. We focus on monitoring and detecting one type of network event, which is important for a number of security applications such as DDoS attack mitigation and worm detection, called distributed global icebergs. While previous work has concentrated on measuring local heavy-hitters using ""sketches"" in the non-distributed streaming case or icebergs in the non-streaming distributed case, we focus on measuring icebergs from distributed streams. Since an iceberg may be ""hidden"" by being distributed across many different streams, we combine a sampling component with local sketches to catch such cases. We provide a taxonomy of the existing sketches and perform a thorough study of the strengths and weaknesses of each of them, as well as the interactions between the different components, using both real and synthetic Internet trace data. Our combination of sketching and sampling is simple yet efficient in detecting global icebergs.",2009,0, 3477,"Congestion location detection: Methodology, algorithm, and performance","We address the following question in this study: Can a network application detect not only the occurrence, but also the location of congestion? Answering this question will not only help the diagnostic of network failure and monitor server's QoS, but also help developers to engineer transport protocols with more desirable congestion avoidance behavior. The paper answers this question through new analytic results on the two underlying technical difficulties: 1) synchronization effects of loss and delay in TCP, and 2) distributed hypothesis testing using only local loss and delay data. We present a practical congestion location detection (CLD) algorithm that effectively allows an end host to distributively detect whether congestion happens in the local access link or in more remote links. We validate the effectiveness of CLD algorithm with extensive experiments.",2009,0, 3478,Admission control for roadside unit access in Intelligent Transportation Systems,"Roadside units can provide a variety of potential services for passing-by vehicles in future intelligent transportation systems. Since each vehicle has a limited time period when passing by a roadside unit, it is important to predict whether a service can be finished in time. In this paper, we focus on admission control problems, which are important especially when a roadside unit is in or close to overloaded conditions. Traditional admission control schemes mainly concern long-term flows, such as VOIP and multimedia service. They are not applicable to highly mobile vehicular environments. Our analysis finds that it is not necessarily accurate to use deadline to evaluate the risk whether a flow can be finished in time. Instead, we introduce a potential metric to more accurately predict the total data size that can be transmitted to/from a vehicle.
Based on this new concept, we formulate the admission control task as a linear programming problem and then propose a lexicographically maxmin algorithm to solve the problem. Simulation results demonstrate that our scheme can efficiently make admission decisions for coming transmission requests and effectively avoid system overload.",2009,0, 3479,AGRADC: An architecture for autonomous deployment and configuration of grid computing applications,"Deployment and configuration of grid computing applications are exhaustive and error-prone tasks, and represent a weak link of the lifecycle of grid applications. To address the problem, this paper proposes AGRADC, an architecture to instantiate grid applications on demand, which incorporates features from the autonomic computing paradigm. This architecture improves the grid application development process, providing tools to define a deployment flow, configuration parameters, and actions to be executed when adverse situations like faults arise.",2009,0, 3480,Experiments results and large scale measurement data for web services performance assessment,"Service provisioning is a challenging research area for the design and implementation of autonomic service- oriented software systems. It includes automated QoS management for such systems and their applications. Monitoring and Measurement are two key features of QoS management. They are addressed in this paper as elements of a main step in provisioning of self-healing web services. In a previous work, we defined and implemented a generic architecture applicable for different services within different business activities. Our approach is based on meta-level communications defined as extensions of the SOAP envelope of the exchanged messages, and implemented within handlers provided by existing web service containers. Using the web services technology, we implemented a complete prototype of a service-oriented Conference Management System (CMS). We experienced our monitoring and measurement architecture using the implemented application and assessed successfully the scalability of our approach under the French grid5000. In this paper, experimental results are analyzed and concluding remarks are given.",2009,0, 3481,Oil Contamination Monitoring Based on Dielectric Constant Measurement,"Fault information of machinery equipments is often contained in the process of lubricating oil wearing off. Meanwhile, the dielectric constant of the oil would change accordingly during the process. The principle of online oil contamination monitoring based on dielectric constant measurement is proposed. The measure system is also developed, which includes capacitance sensor, small capacitance detecting circuit and software of monitoring and analysis. Experiments are carried out on lubricating oil polluted by different contamination. The results show that change of lubricating oil's relative dielectric constant can be detected effectively and properly distinguished using the developed measure system, which can be used to determine the proper oil-replacing period, and to perform fault prediction of the machinery equipments.",2009,0, 3482,A method for selecting and ranking quality metrics for optimization of biometric recognition systems,"In the field of biometrics evaluation of quality of biometric samples has a number of important applications. The main applications include (1) to reject poor quality images during acquisition, (2) to use as enhancement metric, and (3) to apply as a weighting factor in fusion schemes. 
Since a biometric-based recognition system relies on measures of performance such as matching scores and recognition probability of error, it becomes intuitive that the metrics evaluating biometric sample quality have to be linked to the recognition performance of the system. The goal of this work is to design a method for evaluating and ranking various quality metrics applied to biometric images or signals based on their ability to predict recognition performance of a biometric recognition system. The proposed method involves: (1) Preprocessing algorithm operating on pairs of quality scores and generating relative scores, (2) Adaptive multivariate mapping relating quality scores and measures of recognition performance and (3) Ranking algorithm that selects the best combinations of quality measures. The performance of the method is demonstrated on face and iris biometric data.",2009,0, 3483,Global and local quality measures for NIR iris video,"In the field of iris-based recognition, evaluation of quality of images has a number of important applications. These include image acquisition, enhancement, and data fusion. Iris image quality metrics designed for these applications are used as figures of merit to quantify degradations or improvements in iris images due to various image processing operations. This paper elaborates on the factors and introduces new global and local factors that can be used to evaluate iris video and image quality. The main contributions of the paper are as follows. (1) A fast global quality evaluation procedure for selecting the best frames from a video or an image sequence is introduced. (2) A number of new local quality measures for the iris biometrics are introduced. The performance of the individual quality measures is carefully analyzed. Since performance of iris recognition systems is evaluated in terms of the distributions of matching scores and recognition probability of error, from a good iris image quality metric it is also expected that its performance is linked to the recognition performance of the biometric recognition system.",2009,0, 3484,Measurement of worst-case power delivery noise and construction of worst case current for graphics core simulation,"Worst case graphics core power delivery noise is a major indicator of graphics chip performance. The design of good graphics core power delivery network (PDN) is technically difficult because it is not easy to predict a worst case current stimulus during pre-silicon design stage. Many times, the worst case power delivery noise is observed when graphics benchmark software is run during post-silicon validation. At times like this, it is too late to rectify the power delivery noise issue unless many extra capacitor placeholders are placed during early design stage. To intelligently optimize the graphics core power delivery network design and determining the right amount of decoupling capacitors, this paper suggests an approach that setup a working platform to capture the worst case power delivery noise; and later re-construct the worst case power delivery current using Thevenin's Theorem. The measurement is based on actual gaming application instead of engineering a special stimulus that is generated thru millions of logic test-vectors. 
This approach is practical, direct and quick, and does not need huge computing resources; or technically skilled logic designers to design algorithms to build the stimulus.",2009,0, 3485,Black-box and gray-box components as elements for performance prediction in telecommunications system,"In order to enable quality impact prediction for newly added functionality to the existing telecommunications system, we introduce new modeling elements to the common development process. In the paper we describe a black-box and grey-box instrumentation and modeling method, used for performance prediction of a distributed system. To verify basic concept of the method, a prototype system was made. The prototype consists of the existing proprietary telephone exchange system and new, open-source components that make an extension of the system. The paper presents conceptual architecture of the prototype and analyzes basic performance impact for introduction of open-source software into existing system. Instrumentation and modeling parameters are discussed, as well as measurements environment. Initial comparison of predicted and experimentally measured values validates current work on the method.",2009,0, 3486,Contextual restoration of severely degraded document images,"We propose an approach to restore severely degraded document images using a probabilistic context model. Unlike traditional approaches that use previously learned prior models to restore an image, we are able to learn the text model from the degraded document itself, making the approach independent of script, font, style, etc. We model the contextual relationship using an MRF. The ability to work with larger patch sizes allows us to deal with severe degradations including cuts, blobs, merges and vandalized documents. Our approach can also integrate document restoration and super-resolution into a single framework, thus directly generating high quality images from degraded documents. Experimental results show significant improvement in image quality on document images collected from various sources including magazines and books, and comprehensively demonstrate the robustness and adaptability of the approach. It works well with document collections such as books, even with severe degradations, and hence is ideally suited for repositories such as digital libraries.",2009,0, 3487,Quality Skyline in Sensor Database,Data quality of sensor database is very important in industrial applications. An appropriate time period can improve the quality of result of high-level applications. In this paper we present a quality assessment and then give an algorithm to detect a quality skyline in a sensor dataset with stationary assumption. This algorithm returns an appropriate length of time period that query with a longer time period will get a result with stationary data quality.,2009,0, 3488,Surface Inspection System of Steel Strip Based on Machine Vision,"The traditional surface quality inspection of steel strip is carried by human inspectors, which is far from satisfactory because of its low productivity, low reliability and poor economy. It is a promising way to inspect surface quality of steel strip based on machine-vision technology. In this paper, the structure of the surface automated inspection system is described. The software and image processing of steel strip surface inspection is presented and the algorithms of detect surface defects of steel strip is discussed. 
The system is capable of both detecting and classifying surface defects in cold rolling steel strip.",2009,0, 3489,Workflow Model Performance Analysis Concerning Instance Dwelling Times Distribution,Instances dwelling times (IDT) which consist of waiting times and handle times in a workflow model is a key performance analysis goal. In a workflow model the instances which act as customers and the resources which act as servers form a queuing network. Multidimensional workflow net (MWF-net) includes multiple timing workflow nets (TWF-nets) and the organization and resource information. This paper uses queuing theory and MWF-net to discuss mean value and probability distribution density function (PDDF) of IDT. An example is used to show that the proposed method can be effectively utilized in practice.,2009,0, 3490,A Business-Oriented Fault Localization Approach Using Digraph,"Analyzed here is a fault localization approach based on directed graph in view point of business software. The fault propagation model solved the problem of obtaining the dependency relationship of faults and symptoms semi-automatically. The main idea includes: get the deployment graph of managed business from the topography of network and software environment; generate the adjacency matrix of the graph; compute the transitive closure of adjacency matrix and obtain a so-called dependency matrix; independent fault locates in the main diagonals of the dependency matrix, elements of column is fault's symptoms domain, which are divided into immediate effect and transitive effect. In real world, those elements will denote symptoms, but the two class affected nodes have different probability of occurrence, that is, immediate symptoms can be observed more likely than transitive symptoms and symptom of fault itself is most likely observed. Based on the hypothesis, a new fault localization algorithm is proposed. The simulation results show the validity and efficiency of this fault localization approach.",2009,0, 3491,Grey Theory Based Nodes Risk Assessment in P2P Networks,"P2P networks are self-organized and distributed. Efficient nodes risk assessment is one of the key factors for high quality resource exchanging. Most assessment methods based on trust or reputation have some remarkable drawbacks. For example, some methods impose too many restrictions to the samples, and many methods can't identify the malicious recommendations, which result in that the final results are not convincible and credible. To solve these problems, we propose a novel risk assessment method based on grey theory. In our scheme, the communication nodes' incomplete information state is described as several key attributes. Original data of these attributes is collected using taste concourse method to avoid malicious recommendation. The analysis and computing example shows this scheme is an efficient incomplete information nodes risk assessment method in P2P networks.",2009,0, 3492,A Low Latency Handoff Scheme Based on the Location and Movement Pattern,"Providing a seamless handoff and quality of service (QoS) guarantees is one of the key issues in real-time services. Several IPv6 mobility management schemes have been proposed and can provide uninterrupted service. However, these schemes either have synchronization issues or introduce signaling overhead and high packet loss rate. This paper presents a scheme that reduces the handoff latency based on the location and movement pattern of a Mobile Node (MN).
An MN can detect its movement actively and forecast the handoff, which alleviates the communication cost between the MN and its Access Router (AR). Furthermore, by setting the dynamic interval in an appropriate range, the MN's burden can also be alleviated. Finally, by performance evaluation using both theoretical analysis and computer simulations, we show that the proposed scheme can lower the handoff latency efficiently.",2009,0, 3493,An Alliance Based Reputation Model for Internet Autonomous System Trust Evaluation,"The security of inter-domain routing system greatly depends on the trustworthiness of routing information and routing behavior of autonomous system (AS). Many researches on e-commerce, grid, and p2p have proven that reputation mechanism is helpful to inhibit the spread of false route and the occurrence of malicious routing behavior. To increase AS resistance to malicious routing attack, we design an alliance based reputation model for AS routing behavior trust evaluation. Our approach calculates AS reputation with the Bayesian probability model and manages AS reputation with AS alliance. Compared with the fully distributed reputation model, our model has lower storage and communication overhead. This reputation model is incremental deployment and easy to implement. It can be employed for securing AS routing and assisting malicious behavior detection.",2009,0, 3494,A Two-Phase Log-Based Fault Recovery Mechanism in Master/Worker Based Computing Environment,"The master/worker pattern is widely used to construct the cross-domain, large scale computing infrastructure. The applications supported by this kind of infrastructure usually features long-running, speculative execution etc. Fault recovery mechanism is significant to them especially in the wide area network environment, which consists of error prone components. Inter-node cooperation is urgent to make the recovery process more efficient. The traditional log-based rollback recovery mechanism which features independent recovery cannot fulfill the global cooperation requirement due to the waste of bandwidth and slow application data transfer which is caused by the exchange of a large amount of logs. In this paper, we propose a two-phase log-based recovery mechanism which is of merits such as space saving and global optimization and can be used as a complement of the current log-based rollback recovery approach in some specific situations. We have demonstrated the use of this mechanism in the Drug Discovery Grid environment, which is supported by China National Grid. Experiment results have proved efficiency of this mechanism.",2009,0, 3495,A Method for Selecting ERP System Based on Fuzzy Set Theory and Analytical Hierarchy Process,"Enterprise resource planning (ERP) system can significantly improve future competitiveness and performance of a company, but it is also a critical investment and has a high failure probability. Thus, selecting an appropriate ERP system is very important. A method for selecting ERP system is proposed based on fuzzy set theory and analytical hierarchy process (AHP). The evaluation criteria system, including vendor brand, quality, price and service, is constructed, a multi-criteria decision-making model is formulated and the solution process is introduced in detail, that is, AHP is used to identify the weights of the criteria and fuzzy set theory is used to deal with the fuzzification of the criteria.
An application example of ERP system selection demonstrates the feasibility of the proposed method.",2009,0, 3496,Dynamic Causality Diagram in Vehicular Engine's Fault Diagnosis,"We discuss the knowledge expression, reasoning and probability computing in dynamic causality diagram, which is developed from the belief network and overcomes some shortages of belief network. The model of causality diagram used for vehicular engine's fault diagnosis is brought forward, and the model constructing method and reasoning algorithm are also presented. At last, an application example in the vehicular engine's fault diagnosis is given which shows that the method is effective.",2009,0, 3497,Comparative Assessment of Fingerprint Sample Quality Measures Based on Minutiae-Based Matching Performance,"This fingerprint sample quality is one of major factors influencing the matching performance of fingerprint recognition systems. The error rates of fingerprint recognition systems can be decreased significantly by removing poor quality fingerprints. The purpose of this paper is to assess the effectiveness of individual sample quality measures on the performance of minutiae-based fingerprint recognition algorithms. Initially, the authors examined the various factors that influenced the matching performance of the minutiae-based fingerprint recognition algorithms. Then, the existing measures for fingerprint sample quality were studied and the more effective quality measures were selected and compared with two image quality software packages, (NFIQ from NIST, and QualityCheck from Aware Inc.) in terms of matching performance of a commercial fingerprint matcher (Verifinger 5.0 from Neurotechnologija). The experimental results over various fingerprint verification competition (FVC) datasets show that even a single sample quality measure can enhance the matching performance effectively.",2009,0, 3498,Mutation Analysis for Testing Finite State Machines,"Mutation analysis is a program testing method which seeds a fault in a program and tries to identify it with test data, thus promoting the test efficiency. The paper investigates the application of mutation analysis in model-based testing for the modeling language of finite state machines (FSM). We describe a set of mutation operators for FSM based on the fault category; present an algorithm of selecting a test suite for the mutation testing of system models in FSM. In an experiment, other five methods of test suites generating and selecting for FSM are chosen to compare with the mutation testing method. The experiment shows that in respect of faults detecting in FSM, the mutation testing is more effective and efficient than the other FSM testing methods including D-method, W-method and T-method.",2009,0, 3499,Prediction Models for BPMN Usability and Maintainability,"The measurement of a business process in the early stages of the lifecycle, such as the design and modelling stages, could reduce costs and effort in future maintenance tasks. In this paper we present a set of measures for assessing the structural complexity of business processes models at a conceptual level. The aim is to obtain useful information about process maintenance and to estimate the quality of the process model in the early stages.
Empirical validation of the measures was carried out along with a linear regression analysis aimed at estimating process model quality in terms of modifiability and understandability.",2009,0, 3500,Evaluation of Prioritization in Performance Models of DTP Systems,"Modern IT systems serve many different business processes on a shared infrastructure in parallel. The automatic request execution on the numerous interconnected components, hosted on heterogeneous hardware resources, is coordinated in distributed transaction processing (DTP) systems. While pre-defined quality-of-service metrics must be met, IT providers have to deal with a highly dynamic environment concerning workload structure and overall demand when provisioning their systems. Adaptive prioritization is a way to react to short-term demand variances. Performance models can be applied to predict the impacts of prioritization strategies on the overall performance of the system. In this paper we describe the workload characteristics and particularities of two real-world DTP systems and evaluate the effects of prioritization concerning supported overall load and resulting end-to-end performance measures.",2009,0, 3501,Integrity-Checking Framework: An In-situ Testing and Validation Framework for Wireless Sensor and Actuator Networks,"Wireless sensor and actuator network applications require several levels of testing during their development. Although software algorithms can be tested through simulations and syntax checking, it is difficult to predict or test for problems that may occur once the wireless sensor and actuator has been deployed. The requirement for testing is not however limited to the development phase. During the lifecycle of the system, faults, upgrades, retasking, etc. lead to further needs for system validation. In this paper we review the state-of-the-art techniques for testing wireless sensor and actuator applications and propose the integrity-checking framework. The framework provides in-situ full lifecycle testing and validation of wireless sensor and actuator applications by performing an ldquointegrity checkrdquo, during which the sensor inputs and actuator responses are emulated within the physical wireless sensor and actuator. This enables application-level testing by feeding controlled information to the sensor inputs, while data processing, communication, aggregation and decision making continue as normal across the physical wireless sensor and actuator.",2009,0, 3502,A Technique to Identify and Substitute Faulty Nodes in Wireless Sensor Networks,"In this paper, we propose a technique to identify and substitute faulty nodes to achieve fault tolerance in wireless sensor networks. The proposed technique divides the network into disjoint zones while having a master for each zone. The zone masters are used to identify faulty nodes by virtually dividing the zone into quadrants until a suspect node is found. Our fault model assumes both communication and sensing faults which are caused by a hardware failure in anode. To detect communication faults, the division process is based on calculating the throughput for each zone and comparing it to a predefined threshold. However, for sensing faults it is based on comparing the data a node senses to a predefined status and data ranges. In addition, we make use of a new technique, which was inspired by the roll forward checkpointing scheme, to activate sleeping nodes in order to validate the correctness of the suspected nodes. 
This is used to reconfigure the network using fault free nodes only.",2009,0, 3503,Quantitative Assessment for Organisational Security & Dependability,"There are numerous metrics proposed to assess security and dependability of technical systems (e.g., number of defects per thousand lines of code). Unfortunately, most of these metrics are too low-level, and lack on capturing high-level system abstractions required for organisation analysis. The analysis essentially enables the organisation to detect and eliminate possible threats by system re-organisations or re-configurations. In other words, it is necessary to assess security and dependability of organisational structures next to implementations and architectures of systems. This paper focuses on metrics suitable for assessing security and dependability aspects of a socio-technical system and supporting decision making in designing processes. We also highlight how these metrics can help in making the system more effective in providing security and dependability by applying socio-technical solutions (i.e., organisation design patterns).",2009,0, 3504,An Algorithm Based Fault Tolerant Scheme for Elliptic Curve Public-Key Cryptography,"In this paper, an algorithm based fault tolerant (ABFT) scheme for Elliptic Curve Cryptography (ECC) public-key cipher is presented. By adding 2n+1 check sums, the proposed scheme is able to detect and correct up to three errors which may occur during the massive computation process or/and data transmission process for total n2 groups of data. The other advantage of the proposed fault tolerant scheme include: (1). It maintains almost the same throughput when there is no error detected. (2). It does not require additional arithmetic units to create check sums and error detection. (3). It can be easily implemented by software or hardware.",2009,0, 3505,A Study on Machine Translation of Register-Specific Terms in Tea Classics,"The rich heritage of Chinese tea culture has attracted an increasing number of people in the world, but the translating of such classical and specialized literature proves to be extremely arduous. Machine translation (MT) is introduced to facilitate the decrypting process. However, when the popular online translation Systran is tried bi-directionally on a high-frequency wordlist from 24 ancient tea documents, the version reveals some defects, especially the recognition of register-specific terms. To address this elementary yet essential problem, a parallel translation corpus is constructed. Statistic method of automatic extraction is used to produce a bilingual inventory of equivalent term pairs. To further improve the efficiency of MT, a descriptive term database is built on the basis of the previous work. It is expected that, with contextual clues taken into the decrypting process, later machine-aided translation of documents in the special sphere could become more standardized, consistent and systematic.",2009,0, 3506,Integration of Heterogeneous Medical Decision Support Systems Based on Web Services,"This study employs the framework of Web services in conjunction with Bayesian theorem and decision trees to construct a Web-services-based decision support system for medical diagnosis and treatment. The purpose is to help users (physicians) with issues pertinent to medical diagnosis and treatment decisions. Users through the system key in available prior probability and through computation based on Bayesian theorem obtain the diagnosis. 
The process helps users enhance the quality and efficiency of medical decisions, and the diagnosis can be transmitted to a decision-tree-based treatment decision support service component via XML to generate recommendation and analysis for treatment decisions. On the other hand, features of web services enable this medical decision support system to offer more service platforms than conventional one. Users will have access to the system whether they use Windows, Macintosh, Linux or any other platforms that connect with the Internet via HTTP. More important is the fact that after the system is completed all Internet service providers will be able to access the system as a software unit freely and quickly. This way, the goal of this study to provide medical decision support tools and speedily integrate heterogeneous medical decision support systems can be effectively attained.",2009,0, 3507,Wedjat: A Mobile Phone Based Medicine In-take Reminder and Monitor,"Out-patient medication administration has been identified as the most error-prone procedure in modern healthcare. Under or over doses due to erratic in-takes, drug-drug or drug-food interactions caused by un-reconciled prescriptions and the absence of in-take enforcement and monitoring mechanisms have caused medication errors to become the common cases of all medical errors. Most medication administration errors were made when patients bought different prescribed and over-the-counter medicines from several drug stores and use them at home without little or no guidance. Elderly or chronically ill patients are particularly susceptible to these mistakes. In this paper, we introduce Wedjat, a smart phone application designed to help patients avoiding these mistakes. Wedjat can remind its users to take the correct medicines on time and record the in-take schedules for later review by healthcare professionals. Wedjat has two distinguished features: (1) it can alert the patients about potential drug-drug/drug-food interactions and plan a proper in-take schedule to avoid these interactions; (2) it can revise the in-take schedule automatically when a dose was missed. In both cases, the software always tries to produce the simplest schedule with least number of in-takes. Wedjat is equipped with user friendly interfaces to help its users to recognize the proper medicines and obtain the correct instructions of taking these drugs. It can maintain the medicine in-take records on board, synchronize them with a database on a host machine or upload them onto a Personal Health Record (PHR) system. A proof-of-concept prototype of Wedjat has been implemented on Windows Mobile platform and will be migrated onto Android for Google Phones. This paper introduces the system concept and design principles of Wedjat with emphasis on its medication scheduling algorithms and the modular implementation of mobile computing application.",2009,0, 3508,Reliable Transactional Web Service Composition Using Refinement Method,"Web services composition is a good way to construct complex Web software. However, Web services composition is prone to fail due to the unstable Web services execution. Thus, it is necessary to deal with the abstract hierarchical modeling of multi-partner business process. Therefore, this paper proposes a refinement method for failure compensation process of transaction mechanism, constructs failure service composition compensation model with the help of paired Petri net and builds a services composition compensation model.
It discusses the refinement model and aggregated QoS estimation of the five common aggregation compensation constructs. It takes the classical traveling reservation business process as an example to show the influence on composite business process brought by the aggregated QoS metrics in different failure points, and analyzes the influence of reputation in different failure rate.",2009,0, 3509,Dynamical Detecting Technique of Nonfunctional Dependence Program Invariant,"In this paper, the notation of program invariant was described, and a theory model of dynamical generating technique of invariants was researched. Then, the technology of dynamical generating technique of invariants of nonfunctional dependence was discussed. A new method of dynamical generating technique of invariants of nonfunctional dependence based on the theory of database was showed. Then, a series of detecting measures of specific nonfunctional dependence invariants were proposed .Many kinds of program invariants can be dynamically discovered by the means.",2009,0, 3510,Fabric defects detecting and rank scoring based on Fisher criterion discrimination,"Automatic texture defect detection is highly important for many fields of visual inspection. This paper studies the application of advanced computer image processing techniques for solving the problem of automated defect detection for textile fabrics. The approach is used for the quality inspection of local defects embedded in homogeneous textured surfaces. Above all, the size of the basic texture units of the fabric image is acquired by calculating auto correlation function in weft direction and in wrap direction. Then the sizes of the basic texture units are taken as criterion to segment the fabric image. During scanning the fabric texture image, the basic units are segmented. And the Fisher criterion discriminator is used to assign each unit to a class at the same time. Afterwards, the fabric detects are measured according to the relationship of the suffix of the image pixel and the scale of the image and ranked scale by comparing with America Four Points System. Experiments with real fabric image data show that it is effective.",2009,0, 3511,Detecting Design Patterns Using Source Code of Before Applying Design Patterns,"Detecting design patterns from object-oriented program source-code can help maintainers understand the design of the program. However, the detection precision of conventional approaches based on the structural aspects of patterns is low due to the fact that there are several patterns with the same structure. To solve this problem, we propose an approach of design pattern detection using source-code of before the application of the design pattern.Our approach is able to distinguish different design patterns with similar structures, and help maintainers understand the design of the program more accurately. Moreover, our technique reveals when and where the target pattern has been applied in an ordered series of revisions of the target program. Our technique is useful to assess what kinds of patterns increase what kinds of quality characteristics such as the maintainability.",2009,0, 3512,Risky Module Estimation in Safety-Critical Software,"Software used in safety-critical system must have high dependability. Software testing and V&V (Verification and Validation) activities are very important for assuring high software quality. 
If we can predict the risky modules in safety-critical software, testing activities and regulation activities can be applied to them more intensively. In this paper, we classify the estimated risk classes which can be used for deep testing and V&V. We predict the risk class for each module using support vector machines. We can consider that the modules classified to risk class 5 or 4 are more risky than others relatively. For all classification error rates, we expect that the results can be useful and practical for software testing, V&V, and activities for regulatory reviews. In the future works, to improve the practicality, we will have to investigate other machine learning algorithms and datasets.",2009,0, 3513,Validating Requirements Model of a B2B System,"It is very costly if a software project development has to recover from an error that is due to a mistake made in the construction of the requirements model. Validation of requirements model is thus always an effective means for detecting defects in the requirements model. In this paper, we present an approach to modeling requirements by UML with OCL, and the design of a tool EOC (executable OCL checker), that supports well-formed static checking for the OCL specification as well as the dynamic validation. We illustrate the approach and the tool by an example of B2B system. The requirements model of this system is validated by the prototyping executions of the system functions on the system states against the OCL constraints. The validation checks whether the execution of a use case violates the system invariants, and whether the requirements model is feasible according to its business workflow.",2009,0, 3514,Network management software tool for FTTH-PON: SANTAD,"This paper proposed a network management software tool named smart access network _ testing, analyzing and database (SANTAD) that associated with remotely controlling, optical monitoring, system analyzing, fault detection, protection switching, and automatic recovery apparatus for fiber-to-the-home passive optical network (FTTH-PON) based on Visual Basic programming. The developed program is able to prevent and detect any occurrence of fault in the network system through centralized monitoring and remote operating from central office (CO) via Ethernet connection. SANTAD enable the status of each transmission link to be displayed on a single computer screen with capability to configure the drastic drop of optical signal level and detect the failure simultaneously. The analysis results will then stored in database with certain attributes such as date and time, network failure rate, failure location, etc. The database system enable the history of network scanning process be analyzed and studied by the engineers.",2009,0, 3515,Prototype Implementation of a Goal-Based Software Health Management Service,"The FAILSAFE project is developing concepts and prototype implementations for software health management in mission-critical real-time embedded systems. The project unites features of the industry standard ARINC 653 Avionics Application Software Standard Interface and JPL's Mission Data System (MDS) technology. The ARINC 653 standard establishes requirements for the services provided by partitioned real-time operating systems. The MDS technology provides a state analysis method, canonical architecture, and software framework that facilitates the design and implementation of software-intensive complex systems. 
We use the MDS technology to provide the health management function for an ARINC 653 application implementation. In particular, we focus on showing how this combination enables reasoning about and recovering from application software problems. Our prototype application software mimics the space shuttle orbiter's abort control sequencer software task, which provides safety-related functions to manage vehicle performance during launch aborts. We turned this task into a goal-based function that, when working in concert with the software health manager, aims to work around software and hardware problems in order to maximize abort performance results. In order to make it a compelling demonstration for current aerospace initiatives, we additionally imposed on our prototype a number of requirements derived from NASA's Constellation Program. Lastly, the ARINC 653 standard imposes a number of requirements on the system integrator for developing the requisite error handler process. Under ARINC 653, the health monitoring (HM) service is invoked by an application calling the application error service or by the operating system or hardware detecting a fault. It is these HM and error process details that we implement with the MDS technology, showing how a state-analytic approach is appropriate for identifying fault determination details, and showing how the framework supports acting upon state estimation and control features in order to achieve safety-related goals. We describe herein the requirements, design, and implementation of our software health manager and the software under control. We provide details of the analysis and design for the phase II prototype, and describe future directions for the remainder of phase II and the new topics we plan to address in phase III.",2009,0, 3516,Simulating and Detecting Radiation-Induced Errors for Onboard Machine Learning,"Spacecraft processors and memory are subjected to high radiation doses and therefore employ radiation-hardened components. However, these components are orders of magnitude more expensive than typical desktop components, and they lag years behind in terms of speed and size. We have integrated algorithm-based fault tolerance (ABFT) methods into onboard data analysis algorithms to detect radiation-induced errors, which ultimately may permit the use of spacecraft memory that need not be fully hardened, reducing cost and increasing capability at the same time. We have also developed a lightweight software radiation simulator, BITFLIPS, that permits evaluation of error detection strategies in a controlled fashion, including the specification of the radiation rate and selective exposure of individual data structures. Using BITFLIPS, we evaluated our error detection methods when using a support vector machine to analyze data collected by the Mars Odyssey spacecraft. We observed good performance from both an existing ABFT method for matrix multiplication and a novel ABFT method for exponentiation. These techniques bring us a step closer to """"rad-hard"""" machine learning algorithms.",2009,0, 3517,Synthesizing hardware from sketches,"This paper proposes to adapt sketching, a software synthesis technique, to hardware development. In sketching, the designer develops an incomplete hardware description, providing the """"insight"""" into the design. The synthesizer completes the design to match an executable specification.
This style of synthesis liberates the designer from tedious and error-prone details-such as timing delays, wiring in combinational circuits and initialization of lookup tables-while allowing him to control low-level aspects of the design. The main benefit will be a reduction of the time-to-market without impairing system performance.",2009,0, 3518,Debugging strategies for mere mortals,"Recent improvements in design verification strive to automate error detection and greatly enhance engineers' ability to detect functional errors. However, the process of diagnosing the cause of these errors, and subsequently fixing them, remains one of the most difficult tasks of verification. The complexity of design descriptions, paired with the scarcity of software tools supporting this task lead to an activity that is mostly ad-hoc, labor intensive and accessible only to a few debugging specialists within a design house. This paper discusses some recent research solutions that support the debugging effort by simplifying and automating bug diagnosis. These novel techniques demonstrate that, through the support of structured methodologies, debugging can become a task pursued by the average design engineer. We also outline some of the upcoming trends in design verification, postponing some of the verification effort to runtime, and discuss how debugging could leverage these trends to achieve better quality of results.",2009,0, 3519,Predicting Performance of Multi-Agent systems during feasibility study,"Agent oriented software engineering (AOSE) is a software paradigm that has grasped the attention of researchers/developers for the last few years. As a result, many different methods have been introduced to enable researchers/developers to develop multi agent systems. However Performance, a non-functional attribute have not been given that much importance for producing quality software. Performance issues must be considered throughout software project development. Predicting performance early in the life cycle during feasibility study is not considered for predicting performance. In this paper, we consider the data collected (technical and environmental factors) during feasibility study of Multi-Agent software development to predict performance. We derive an algorithm to predict the performance metrics and simulate the results using a case study on scheduling the use of runways on an airport.",2009,0, 3520,SVis: A Computational Steering Visualization Environment for Surface Structure Determination,"The arrangement of atoms at the surface of a solid accounts for many of its properties: hardness, chemical activity, corrosion, etc. are dictated by the precise surface structure. Hence, finding it, has a broad range of technical and industrial applications. The ability to solve this problem opens the possibility of designing by computer materials with properties tailored to specific applications. Since the search space grows exponentially with the number of atoms, its solution cannot be achieved for arbitrarily large structures. Presently, a trial and error procedure is used: an expert proposes a structure as a candidate solution and tries a local optimization procedure on it. The solution relaxes to the local minimum in the attractor basin corresponding to the initial point, that might be the one corresponding to the global minimum or not. This procedure is very time consuming and, for reasonably sized surfaces, can take many iterations and much effort from the expert.
Here we report on a visualization environment designed to steer this process in an attempt to solve bigger structures and reduce the time needed. The idea is to use an immersive environment to interact with the computation. It has immediate feedback to assess the quality of the proposed structure in order to let the expert explore the space of candidate solutions. The visualization environment is also able to communicate with the de facto local solver used for this problem. The user is then able to send trial structures to the local minimizer and track its progress as they approach the minimum. This allows for simultaneous testing of candidate structures. The system has also proved very useful as an educational tool for the field.",2009,0, 3521,An Approach to Measuring Software Development Risk Based on Information Entropy,"Software development risk always influence the success of software project, even determine an enterprise surviving or perishment. Consequently, there are some significant meaning for software company and software engineering field to measure effectively the risk. Whereas, the measurement is very difficult because software is a sort of logic product. There are many methods to measure the risk at the present time, but lack of quantificational measurement methods, and these measurements are all localization and they cannot consider well the risk factors and the effect. In this paper, we bring forward a quantificational approach to measuring the software development risk based on information entropy. It involves both the probabilities of the risk factors and the loss, it is an effective synthesis method to measuring software development risk in practice.",2009,0, 3522,Towards Improved Assessment of Bone Fracture Risk,"Summary form only given. The mechanical competence of a bone depends on its density, its geometry and its internal trabecular microarchitecture. The gold standard to determine bone competence is an experimental, mechanical test. Direct mechanical testing is a straight-forward procedure, but is limited by its destructiveness. For the clinician, the prediction of bone quality for individual patients is, so far, more or less restricted to the quantitative analysis of bone density alone. Finite element (FE) analysis of bone can be used as a tool to non-destructively assess bone competence. FE analysis is a computational technique; it is the most widely used method in engineering for structural analysis. With FE analysis it is possible to perform a 'virtual experiment', i.e. the simulation of a mechanical test in great detail and with high precision. What is needed for that are, first, in vivo imaging capabilities to assess bone structure with adequate resolution, and second, appropriate software to solve the image-based FE models. Both requirements have seen a tremendous development over the last years. The last decade has seen the commercial introduction and proliferation of non-destructive microstructural imaging systems such as desktop micro-computed tomography (µCT), which allow easy and relatively inexpensive access to the 3D microarchitecture of bone. Furthermore, the introduction of new computational techniques has allowed to solve the increasingly large FE models, that represent bone in more and more detail. With the recent advent of microstructural in vivo patient imaging systems, these methodologies have reached a level that it is now becoming possible to accurately assess bone strength in humans.
Although most applications are still in an experimental setting, it has been clearly demonstrated that it is possible to use these techniques in a clinical setting. The high level of automation, the continuing increase in computational power, and above all the improved predictive capacity over predictions based on bone mass, make clear that there is great potential in the clinical arena for in vivo FE analyses. Ideally, the development of in vivo imaging systems with microstructural resolution better than 50 µm would allow measurement of patients at different time points and at different anatomical sites. Unfortunately, such systems are not yet available, but the resolution at peripheral sites has reached a level (80 µm) that allows elucidation of individual microstructural bone elements. Whether a resolution of 50 µm in vivo will be reached using conventional CT technology remains to be seen, as the required doses may be too high. With respect to these dose considerations, MRI may have considerable potential for future clinical applications to overcome some of the limitations of X-ray CT. With the advent of new clinical MRI systems with higher field strengths, and the introduction of fast parallel-imaging acquisition techniques, higher resolutions in MRI will be possible with comparable image quality and without the adverse effects of ionizing radiation. With these patient scanners, it will be possible to monitor changes in the microarchitectural aspects of bone quality in vivo. In combination with FE analysis, it will also allow prediction of the mechanical competence of whole bones in the course of age- and disease-related bone loss and osteoporosis. We expect these findings to improve our understanding of the influence of densitometric, morphological and also loading factors in the etiology of spontaneous fractures of the hip, the spine, and the radius. Eventually, this improved understanding may lead to more successful approaches in the prevention of age- and disease-related fractures.",2009,0, 3523,Update Parameters Dynamic in Causality Diagram,"Causality diagram theory is a new uncertainty reasoning model based on probability theory, which adopts direct cause-effect intensity and graphical knowledge representation. It has important theoretical meaning and application value for fault diagnosis. Linkage intensity is the basis of the inference, but these parameters are not easy to obtain and are often given by field experts. In this paper, the EM(η) algorithm is proposed to learn causality diagram parameters (linkage intensities) dynamically, which makes the parameters adapt to changes in the environment; the method's feasibility and advantages are proved in theory. Experimental results show the validity and the superiority of the method as well. Finally, we compare the results with static learning of causality diagram parameters.",2009,0, 3524,Experimental Analysis of Different Metrics (Object-Oriented and Structural) of Software,"This paper first investigates the relationships between existing object-oriented metrics (coupling, cohesion) and procedure-oriented metrics (lines of code, cyclomatic complexity and the knot metric) in measuring the probability of error detection in system classes during testing, and then proposes an investigation and analysis strategy to make these kinds of studies more reusable and comparable, a problem which persists in quality measurement.
The metrics are first defined and then explained using practical applications. Finally, a review of the empirical study concerning the chosen coupling metrics is given; a subset of these measures that provides sufficient information is identified, and metrics providing overlapping information are excluded from the set. The paper defines a new set of operational measures for the conceptual coupling of classes, which are theoretically valid and empirically studied. We show that these metrics capture new dimensions in coupling measurement, compared to existing structural metrics.",2009,0, 3525,Filtering Spam in Social Tagging System with Dynamic Behavior Analysis,"Spam introduced into social tagging systems by malicious participants has become a serious problem as these systems gain global popularity. Some studies, which can be reduced to static user data analysis, have been presented to combat tag spam, but either they do not give an exact evaluation or the algorithms' performance is not good enough. In this paper, we propose a novel method based on analysis of dynamic user behavior data, following the notion that users' behaviors in a social tagging system reflect the quality of tags more accurately. Through modeling the different categories of participants' behaviors, we extract tag-associated actions which can be used to estimate whether a tag is spam, and then present our algorithm that can filter tag spam from the results of social search. The experimental results show that our method indeed outperforms the existing methods based on static data and effectively defends against tag spam in various spam attacks.",2009,0, 3526,Property Preservation and Composition with Guarantees: From ASSERT to CHESS,"While the demand for high-integrity applications continues to rise, industrial developers seek cost effective development strategies that are capable of delivering the required guarantees. The very nature of high-integrity software systems makes a-posteriori verification totally inapt to meet the time, cost and quality constraints that impend on developers. What is wanted instead is a development method that facilitates early verification and that devolves to proven automation as many of the error-prone development tasks as practically possible. Model-driven engineering (MDE) is an especially fit option to explore in that respect. In a recent European project very interesting results were obtained in the development and industrial evaluation of an MDE process centered on the joint principles of correctness by construction and property preservation. The proceedings of that project were in fact so encouraging that a continuation of it was instigated with a challenging broader scope. This paper provides an account of the approach taken in the original project with regard to property preservation and outlines the intent of its continuation.",2009,0, 3527,Quality of Service Composition and Adaptability of Software Architectures,"Quality of service adaptability refers to the ability of components/services to adapt at run-time the quality they exhibit. A composition study from a quality point of view would investigate how these adaptable elements could be combined to meet the system's quality requirements. Enclosing quality properties with architectural models has typically been used to improve system understanding.
Nevertheless, these properties, along with some supplementary information about quality adaptation, would allow us to carry out a composition study during the design phase and even to predict some features of the adaptability behavior of the system. Existing modeling languages and tools lack adequate mechanisms to cope with adaptability, e.g. to describe system elements that may offer/require several quality levels. This paper shows how we reuse existing modeling languages and tools, combine them and create new ones to tackle the problem of quality of service adaptability and composition. The final goal of this work is to evaluate architectural models to predict the system's QoS behavior before it is implemented.",2009,0, 3528,A Lightweight Anomaly Detection System for Information Appliances,"In this paper, a novel lightweight anomaly and fault detection infrastructure called anomaly detection by resource monitoring (Ayaka) is presented for information appliances. Ayaka provides a general monitoring method for detecting anomalies using only resource usage information, independent of the system's domain, target application, and programming languages. Ayaka modifies the kernel to detect faults and uses a completely application black-box approach based on machine learning methods. It uses a clustering method to quantize the resource usage vector data and learns the normal patterns with a hidden Markov model. In the running phase, Ayaka finds anomalies by comparing the application resource usage with the learned model. The evaluation experiment indicates that our prototype system is able to detect anomalies, such as SQL injection and buffer overrun, without significant overheads.",2009,0, 3529,Online Self-Healing Support for Embedded Systems,"In this paper, online system-level self-healing support is presented for embedded systems. Different from the off-line log analysis methods used by conventional intrusion detection systems, our research focuses on analyzing runtime kernel data structures to perform self-diagnosis and self-healing. Inside the infrastructure, self-diagnosis and self-healing solutions have been implemented based on several selected critical kernel data structures. They can fully represent the current system status and are also closely related to system resources. At runtime, once any system inconsistency is detected, predefined recovery functions are invoked. Our prototype is developed on a lightweight virtual machine monitor, on top of which the monitored Linux kernel and the runtime detection and recovery services run simultaneously. The proposed infrastructure requires few modifications to current Linux kernel source code, so it can be easily adopted into existing embedded systems. It is also fully software-based, without introducing any specific hardware, and is therefore cost-efficient. The evaluation experiment results indicate that our prototype system can correctly detect inconsistent kernel data structures caused by security attacks with acceptable penalty to system performance.",2009,0, 3530,"Enhanced wafer analysis using a combination of test, emission and software net tracing","We describe a wafer analysis methodology which uses test data, emission data and CAD data to accurately predict the location and type of defect.
The methodology described enabled us to know the location with metal layer information and type of defect before performing destructive physical analysis.",2009,0, 3531,Improving Software-Quality Predictions With Data Sampling and Boosting,"Software-quality data sets tend to fall victim to the class-imbalance problem that plagues so many other application domains. The majority of faults in a software system, particularly high-assurance systems, usually lie in a very small percentage of the software modules. This imbalance between the number of fault-prone (fp) and non-fp (nfp) modules can have a severely negative impact on a data-mining technique's ability to differentiate between the two. This paper addresses the class-imbalance problem as it pertains to the domain of software-quality prediction. We present a comprehensive empirical study examining two different methodologies, data sampling and boosting, for improving the performance of decision-tree models designed to identify fp software modules. This paper applies five data-sampling techniques and boosting to 15 software-quality data sets of different sizes and levels of imbalance. Nearly 50 000 models were built for the experiments contained in this paper. Our results show that while data-sampling techniques are very effective in improving the performance of such models, boosting almost always outperforms even the best data-sampling techniques. This significant result, which, to our knowledge, has not been previously reported, has important consequences for practitioners developing software-quality classification models.",2009,0, 3532,Fault diagnosis and failure prognosis for engineering systems: A global perspective,"Engineering systems, such as aircraft, industrial processes, manufacturing systems, transportation systems, electrical and electronic systems, etc., are becoming more complex and are subjected to failure modes that impact adversely their reliability, availability, safety and maintainability. Such critical assets are required to be available when needed, and maintained on the basis of their current condition rather than on the basis of scheduled or breakdown maintenance practices. Moreover, on-line, real-time fault diagnosis and prognosis can assist the operator to avoid catastrophic events. Recent advances in Condition-Based Maintenance and Prognostics and Health Management (CBM/PHM) have prompted the development of new and innovative algorithms for fault, or incipient failure, diagnosis and failure prognosis aimed at improving the performance of critical systems. This paper introduces an integrated systems-based framework (architecture) for diagnosis and prognosis that is generic and applicable to a variety of engineering systems. The enabling technologies are based on suitable health monitoring hardware and software, data processing methods that focus on extracting features or condition indicators from raw data via data mining and sensor fusion tools, accurate diagnostic and prognostic algorithms that borrow from Bayesian estimation theory, and specifically particle filtering, fatigue or degradation modeling, and real-time measurements to declare a fault with prescribed confidence and given false alarm rate while predicting accurately and precisely the remaining useful life of the failing component/system. Potential benefits to industry include reduced maintenance costs, improved equipment uptime and safety. 
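Record 3531 compares data-sampling techniques with boosting for imbalanced software-quality data. The sketch below reproduces the flavour of that comparison on synthetic data with scikit-learn estimators; the paper itself used fifteen real data sets, five sampling techniques and nearly 50,000 models, so this is only a stand-in:

```python
# Minimal sketch: random undersampling vs. boosting on an imbalanced fault dataset.
# Synthetic data and off-the-shelf estimators stand in for the study's real setup.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n, n_fp = 2000, 100                          # 5% fault-prone (fp) modules
X = rng.normal(size=(n, 8))                  # stand-in for software metrics
y = np.zeros(n, dtype=int)
fp_idx = rng.choice(n, size=n_fp, replace=False)
y[fp_idx] = 1
X[fp_idx] += 1.0                             # make fp modules separable on average

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

# 1) plain decision tree on undersampled data (keep all fp, sample an equal number of nfp)
fp = np.where(y_tr == 1)[0]
nfp = rng.choice(np.where(y_tr == 0)[0], size=len(fp), replace=False)
sampled = np.concatenate([fp, nfp])
tree = DecisionTreeClassifier(random_state=0).fit(X_tr[sampled], y_tr[sampled])

# 2) boosting on the original, imbalanced training data
boost = AdaBoostClassifier(random_state=0).fit(X_tr, y_tr)

for name, model in [("undersampled tree", tree), ("AdaBoost", boost)]:
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```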
The approach is illustrated with examples from the aircraft and industrial domains.",2009,0, 3533,Dealing with Driver Failures in the Storage Stack,"This work augments MINIX 3's failure-resilience mechanisms with novel disk-driver recovery strategies and guaranteed file-system data integrity. We propose a flexible filter-driver framework that operates transparently to both the file system and the disk driver and enforces different protection strategies. The filter uses checksumming and mirroring in order to achieve end-to-end integrity and provide hard guarantees for detection of silent data corruption and recovery of lost data. In addition, the filter uses semantic information about the driver's working in order to verify correct operation and proactively replace the driver if an anomaly is detected. We evaluated our design through a series of experiments on a prototype implementation: application-level benchmarks show modest performance overhead of 0-28% and software-implemented fault-injection (SWIFI) testing demonstrates the filter's ability to detect and transparently recover from both data-integrity problems and driver-protocol violations.",2009,0, 3534,Color grading in Tomato Maturity Estimator using image processing technique,"This tomato maturity estimator is developed to conduct tomato color grading using machine vision to replace human labor. Existing machines have not been widely applied in Malaysia because they are too expensive. The major problem with tomato color grading by human vision is its subjectivity and its proneness to errors caused by visual stress and tiredness. Therefore, this system is developed to judge tomato maturity based on color and to estimate the expiry date of a tomato from its color. An evolutionary methodology was implemented in the system design using several image processing techniques, including image acquisition, image enhancement and feature extraction. Fifty tomato samples were collected during the image acquisition phase as RGB color images. The quality of the collected images was improved in the image enhancement phase, mainly by conversion to the L*a*b* color space, filtering and thresholding. In the feature extraction phase, the red-green value is extracted. These values are then used to determine the percentage of tomato maturity and to estimate the expiry date. According to the testing results, the system has met its objectives: 90.00% of the tomatoes tested had not yet rotted, which indicates that the judgment of tomato maturity and the estimation of the expiry date were accurate in this project.",2009,0, 3535,One adaptive modeling with Markov-chain decision tree in application of independent combinational components installing test,"The goal of software functional testing is to detect and remove as many defects as possible with a series of test cases. To resolve the trade-off between defect coverage and time cost in the installation of independent combinational components (ICCI), an adaptive modeling algorithm with a Markov-chain decision tree, based on some necessary assumptions, is proposed in this paper and compared to the traditional matrix methodology. After comparing two searching strategies, called two-binary search and bottom-to-top respectively, we also perform some edge-cutting work based on an improved understanding of the probability that the software product is correct during testing and defect verification.
Combining the merits of the two strategies with layer controlling, empirical results from one of our products, Tivoli Integrated Portal, are also collected to demonstrate the effectiveness of the solution.",2009,0, 3536,Novel parallel particle swarm optimization algorithms applied on the multi-task cooperation,"With more and more applications of workflow technology, workflow systems must be flexible and dynamic in order to adapt effectively to uncertain and error-prone collaborative work environments. This paper adds interaction and machine learning to the workflow model proposed by the Workflow Management Coalition and then applies a parallel particle swarm optimization algorithm to solve it, so that workflow modeling and enactment are both flexible while the complexity of the whole system is decreased. The improvements based on the model show that it not only realizes flexible workflow, but also supports workflow personalization.",2009,0, 3537,Research on surface defect inspection for small magnetic rings,"The surface defects of small magnetic rings are diverse in character, and the diameter of a surface defect is usually about 0.1 mm; magnetic rings with defects that are nevertheless used in industrial production pose great safety risks. However, the manual visual inspection method currently used suffers from high cost and low efficiency. Therefore, a new method is needed to inspect small magnetic rings automatically. In this paper, the design of the automatic inspection system is illustrated; a modified BHPF filter is used for de-noising, and the defect is finally located and recognized.",2009,0, 3538,Design of embedded laser beam measurement system,"In view of the disadvantages of traditional laser beam quality measurement, an evaluation system was developed that detects laser beam quality automatically; the M2 factor suggested by ISO was used to evaluate the laser beam quality. The system can be used both in the laboratory and in industry: it collects image data of the laser speckle through an image sampling system, stores the data and transmits it to the embedded ARM11 processor under the control of a field programmable gate array (FPGA) and a digital signal processor (DSP), then runs the image processing application software, executes the laser parameter calculation algorithm, fits the hyperbolic curve of laser beam propagation in space, and calculates the value of M2. Results showed that the system has advantages such as small size, light weight, low cost, simple operation, ease of use and high measurement precision, giving it good application and popularization value.",2009,0, 3539,Numerical simulation study on the process of multi-point stretch-forming for sheet metal parts,"Forming defects have an important impact on the forming quality of sheet metal. Based on finite element software, the process of multi-point stretch-forming was simulated. The creation of the finite element model, the choice of material model, the establishment of boundary conditions and the treatment of contact friction were carried out. By varying the technical parameters, multi-point stretch-forming is better understood, potential forming defects can be predicted, and proper technical parameters can be chosen to restrain or eliminate forming defects, consequently improving the forming quality of the parts and the efficiency of manufacture.
These results provide significant guidance to the practical application of multi-point stretch-forming technique.",2009,0, 3540,Comparing apples and oranges: Subjective quality assessment of streamed video with different types of distortion,"Video quality assessment is essential for the performance analysis of visual communication applications. Objective metrics can be used to estimate the relative quality differences, but they typically give reliable results only if the compared videos contain similar type of quality distortion. However, video compression typically produces different kinds of visual artifacts than transmission errors. In this paper, we propose a novel subjective quality assessment method that is suitable for comparing different types of quality distortions. The proposed method has been used to evaluate how well PSNR estimates the relative subjective quality levels for content with different types of quality distortions. Our conclusion is that PSNR is not a reliable metric for assessing the co-impact of compression artifacts and transmission errors on the subjective quality.",2009,0, 3541,Gradient ascent paired-comparison subjective quality testing,"Subjective testing is the most direct means of assessing audio, video, and multimedia quality as experienced by users and maximizing the information gathered while minimizing the number of trials is an important goal. We propose gradient ascent subjective testing (GAST) as an efficient way to locate optimizing sets of coding or transmission parameter values. GAST combines gradient ascent optimization techniques with paired-comparison subjective test trials to efficiently locate parameter values that maximize perceived quality. We used GAST to search a two-dimensional parameter space for the known region of maximal audio quality as proof-of-concept. That point was accurately located and we estimate that conventional testing would have required at least 27 times as many trials to generate the same results.",2009,0, 3542,A lightweight software control system for cyber awareness and security,"Designing and building software that is free of defects that can be exploited by malicious adversaries is a difficult task. Despite extensive efforts via the application of formal methods, use of automated software engineering tools, and performing extensive pre-deployment testing, exploitable errors still appear in software. The problem of cyber resilience is further compounded by the growing sophistication of adversaries who can marshal substantial resources to compromise systems. This paper describes a novel, promising approach to improving the resilience of software. The approach is to impose a process-level software control system that continuously monitors an application for signs of attack or failure and responds accordingly. The system uses software dynamic translation to seamlessly insert arbitrary sensors and actuators into an executing binary. The control system employs the sensors to detect attacks and the actuators to effect an appropriate response. Using this approach, several novel monitoring and response systems have been developed. 
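Record 3540 evaluates how well PSNR tracks subjective quality across different distortion types. For reference, PSNR for 8-bit frames is computed as below (a routine definition, not code from the paper; the random frames are placeholders for real video frames):

```python
# Minimal PSNR computation for 8-bit frames, as referenced in record 3540.
import numpy as np

def psnr(reference: np.ndarray, distorted: np.ndarray, peak: float = 255.0) -> float:
    """PSNR in dB: 10 * log10(peak^2 / MSE)."""
    mse = np.mean((reference.astype(np.float64) - distorted.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    frame = rng.integers(0, 256, size=(480, 640), dtype=np.uint8)
    noisy = np.clip(frame + rng.normal(0, 5, size=frame.shape), 0, 255).astype(np.uint8)
    print(f"PSNR: {psnr(frame, noisy):.2f} dB")
```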
The paper describes our light-weight process-level software control system and our experience using it to increase the resilience of systems, and discusses future research directions for extending and enhancing this powerful approach to achieving cyber awareness and resilience.",2009,0, 3543,Verification of Access Control Policies for REA Business Processes,"Access control is a significant aspect of security and constitutes an important component of operating systems, database management systems (DBMS), and applications. Access control policies define which users have access to what objects and operations and describe any existing constraints. These policies are not only different from one organization to another but also change over time, even in a single organization. We examine the integration, not necessarily the inclusion, of these policies into business processes and consider such effects as consistency. Determining the effects of these policies can become difficult because several such policies exist, and taking into account all possible combinations or executions of these policies is tedious and error-prone. In addition, the number of policies usually increases over time and adds to the complexity of analyzing their combinations. It is acknowledged in the literature that what you specify is what you get, but that is not necessarily what you want. To show our approach, we specify certain access control policies for Resource--Event--Agent (REA) business processes and examine the addition and combination of these policies. More specifically, we illustrate the principle of separation of duties (e.g., two separate individuals must authorize ordering items and paying for them). Our main contribution is the verification of access control policies in conjunction with a REA business process.",2009,0, 3544,Adaptive Control Framework for Software Components: Case-Based Reasoning Approach,"The proposed architecture for an adaptive software system is based on a multi-threaded programming model. It includes two basic logic components: a database for selected case gathering and a decision-making sub-system. Using CBR methods, a control procedure for the decision-making sub-system is worked out, which uses the database for selected case gathering. A respective CBR algorithm determines the number of threads needed for guaranteed predicted performance (measured in ms) and reliability (defined in %).",2009,0, 3545,Risk-Based Adaptive Group Testing of Semantic Web Services,"Comprehensive testing is necessary to ensure the quality of complex Web services that are loosely coupled, dynamically bound and integrated through standard protocols. Testing of such web services can however be very expensive due to the diversified user requirements and the large numbers of service combinations delivered by the open platform. Group testing was introduced in our previous research as a selective testing technique to reduce test cost and improve test efficiency. It applies test cases efficiently so that the largest percentage of problematic web services is detected as early as possible. The paper proposes a risk-based approach to group test selection. With this approach, test cases are categorized and scheduled with respect to the risks of their target service features. The approach is based on the assumption that, for a service-based system, the tolerance to a feature's failure is inversely proportional to its risk. The risky features should be tested earlier and with more tests.
We specifically address the problem in the context of semantic Web Services and report a first attempt at ontology-based quantitative risk assessment. The paper also discusses the risk-based group testing process and strategies for ranking and ruling out services in the test groups at each risk level. A runtime monitoring mechanism is incorporated to detect dynamic changes in service configuration and composition so that the risks can be continuously adjusted online.",2009,0, 3546,A Contextual Guidance Approach to Software Security,"With the ongoing trend towards the globalization of software systems and their development, components in these systems might not only work together, but may end up evolving independently from each other. Modern IDEs have started to incorporate support for these highly distributed environments by adding new collaborative features. As a result, assessing and controlling system quality (e.g. security concerns) during system evolution in these highly distributed systems becomes a major challenge. In this research, we introduce a unified ontological representation that integrates best security practices in a context-aware tool implementation. As part of our approach, we integrate information from traditional static source code analysis with semantically rich structural information in a unified ontological representation. We illustrate through several use cases how our approach can support the evolvability of software systems from a security quality perspective.",2009,0, 3547,Real-Time Guarantees in Flexible Advance Reservations,"This paper deals with the problem of scheduling workflow applications with quality of service (QoS) constraints, comprising real-time and interactivity constraints, over a service-oriented grid network. A novel approach is proposed, in which high-level advance reservations, supporting flexible start and end times, are combined with low-level soft real-time scheduling, allowing for the concurrent deployment of multiple services on the same host while fulfilling their QoS requirements. By undertaking a stochastic approach, in which a-priori knowledge is leveraged about the probability of activation of the application workflows within the reserved time-frame, the proposed methodology allows for the achievement of various trade-offs between the need for respecting QoS constraints (user perspective) and the need for having good resource saturation levels (service provider perspective).",2009,0, 3548,Traceability ReARMed,"Traceability links connect artifacts in software engineering models to allow tracing in a variety of use cases. Common to all of these use cases is that one can easily find related artifacts by following these links. Traceability links can significantly reduce the risk and cost of change in a software development project. However, finding, creating and maintaining these links is costly in most cases. In any real-world project of significant size, the creation and maintenance of traceability links requires tool support. In this paper, we propose a novel approach to support the automation of traceability link recovery based on association rule mining and operation-based change tracking. Traceability link recovery is the activity of finding missing or lost traceability links. Our approach automatically generates a list of candidate links based on the project history, along with a score of support and confidence for every candidate link.
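Record 3548 scores candidate traceability links by the support and confidence of association rules mined from change transactions. A minimal sketch of that scoring, over a hypothetical change history, could look like this (the rule direction chosen for the confidence value is arbitrary here):

```python
# Sketch: support/confidence of co-changed artifact pairs as candidate traceability links.
from itertools import combinations
from collections import Counter

transactions = [                      # hypothetical change history (artifacts touched together)
    {"Req1", "ClassA", "TestA"},
    {"Req1", "ClassA"},
    {"Req2", "ClassB"},
    {"Req1", "ClassA", "ClassB"},
]

item_count = Counter()
pair_count = Counter()
for t in transactions:
    item_count.update(t)
    pair_count.update(frozenset(p) for p in combinations(sorted(t), 2))

n = len(transactions)
for pair, c in pair_count.most_common():
    a, b = tuple(pair)                 # direction a -> b chosen only for illustration
    support = c / n
    confidence = c / item_count[a]
    print(f"{a} -> {b}: support={support:.2f}, confidence={confidence:.2f}")
```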
We transformed the data from an operation-based change tracking system into sets of frequent items, which serve as input for association rule mining (ARM). We applied our approach to data from a software development project with more than 40 developers and assessed the quality of the candidate links in interviews.",2009,0, 3549,Predicting Change Impact in Object-Oriented Applications with Bayesian Networks,"This study should be considered as another step towards the proposal of assessment/predictive models in software quality. We consider in this work that a probabilistic model using Bayesian nets constitutes an interesting alternative to the non-probabilistic models suggested in the literature. Thus, we propose in this paper a probabilistic approach using Bayesian networks to analyze and predict change impact in object-oriented systems. An impact model is built and probabilities are assigned to network nodes. Data obtained from a real system are exploited to empirically study causality hypotheses between some software internal attributes and change impact. Several scenarios are executed on the network, and the obtained results confirm that coupling is a good indicator of change impact.",2009,0, 3550,Modeling and Predicting Software Failure Costs,"For software, the costs of failures are not clearly understood. Often, these costs disappear in the costs of testing, the general development costs, or the operating expenses. In a general manufacturing context, the British standard BS-6143-2:1990 classifies quality-related costs into prevention costs, appraisal costs, and failure costs. It furthermore recommends identifying the activities carried out within each of these categories, and measuring the costs connected with the activities. The standard thus presents a framework for recording and structuring costs once they have occurred. In this paper, we propose an approach for structuring the information on internal and external software failure costs such that their development over time can be represented by stochastic models. Based on these models, future failure costs can be predicted. In two case studies we show how the approach was applied in an industrial software development project.",2009,0, 3551,Research and Application on Automatic Network Topology Discovery in ITSM System,"Automatic network topology discovery is important for network management and network analysis in an ITSM system. Issues like network environment configuration, performance testing and fault detection all require accurate information about the network topology. According to the requirements of the ITSM system, in this paper we propose an efficient algorithm based on SNMP for discovering the Layer 3 and Layer 2 topology of the network. The proposed algorithm only requires SNMP to be enabled on routers and managed switches. For Layer 3 topology discovery, this paper proposes a shared-subnet based approach to analyze connections between routers. In addition, the algorithm uses an STP-based method to discover the Layer 2 topology, which does not require the completeness of the switches' AFT information. The algorithm has been implemented and tested in several enterprise-level networks, and the results demonstrate that it discovers the network topology accurately and efficiently.",2009,0, 3552,On the integration of protein contact map predictions,"Protein structure prediction is a key topic in computational structural proteomics.
The hypothesis that protein biological functions are implied by their three-dimensional structure makes the protein tertiary structure prediction a relevant problem to be solved. Predicting the tertiary structure of a protein by using its residue sequence is called the protein folding problem. Recently, novel approaches to the solution of this problem have been found and many of them use contact maps as a guide during the prediction process. Contact map structures are bidimensional objects which represent some of the structural information of a protein. Many approaches and bioinformatics tools for contact map prediction have been presented during the past years, having different performances for different protein families. In this work we present a novel approach based on the integration of contact map predictions in order to improve the quality of the predicted contact map with a consensus-based algorithm.",2009,0, 3553,Scenario-oriented information extraction from electronic health records,"Providing a comprehensive set of relevant information at the point of care is crucial for making correct clinical decisions in a timely manner. Retrieval of scenario specific information from an extensive electronic health record (EHR) is a tedious, time consuming and error prone task. In this paper, we propose a model and a technique for extracting relevant clinical information with respect to the most probable diagnostic hypotheses in a clinical scenario. In the proposed technique, we first model the relationship between diseases, symptoms, signs and other clinical information as a graph and apply concept lattice analysis to extract all possible diagnostic hypotheses related to a specific scenario. Next, we identify relevant information regarding the extracted hypotheses and search for matching evidences in the patient's EHR. Finally, we rank the extracted information according to their relevancy to the hypotheses. We have assessed the usefulness of our approach in a clinical setting by modeling a challenging clinical problem as a case study.",2009,0, 3554,Next-Generation Power Information System,"Power monitoring systems (power quality monitors, digital fault recorders, digital relays, advanced controllers, etc.) continue to get more powerful and provide a growing array of benefits to overall power system operation and performance evaluation. Permanent monitoring systems are used to track ongoing performance, to watch for conditions that could require attention, and to provide information for utility and customer personnel when there is a problem to investigate. An important development area for these monitoring systems is the implementation of intelligent systems that can automatically evaluate disturbances and conditions to make conclusions about the cause of a problem or even predict problems before they occur. This paper describes the development of a Next Generation Power Information System that will provide an open platform for implementing advanced monitoring system applications that involve integration with many different data sources. The work builds on many years of experience with software for management and analysis of large power quality monitoring systems.",2009,0, 3555,Architectural QoS Predictions in Model-Driven Development of Component-Based Software,"Continuous quality-of-service (QoS) provisioning is becoming increasingly important in the software development lifecycle of distributed systems. 
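Record 3552 integrates several contact map predictions with a consensus-based algorithm. A simple majority-vote consensus over binary contact maps, shown below, illustrates the idea; the paper's actual algorithm may weight or select predictors differently:

```python
# Majority-vote consensus over several predicted binary contact maps (illustrative only).
import numpy as np

def consensus(maps, threshold=0.5):
    """maps: list of binary NxN contact maps; returns the majority-vote map."""
    stack = np.stack(maps).astype(float)
    return (stack.mean(axis=0) >= threshold).astype(int)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    predictors = [(rng.random((6, 6)) > 0.7).astype(int) for _ in range(3)]
    print(consensus(predictors))
```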
For component-based software, it also requires an understanding of the extra-functional properties of each individual component. However, black-box components, as most commonly provided by current component models, do not provide sufficient details to predict QoS. This paper describes which information about a component is needed to enable relevant analyses and emphasizes that prediction feedback should not be based on internal models, but on models which the domain experts understand. Hence, a mechanism is required to annotate analysis results back into the original models provided by the domain experts. This paper proposes a new approach to provide QoS predictions of distributed systems throughout the whole development lifecycle.",2009,0, 3556,A Domain Specific Language in Dependability Analysis,"Domain specific languages are gaining increasing popularity as they substantially leverage software development by bridging the gap between the technical and business areas. After a domain framework is produced, experts gain an effective vehicle for assessing the quality and performance of a system in the business-specific context. We consider the domain to be the dependability of a multi-agent system (MAS), for which a key requirement is efficient verification of a topology model of a power system. As a result, we come up with a reliability evaluation solution offering a significant rise in the level of abstraction for MAS utilized for power system topology verification. By means of this solution, safety engineers can perform analysis while the design is still incomplete. A new DSL is developed in Xtext in order to specify the structure of the system together with dependability extensions, which are further translated into dynamic fault trees using model-to-model transformations. Eclipse Ecore becomes the common denominator in which both metamodels' abstract syntax trees are defined. Finally, an expert is offered two ways of defining a model: through abstract syntax and textual concrete syntax, both of which are checked for consistency using the Object Constraint Language.",2009,0, 3557,A Model Based Framework for Specifying and Executing Fault Injection Experiments,"Dependability is a fundamental property of computer systems operating in critical environments. The measurement of dependability (and thus the assessment of the solutions applied to improve dependability) typically relies on controlled fault injection experiments that are able to reveal the behavior of the system in case of faults (to test error handling and fault tolerance) or extreme input conditions (to assess the robustness of system components). In our paper we present an Eclipse-based fault injection framework that provides a model-based approach and a graphical user interface to specify both the fault injection experiments and the run-time monitoring of the results. It automatically implements the modifications that are required for fault injection and monitoring using the Javassist technology; in this way it supports the dependability assessment and robustness testing of software components written in Java.",2009,0, 3558,A Comparison of Structural Testing Strategies Based on Subdomain Testing and Random Testing,"Both partition testing and random testing are commonly followed practices for the selection of test cases. For partition testing, the program's input domain is divided into subsets, called subdomains, and one or more representatives from each subdomain are selected to test the program.
In random testing, test cases are selected randomly from the entire program's input domain. The main aim of the paper is to compare the fault-detecting ability of partition testing and random testing methods. The results of comparing the effectiveness of partition testing and random testing may be surprising to many people. Even when partition testing is better than random testing at finding faults, the difference in effectiveness is marginal. Using some effectiveness metrics for testing and some partitioning schemes, this paper investigates formal conditions for partition testing to be better than random testing and vice versa.",2009,0, 3559,"Extended Dependability Analysis of Information and Control Systems by FME(C)A-technique: Models, Procedures, Application","This paper addresses the problems associated with dependability analysis of complex information and control systems (I&CS). The FME(C)A-technique is proposed as a unified approach to I&CS dependability assessment. The classic philosophy is extended by introducing new items into the assessed objects, relevant causes, assessed effects, assessed attributes and means used. FME(C)A-tables and models for the assessment of dependability (reliability, survivability and safety) attributes are constructed. Elements of an information technology for I&CS analysis are presented.",2009,0, 3560,Study on application of three-dimensional laser scanning technology in forestry resources inventory,"Terrestrial laser scanners, as efficient tools, have opened a wide range of application fields within a rather short period of time. Beyond interactive measurement in 3D point clouds, techniques for the automatic detection of objects and the determination of geometric parameters form a high priority research issue. The quality of 3D point clouds generated by laser scanners and the automation potential make terrestrial laser scanning also an interesting tool for forest inventory and management. The paper will first review current laser scanner systems from a technological point of view and discuss different scanner technologies and system parameters regarding their suitability for forestry applications. In the second part of the paper, results of a pilot study on the applicability of terrestrial laser scanners in forest inventory tasks will be presented. The study concentrates on the automatic detection of trees and the subsequent determination of tree height and diameter at breast height. Reliability and precision of techniques for automatic point cloud processing were analysed based on scans of a test region in Harbin Experimental Forest Farm. In the pilot study, which represents an early stage of software development, more than 95% of the trees in a test region could be detected correctly. Tree heights could be determined with a precision of 80 cm, and breast height diameters could be determined with a precision of less than 1.5 cm.",2009,0, 3561,Efficient fault-prone software modules selection based on complexity metrics,"In order to improve software reliability early, this paper proposes an efficient algorithm to select fault-prone software modules. Based on software module complexity metrics, the algorithm uses a modified cascade-correlation algorithm as a neural network classifier to select the fault-prone software modules.
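Record 3558 above compares the fault-detecting ability of partition and random testing. The analytic sketch below, with made-up failure rates and test budgets, shows the standard probability-of-detection formulas that such comparisons build on:

```python
# Probability of detecting at least one failure under random vs. partition testing,
# for hypothetical per-subdomain failure rates (illustrative, not the paper's analysis).
def p_detect_random(theta, n):
    """theta: overall failure rate; n: number of test cases."""
    return 1.0 - (1.0 - theta) ** n

def p_detect_partition(rates_and_tests):
    """rates_and_tests: list of (theta_i, n_i) pairs, one per subdomain."""
    miss = 1.0
    for theta_i, n_i in rates_and_tests:
        miss *= (1.0 - theta_i) ** n_i
    return 1.0 - miss

if __name__ == "__main__":
    # 100 tests overall; one small subdomain hides most failures
    subdomains = [(0.001, 90), (0.05, 10)]
    overall_theta = 0.001 * 0.9 + 0.05 * 0.1   # weighted by assumed subdomain size
    print(f"random:    {p_detect_random(overall_theta, 100):.3f}")
    print(f"partition: {p_detect_partition(subdomains):.3f}")
```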
Finally, by analyzing the algorithm's application in the project MAP, the paper shows the advantage of the algorithm.",2009,0, 3562,Software task processing with dependent modules and some measures,"This paper proposes a new model by combining an infinite-server queueing model for a multi-task processing software system with a perfect debugging model based on a Markov process with two types of faults, suggested by Lee, Nam and Park. We apply this model to module and integration testing in the testing process. Also, the expected number of tasks whose processing can be completed and the task completion probability are investigated under the proposed model. We interpret the meaning of the interaction between modules in software composed of dependent modules.",2009,0, 3563,Research on code pattern automata-based code error pattern automatic detection technique,"Nowadays, the field of code error research suffers from several shortcomings, e.g., obscure error generation scenarios and a lack of the formalization that is the basis for automatic error detection. Furthermore, the automation of error detection greatly affects the quality and efficiency of software testing. Therefore, deeper research on code errors needs to be done. First, this paper presents the definition of the code error pattern, based on the definition of a pattern. Second, it investigates the formal description of code error patterns. Then, it studies an automatic error pattern detection technique based on non-deterministic finite state automata and treats the error pattern matching technique as the key problem. Finally, some case studies are given. The preliminary results show the rationality of the code error pattern definition and the effectiveness of the error pattern formal description and matching technique.",2009,0, 3564,An approach of software quality prediction based on relationship analysis and prediction model,"By predicting, in the early stage of development, the quality of the software that will be produced, faults introduced in the design phase can be found early so that they are not left in the software product. Furthermore, it becomes easy for designers to adopt appropriate plans based on specific expectations of the target software. However, the traditional prediction models have the following shortcomings: 1) the relationship between attributes and metrics cannot be expressed effectively; 2) they lack the ability to process data both qualitatively and quantitatively; 3) they are not appropriate for cases with incomplete information. In this paper, a model built on a fuzzy neural network is shown to be good at quality prediction of object-oriented software.",2009,0, 3565,Resolving the impact of distributed renewable generation on directional overcurrent relay coordination: a case study,"Two approaches are proposed to solve the directional overcurrent relay coordination problem associated with the installation of distributed renewable generation (DRG) in interconnected power delivery systems (IPDS), depending on the existing system protection capability (adaptive or non-adaptive). For adaptive protection systems, the first proposed approach introduces a procedure to select the optimal minimum number of relays, their locations and new settings. This procedure is restricted by the available relay setting groups.
For non-adaptive protection systems, the second proposed approach implements a practice to obtain optimal minimum fault current limiter values (FCL) to limit DRG fault currents and restore relay coordination status without altering the original relay settings. An integration of the proposed two approaches is evaluated for IPDSs possessing both protection systems. Three scenarios are assessed for different numbers of DRGs, and DRG and fault locations using an optimisation model implemented in GAMS software and a developed MatLab code. The obtained results are reported and discussed.",2009,0, 3566,An end-to-end approach for the automatic derivation of application-aware error detectors,"Critical Variable Recomputation (CVR) based error detection provides high coverage for data critical to an application while reducing the performance overhead associated with detecting benign errors. However, when implemented exclusively in software, the performance penalty associated with CVR based detection is unsuitably high. This paper addresses this limitation by providing a hybrid hardware/software tool chain which allows for the design of efficient error detectors while minimizing additional hardware. Detection mechanisms are automatically derived during compilation and mapped onto hardware where they are executed in parallel with the original task at runtime. When tested using an FPGA platform, results show that our approach incurs an area overhead of 53% while increasing execution time by 27% on average.",2009,0, 3567,Automatic fault detection and diagnosis in complex software systems by information-theoretic monitoring,"Management metrics of complex software systems exhibit stable correlations which can enable fault detection and diagnosis. Current approaches use specific analytic forms, typically linear, for modeling correlations. In this paper we use normalized mutual information as a similarity measure to identify clusters of correlated metrics, without knowing the specific form. We show how we can apply the Wilcoxon rank-sum test to identify anomalous behaviour. We present two diagnosis algorithms to locate faulty components: RatioScore, based on the Jaccard coefficient, and SigScore, which incorporates knowledge of component dependencies. We evaluate our mechanisms in the context of a complex enterprise application. Through fault injection experiments, we show that we can detect 17 out of 22 faults without any false positives. We diagnose the faulty component in the top five anomaly scores 7 times out of 17 using SigScore, which is 40% better than when system structure is ignored.",2009,0, 3568,Exploiting refactoring in formal verification,"In previous work, we introduced Echo, a new approach to the formal verification of the functional correctness of software. Part of what makes Echo practical is a technique called verification refactoring. The program to be verified is mechanically refactored specifically to facilitate verification. After refactoring, the program is documented with low-level annotations, and a specification is extracted mechanically. Proofs that the semantics of the refactored program are equivalent to those of the original program, that the code conforms to the annotations, and that the extracted specification implies the program's original specification constitute the verification argument. 
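Record 3567 flags anomalies with the Wilcoxon rank-sum test on management metrics and diagnoses faulty components with a Jaccard-based RatioScore. The sketch below shows only those two building blocks on synthetic data; the NMI-based metric clustering, SigScore and the fault-injection setup are omitted:

```python
# Two building blocks in the spirit of record 3567: rank-sum anomaly flagging and a
# Jaccard similarity between sets of anomalous metrics (synthetic data, illustrative only).
import numpy as np
from scipy.stats import ranksums

def is_anomalous(baseline, recent, alpha=0.01):
    """Flag a metric whose recent window differs from its baseline (Wilcoxon rank-sum test)."""
    _, p_value = ranksums(baseline, recent)
    return p_value < alpha

def jaccard(a, b):
    """|A & B| / |A | B| between two sets of anomalous metrics."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a or b) else 0.0

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    baseline = rng.normal(10, 1, 200)
    recent = rng.normal(12, 1, 50)          # shifted mean simulates a fault's effect
    print("anomalous:", is_anomalous(baseline, recent))
    print("score:", jaccard({"cpu", "heap"}, {"heap", "threads"}))
```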
In this paper, we discuss verification refactoring and illustrate it with a case study of the verification of an optimized implementation of the advanced encryption standard (AES) against its official specification. We compare the practicality of verification using refactoring with traditional correctness proofs and refinement, and we assess its efficacy using seeded defects.",2009,0, 3569,Improving students' hardware and software skills by providing unrestricted access to state of the art design tools and hardware systems,"The technology and CAD tools employed by industry to design digital hardware evolve quickly and continuously. Well prepared engineers, who are able to produce actual designs and adapt themselves to the global world, are in demand. Educational programs must keep pace with technologies in common use in order to produce graduates who are competitive in the marketplace. Studies conducted at two different universities, Rose Hulman Institute of Technology and Washington State University, measure changes in student performance when all students have unlimited access to state of the art design tools and hardware systems. Data are collected from surveys, exams, and course assignments. Quantitative data are analyzed by comparison to historical data gathered from student groups that did not have unlimited access to hardware systems, and qualitative data are used to determine the subjective quality of each student's experience. Outcomes include: assessing whether the overall learning process is improved; whether students have a better knowledge of modern technologies and design methods; whether their comprehension of founding concepts has improved or faltered.",2009,0, 3570,Encouraging reusable network hardware design,"The NetFPGA platform is designed to enable students and researchers to build networking systems that run at line-rate, and to create re-usable designs to share with others. Our goal is to eventually create a thriving developer-community, where developers around the world contribute reusable modules and designs for the benefit of the community as a whole. To this end, we have created a repository of “User Contributed Designs” at NetFPGA.org. But creating an “open-source hardware” platform is quite different from software oriented open-source projects. Designing hardware is much more time consuming, and more error prone, than designing software, and so demands a process that is more focussed on verifying that a module really works as advertised, else others will be reluctant to use it. We have designed a novel process for contributing new designs. Each contributed design is specified entirely by a set of tests it passes. A developer includes a list of tests that their design will pass, along with an executable set of tests that the user can check against. Through this process, we hope to establish the right expectations for someone who reuses a design, and to encourage sound design practices with solid, repeatable and integrated testing. In this paper we describe the philosophy behind our process, in the hope that others may learn from it, as well as describe the details of how someone contributes a new design to the NetFPGA repository.",2009,0, 3571,Relative evaluation of partition algorithms for complex networks,"Complex networks partitioning consists in identifying denser groups of nodes. This popular research topic has applications in many fields such as biology, social sciences and physics.
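Record 3571 scores each community-detection algorithm by the normalized mutual information between its output and the planted partition of the Lancichinetti et al. benchmark. Computing NMI between two labelings is a one-liner with scikit-learn (the labelings below are hypothetical):

```python
# Normalized mutual information between two partitions of the same node set,
# as used in record 3571 to score community detection against a known partition.
from sklearn.metrics import normalized_mutual_info_score

ground_truth = [0, 0, 0, 1, 1, 1, 2, 2, 2]   # hypothetical planted communities
detected     = [0, 0, 1, 1, 1, 1, 2, 2, 0]   # labels returned by some algorithm

print(f"NMI = {normalized_mutual_info_score(ground_truth, detected):.3f}")
```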
This led to many different partition algorithms, most of them based on Newman's modularity measure, which estimates the quality of a partition. Until now, these algorithms were tested only on a few real networks or unrealistic artificial ones. In this work, we use the more realistic generative model developed by Lancichinetti et al. to compare seven algorithms: Edge-betweenness, Eigenvector, Fast Greedy, Label Propagation, Markov Clustering, Spinglass and Walktrap. We used normalized mutual information (NMI) to assess their performances. Our results show Spinglass and Walktrap are above the others in terms of quality, while Markov Clustering and Edge-Betweenness also achieve good performance. Additionally, we compared NMI and modularity and observed they are not necessarily related: some algorithms produce better partitions while getting lower modularity.",2009,0, 3572,Stochastic Analysis of CAN-Based Real-Time Automotive Systems,"Many automotive applications, including most of those developed for active safety and chassis systems, must comply with hard real-time deadlines, and are also sensitive to the average latency of the end-to-end computations from sensors to actuators. A characterization of the timing behavior of functions is used to estimate the quality of an architecture configuration in the early stages of architecture selection. In this paper, we extend previous work on stochastic analysis of response times for software tasks to controller area network messages, then compose them with sampling delays to compute probability distributions of end-to-end latencies. We present the results of the analysis on a realistic complex distributed automotive system. The distributions predicted by our method are very close to the probability of latency values measured on a simulated system. However, the faster computation time of the stochastic analysis is much better suited to the architecture exploration process, allowing a much larger number of configurations to be analyzed and evaluated.",2009,0, 3573,Matching schemas of heterogeneous relational databases,"Schema matching is a basic problem in many database application domains, such as data integration. The problem of schema matching can be formulated as follows: “given two schemas, Si and Sj, find the most plausible correspondences between the elements of Si and Sj, exploiting all available information, such as the schemas, instance data, and auxiliary sources”. Given the rapidly increasing number of data sources to integrate and due to database heterogeneities, manually identifying schema matches is a tedious, time consuming, error-prone, and therefore expensive process. As systems become able to handle more complex databases and applications, their schemas become large, further increasing the number of matches to be performed. Thus, automating this process, to make it faster and less labor-intensive, has been one of the main tasks in data integration. However, it is not possible to determine fully automatically the different correspondences between schemas, primarily because of the differing and often not explicated or documented semantics of the schemas. Several solutions to the schema matching problem have been proposed. Nevertheless, these solutions are still limited, as they do not explore most of the available information related to schemas and thus affect the result of integration.
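Record 3573 matches schema elements by exploiting most of the available schema information. The deliberately naive sketch below matches on element names only, to make the formulation concrete; a matcher in the spirit of the paper would also exploit types, instance data and auxiliary sources:

```python
# A simple name-similarity matcher between two relational schemas (illustrative only).
from difflib import SequenceMatcher

def name_similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match(schema_i, schema_j, threshold=0.6):
    """Return (element_i, element_j, score) pairs above the similarity threshold."""
    pairs = []
    for a in schema_i:
        best = max(schema_j, key=lambda b: name_similarity(a, b))
        score = name_similarity(a, best)
        if score >= threshold:
            pairs.append((a, best, round(score, 2)))
    return pairs

if __name__ == "__main__":
    s_i = ["cust_id", "cust_name", "order_date"]
    s_j = ["customer_id", "customer_name", "date_of_order"]
    for correspondence in match(s_i, s_j):
        print(correspondence)
```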
This paper presents an approach for matching schemas of heterogeneous relational databases that utilizes most of the information related to the schemas and indirectly explores their implicit semantics, which further improves the results of the integration.",2009,0, 3574,Assessing easiness with Froglingo,"Expressive power is well established as a dimension for measuring the quality of a computer language. Easiness is another dimension. It is the main stream of development in programming languages and database management. However, there has not been a method to precisely measure the easiness of a computer language. This article discusses easiness. Provided that a data model be easier than a programming language in representing the semantics of the given data model, this article concludes that Froglingo is the easiest in database application development and maintenance.",2009,0, 3575,The development of Software Certifier System (SoCfeS): The architecture and design,"In this information age, most businesses are highly dependent on the availability of ICT services, especially on software application components. The interest in the acquisition of high quality software has increased among various stakeholders. However, some pertaining problems are still being debated such as: (i) defining a mechanism for assessing software product quality; (ii) ensuring and offering software quality guarantee; and (iii) ensuring the continuous improvement of quality of software products. Therefore, a practical mechanism for assessment and certification is required to resolve these uncertainties. The fundamental model of certification or SCM-prod has been developed, evaluated and tested. It shows that the model and methodology are feasible and practical to implement in a real-world environment. Thus, a comprehensive model and support tool with intelligent aspects included is developed. The software, named SoCfeS (software certifier system), supports the software certification process and continuous improvement intelligently.",2009,0, 3576,Design and data processing of a real-time power quality monitoring instrument,"Power quality (PQ) monitoring is an important issue for electric utilities and many industrial power customers. This paper presents a new distributed monitoring instrument based on Digital Signal Processing (DSP) and virtual measuring technology. The instrument is composed of an EOC module, a Feeder Control Unit (FCU) and a Supervision Unit (SU). The EOC module is used to implement the data acquisition and high speed data transmission of busbar voltage; the FCU performs data acquisition and processing of feeders; the SU, based on virtual measuring technology, is used to further analyze and process power quality parameters and achieves data storage and management. Wavelet transformation, which is implemented in the SU, detects transient power quality disturbances, while digital filtering, windowing, the software fixed-frequency sampling method, linear interpolation and Fast Fourier Transformation (FFT) are realized by DSP. 
Therefore, the monitoring instrument not only implements real-time, comprehensive and high-precision monitoring and management of all power quality parameters, but also helps confirm the source of the disturbance, improve the quality of the power supply and increase the stability performance of the power system.",2009,0, 3577,Design of a remote-monitoring system of MOA based on GPRS,"With on-line monitoring technology that measures the resistive current of metal-oxide arresters (MOA), we can understand the performance condition of the MOA at any time without unnecessary power-cut-off overhauls. Thus we can detect the abnormal phenomena and hidden accidents of MOA in time, take measures in advance to prevent the accident from getting worse and avoid the economic loss resulting from the accident. To overcome the defects of this transmission method, a new method for remote monitoring of metal-oxide arresters is presented. The monitoring system consists of an on-site collection module, a GPRS transmission module and a remote monitoring center. The hardware and software design of the TMS320F2812 microprocessor-based data collection and processing module is introduced. Data reports can be generated by the system. The state of the MOA can be inspected conveniently and accurately. The electric power system can thus run in a reliable state.",2009,0, 3578,Assessment of flicker impact of fluctuating loads prior to connection,"With the acceptance of IEEE Std. 1453, many utility companies in North America are facing problems in applying the recommended concepts. These problems center around difficulties in predicting flicker produced by specific customers or loads before they are connected. The discussions in this paper are intended to serve as an overview of some of the methods that are commonly used, mostly outside North America, to perform pre-connection flicker assessments of fluctuating loads.",2009,0, 3579,Demand or request: Will load behave?,"Power planning engineers are trained to design an electric system that satisfies predicted electrical demand under stringent conditions of availability and power quality. Like responsible custodians, we plan for the provision of electrical sustenance and shelter to those in whose care regulators have given us the responsibility to serve. Though most customers accept this nurturing gladly, a growing number are concerned with the economic costs and environmental impacts of service at a time when technology (particularly distributed generation, storage, automation, and information networks) offers alternatives for localized control and competitive service. As customers' and their systems mature, a new relationship with the electricity provider is emerging. Demand response is perhaps the first unsteady step where the customer participates as a partner in system operations. This paper explores issues system planners need to consider as demand response matures to significant levels beyond direct load control and toward a situation where service is requested and bargains are reached with the electricity provider based on desired load behavior. On one hand, predicting load growth and behavior appears more daunting than ever. On the other, for the first time load becomes a new resource whose behavior can be influenced during system operations to balance system conditions.",2009,0, 3580,"Investigation of residential customer safety-line, neutral and earth integrity","Reverse polarity and neutral failures can produce potentially dangerous voltage levels within an electrical consumer's premises. 
While earthing at the consumer's premises is normally good during the installation, it may degrade over time. Existing conventional electromechanical energy meters do not detect such conditions at the consumer premises. Hence, an accurate detection of conditions such as reverse polarity, earthing and neutral failure and degradation is essential for safe and reliable operation of a household electrical system. It is highly desirable that a protection system is designed such that it should detect such conditions accurately and it should not be oversensitive, as this could lead to an unacceptably high level of “nuisance” operation. In addition, such a solution should be reliable, economical and easily adoptable into existing premises without any major modification to the installation. This paper is intended to derive various necessary indices to detect neutral and earthing failure or degradation and reverse polarity conditions at the electrical consumer's premises. The simulation is carried out with the MATLAB®/SIMULINK® software with the SimPowerSystems™ toolbox. These indices can be integrated into a smart meter or similar device to accurately detect earthing and neutral failure or degradation and reverse polarity conditions at consumer premises.",2009,0, 3581,A fault location and protection scheme for distribution systems in presence of dg using MLP neural networks,"Traditional electric distribution systems are radial in nature. These networks are protected by very simple protection devices such as over-current relays, fuses, and re-closers. The useful advantages promised by recent trends in distributed generation (DG) can only be fully achieved if the relevant concerns are deliberately taken into account. For example, penetration of DG disturbs the radial nature of conventional distribution networks. Therefore, protection coordination will be changed in some cases, and in some other cases it will be lost. The penetration of DG into distribution networks reinforces the necessity of designing new protection systems for these networks. One of the main capabilities that can improve the efficiency of new protection relays in distribution systems is exact fault locating. In this paper, a novel fault location and protection scheme has been presented for distribution networks with DG. The suggested approach is able to determine the accurate type and location of faults using MLP neural networks. As a case study, the proposed scheme has been assessed using software developed in MATLAB and DIgSILENT Power Factory 13.2 on a sample distribution network.",2009,0, 3582,An Adaptive mimic filter Based algorithm for the detections of CT saturations,"An adaptive mimic filter-based algorithm for detecting the saturation of current transformers (CTs) has been presented and implemented in this paper. First, an adaptive method was developed for obtaining the line impedance of a digital mimic filter. The variations of the obtained line impedance were then used to detect the CT saturations. By using the proposed algorithm, the saturation period of a current waveform can be accurately detected. This paper finally utilized the MATLAB/SIMULINK software and a DSP-based environment to verify the proposed algorithm. 
Test results show the proposed algorithm can accurately detect the CT saturations.",2009,0, 3583,Geometrical approach on masked gross errors for power systems state estimation,"In this paper, a geometrical based-index, called undetectability index (UI), that quantifies the inability of the traditional normalized residue test to detect single gross errors is proposed. It is shown that the error in measurements with high UI is not reflected in their residues. This masking effect is due to the “proximity” of a measurement to the range of the Jacobian matrix associated with the power system measurement set. A critical measurement is the limit case of measurement with high UI, that is, it belongs to the range of the Jacobian matrix, has an infinite UI index, its error is totally masked and cannot be detected in the normalized residue test at all. The set of measurements with high UI contains the critical measurements and, in general, the leverage points, however there exist measurements with high UI that are neither critical nor leverage points and whose errors are masked by the normalized residue test. In other words, the proposed index presents a more comprehensive picture of the problem of single gross error detection in power system state estimation than critical measurements and leverage points. The index calculation is very simple and is performed using routines already available in the existing state estimation software. Two small examples are presented to show the way the index works to assess the quality of measurement sets in terms of single gross error detection. The IEEE-14 bus system is used to show the efficiency of the proposed index to identify measurements whose errors are masked by the estimation processing.",2009,0, 3584,Hierarchical fault detection and diagnosis for unmanned ground vehicles,"This paper presents a fault detection and diagnosis (FDD) method for unmanned ground vehicles (UGVs) operating in multi agent systems. The hierarchical FDD method consisting of three layered software agents is proposed: Decentralized FDD (DFDD), centralized FDD (CFDD), and supervisory FDD (SFDD). Whereas the DFDD is based on modular characteristics of sensors, actuators, and controllers connected or embedded to a single DSP, the CFDD is to analyze the performance of vehicle control system and/or compare information between different DSPs or connected modules. The SFDD is designed to monitor the performance of UGV and compare it with local goal transmitted from a ground station via wireless communications. Then, all software agents for DFDD, CFDD, and SFDD interact with each other to detect a fault and diagnose its characteristics. Finally, the proposed method will be validated experimentally via hardware-in-the-loop simulations.",2009,0, 3585,Enterprise architecture dependent application evaluations,"Chief information officers (CIO) are faced with the increasing complexity of application landscapes. The task of specifying future architectures depends on an exact assessment of the current state. When CIOs have to evaluate the whole landscape or certain applications of it, they are facing an enormous challenge, since there is no common methodology for that purpose. Within this paper we address this task and present an approach for enterprise architecture dependent application evaluations. 
We outline why it is important not only to assess single applications by classical software metrics in order to get an indication of their quality, but also to regard the overall context of an enterprise application. To round up this contribution we present a brief example from a project with one of our industrial partners.",2009,0, 3586,Business service composability on the basis of trust,In the real world a service in a business is usually composed of many component services. These component services join together to form a composite of components. The trustworthiness of component services determines the trustworthiness of this composite. This trustworthiness of composite service has a large impact on the successful delivery of a service. In this paper we study that how we can determine the trustworthiness of this composite. Since the components in the composite form parallel or series and/or combination of parallel/series arrangements we use probability theory to determine the trustworthiness of the composite service. We use a case study to demonstrate the concepts.,2009,0, 3587,A parameter-free hybrid clustering algorithm used for malware categorization,"Nowadays, numerous attacks made by the malware, such as viruses, backdoors, spyware, trojans and worms, have presented a major security threat to computer users. The most significant line of defense against malware is antivirus products which detects, removes, and characterizes these threats. The ability of these AV products to successfully characterize these threats greatly depends on the method for categorizing these profiles of malware into groups. Therefore, clustering malware into different families is one of the computer security topics that are of great interest. In this paper, resting on the analysis of the extracted instruction of malware samples, we propose a novel parameter-free hybrid clustering algorithm (PFHC) which combines the merits of hierarchical clustering and K-means algorithms for malware clustering. It can not only generate stable initial division, but also give the best K. PFHC first utilizes agglomerative hierarchical clustering algorithm as the frame, starting with N singleton clusters, each of which exactly includes one sample, then reuses the centroids of upper level in every level and merges the two nearest clusters, finally adopts K-means algorithm for iteration to achieve an approximate global optimal division. PFHC evaluates clustering validity of each iteration procedure and generates the best K by comparing the values. The promising studies on real daily data collection illustrate that, compared with popular existing K-means and hierarchical clustering approaches, our proposed PFHC algorithm always generates much higher quality clusters and it can be well used for malware categorization.",2009,0, 3588,Page Rule-Line Removal Using Linear Subspaces in Monochromatic Handwritten Arabic Documents,"In this paper we present a novel method for removing page rule lines in monochromatic handwritten Arabic documents using subspace methods with minimal effect on the quality of the foreground text. We use moment and histogram properties to extract features that represent the characteristics of the underlying rule lines. A linear subspace is incrementally built to obtain a line model that can be used to identify rule line pixels. We also introduce a novel scheme for evaluating noise removal algorithms in general and we use it to assess the quality of our rule line removal algorithm. 
Experimental results presented on a data set of 50 Arabic documents, handwritten by different writers, demonstrate the effectiveness of the proposed method.",2009,0, 3589,Enterprise Architecture Analysis for Data Accuracy Assessments,"Poor data in information systems impede the quality of decision-making in many modern organizations. Manual business process activities and application services are never executed flawlessly, which results in steadily deteriorating data accuracy: the further away from the source the data gets, the poorer its accuracy becomes. This paper proposes an architecture analysis method based on Bayesian Networks to assess data accuracy deterioration in a quantitative manner. The method is model-based and uses the ArchiMate language to model business processes and the way in which data objects are transformed by various operations. A case study at a Swedish utility demonstrates the approach.",2009,0, 3590,Chlorophyll measurement from Landsat TM imagery,"Water quality is an important factor for human health and quality of life. This was recognized many years ago. Remote sensing can be used for various purposes. Environmental monitoring through the method of traditional ship sampling is time consuming and requires a high survey cost. This study uses an empirical model, based on actual water quality (chlorophyll) measurements from the Penang Strait, Malaysia, to predict chlorophyll based on the optical properties of satellite digital imagery. The feasibility of using remote sensing technique for estimating the concentration of chlorophyll using Landsat satellite imagery in Penang Island, Malaysia was investigated in this study. The objective of this study is to evaluate the feasibility of using Landsat TM image to provide useful data for the chlorophyll mapping studies. The chlorophyll measurements were collected simultaneously with the satellite image acquisition through a field work. The in-situ locations were determined using a handheld Global Positioning System (GPS). The surface reflectance values were retrieved using ATCOR2 in the PCI Geomatica 10.1.3 image processing software. The digital numbers for each band corresponding to the sea-truth locations were then extracted and converted into radiance values and reflectance values. The reflectance values were used for calibration of the water quality algorithm. The efficiency of the proposed algorithm was investigated based on the observations of correlation coefficient (R) and root-mean-square deviations (RMS) with the sea-truth data. Finally the chlorophyll map was color-coded and geometrically corrected for visual interpretation. This study shows that the Landsat satellite imagery has the potential to supply useful data for chlorophyll studies by using the developed algorithm. This study indicates that the chlorophyll mapping can be carried out using remote sensing technique by using Landsat imagery and the previously developed algorithm over Penang, Malaysia.",2009,0, 3591,Automated Refactoring Suggestions Using the Results of Code Analysis Tools,"Static analysis tools are used for the detection of errors and other problems in source code. The detected problems related to the internal structure of software can be removed by source code transformations called refactorings. To automate such source code transformations, refactoring tools are available. In modern integrated development environments, there is a gap between the static analysis tools and the refactoring tools. 
This paper presents an automated approach for the improvement of the internal quality of software by using the results of code analysis tools to call a refactoring tool to remove detected problems. The approach is generic, thus allowing the combination of arbitrary tools. As a proof of concept, this approach is implemented as a plug-in for the integrated development environment Eclipse.",2009,0, 3592,Quality of Code Can Be Planned and Automatically Controlled,"Quality of code is an important and critical health indicator of any software development project. However, due to the complexity and ambiguousness of calculating this indicator it is rarely used in commercial contracts. As programmers are much more motivated with respect to the delivery of functionality than quality of code beneath it, they often produce low quality code, which leads to post-delivery and maintenance problems. The proposed mechanism eliminates this lack of attention to quality of code. The results achieved after the implementation of the mechanism are more motivated programmers, higher project sponsor confidence and a predicted quality of code.",2009,0, 3593,Integration Test Order Strategies to Consider Test Focus and Simulation Effort,"The integration testing process aims at uncovering faults within dependencies between the components of a software system. Due to the lack of resources, it is usually not possible to test all dependencies. Fault prone dependencies have to be selected as test focus. This test focus has to be considered during the stepwise integration of the whole software system. An integration test order strategy has to devise an integration order that integrates dependencies selected as test focus in early integration steps. Furthermore the strategy has to minimize the effort to simulate not yet integrated components of the software system. Current approaches only focus on the reduction of the simulation effort, but do not take into account the test focus. This paper introduces an approach to determine an optimal integration testing order that considers both the test focus and the simulation effort. The approach is applied to nine real software systems and the results are compared to six approaches.",2009,0, 3594,Towards Automated Test Practice Detection and Governance,"The selection, monitoring, and adjustment of quality measures are fundamental to software engineering, and testing is a key quality assurance activity. In Small and Medium Enterprises (SMEs), it is often difficult and time consuming to manually ascertain the degree and type of test practice usage and related process compliance, thus such data collection may be omitted. Moreover, any manual data collection may not be objective, comprehensive, and dependable, since manual collection cannot typically be done transparently with software engineers. Considering test-driven development, the intention and the order of programming are important, and few clues are left ex post that can be objectively verified. This paper presents an approach that enables an automatic test practice detection capability using the SEEEK (Software Engineering Environment Event-driven framework) to support adaptable processes while ensuring process compliance and supporting governance. 
The results show the feasibility of this approach for automatically detecting test practices and adjusting developer task management accordingly.",2009,0, 3595,A Failure Detection Model Based on Message Delay Prediction,"Failure detection is a key technology for implementing a highly reliable system. It is usually based on a timeout mechanism to determine whether a process has failed or not. With the development of networks, old failure detectors without an adaptive mechanism cannot always meet the QoS requirements of applications. Adaptive failure detection requires that the failure detectors can dynamically adjust the detecting quality according to the requirements of applications and the variations of the network. A new failure detection model based on the predicted message delay is proposed in this paper. An adaptive failure detection algorithm is discussed and realized, which is based on predictions from historical message delay times. Experimental results show that the algorithm can satisfy the user's QoS demands on the failure detector to some extent.",2009,0, 3596,Optimized Multipinhole Design for Mouse Imaging,"The aim of this study was to enhance high-sensitivity imaging of a limited field of view in mice using multipinhole collimators on a dual head clinical gamma camera. A fast analytical method was used to predict the contrast-to-noise ratio (CNR) in many points of a homogeneous cylinder for a large number of pinhole collimator designs with modest overlap. The design providing the best overall CNR, a configuration with 7 pinholes, was selected. Next, the pinhole pattern was made slightly irregular to reduce multiplexing artifacts. Two identical, but mirrored 7-pinhole plates were manufactured. In addition, the calibration procedure was refined to cope with small deviations of the camera from circular motion. First, the new plates were tested by reconstructing a simulated homogeneous cylinder measurement. Second, a Jaszczak phantom filled with 37 MBq 99mTc was imaged on a dual head gamma camera, equipped with the new pinhole collimators. The image quality before and after refined calibration was compared for both heads, reconstructed separately and together. Next, 20 short scans of the same phantom were performed with single and multipinhole collimation to investigate the noise improvement of the new design. Finally, two normal mice were scanned using the new multipinhole designs to illustrate the reachable image quality of abdomen and thyroid imaging. The simulation study indicated that the irregular patterns suppress most multiplexing artifacts. Using body support information strongly reduces the remaining multiplexing artifacts. Refined calibration improved the spatial resolution. Depending on the location in the phantom, the CNR increased by a factor of 1 to 2.5 using the new instead of a single pinhole design. The first proof of principle scans and reconstructions were successful, allowing the release of the new plates and software for preclinical studies in mice.",2009,0, 3597,Application Of artificial immune system for detecting overloaded lines and voltage collapse prone buses in distribution network,"Biological immune systems are highly parallel, distributed, and adaptive systems, which use learning, memory, and associative retrieval to solve recognition and classification tasks. 
An Artificial Immune System (AIS) is capable of constructing and maintaining a dynamical and structural identity, capable of learning to identify previously unseen invaders and remembering what it has learnt. As a part of pioneering research in the application of AIS to electrical power distribution systems, an AIS based software has been developed for identification of voltage collapse and line overload prone areas in a distribution network. The applicability of AIS for this particular task is demonstrated on a 295-bus generic distribution system.",2009,0, 3598,Optimal Radial Basis Function Neural Network power transformer differential protection,"This paper presents a new algorithm for the protection of power transformers by using an optimal radial basis function neural network (ORBFNN). The ORBFNN based technique is amalgamated with the conventional differential protection scheme of the power transformer, and internal faults are precisely discriminated from the inrush condition. The proposed method depends neither on any threshold nor on the harmonic content of the differential current. The RBFNN is designed by using the particle swarm optimization (PSO) technique. The proposed RBFNN model has faster learning and detecting capability than the conventional neural networks. A comparison is made between the performance of the proposed ORBFNN and the feed forward back propagation neural network (FFBPNN) more commonly reported in the literature. The simulations of different faults, over-excitation, and switching conditions on three different power transformers are performed by using PSCAD/EMTDC software and the presented algorithm is evaluated by using MATLAB. The test results show that the new algorithm is quick and accurate.",2009,0, 3599,Application of Discrete Wavelet Transform for differential protection of power transformers,"This paper presents a novel formulation for differential protection of three-phase transformers. The discrete wavelet transform (DWT) is employed to extract transitory features of transformer three-phase differential currents to detect internal faulty conditions. The performance of the proposed algorithm is evaluated through simulation of faulty and non-faulty test cases on a power transformer using ATP/EMTP software. The optimal mother wavelet selection includes performance analysis of different mother wavelets and numbers of resolution levels. In order to test the formulation's performance, the proposed method was implemented in the MatLab® environment. Simulated comparative test results with a percentage differential protection with harmonic restraint formulation show that the proposed technique improves the discrimination performance. Simulated test cases of magnetizing inrush and close by external faults are also presented in order to test the performance of the proposed method in extreme conditions.",2009,0, 3600,Optimal integration of energy storage in distribution networks,"Energy storage, traditionally well established in the form of large scale pumped-hydro systems, is finding increased attraction in medium and smaller scale systems. Such expansion is entirely complementary to the wider uptake of intermittent renewable resources and to distributed generation in general, which are likely to present a whole range of new business opportunities for storage systems and their suppliers. 
In the paper, by assuming that the distribution system operator owns and operates the storage, a new software planning tool for distribution networks able to define the optimal placement, rating and control strategies of distributed storage systems that minimize the overall network cost is proposed. This tool will assist the system operators in defining better integration strategies of distributed storage systems in distribution networks and in assessing their potential as an option for a more efficient operation and development of future electricity distribution networks.",2009,0, 3601,Assessment and Improvement of Hang Detection in the Linux Operating System,"We propose a fault injection framework to assess hang detection facilities within the Linux operating system (OS). The novelty of the framework consists in the adoption of a more representative fault load than existing ones, and in the effectiveness in terms of number of hang failures produced; representativeness is supported by a field data study on the Linux OS. Using the proposed fault injection framework, along with realistic workloads, we find that the Linux OS is unable to detect hangs in several cases. We experience a relative coverage of 75%. To improve detection facilities, we propose a simple yet effective hang detector, which periodically tests OS liveness, as perceived by applications, by means of I/O system calls; it is shown that this approach can improve relative coverage up to 94%. The hang detector can be deployed on any Linux system, with an acceptable overhead.",2009,0, 3602,K-Stage Pipelined Bloom Filter for Packet Classification,"A Bloom filter is a simple space-efficient randomized data structure for representing a set in order to support membership queries. In recent years, Bloom filters have increased in popularity in database and networking applications. In this paper, we introduce a k-stage pipelined Bloom filter architecture to decrease power consumption. In the bit-array of a Bloom filter, bits corresponding to the index pointed to by hashing functions are checked and a “match”/“mismatch” is determined. The match/mismatch determination process can be organized in a k-stage pipelined Bloom filter architecture. We present a k-stage pipelined Bloom filter, the power consumption analysis and utilize a software packet classifier to customize the k-stage pipelined Bloom filter architecture in packet classification. The results of the software packet classifier with real packet traces show that more than 75% of mismatched packets can be detected by the first three stages of the pipelined Bloom filter architecture (the remaining 25% comprises 17% matched and 8% mismatched packets). Therefore, a 4-stage pipelined Bloom filter architecture with one hashing function in the first three stages and k - 3 parallel hashing functions in the last stage is more appropriate for power consumption optimization in packet classification.",2009,0, 3603,Rule-Based Problem Classification in IT Service Management,"Problem management is a critical and expensive element for delivering IT service management and touches various levels of managed IT infrastructure. While problem management has been mostly reactive, recent work is studying how to leverage large problem ticket information from similar IT infrastructures to proactively predict the onset of problems. 
Because of the sheer size and complexity of problem tickets, supervised learning algorithms have been the method of choice for problem ticket classification, relying on labeled (or pre-classified) tickets from one managed infrastructure to automatically create signatures for similar infrastructures. However, where there are insufficient preclassified data, leveraging human expertise to develop classification rules can be more efficient. In this paper, we describe a rule-based crowdsourcing approach, where experts can author classification rules and a social networking-based platform (called xPad) is used to socialize and execute these rules by large practitioner communities. Using real data sets from several large IT delivery centers, we demonstrate that this approach balances between two key criteria: accuracy and cost effectiveness.",2009,0, 3604,Simplify Stochastic QoS Admission Test for Composite Services through Lower Bound Approximation,"A composite service can have its overall quality of service (QoS) measure computed with the QoS measures of its constituent services. In the stochastic case of QoS modeling, accurate computation for the probability distribution of the composite QoS measure is NP-hard because of the inherent complexities of probability value calculation for the function of discrete random variables. However, given reasonable assumptions on the monotony of the composite QoS function and on the independence of constituent QoS measures, we have proposed a lower bound approximation algorithm that computes the approximate value of the composite QoS distribution for admission test purpose in much lower-order complexity of time even in the worst case. The effectiveness of the proposed method is verified and compared against the naive algorithm using simulative trace data.",2009,0, 3605,Research on Quantitative Evaluation for Integrity,"Integrity is one of essential properties of information security. It is necessary to analyze integrity of system quantitatively in order to protect the system security. For the purpose, we present formal definitions of integrity based on probabilistic computation tree logic (PCTL) and quantitative evaluation model of integrity. In the model, we model interoperations of system and environment by probabilistic automata and evaluate integrity quantitatively by probabilistic model checking algorithm. Analysis results show that the formal description of integrity is of great significance and evaluation results are different with different integrity goals even for the same system.",2009,0, 3606,Monitoring and management of structured peer-to-peer systems,"The peer-to-peer paradigm shows the potential to provide the same functionality and quality like client/server based systems, but with much lower costs. In order to control the quality of peer-to-peer systems, monitoring and management mechanisms need to be applied. Both tasks are challenging in large-scale networks with autonomous, unreliable nodes. In this paper we present a monitoring and management framework for structured peer-to-peer systems. It captures the live status of a peer-to-peer network in an exhaustive statistical representation. Using principles of autonomic computing, a preset system state is approached through automated system re-configuration in the case that a quality deviation is detected. 
Evaluation shows that the monitoring is very precise and lightweight and that preset quality goals are reached and kept automatically.",2009,0, 3607,High-precision orientation and skew detection for texts in scanned documents,"This paper describes an approach towards orientation and skew detection for texts in scanned documents. Before using OCR systems to obtain character information from images, a preprocessing stage, comprising a number of adjustments, has to be performed in order to obtain accurate results. One important operation that has to be considered is the skew correction, or deskewing, of the image, a fault that arises from an incorrect scanning process. This paper presents an iterative method for detecting the text orientation and skew angle, a method based on histogram processing.",2009,0, 3608,A Max-Min Multiobjective Technique to Optimize Model Based Test Suite,"Generally, quality software production seeks timely delivery with higher productivity at lower cost. Redundancy in a test suite raises the execution cost and wastes scarce project resources. In model-based testing, the testing process starts with earlier software developmental phases and enables fault detection in earlier phases. The redundancy in the test suites generated from models can be detected earlier as well and removed prior to its execution. The paper presents a novel max-min multiobjective technique incorporated into a test suite optimization framework to find a better trade-off between the intrinsically conflicting goals. For illustration, two objectives, i.e. coverage and size of a test suite, were used; however, the technique can be extended to more objectives. The study is associated with model based testing and reports the results of the empirical analysis on four UML based synthetic as well as industrial Activity Diagram models.",2009,0, 3609,An Approach to the Development of Inference Engine for Distributed System for Fault Diagnosis,"Reliable and fault tolerant computers are key to the success of the aerospace and communication industries. Designing a reliable digital system and detecting and repairing the faults are challenging tasks in order for the digital system to operate without failures for a given period of time. The paper presents a new and systematic software engineering approach to performing fault diagnosis in parallel and distributed computing. The purpose of the paper would be to demonstrate a method to build a fault diagnosis for a parallel and distributed computing system. The paper chooses a model that posed a tremendous challenge to the user for fault analysis. The model is the classic PMC model that happens to be a parallel and distributed computing model. The paper would also show a method for building an optimal inference engine by obtaining subgraphs that also preserve the necessary and sufficient conditions of the model. Coin words: Parallel and Distributed Computing, Artificial Intelligence.",2009,0, 3610,Using an Artificial Neural Network for Predicting Embedded Software Development Effort,"In this paper, we establish an effort prediction model using an artificial neural network (ANN) for complementing missing values. We add missing values to the data via collaborative filtering using Tsunoda et al.'s method. In addition, we perform an evaluation experiment to compare the accuracy of the ANN model with that of the MRA model using Welch's t-test. 
The results show that the ANN model is more accurate than the MRA model, since the mean errors of the ANN are statistically significantly lower.",2009,0, 3611,An Intelligent Decision Support Model for Actuarial Profession,"As actuarial analysis and evaluation is a time-consuming, complex and error-prone process, it can be improved or enhanced considerably by automated reasoning. Efforts to reduce the inaccuracy and incorrectness of analyses and to enhance the confidence levels of actuarial analysis have led to the development of an intelligent decision support system called ActuaExpert, which assists, not replaces, actuaries. ActuaExpert assumes the role of a hypothetical actuary capable of assessing risks, creating policies that minimize risk and its financial impacts on companies, and maximizing the profits for insurance companies. It has a knowledge base containing the expertise of statistics, finance, and business, and a case base of past episodes and consequences of decisions. By combining knowledge-based problem solving with case-based reasoning, ActuaExpert demonstrates forms of intelligent behavior not yet observed in traditional decision support systems and expert systems.",2009,0, 3612,Machine Vision Based Image Analysis for the Estimation of Pear External Quality,"The research on real time fruit quality detection with machine vision is an attractive and prospective subject for improving marketing competition and post harvesting value-added processing technology of fruit products. However, the farm products with different varieties and different quality have caused tremendous losses in economy due to lacking the post-harvest inspecting standards and measures in China. In view of the existing situations of fruit quality detection and the broad application prospect of machine vision in quality evaluation of agricultural products in China, the methods to detect the external quality of pear by machine vision were researched in this work. It aims at solving the problems, such as fast processing the large amount of image information, processing capability and increasing precision of detection, etc. The research is supported by the software of Lab Windows/CVI of NI Company. The system can be used for fruit grading by the external qualities of size, shape, color and surface defects. Some fundamental theories of machine vision based on virtual instrumentation were investigated and developed in this work. It is testified that machine vision is an alternative to unreliable manual sorting of fruits.",2009,0, 3613,Reliability Modeling and Analysis of Safety-Critical Manufacture System,"There are working, fail-safe and fail-dangerous states in safety-critical manufacture systems. This paper presents three typical safety-critical manufacture system architecture models: series, parallel and series-parallel system, whose components lifetime distributions are general forms. Also the reliability related indices, such as the probabilities that the system in these states and the mean times for the system fail-safe and fail-dangerous, are derived respectively. And the relationships among the obtained indices of the three system architecture models are analyzed. Finally some numerical examples are employed to elaborate the results obtained in this paper. 
The derived index formulas are new results that require no component lifetime distribution assumptions, and they are of significant value for evaluating manufacture system reliability and improving manufacture system safety design.",2009,0, 3614,Research on the Application of Data Mining in Software Testing and Defects Analysis,"Highly dependable software is not only one of the commanding points of software technology development, but also an essential foundation of the software industry. This paper summarizes the newest research on data mining for software credibility testing, appraisal and related techniques, and elaborates the application of data mining technology in software defect testing, including the data mining methods commonly used in defect testing, data mining systems and software testing management systems. It specifically introduces the application of association rule based software defect analysis techniques to the different classifications of software defects, and proposes an association rule based software defect evaluation method, the purpose of which is to decrease software defects and to achieve rapid growth in software dependability.",2009,0, 3615,V-MCS: A configuration system for virtual machines,"Virtual machine (VM) technology encapsulates shared computing resources into secure, stable, isolated and customizable private computing environments. While service-oriented computing becomes more and more a norm of computing, VM becomes a must-have common structure. However, creating and customizing a VM system on different hardware/software environments to meet versatile demands is a state-of-the-art task, especially for casual users working in new computing environments. In addition, VM configuration without system support is tedious, time consuming, and error prone. In this study, we propose a virtual machine configuration system (V-MCS) for tackling this issue. V-MCS takes a systematic approach to enhance the flexibility and usability of VM. It provides an easy-to-use Web interface to users to create their preferred configurations, and to convert the configurations into PAN documents for human-computer interaction and XML documents for machine automation. The underlying definition component parses the configurations and the spawn component generates customized VMs on the fly. V-MCS maintains and deploys these two-level documents when users log in in the future. With the help of V-MCS, users can generate their customized VMs easily and swiftly. V-MCS has been implemented and tested. Experimental results match the design goal well.",2009,0, 3616,Reliability-aware scalability models for high performance computing,"Scalability models are powerful analytical tools for evaluating and predicting the performance of parallel applications. Unfortunately, existing scalability models do not quantify failure impact and therefore cannot accurately account for application performance in the presence of failures. In this study, we extend two well-known models, namely Amdahl's law and Gustafson's law, by considering the impact of failures and the effect of fault tolerance techniques on applications. The derived reliability-aware models can be used to predict application scalability in failure-present environments and evaluate fault tolerance techniques. 
Trace-based simulations via real failure logs demonstrate that the newly developed models provide a better understanding of application performance and scalability in the presence of failures.",2009,0, 3617,Cluster fault-tolerance: An experimental evaluation of checkpointing and MapReduce through simulation,"Traditionally, cluster computing has employed checkpointing to address fault tolerance. Recently, new models for parallel applications have grown in popularity namely MapReduce and Dryad, with runtime systems providing their own re-execute based fault tolerance mechanisms, but with no analysis of their failure characteristics. Another development is the availability of failure data spanning years for systems of significant size at Los Alamos National Labs (LANL), but the time between failure (TBF) for these systems is a poor fit to the exponential distribution assumed by optimization work in checkpointing, bringing these results into question. The work in this paper describes a discrete event simulation driven by the LANL data and by models of parallel checkpointing and MapReduce tasks. The simulation allows us to then evaluate and assess the fault tolerance characteristics of these tasks with the goal of minimizing the expected running time of a parallel program in a cluster in the presence of faults for both fault tolerance models.",2009,0, 3618,GridAtlas A grid application and resource configuration repository and discovery service,"Although access to grid resources is realized through a standardized interface, independent grid resources are not only managed autonomously but are also accessed as independent entities. Such environment results in configuration differences among individual resources forcing users that access those resources to deal with the variability in resource configurations. This behaviour breaks the concept of interpreting the grid as a unified entity and forces the users to think of the grid in terms of individual resources. Concretely, this variability is expressed through the requirement for the users to explicitly state application installation properties on individual resources during each job submission. This is a tedious, error-prone and unnecessary process that acts as a barrier in the use of the grid. In this paper, a tool named GridAtlas is presented that keeps up with the details of individual resource and application configurations and makes such data easily accessible from a well-known location through Web-service API calls or a Web interface. This paper describes the architecture of the GridAtlas service along with use cases where GridAtlas has been successfully applied and illustrates the benefit of such a service in real grid environments.",2009,0, 3619,Benchmarking Quality-Dependent and Cost-Sensitive Score-Level Multimodal Biometric Fusion Algorithms,"Automatically verifying the identity of a person by means of biometrics (e.g., face and fingerprint) is an important application in our day-to-day activities such as accessing banking services and security control in airports. To increase the system reliability, several biometric devices are often used. Such a combined system is known as a multimodal biometric system. 
This paper reports a benchmarking study carried out within the framework of the BioSecure DS2 (Access Control) evaluation campaign organized by the University of Surrey, involving face, fingerprint, and iris biometrics for person authentication, targeting the application of physical access control in a medium-size establishment with some 500 persons. While multimodal biometrics is a well-investigated subject in the literature, there exists no benchmark for a fusion algorithm comparison. Working towards this goal, we designed two sets of experiments: quality-dependent and cost-sensitive evaluation. The quality-dependent evaluation aims at assessing how well fusion algorithms can perform under changing quality of raw biometric images principally due to change of devices. The cost-sensitive evaluation, on the other hand, investigates how well a fusion algorithm can perform given restricted computation and in the presence of software and hardware failures, resulting in errors such as failure-to-acquire and failure-to-match. Since multiple capturing devices are available, a fusion algorithm should be able to handle this nonideal but nevertheless realistic scenario. In both evaluations, each fusion algorithm is provided with scores from each biometric comparison subsystem as well as the quality measures of both the template and the query data. The response to the call of the evaluation campaign proved very encouraging, with the submission of 22 fusion systems. To the best of our knowledge, this campaign is the first attempt to benchmark quality-based multimodal fusion algorithms. In the presence of changing image quality, which may be due to a change of acquisition devices and/or device capturing configurations, we observe that the top performing fusion algorithms are those that exploit automatically derived quality measurements. Our evaluation also suggests that while using all the available biometric sensors can definitely increase the fusion performance, this comes at the expense of increased cost in terms of acquisition time, computation time, the physical cost of hardware, and its maintenance cost. As demonstrated in our experiments, a promising solution which minimizes the composite cost is sequential fusion, where a fusion algorithm sequentially uses match scores until a desired confidence is reached, or until all the match scores are exhausted, before outputting the final combined score.",2009,0, 3620,Interaction-sensitive synthesis of architectural tactics in connector designs,"During architectural design, the architect has to come up with architectural structures and tactics that aim at the fulfillment of quality attribute requirements. Architectural structures are built from components that are supposed to interact. Connector types typically crosscut connector designs and demand modularized treatment during architecture design. The synthesis of tactics within connector designs, however, turns out to be a major challenge. This is due to the fact that they are likely to affect each other in the final system where they need to be mutually integrated. This mutual influence is called tactic interaction. 
In this paper, we describe an approach towards detecting such tactic interactions during connector design. Our approach is integrated in a commercial architecture design tool supporting interaction-sensitive synthesis of architectural tactics during connector design activities.",2009,0, 3621,COBAREA: The COpula-BAsed REliability and Availability Modeling Environment,"Traditional algorithms for the analysis of fault trees (FT) and reliability block diagrams (RBD) rely on the assumption that there are no dependencies in the failure and repair behavior and thus independence assumptions simplify the calculation. In practice, however, the components of a system are usually not independent. Prominent examples for inter-component dependencies include failures with a common cause (e.g. due to spatial closeness or a shared design), failure propagation, limited repair resources, failures induced by repair, and overload due to failure. Using traditional evaluation techniques implies neglecting these system properties which may lead to over-optimistic results. As an alternative approach to deal with inter-component dependencies, we present a tool based on copulas. Copulas are a way of specifying joint distributions if only the marginal probabilities are known. In terms of system reliability, this can be interpreted as inferring the system state vector probability from the component state probabilities. What makes copulas a valuable modeling method for large reliability models is the separation of the component distributions (the marginals) and the dependencies. Therefore, copulas can be used with arbitrary fault tree evaluation algorithms.",2009,0, 3622,A Concept of System Usability Assessment: System Attentiveness as the Measure of Quality,"The goal of this paper is to present novel metrics for system usability assessment and quality assessment (SQA). Proposed metrics should provide means of capturing overall system interference with regular daily routines and habits of system users, referred to as “attentive interference”. We argue that assuring the system is attentive proves essential when trying to mitigate risks related to system rejection by the intended users.",2009,0, 3623,Performance of FMIPv6-based cross-layer handover for supporting mobile VoIP in WiMAX networks,"This paper presents validation and evaluation of mobile VoIP support over WiMAX networks using the FMIPv6-based cross layer handover scheme. A software module has been implemented for the FMIPv6-based handoff scheme. The handoff delay components are formulated. To evaluate its support of mobile VoIP, we carefully assess the handoff delay, the total delay, and the R factor, which is a representation of the voice user satisfaction degree. Simulation results show that the cross-layering handoff scheme, as compared with the non-cross-layer scheme, successfully decreases layer-3 handoff delay by almost 50%, and is therefore promising for supporting mobile VoIP services. We believe this is the first performance evaluation work for the FMIPv6-based cross-layer scheme, and hence an important work for the WiMAX research community.",2009,0, 3624,Mining quantitative class-association rules for software size estimation,"Associative models are usually applied in knowledge discovery problems in order to find patterns in large databases containing mainly nominal data. This work is focused on two different aspects, the predictive use of association rules and the management of quantitative attributes. 
The aim is to induce class association rules that allow predicting software size from attributes obtained in early stages of the project. In this application area, most of the attributes are continuous; therefore, they should be discretized before generating the rules. Discretization is a data mining preprocessing task having a special importance in association rule mining since it has a significant influence on the quality and the predictive precision of the induced rules. In this paper, a multivariate supervised discretization method is proposed, which takes into account the predictive purpose of the association rules.",2009,0, 3625,Exploiting scientific workflows for large-scale gene expression data analysis,"Microarrays are state-of-the-art technologies for the measurement of the expression of thousands of genes in a single experiment. The treatment of these data is typically performed with a wide range of tools, but the understanding of complex biological systems by means of gene expression usually requires integrating different types of data from multiple sources and different services and tools. Many efforts are being made in the new area of scientific workflows in order to create a technology that links both data and tools to create workflows that can easily be used by researchers. Current technologies in this area are not yet mature, making their use by researchers arduous. In this paper we present an architecture that helps researchers to perform large-scale gene expression data analysis with cutting-edge technologies. The main underlying idea is to automate and rearrange the activities involved in gene expression data analysis, in order to free the user from superfluous technological details and tedious and error-prone tasks.",2009,0, 3626,Negotiation based advance reservation priority grid scheduler with a penal clause for execution failures,"Utility computing in a grid demands much more adaptability and dynamism when certain levels of commitments are to be complied with. Users with distinct priorities are categorized on the basis of the types of organizations or applications they belong to and can submit multiple jobs with varying specific needs. The interests of consumers and resource service providers must be watched equally, with a focus on commercial aspects as well. At the same time, the quality of the service must be ascertained to a committed level, thus enforcing an agreement. The authors propose an algorithm for a negotiation based scheduler that dynamically analyses and assesses the incoming jobs in terms of priorities and requirements, and reserves them to resources after negotiations and match-making between the resource providers and the users. The jobs thus reserved are allocated resources for the future. The performance evaluation of the scheduler for various parameters was done through simulations. The results were found to be optimal after incorporating advance reservation with dynamic priority control over job selection, taking into account the impact of resource failures and introducing economic policies and penalties.",2009,0, 3627,Stableness in large join query optimization,"In the relational database model, the use of exhaustive search methods in large join query optimization is prohibitive because of the exponential increase of the search space. An alternative widely discussed is the use of randomized search techniques.
Several previous studies have shown that the use of randomized sampling in query optimization permits finding, on average, near-optimal plans in polynomial time. However, due to their random components, the quality of yielded plans for the same query may vary a lot, making the response time of a submitted query unpredictable. On the other hand, the use of heuristic optimization may increase the stability of response time. This characteristic is essential in environments where response time must be predicted. In this paper, we will compare a randomized algorithm and a heuristic algorithm applied to large join query optimization. We used an open source DBMS as an experimental framework and we compared the quality and stability of these algorithms.",2009,0, 3628,Hand tracking and trajectory analysis for physical rehabilitation,"In this work we present a framework for physical rehabilitation, which is based on hand tracking. One particular requirement in physical rehabilitation is the capability of the patient to correctly reproduce a specific path, following an example provided by the medical staff. Currently, these assignments are typically performed manually, and a nurse or doctor, who supervises the correctness of the movement, constantly assists the patient throughout the whole rehabilitation process. With the proposed system, our aim is to provide medical institutions and patients with a low-cost and portable instrument to automatically assess the rehabilitation improvements. To evaluate the performance of the exercise, and to determine the distance between the trial and the reference path, we adopted the dynamic time warping (DTW) and the longest common sub-sequence (LCSS) as discriminating metrics. Trajectories and numerical values are then stored to track the history of the patient and appraise the improvements of the rehabilitation process over time. Thanks to the tests conducted with real patients, it has been possible to evaluate the quality of the proposed tool, in terms of both graphical interface and functionalities.",2009,0, 3629,Preserving Cohesive Structures for Tool-Based Modularity Reengineering,"The quality of software systems heavily depends on their structure, which affects maintainability and readability. However, the ability of humans to cope with the complexity of large software systems is limited. To support reengineering large software systems, software clustering techniques that maximize module cohesion and minimize inter-modular coupling have been developed. The main drawback of these approaches is that they might pull apart elements that were thoughtfully placed together. This paper describes how strongly connected component analysis, dominance analysis, and intra-modular similarity clustering can be applied to identify and to preserve cohesive structures in order to improve the result of reengineering. The use of the proposed method allows a significant reduction of the number of component movements. As a result, the probability of false component movements is reduced. The proposed approach is illustrated by statistics and examples from 18 open source Java projects.",2009,0, 3630,Are There Language Specific Bug Patterns? Results Obtained from a Case Study Using Mozilla,"A lot of information can be obtained from configuration management systems and post-release bug databases like Bugzilla. In this paper we focus on the question of whether there are language specific bug patterns in large programs.
For this purpose we implemented a system for extracting the necessary information from the Mozilla project files. A comparison of the extracted information with respect to the programming language showed that there are bug patterns specific to programming languages. In particular we found that Java files of the Mozilla project are less error prone than C and C++ files. Moreover, we found out that the bug lifetime when using Java was almost double the lifetime of bugs in C or C++ file.",2009,0, 3631,CMS-Based Web-Application Development Using Model-Driven Languages,"Content management systems (CMS) are typically regarded as critical software platforms for the success of organizational web sites and intranets. Although most current CMS systems allow their extension through the addition of modules/components, these are usually built using the typical source-code-oriented software development process, which is slow and error-prone. On the other hand, a MDE-oriented development process is centered on models, which represent the system and are used to automatically generate all corresponding artifacts, such as source-code and documentation. This paper describes our proposal for a MDE approach to address the development of web-applications based on CMS systems. This approach is based on the creation of two CMS-oriented languages (situated at different levels of abstraction, and are used to both quickly model a web-application and provide a common ground for the creation of additional CMS-oriented languages), and a mechanism for the processing of models specified using those languages. Those models are then to be deployed to a target CMS platform by means of code generation or model interpretation/execution mechanisms.",2009,0, 3632,Filtering System Metrics for Minimal Correlation-Based Self-Monitoring,"Self-adaptive and self-organizing systems must be self-monitoring. Recent research has shown that self-monitoring can be enabled by using correlations between monitoring variables (metrics). However, computer systems often make a very large number of metrics available for collection. Collecting them all not only reduces system performance, but also creates other overheads related to communication, storage, and processing. In order to control the overhead, it is necessary to limit collection to a subset of the available metrics. Manual selection of metrics requires a good understanding of system internals, which can be difficult given the size and complexity of modern computer systems. In this paper, assuming no knowledge of metric semantics or importance and no advance availability of fault data, we investigate automated methods for selecting a subset of available metrics in the context of correlation-based monitoring. Our goal is to collect fewer metrics while maintaining the ability to detect errors. We propose several metric selection methods that require no information beside correlations. We compare these methods on the basis of fault coverage. We show that our minimum spanning tree-based selection performs best, detecting on average 66% of faults detectable by full monitoring (i.e., using all considered metrics) with only 30% of the metrics.",2009,0, 3633,Combinatorial Approach for Automated Platform Diversity Testing,"In recent years, product line engineering has been used effectively in many industrial setups to create a large variety of products. One key aspect of product line engineering is to develop re-usable assets often referred to as a platform. 
Such software platforms are inherently complex due to the requirement of providing diverse functionalities, thereby leading to a combinatorial test data explosion problem while validating these platforms. In this paper, we present a combinatorial approach for testing varied features and data diversity present within the platform. The proposed solution effectively takes care of complex interdependencies among diversity features and generates only valid combinations for test scenarios. We also developed a prototype tool based on our proposed approaches to automate the platform testing. As part of our case study, we have used our prototype to validate a software platform widely being used across Philips Medical Systems (PMS) products. Initial results confirm that our approach significantly improves the overall platform testing process by reducing testing effort and improves the quality of the platform by detecting all interaction faults.",2009,0, 3634,A Quality Perspective of Software Evolvability Using Semantic Analysis,"Software development and maintenance are highly distributed processes that involve a multitude of supporting tools and resources. Knowledge relevant to these resources is typically dispersed over a wide range of artifacts, representation formats, and abstraction levels. In order to stay competitive, organizations are often required to assess and provide evidence that their software meets the expected requirements. In our research, we focus on assessing non-functional quality requirements, specifically evolvability, through semantic modeling of relevant software artifacts. We introduce our SE-Advisor that supports the integration of knowledge resources typically found in software ecosystems by providing a unified ontological representation. We further illustrate how our SE-Advisor takes advantage of this unified representation to support the analysis and assessment of different types of quality attributes related to the evolvability of software ecosystems.",2009,0, 3635,A Controlled Natural Language Approach for Integrating Requirements and Model-Driven Engineering,"Despite the efforts made during the last decades, Software Engineering still presents several issues concerning software products' quality. Requirements Engineering plays an important role regarding software quality, since it deals with the clear definition of the target system's scope. Moreover, Requirements Engineering is crucial to deal with change management, which is required to ensure that the final product reflects the stakeholders' expectations, namely the client and end-users business-related needs. We advocate the need to address the open issues regarding the requirements development process, namely to mitigate the drawbacks of using informal natural language, such as ambiguity and inconsistency. Moreover, we recognize the importance of automation to enhance productivity by avoiding repetitive and error-prone activities. In this paper, we propose a new socio-technical approach to overcome these software quality problems, consisting of the deep integration of Requirements Engineering with Model-Driven Engineering processes. This approach is based upon a controlled natural language for requirements specification, supporting the automatic extraction and verification of requirements models with Natural Language Processing techniques.
The current results consist of the development of a Wiki-based tool prototype to validate our research ideas.",2009,0, 3636,Scenario-Based Genetic Synthesis of Software Architecture,"Software architecture design can be regarded as finding an optimal combination of known general solutions and architectural knowledge with respect to given requirements. Based on previous work on synthesizing software architecture using genetic algorithms, we propose a refined fitness function for assessing software architecture in genetic synthesis, taking into account the specific anticipated needs of the software system under design. Inspired by real life architecture evaluation methods, the refined fitness function employs scenarios, specific situations possibly occurring during the lifetime of the system and requiring certain modifiability properties of the system. Empirical studies based on two example systems suggest that using this kind of fitness function significantly improves the quality of the resulting architecture.",2009,0, 3637,Feedback-Based Error Tracking for AVS-M,"AVS-M is a video encoding standard used for mobile video in wireless environments, developed by the Audio and Video Coding Standard Working Group of China. In order to cope with burst errors in error-prone wireless networks, in this paper, we present a feedback-based error tracking mechanism, which utilizes a feedback channel to judge which areas are contaminated by the prediction from their preceding frames. Through this positioning, we can terminate the error propagation effects by INTRA refreshing the affected areas. Simulations demonstrate that the proposed algorithm can effectively stop the quality degradation and does not cause much bit-rate increase.",2009,0, 3638,Research of Software Defect Prediction Model Based on Gray Theory,"The software testing process is an important stage in the software life cycle to improve software quality. A large amount of software data and information can be obtained. Based on analyzing the source and the type of software defects, this paper describes several uses of the defect data and explains how to estimate the defect density of the software by using the software defect data collected in practical work, and how to use the GM model to predict and assess software reliability.",2009,0, 3639,Sub-Path Congestion Control in CMT,"Using the application of block data transfer, we investigate the performance of concurrent multipath transfer using SCTP multihoming (CMT) with a congestion control policy under the scenario where the sender is constrained by the receive buffer (rbuf). We find that the existing policy has some defects in the aspect of bandwidth-aware source scheduling. Based on this, we propose a sub-path congestion-control policy for SCTP (SPCC-SCTP) with bandwidth-aware source scheduling by dividing an association into sub-paths based on shared bottleneck detection to overcome existing flaws of standard SCTP in supporting the multi-homing feature in the case of concurrent multipath transfer (CMT). The performance of the SPCC-SCTP is assessed through ns-2 experiments in a simplified Diff-Serv network. Simulation results demonstrate the effectiveness of our proposed mechanism and invite further research.",2009,0, 3640,The Application of Fault Tree Analysis in Software Project Risk Management,"The fault tree model has great significance for software project risk management.
According to the standard fault-tree model, this paper establishes the corresponding mathematical model and sets up the software fault tree model of software project, analyzes project risk probability and influence coefficient combined with the actual software project risk management; sequentially lays a theoretical foundation for better controlling software project risk management.",2009,0, 3641,CodeAuditor: A Vulnerability Detection Framework Based on Constraint Analysis and Model Checking,"Open source applications have flourished over recent years. Meanwhile security vulnerabilities in such applications have grown. Since manual code auditing is error-prone, time-consuming and costly, automatic solutions have become necessary. In this paper we address program vulnerabilities by static code analysis. First, we use flow-insensitive and interprocedural constraint-based analysis to extract the vulnerability detection model from the source code. Second, we employ model checking to solve the model. In addition, we do alias analysis to improve the correctness and precision of the detection model. The presented concepts are targeted at the general class of buffer-related vulnerabilities and can be applied to the detection of vulnerability types such as buffer overflow, format string attack, and code injection. CodeAuditor, the prototype implementation of our methods, is targeted at detecting buffer overflow vulnerabilities in C source code. It can be regarded as a vulnerability framework in which a variety of analysis and model checking tools can be incorporated. With this tool, 18 previously unknown vulnerabilities in six open source applications were discovered and the observed false positive rate was at around 23%.",2009,0, 3642,An Early Detecting All-Zero DCT Blocks For Avs-P2,"This paper presents an efficient algorithm to reduce redundant DCT and quantization computations for AVS-P2 encoding. A theoretical analysis is performed to study the sufficient condition for DCT coefficients to be quantized to zeros in AVS-P2. As a result, a sufficient condition derived to early detect all-zero 8 x 8 DCT blocks. Compared with original algorithms in reference software rm52j, the proposed algorithm provides a new method and efficient condition to predict all-zero DCT blocks. The experimental results demonstrate this new all-zero block method reduces the computational complexity of AVS-P2 encoder remarkably while the degradation in video quality is negligible.",2009,0, 3643,Automatic Inspection of Print Quality of Glycemia Detection Biochips,"Biochip plays more and more important role in medical analysis and test. There is a need to enhance the effectiveness of biochip inspection. To detect automatically the print defects of the PET board of glycemia detection biochip, the fast Hough Transform algorithm was used for sub-image segmentation based on circle detection, and a modified method with limited-value-based image subtraction was applied to recognize the print patterns. The corresponding software was developed to realize the automatic inspection function. The results indicated that the presented approach is applicable and can fulfill real-time detection accurately and effectively.",2009,0, 3644,Maximum Throughput Obtaining of IEEE 802.15.3 TDMA Mechanism under Error-Prone Channel,"IEEE 802.15.3 efficiently uses time division multiple accesses (TDMA) to support the quality of service (QoS) for multimedia traffic or the transfer of multi-megabyte data for music and image files. 
In the TDMA mechanism for an allocated channel time (channel time allocation, CTA) and known bit error rate of the channel, the throughput can be maximized by dynamically adjusting the frame size. In this paper a throughput model under non-ideal channel condition was formulated, and then the adaptive frame size can be calculated from the model. In addition a feasible implementation of this adaptive scheme is presented. The mathematical analysis and simulation results demonstrate the effectiveness of our adaptive scheme.",2009,0, 3645,Changes and bugs Mining and predicting development activities,"Software development results in a huge amount of data: changes to source code are recorded in version archives, bugs are reported to issue tracking systems, and communications are archived in e-mails and newsgroups. We present techniques for mining version archives and bug databases to understand and support software development. First, we introduce the concept of co-addition of method calls, which we use to identify patterns that describe how methods should be called. We use dynamic analysis to validate these patterns and identify violations. The co-addition of method calls can also detect cross-cutting changes, which are an indicator for concerns that could have been realized as aspects in aspect-oriented programming. Second, we present techniques to build models that can successfully predict the most defect-prone parts of large-scale industrial software, in our experiments Windows Server 2003. This helps managers to allocate resources for quality assurance to those parts of a system that are expected to have most defects. The proposed measures on dependency graphs outperformed traditional complexity metrics. In addition, we found empirical evidence for a domino effect, i.e., depending on defect-prone binaries increases the chances of having defects.",2009,0, 3646,An investigation of the relationships between lines of code and defects,"It is always desirable to understand the quality of a software system based on static code metrics. In this paper, we analyze the relationships between lines of code (LOC) and defects (including both pre-release and post-release defects). We confirm the ranking ability of LOC discovered by Fenton and Ohlsson. Furthermore, we find that the ranking ability of LOC can be formally described using Weibull functions. We can use defect density values calculated from a small percentage of largest modules to predict the number of total defects accurately. We also find that, given LOC we can predict the number of defective components reasonably well using typical classification techniques. We perform an extensive experiment using the public Eclipse dataset, and replicate the study using the NASA dataset. Our results confirm that simple static code attributes such as LOC can be useful predictors of software quality.",2009,0, 3647,Analysis of pervasive multiple-component defects in a large software system,"Certain software defects require corrective changes repeatedly in a few components of the system. One type of such defects spans multiple components of the system, and we call such defects pervasive multiple-component defects (PMCDs). In this paper, we describe an empirical study of six releases of a large legacy software system (of approx. size 20 million physical lines of code) to analyze PMCDs with respect to: (1) the complexity of fixing such defects and (2) the persistence of defect-prone components across phases and releases. 
The overall hypothesis in this study is that PMCDs inflict a greater negative impact than do other defects on defect-correction efficacy. Our findings show that the average number of changes required for fixing PMCDs is 20-30 times as much as the average for all defects. Also, over 80% of PMCD-containing defect-prone components still remain defect-prone in successive phases or releases. These findings support the overall hypothesis strongly. We compare our results, where possible, to those of other researchers and discuss the implications for maintenance processes and tools.",2009,0, 3648,Modeling class cohesion as mixtures of latent topics,"The paper proposes a new measure for the cohesion of classes in object-oriented software systems. It is based on the analysis of latent topics embedded in comments and identifiers in source code. The measure, named maximal weighted entropy, utilizes the latent Dirichlet allocation technique and information entropy measures to quantitatively evaluate the cohesion of classes in software. This paper presents the principles and the technology that stand behind the proposed measure. Two case studies on a large open source software system are presented. They compare the new measure with an extensive set of existing metrics and use them to construct models that predict software faults. The case studies indicate that the novel measure captures different aspects of class cohesion compared to the existing cohesion measures and improves fault prediction for most metrics, which are combined with maximal weighted entropy.",2009,0, 3649,Assessing the impact of framework changes using component ranking,"Most of today's software applications are built on top of libraries or frameworks. Just as applications evolve, libraries and frameworks also evolve. Upgrading is straightforward when the framework changes preserve the API and behavior of the offered services. However, in most cases, major changes are introduced with the new framework release, which can have a significant impact on the application. Hence, a common question a framework user might ask is, ""Is it worth upgrading to the new framework version?"" In this paper, we study the evolution of an application and its underlying framework to understand the information we can get through a multi-version use relation analysis. We use component rank changes to measure this impact. Component rank measurement is a way of quantifying the importance of a component by its usage. As framework components are used by applications, the rankings of the components are changed. We use component ranking to identify the core components in each framework version. We also confirm that upgrading to the new framework version has an impact on the component rank of the entire system and the framework, and this impact not only involves components which use the framework directly, but also other indirectly-related components. Finally, we also confirm that there is a difference in the growth of use relations between application and framework.",2009,0, 3650,On predicting the time taken to correct bug reports in open source projects,"Existing studies on the maintenance of open source projects focus primarily on the analyses of the overall maintenance of the projects and less on specific categories like corrective maintenance.
This paper presents results from an empirical study of bug reports from an open source project, identifies user participation in the corrective maintenance process through bug reports, and constructs a model to predict the corrective maintenance effort for the project in terms of the time taken to correct faults. Our study focuses on 72482 bug reports from over nine releases of Ubuntu, a popular Linux distribution. We present three main results: (1) 95% of the bug reports are corrected by people participating in groups of size ranging from 1 to 8 people, (2) there is a strong linear relationship (about 92%) between the number of people participating in a bug report and the time taken to correct it, (3) a linear model can be used to predict the time taken to correct bug reports.",2009,0, 3651,Prioritizing JUnit test cases in absence of coverage information,"Better orderings of test cases can detect faults in less time with fewer resources, and thus make the debugging process earlier and accelerate software delivery. As a result, test case prioritization has become a hot topic in the research of regression testing. With the popularity of using the JUnit testing framework for developing Java software, researchers also paid attention to techniques for prioritizing JUnit test cases in regression testing of Java software. Typically, most of them are based on coverage information of test cases. However, coverage information may need extra costs to acquire. In this paper, we propose an approach (named Jupta) for prioritizing JUnit test cases in absence of coverage information. Jupta statically analyzes call graphs of JUnit test cases and the software under test to estimate the test ability (TA) of each test case. Furthermore, Jupta provides two prioritization techniques: the total TA based technique (denoted as JuptaT) and the additional TA based technique (denoted as JuptaA). To evaluate Jupta, we performed an experimental study on two open source Java programs, containing 11 versions in total. The experimental results indicate that Jupta is more effective and stable than the untreated orderings and Jupta is approximately as effective and stable as prioritization techniques using coverage information at the method level.",2009,0, 3652,The squale model A practice-based industrial quality model,"ISO 9126 promotes a three-level model of quality (factors, criteria, and metrics) which allows one to assess quality at the top level of factors and criteria. However, it is difficult to use this model as a tool to increase software quality. In the Squale model, we add practices as an intermediate level between metrics and criteria. Practices abstract away from raw information (metrics, tool reports, audits) and provide technical guidelines to be respected. Moreover, practice marks are adjusted using formulae to suit company development habits or exigences: for example bad marks are stressed to point to places which need more attention. The Squale model has been developed and validated over the last couple of years in an industrial setting with Air France-KLM and PSA Peugeot-Citroen.",2009,0, 3653,Recovering traceability links between a simple natural language sentence and source code using domain ontologies,"This paper proposes an ontology-based technique for recovering traceability links between a natural language sentence specifying features of a software product and the source code of the product. Some software products have been released without detailed documentation. 
To automatically detect code fragments associated with the functional descriptions written in the form of simple sentences, the relationships between source code structures and problem domains are important. In our approach, we model the knowledge of the problem domains as domain ontologies. By using semantic relationships of the ontologies in addition to method invocation relationships and the similarity between an identifier on the code and words in the sentences, we can detect code fragments corresponding to the sentences. A case study within a domain of painting software shows that we obtained results of higher quality than without ontologies.",2009,0, 3654,Fundamental performance assessment of 2-D myocardial elastography in a phased-array configuration,"Two-dimensional myocardial elastography, an RF-based, speckle-tracking technique, uses 1-D cross-correlation and recorrelation methods in a 2-D search, and can estimate and image the 2-D transmural motion and deformation of the myocardium so as to characterize the cardiac function. Based on a 3-D finite-element (FE) canine left-ventricular model, a theoretical framework was previously developed by our group to evaluate the estimation quality of 2-D myocardial elastography using a linear array. In this paper, an ultrasound simulation program, Field II, was used to generate the RF signals of a model of the heart in a phased-array configuration and under 3-D motion conditions; thus simulating a standard echocardiography exam. The estimation method of 2-D myocardial elastography was adapted for use with such a configuration. All elastographic displacements and strains were found to be in good agreement with the FE solutions, as indicated by the mean absolute error (MAE) between the two. The classified first and second principal strains approximated the radial and circumferential strains, respectively, in the phased-array configuration. The results at different sonographic signal-to-noise ratios (SNRs) showed that the MAEs of the axial, lateral, radial, and circumferential strains remained relatively constant when the SNRs was equal to or higher than 20 dB. The MAEs of the strain estimation were not significantly affected when the acoustic attenuation was included in the simulations. A significantly reduced number of scatterers could be used to speed up the simulation, without sacrificing the estimation quality.The proposed framework can further be used to assess the estimation quality, explore the theoretical limitation and investigate the effects of various parameters in 2-D myocardial elastography under more realistic conditions.",2009,0, 3655,Analyses and comparisons of technologies for rural broadband implementation,"This paper deals with the rural broadband in the Republic of Croatia, since it is presumed that the implementation of broadband access in rural areas accelerates the economic growth, expands the productivity and enhances the rural residents' quality of life. In this paper, the techno-economic analysis of broadband networks deployment is conducted and the model to assess the costs of rural broadband access is introduced. Using the basic profitability evaluation methods the costs of DSL and WiMAX systems implementation are calculated in three different rural scenarios. 
The results of these analyses are presented and compared, and the specificities of each scenario affecting the broadband access costs are pointed out.",2009,0, 3656,A broadband network fault distribution model,"Growth of the telecommunications market brings more and more types of broadband services, and the total number of users who use broadband services is also growing. All of this leads to an increasing number of user interferences, which may be caused by various reasons. Among the most common causes of errors may be faults in the access network, failures of the customer equipment, errors in the core network, errors in the access devices, etc. For telecom operators it is very important to manage the removal of those errors well, because the quality of customer services depends on it. This article explores the most common error locations and describes the time distribution of the appearance of these errors. As part of this work, a fault generator was created, which tries to realistically predict the appearance of user interferences. The generator is modeled using the method of Fourier series and a quantile function.",2009,0, 3657,Validation of PIM DM and PIM SM protocols in the NS2 network simulator,"Multicast transmission offers efficient network resource utilization, but, at the same time, it is also a demanding and complex technology. Multicast protocols are far more sophisticated than their unicast counterparts. As a result, building one's own simulation environment is a difficult, time-consuming and error-prone endeavor. Hence, there is a need for ready-made network simulation tools supporting these protocols. One such tool is the open source NS2 simulator. However, the results obtained from this simulator are not reliable without prior testing and validation of the application. This work concentrates on the validation of the PIM SM and PIM DM protocol implementations in NS2. The NS2 implementation was tested using a wide range of techniques commonly used in the software engineering field.",2009,0, 3658,Pre-determining comparative tests and utilizing signal levels to perform accurate diagnostics,Standard diagnostic schemes don't do enough analysis to focus in on the actual cause of a test failure. Often measurements will be border-line on test sequences prior to an actual test failure. These border-line measurements can be used to aid in the determination of an actual fault. An actual fault and the associated test that should detect that fault can be deceiving. The specific test in question can pass but be right on the border-line. The failure might not show up in testing until a later test is performed. This later test assumes all the prior tests passed and therefore the circuitry associated with these prior tests is good. This is not necessarily the case if some of the output measurements prior to the actual failing test were right on the border-line. What can we do? Take advantage of the order in which the faults are simulated. We should structure our TPSs such that a review of preliminary tests is evaluated before the R/R component list is presented. The review of the preliminary tests can be rather straightforward. We can look at signals that are within 8-10% of the lower or upper limit. We can then use an inter-related test scheme to evaluate the test(s) that can be associated with the actual failing test. This paper will use an example TPS and show how a test scheme and an evaluation scheme can be used to determine the PCOF.
The paper will show actual measurements and how these measurements can be evaluated to determine the actual cause of a failure.,2009,0, 3659,"A comparison of software cost, duration, and quality for waterfall vs. iterative and incremental development: A systematic review","The objective of this study is to present a body of evidence that will assist software project managers to make informed choices about software development approaches for their projects. In particular, two broadly defined competing approaches, the traditional ldquowaterfallrdquo approach and iterative and incremental development (IID), are compared with regards to development cost and duration, and resulting product quality. The method used for this comparison is a systematic literature review. The small set of studies we located did not demonstrate any identifiable cost, duration, or quality trends, although there was some evidence suggesting the superiority of IID (in particular XP). The results of this review indicate that further empirical studies, both quantitative and qualitative, on this topic need to be undertaken. In order to effectively compare study results, the research community needs to reach a consensus on a set of comparable parameters that best assess cost, duration, and quality.",2009,0, 3660,A systematic review of software maintainability prediction and metrics,This paper presents the results of a systematic review conducted to collect evidence on software maintainability prediction and metrics. The study was targeted at the software quality attribute of maintainability as opposed to the process of software maintenance. The evidence was gathered from the selected studies against a set of meaningful and focused questions. 710 studies were initially retrieved; however of these only 15 studies were selected; their quality was assessed; data extraction was performed; and data was synthesized against the research questions. Our results suggest that there is little evidence on the effectiveness of software maintainability prediction techniques and models.,2009,0, 3661,The impact of limited search procedures for systematic literature reviews A participant-observer case study,"This study aims to compare the use of targeted manual searches with broad automated searches, and to assess the importance of grey literature and breadth of search on the outcomes of SLRs. We used a participant-observer multi-case embedded case study. Our two cases were a tertiary study of systematic literature reviews published between January 2004 and June 2007 based on a manual search of selected journals and conferences and a replication of that study based on a broad automated search. Broad searches find more papers than restricted searches, but the papers may be of poor quality. Researchers undertaking SLRs may be justified in using targeted manual searches if they intend to omit low quality papers; if publication bias is not an issue; or if they are assessing research trends in research methodologies.",2009,0, 3662,Tool supported detection and judgment of nonconformance in process execution,"In the past decades the software engineering community has proposed a large collection of software development life cycles, models, and processes. The goal of a major set of these processes is to assure that the product is finished within time and budget, and that a predefined set of functional and nonfunctional requirements (e.g. quality goals) are satisfied at delivery time. 
Based upon the assumption that there is a real relationship between the process applied and the characteristics of the product developed from that process, we developed a tool supported approach that uses process nonconformance detection to identify potential risks in achieving the required process characteristics. In this paper we present the approach and a feasibility study that demonstrates its use on a large-scale software development project in the aerospace domain. We demonstrate that our approach, in addition to meeting the criteria above, can be applied to a real system of reasonable size; can represent a useful and adequate set of rules of relevance in such an environment; and can detect relevant examples of process nonconformance that provide useful insight to the project manager.",2009,0, 3663,Towards logistic regression models for predicting fault-prone code across software projects,"In this paper, we discuss the challenge of making logistic regression models able to predict fault-prone object-oriented classes across software projects. Several studies have obtained successful results in using design-complexity metrics for such a purpose. However, our data exploration indicates that the distribution of these metrics varies from project to project, making the task of predicting across projects difficult to achieve. As a first attempt to solve this problem, we employed simple log transformations for making design-complexity measures more comparable among projects. We found these transformations useful in projects which data is not as spread as the data used for building the prediction model.",2009,0, 3664,Reducing false alarms in software defect prediction by decision threshold optimization,"Software defect data has an imbalanced and highly skewed class distribution. The misclassification costs of two classes are not equal nor are known. It is critical to find the optimum bound, i.e. threshold, which would best separate defective and defect-free classes in software data. We have applied decision threshold optimization on Naiumlve Bayes classifier in order to find the optimum threshold for software defect data. ROC analyses show that decision threshold optimization significantly decreases false alarms (on the average by 11%) without changing probability of detection rates.",2009,0, 3665,Scope error detection and handling concerning software estimation models,"Over the last 25+ years, the software community has been searching for the best models for estimating variables of interest (e.g., cost, defects, and fault proneness). However, little research has been done to improve the reliability of the estimates. Over the last decades, scope error and error analysis have been substantially ignored by the community. This work attempts to fill this gap in the research and enhance a common understanding within the community. Results provided in this study can eventually be used to support human judgment-based techniques and be an addition to the portfolio. The novelty of this work is that, we provide a way of detecting and handling the scope error arising from estimation models. The answer whether or not scope error will occur is a pre-condition to safe use of an estimation model. We also provide a handy procedure for dealing with outliers as to whether or not to include them in the training set for building a new version of the estimation model. 
The majority of the work is empirically based, applying computational intelligence techniques to some COCOMO model variations with respect to a publicly available cost estimation data set in the PROMISE repository.",2009,0, 3666,Predicting defects with program dependencies,"Software development is a complex and error-prone task. An important factor during the development of complex systems is the understanding of the dependencies that exist between different pieces of the code. In this paper, we show that for Windows Server 2003 dependency data can predict the defect-proneness of software elements. Since most dependencies of a component are already known in the design phase, our prediction models can support design decisions.",2009,0, 3667,A detailed examination of the correlation between imports and failure-proneness of software components,"Research has provided evidence that type usage in source files is correlated with the risk of failure of software components. Previous studies that investigated the correlation between type usage and component failure assigned equal blame to all the types imported by a component with a failure history, regardless of whether a type is used in the component, or associated to its failures. A failure-prone component may use a type, but it is not always the case that the use of this type has been responsible for any of its failures. To gain more insight about the correlation between type usage and component failure, we introduce the concept of a failure-associated type to represent the imported types referenced within methods fixed due to failures. We conducted two studies to investigate the tradeoffs between the equal-blame approach and the failure-associated type approach. Our results indicate that few of the types or packages imported by a failure-prone component are associated with its failures - less than 25% of the type imports, and less than 55% of the packages whose usage were reported to be highly correlated with failures by the equal-blame approach, were actually correlated with failures when we looked at the failure-associated types.",2009,0, 3668,A probability-based approach for measuring external attributes of software artifacts,"The quantification of so-called external software attributes, which are the product qualities with real relevance for developers and users, has often been problematic. This paper introduces a proposal for quantifying external software attributes in a unified way. The basic idea is that external software attributes can be quantified by means of probabilities. As a consequence, external software attributes can be estimated via probabilistic models, and not directly measured via software measures. This paper discusses the reasons underlying the proposals and shows the pitfalls related to using measures for external software attributes. We also show that the theoretical bases for our approach can be found in so-called ldquoprobability representations,rdquo a part of Measurement Theory that has not yet been used in Software Engineering Measurement. 
By taking the definition and estimation of reliability as a reference, we show that other external software attributes can be defined and modeled by a probability-based approach.",2009,0, 3669,Novel islanding detection method for distributed generation,"This paper describes the development of a novel islanding detection method for inverter-based distributed generation, which uses the signal cross-correlation scheme between the injected reactive current and the power frequency deviation. The existing method injects reactive current equal to 5% of the rated current for detecting the frequency deviation, which brings about a reduction of power quality. On the contrary, the proposed method injects reactive current equal to 1% of the rated current, which brings about negligible degradation of power quality. The proposed method detects the islanding state by calculating the cross-correlation index between the injected reactive current and the frequency deviation. The operational feasibility was verified through computer simulations with PSCAD/EMTDC software and experimental work with a 3 kVA hardware prototype. The proposed method can detect the islanding state effectively without degrading the power quality at the point of common connection.",2009,0, 3670,The factors impact consumers' initial trust in mobile service: An empirical study in China,"Lack of trust in mobile brokerage services is a primary reason why many investors do not conduct trading in a mobile environment. This study proposes a model of initial investor trust in mobile brokerage services and considers the effect of six antecedent variables on shaping an investor's initial trust and intention in mobile brokerage services. The six variables are perceived ubiquity, service compatibility, information quality, perceived reputation, perceived security and propensity to trust. We test the proposed model using collected survey data and use structural equation modeling techniques to analyze the causalities. The results show support for the proposed model and confirm its robustness in predicting investors' initial trust and intention in mobile brokerage services. The findings provide useful suggestions and implications for academicians and practitioners.",2009,0, 3671,Usage of multi-criteria analysis and supportive software for optimum location of the modern devices within electrical distribution networks,"This paper describes the MCA8 - the supportive software application for computation of six methods of multi-criteria analysis (TOPSIS, CDA, AGREPREF, WSA, PROMETHEE and IPA). These methods have been implemented in the MCA8 on the basis of the type of real decision-making tasks solved in the field of electrical power engineering. We can use them, for example, for selecting the most suitable old electrical devices in electrical distribution networks, which we need to replace by new devices, which fall under the system of remote control and monitoring networks. The application of these remote-controlled devices causes acceleration in handling and thus a shortening of the duration of a fault in the electrical networks. This results in a rise in the probability of faultless service and thus in the reliability of electrical energy supply. We cannot replace all old devices, because the price of these devices is very high. The MCA8 software application may be used to select the most suitable variants.
The MCA8 is intended for solving decision-making problems not only in the field of electrical power engineering.",2009,0, 3672,Improve the Portability of J2ME Applications: An Architecture-Driven Approach,"The porting of J2ME applications is usually difficult because of diverse device features, limited device resources, and specific issues like device bugs. Therefore, achieving high efficiency in J2ME application porting can be challenging, tedious and error-prone. In this paper, we propose an architecture-driven approach to help address these issues through improving the portability of J2ME applications. It abstracts and models the features that affect porting tasks using a component model named NanoCM (nano component model). The model is described in an architecture description language named NanoADL. Several open source J2ME applications are used as the case studies, and are evaluated using metrics indicating coupling, comprehensibility and complexity. Experimental results show that our approach effectively improves the portability of J2ME applications.",2009,0, 3673,A Fuzzy Admission Control Strategy for End-to-End QoS Framework,"Classical IP networks cannot meet the requirements of multimedia applications that need certain QoS. When bandwidth resources are limited, an effective admission control boundary bandwidth management strategy is needed to ensure that the flows that already exist in the network hold enough bandwidth. Although admission control schemes based on global knowledge make the most optimal decisions, they occupy many resources to keep the information up to date. In this work, we describe a fuzzy admission control (FAC) scheme based on hierarchical information. Compared to the global information that classical admission control strategies require, hierarchical information evidently reduces the amount of information that nodes need to process. Simulation shows that the excluding probability using FAC is very near the excluding probability of a link-based admission control strategy using global information. The FAC scheme is better suited for admission control in network nodes without complete information.",2009,0, 3674,Neural Network Analog on Dynamic Variation of the Karst Water and the Prediction for Spewing Tendency of Springs in Jinan,"Considering the factors that affect the karst water level, an improved neural network model has been applied to construct a stochastic model that simulates the dynamic change of the karst water. The accuracy of our simulation has been greatly improved compared with that of the multi-line recurrence model; moreover, the BP model has strong capabilities of learning, fault tolerance and association. In a word, the BP model is an effective tool to predict the dynamic change of karst water. In addition, the spewing tendency of springs in Jinan is analyzed based on our prediction results in this paper.",2009,0, 3675,Ineffectiveness of Use of Software Science Metrics as Predictors of Defects in Object Oriented Software,"Software science metrics (SSM) have been widely used as predictors of software defects. The usage of SSM is an effect of the correlation of size and complexity metrics with the number of defects. The SSM have been proposed keeping in view the procedural paradigm and structural nature of programs. There has been a shift in the software development paradigm from procedural to object oriented (OO), and SSM have been used as defect predictors of OO software as well. However, the effectiveness of SSM in OO software needs to be established.
This paper investigates the effectiveness of the use of SSM for: (a) classification of defect-prone modules in OO software and (b) prediction of the number of defects. Various binary and numeric classification models have been applied to the dataset kc1 with class-level data to study the role of SSM. The results show that the removal of SSM from the set of independent variables does not significantly affect the classification of modules as defect prone or the prediction of the number of defects. In most of the cases the accuracy and mean absolute error have improved when SSM were removed from the set of independent variables. The results thus highlight the ineffectiveness of the use of SSM in defect prediction in OO software.",2009,0, 3676,Quality Tree of QFD: A New Method to Ensure Software Quality,"We present a new method, QTQ (Quality Tree of QFD), to solve the problem that software quality cannot be effectively ensured. QTQ is based on a UML profile; it integrates the ideal solution and QFD. Its main point is: according to the concept of the ideal solution, establish a DTRQ (Distribution Tree of Requirement Quality) when analyzing customer requirements. Use the DTRQ to guide design. On the basis of the design, build a DTDQ (Distribution Tree of Design Quality) and assess whether the CRDP (Change Rates of Design Plan) is acceptable, and then code according to the qualified DTDQ. In the testing procedure, software quality is evaluated by the qualified DTDQ.",2009,0, 3677,Research on Testing-Based Software Credibility Measurement and Assessment,"Measuring and assessing software credibility by testing is one of the important approaches in trustworthy software studies. From the perspective of effective management and credibility analysis supporting the test process, this article describes a basic framework for its management and discusses the assessment techniques and methods for a credible test process and software product.",2009,0, 3678,Progress and Quality Modeling of Requirements Analysis Based on Chaos,"It is important and difficult for us to know the progress and quality of requirements analysis. We introduce chaos and software requirements complexity into the description of requirements decomposition, and obtain a method which can help us to evaluate the progress and quality. The model shows that the requirements decomposition procedure has its own regular pattern, which we can describe in an equation and track in a trajectory. The requirements analysis process of a software system can be taken as normal if its trajectory coincides with the model. We may be able to predict in advance the time we need to finish all requirements decomposition based on the model. We apply the method to the requirements analysis of a home phone service management system, and the initial results show that the method is useful in the evaluation of requirements decomposition.",2009,0, 3679,Fault Injection Technology for Software Vulnerability Testing Based on Xen,"Fault injection technology provides an efficient way of verifying the fault tolerance of computers and detecting the vulnerabilities of software systems. In this paper, we present a Xen-based fault injection technology for software vulnerability testing (XFISV) in order to build an efficient and general-purpose software test model, which injects faults into the interaction layer between software applications and their environments. This technology has two main contributions: First, detecting software vulnerabilities according to this model needs a smaller number of fault test cases.
Second, this model enhances the flexibility and the robustness of the fault injection tools with economical resource cost.",2009,0, 3680,Automatic Determination of Branch Correlations in Software Testing,"Path-oriented testing is an important aspect of software testing. A challenging problem with path-oriented test data generation is the existence of infeasible paths. Timely detecting these infeasible paths can not only save test sources but also improve test efficiency. It is an effective method to detect infeasible paths by branch correlations. In this paper, we propose a method to automatically determine branch correlations in software testing. Firstly, we give a theorem to determine the true-true correlation, true-false correlation, false-true correlation, and false-false correlation based on the probabilities of the conditional distribution corresponding to different branches' outcome (i.e. true or false). We then estimate these values of the probabilities by the maximum likelihood estimation. Finally, we apply the proposed method to determine the branch correlations of two typical programs, and the results show that the proposed method can accurately determine the branch correlations of different conditional statements. Our achievement provides an effective and automatic method to detect infeasible paths, which has great significance in improving the efficiency of software testing.",2009,0, 3681,Required Characteristics for Software Reliability Growth Models,"Software reliability growth models are used to estimate and predict software quality. Many software reliability growth models (SRGM) have been developed in the literature. It is a key issue how to appropriately select them since an inappropriate SRGM can give unreasonable prediction. This paper deals with this issue. It first discusses some characteristics for a SRGM to have, then presents five flexible SRGMs, and finally discusses how to determine the best SRGM from a list of candidate models for a given set of data. The usefulness of the suggested models and method are illustrated by a real-world example. The results show that all the suggested models outperform the well-known exponential model in terms of goodness-of-fit and predictive capability.",2009,0, 3682,Vulnerability Testing of Software Using Extended EAI Model,"Software testing, throughout the development life cycle of software, is one of the important ways to ensure the quality of software. Model-based software testing technology and tools have higher degree of automation, as well as efficiency of testing. They also can detect vulnerabilities that other technologies are difficult to do. So they are widely used. This paper presents an extended EAI model (Extended Environment-Application Interaction Model), and does further research for vulnerability testing based on the model. Extended EAI model inherits the methodology of anomalies simulation of the original one. In order to monitor and control the process under test, we give an idea of introducing artificial intelligence technology and status feedback into the model, and also try to use virtual execution technology for testing. We use this technique based on the Extended EAI model to experiment on Internet work Operation System (IOS) software, and detect that some services of certain protocols running in IOS software have vulnerabilities. 
So the experimental results indicate that our method is feasible.",2009,0, 3683,Soft Measurement Modeling Based on Improved Simulated Annealing Neural Network for Sewage Treatment,"Considering that the sewage treatment process is a complicated and nonlinear system and that the key parameters of sewage treatment quality cannot be detected on-line, a soft measurement modeling method based on an improved simulated annealing neural network (ISANN) is presented in this paper. First, the simulated annealing algorithm with a best-reserve mechanism is introduced and organically combined with the Powell algorithm to form an improved simulated annealing hybrid optimization algorithm, which replaces the gradient descent algorithm of the BP network for training the network weights. It achieves higher accuracy and a faster convergence speed. We construct the network structure. With the strong self-learning ability and fast convergence of ISANN, the soft measurement modeling method can detect and assess the quality of sewage treatment in real time by learning the sewage treatment parameter information acquired from sensors. The experimental results show that this method is feasible and effective.",2009,0, 3684,Measurement of the Complexity of Variation Points in Software Product Lines,"Feature models are used in member product configuration in software product lines. A valid product configuration must satisfy two kinds of constraints: the multiplicity of each variation point and the dependencies among the variants in a product line. The combined impact of the two kinds of constraints on product configuration should be well understood. In this paper we propose a measurement, called VariationRank, that combines the two kinds of constraints to assess the complexity of variation points in software product lines. Based on the measurement we can identify those variation points with the highest impact on product configurations in a product line. This information could be used as guidance in product configuration as well as for feature model optimization in software product lines. A case study is presented and discussed in this paper as well.",2009,0, 3685,Color Clustering Analysis of Yarn-dyed Fabric in HSL Color Space,"A novel method for realizing color classification in yarn-dyed fabric is proposed in this paper. The color image of yarn-dyed fabric was obtained by a flat scanner, and then it is converted from RGB color space to HSL color space. By analyzing the characteristics of the hue component in HSL color space, the distance and cluster center of the hue component in fuzzy clustering algorithms are redefined. During the iteration in FCM, the membership degree and cluster center of H and (S, L) are calculated independently, and the membership degree is normalized in the process. A better clustering quality is shown in the experiment. The number of yarn colors is detected based on the validity of the FCM clusters. Experimental comparisons on RGB color space and HSL color space show that the approach proposed in this article is more effective for color extraction and classification in yarn-dyed fabric.",2009,0, 3686,Fine-blanking Die Wear and its Effect on Product Edge Quality,"Based on the process of fine-blanking, an FE model of fine-blanking is established using DEFORM-2D software, and the die wear condition is predicted by applying Archard's wear formula. The developing trend of the die wear and the workpiece edge quality are analyzed systematically, as well as the change law of m obtained from simulations with different cutting edge models, respectively.
The obtained results of this research has guiding significance for further standardizing the replacement of the die and then optimizing die structure.",2009,0, 3687,Reliability Computing for Service Composition,"Web service composition is a distributed model to construct new Web service on top of existing primitive or other composite Web services. However, current service technologies, including proposed composition languages, do not address the reliability of Web service composition. Thus it is hard to predict the system reliability. In this paper, we propose a method to compute system reliability based on service component architecture(SCA). We first present a formal service component signature model with respect to the specification of the SCA assembly model, and then propose a language-independent dynamic behaviour model for specifying the interface behaviour of the service component by port activities. Then the failure behaviors of ports are defined through the enhanced non-homogeneous Poisson process (ENHPP). Based on the semantics of ports, several rules have been generated to compute reliabilities of port expressions, thus the overall system reliability can be automatically computed.",2009,0, 3688,Simplifying Parametrization of Bayesian Networks in Prediction of System Quality,"Bayesian networks (BNs) are a powerful means for modelling dependencies and predicting impacts of architecture design changes on system quality. The extremely demanding parametrization of BNs is however the main obstacle for their practical application, in spite of the extensive tool support. We have promising experiences from using a tree-structured notation, that we call dependency views (DVs), for prediction of impacts of architecture design changes on system quality. Compared to BNs, DVs are far less demanding to parametrize and create. DVs have shown to be sufficiently expressive, comprehensible and feasible. Their weakness is however limited analytical power. Once created, BNs are more adaptable to changes, and more easily refined than DVs. In this paper we argue that DVs are fully compatible with BNs, in spite of different estimation approaches and concepts. A transformation from a DV to a BN preserves traceability and results in a complete BN. By defining a transformation from DVs to BNs, we have enabled reliable parametrization of BNs with significantly reduced effort, and can now exploit the strengths of both the DV and the BN approach.",2009,0, 3689,Software Reliability Prediction and Analysis Using Queueing Models with Multiple Change-Points,"Over the past three decades, many software reliability growth models (SRGMs) were proposed and they are aimed at predicting and estimating software reliability. One common assumption of these conventional SRGMs is that detected faults will be removed immediately. In reality, this assumption may not be reasonable and may not always occur. Developers need time to identify the root causes of detected faults and then fix them. Besides, during debugging the fault correction rate may not be a constant and could be changed at some certain points as time proceeds. Consequently, in this paper, we will explore and study how to apply queueing model to investigate the fault correction process during software development. We propose an extended infinite server queueing model with multiple change-points to predict and assess software reliability. 
Experimental results based on real failure data show that proposed model can depicts the change of fault correction rates and predict the behavior of software development more accurately than traditional SRGMs.",2009,0, 3690,A Trust-Based Detecting Mechanism against Profile Injection Attacks in Recommender Systems,"Recommender systems could be applied in grid environment to help grid users select more suitable services by making high quality personalized recommendations. Also, recommendation could be employed in the virtual machines managing platform to measure the performance and creditability of each virtual machine. However, such systems have been shown to be vulnerable to profile injection attacks (shilling attacks), attacks that involve the insertion of malicious profiles into the ratings database for the purpose of altering the system's recommendation behavior. In this paper we introduce and evaluate a new trust-based detecting algorithm for protecting recommender systems against profile injection attacks. Moreover, we discuss the combination of our trust-based metrics with previous metrics such as RDMA in profile-level and item-level respectively. In the end, we show these metrics can lead to improved detecting accuracy experimentally.",2009,0, 3691,Architectural Availability Analysis of Software Decomposition for Local Recovery,"Non-functional properties, such as timeliness, resource consumption and reliability are of crucial importance for today's software systems. Therefore, it is important to know the non-functional behavior before the system is put into operation. Preferably, such properties should be analyzed at design time, at an architectural level, so that changes can be made early in the system development process. In this paper, we present an efficient and easy-to-use methodology to predict - at design time - the availability of systems that support local recovery. Our analysis techniques work at the architectural level, where the software designer simply inputs the software modules' decomposition annotated with failure and repair rates. From this decomposition we automatically generate an analytical model (i.e. a continuous-time Markov chain), from which various performance and dependability measures are then computed, in a way that is completely transparent to the user. A crucial step is the use of intermediate models in the Input/Output Interactive Markov Chain formalism, which makes our techniques, efficient, mathematically rigorous, and easy to adapt. In particular, we use aggressive minimization techniques to keep the size of the generated state spaces small. We have applied our methodology on a realistic case study, namely the MPlayer open source software. We have investigated four different decomposition alternatives and compared our analytical results with the measured availability on a running MPlayer. We found that our predicted results closely match the measured ones.",2009,0, 3692,Resource Failure Impact on Job Execution in Grid,"Grid environment, being a collection of heterogeneous and geographically distributed resources, is prone to many kinds of failures such as process failures, resource and network failures. In this paper, we address the problem of resource failure. Resources in grid oscillate between being available and unavailable to the grid. When and how they do so, depends on the failure characteristics of the machines, the policies of resource owners and the scheduling policies. 
The research work involves implementation of job scheduling in a grid based on pull method for handling resource failure using GridSim simulation toolkit. We also demonstrate how the job execution time varies under different resource failure conditions using different failure patterns.",2009,0, 3693,Looking for Product Line Feature Models Defects: Towards a Systematic Classification of Verification Criteria,"Product line models (PLM) are important artifacts in product line engineering. Due to their size and complexity, it is difficult to detect defects in PLMs. The challenge is however important: any error in a PLM will inevitably impact configuration, generating issues such as incorrect product models, inconsistent architectures, poor reuse, difficulty to customize products, etc. Surveys on feature-based PLM verification approaches show that there are many verification criteria, that these criteria are defined in different ways, and that different ways of working are proposed to look for defect. The goal of this paper is to systematize PLM verification. Based on our literature review, we propose a list of 23 verification criteria that we think cover those available in the literature.",2009,0, 3694,Evaluating the Completeness and Granularity of Functional Requirements Specifications: A Controlled Experiment,"Requirements engineering (RE) is a relatively young discipline, and still many advances have been achieved during the last decades. In particular, numerous RE methods have been proposed. However, there is a growing concern for empirical validations that assess RE proposals and statements. This paper is related to the evaluation of the quality of functional requirements specifications, focusing on completeness and granularity. To do this, several concepts related to conceptual model quality are presented; these concepts lead to the definition of metrics that allow measuring certain aspects of a requirements model quality (e.g. degree of functional encapsulations completeness with respect to a reference model, number of functional fragmentation errors). A laboratory experiment with master students has been carried out, in order to compare (using the proposed metrics) two RE approaches; namely, Use Cases and Communication Analysis. Results indicate greater quality (in terms of completeness and granularity) when communication analysis guidelines are followed. Moreover, interesting issues arise from experimental results, which invite further research.",2009,0, 3695,Double Redundant Fault-Tolerance Service Routing Model in ESB,"With the development of the Service Oriented Architecture (SOA), the Enterprise Service Bus (ESB) is becoming more and more important in the management of mass services. The main function of it is service routing which focuses on delivery of message among different services. At present, some routing patterns have been implemented to finish the messaging, but they are all static configuration service routing. Once one service fails in its operation, the whole service system will not be able to detect such fault, so the whole business function will also fail finally. In order to solve this problem, we present a double redundant fault tolerant service routing model. This model has its own double redundant fault tolerant mechanism and algorithm to guarantee that if the original service fails, another replica service that has the same function will return the response message instead automatically. 
The service requester will receive the response message transparently without taking care where it comes from. Besides, the state of failed service will be recorded for service management. At the end of this article, we evaluated the performance of double redundant fault tolerant service routing model. Our analysis shows that, by importing double redundant fault tolerance, we can improve the fault-tolerant capability of the services routing apparently. It will solve the limitation of existent static service routing and ensure the reliability of messaging in SOA.",2009,0, 3696,Reasoning on Non-Functional Requirements for Integrated Services,"We focus on non-functional requirements for applications offered by service integrators; i.e., software that delivers service by composing services, independently developed, managed, and evolved by other service providers. In particular, we focus on requirements expressed in a probabilistic manner, such as reliability or performance. We illustrate a unified approach-a method and its support tools-which facilitates reasoning about requirements satisfaction as the system evolves dynamically. The approach relies on run-time monitoring and uses the data collected by the probes to detect if the behavior of the open environment in which the application is situated, such as usage profile or the external services currently bound to the application, deviates from the initially stated assumptions and whether this can lead to a failure of the application. This is achieved by keeping a model of the application alive at run time, automatically updating its parameters to reflect changes in the external world, and using the model's predictive capabilities to anticipate future failures, thus enabling suitable recovery plans.",2009,0, 3697,An Exploratory Study of the Impact of Code Smells on Software Change-proneness,"Code smells are poor implementation choices, thought to make object-oriented systems hard to maintain. In this study, we investigate if classes with code smells are more change-prone than classes without smells. Specifically, we test the general hypothesis: classes with code smells are not more change prone than other classes. We detect 29 code smells in 9 releases of Azureus and in 13 releases of Eclipse, and study the relation between classes with these code smells and class change-proneness. We show that, in almost all releases of Azureus and Eclipse, classes with code smells are more change-prone than others, and that specific smells are more correlated than others to change-proneness. These results justify a posteriori previous work on the specification and detection of code smells and could help focusing quality assurance and testing activities.",2009,0, 3698,Evolving Software Systems Towards Adaptability,"The increasing demand for autonomic computing calls for modernizing existing software into self-adaptive ones. However, evolving legacy software to cover adaptive behaviors is a risky and error-prone task due to the extensive changes it requires in the majority cases. The focus of this research is to propose a cost-efficient systematic approach for evolving software according to adaptation requirements. The novelty of this research is a new evolution process to assist with adding adaptive features to an existing software. Such a process includes unique properties and novel concepts of self-adaptive software, namely: a co-evolutionary model of self-adaptive software, and primitive effecting operations. 
Our proposed approach formulates the problem of defining software specifications as an optimization problem of finding a mapping from goal/action models to a set of primitive operations that can be added to the original software by a set of transformations.",2009,0, 3699,SQUAD: Software Quality Understanding through the Analysis of Design,"Object-oriented software quality models usually use metrics of classes and of relationships among classes to assess the quality of systems. However, software quality does not depend on classes solely: it also depends on the organization of classes, i.e., their design. Our thesis is that it is possible to understand how the design of systems affects their quality and to build quality models that take into account various design styles, in particular design patterns, antipatterns, and code smells. To demonstrate our thesis, we first analyze how playing roles in design patterns, antipatterns, and code smells impacts quality; specifically change-proneness, fault-proneness, and maintenance costs. Second, we build quality models and apply and validate them on open-source and industrial object-oriented systems to show that they allow a more precise evaluation of the quality than traditional models,like Bansiya et al.'s QMOOD.",2009,0, 3700,Enhancing Quality of Code Clone Detection with Program Dependency Graph,"At present, there are various kinds of code clone detection techniques. PDG-based detection is suitable to detect non-contiguous code clones meanwhile other detection techniques are not suited to detect them. However, there is a tendency that it cannot detect contiguous code clones unlike string-based or token-based technique. This paper proposes two techniques to enhance the PDG-based detection for practical usage. The software tool, Scorpio has been developed based on the techniques.",2009,0, 3701,Tracking Design Smells: Lessons from a Study of God Classes,"God class"""" is a term used to describe a certain type of large classes which """"know too much or do too much"""". Often a God class (GC) is created by accident as functionalities are incrementally added to a central class over the course of its evolution. GCs are generally thought to be examples of bad code that should be detected and removed to ensure software quality. However, in some cases, a GC is created by design as the best solution to a particular problem because, for example, the problem is not easily decomposable or strong requirements on efficiency exist. In this paper, we study in two open-source systems the """"life cycle"""" of GCs: how they arise, how prevalent they are, and whether they remain or they are removed as the systems evolve over time, through a number of versions. We show how to detect the degree of """"godliness"""" of classes automatically. Then, we show that by identifying the evolution of """"godliness"""", we can distinguish between those classes that are so by design (good code) from those that occurred by accident (bad code). This methodology can guide software quality teams in their efforts to implement prevention and correction mechanisms.",2009,0, 3702,A New Metric for Automatic Program Partitioning,"Software reverse engineering techniques are most often applied to reconstruct the architecture of a program with respect to quality constraints, or non-functional requirements such as maintainability or reusability. 
However, there has been no effort to assess the architecture of a program from the performance viewpoint and reconstruct this architecture in order to improve the program performance. In this paper, a novel Actor-Oriented Program reverse engineering approach, is proposed to reconstruct an object-oriented program architecture based on a high performance model such as actor model. Since actors can communicate with each other asynchronously, reconstructing the program architecture based on this model may result in the concurrent execution of the program invocations and consequently increasing the overall performance of the program when enough processors are available.",2009,0, 3703,Density-based classification of protein structures using iterative TM-score,"Finding similarity between a pair of protein structures is one of the fundamental tasks in many areas of bioinformatical research such as protein structure prediction, function mapping, etc. We propose a method for finding pairing of amino acids based on densities of the structures and we also propose a modification to the original TM-score rotation algorithm that assess similarity score to this alignment. Proposed modification is faster than TM and comparably robust according to non-optimal parts in the alignment. We measure the qualities of the algorithm in terms of SCOP classification accuracy. Regarding the accuracy, our solution outperforms the contemporary solutions at two out of three tested levels of the SCOP hierarchy.",2009,0, 3704,A platform for testing and comparing of real-time decision-support algorithms in mobile environments,"The unavailability of a flexible system for real-time testing of decision-support algorithms in a pre-hospital clinical setting has limited their use. In this study, we describe a plug-and-play platform for real-time testing of decision-support algorithms during the transport of trauma casualties en route to a hospital. The platform integrates a standard-of-care vital-signs monitor, which collects numeric and waveform physiologic time-series data, with a rugged ultramobile personal computer. The computer time-stamps and stores data received from the monitor, and performs analysis on the collected data in real-time. Prior to field deployment, we assessed the performance of each component of the platform by using an emulator to simulate a number of possible fault scenarios that could be encountered in the field. Initial testing with the emulator allowed us to identify and fix software inconsistencies and showed that the platform can support a quick development cycle for real-time decision-support algorithms.",2009,0, 3705,A new validity measure for a correlation-based fuzzy c-means clustering algorithm,"One of the major challenges in unsupervised clustering is the lack of consistent means for assessing the quality of clusters. In this paper, we evaluate several validity measures in fuzzy clustering and develop a new measure for a fuzzy c-means algorithm which uses a Pearson correlation in its distance metrics. The measure is designed with within-cluster sum of square, and makes use of fuzzy memberships. In comparing to the existing fuzzy partition coefficient and a fuzzy validity index, this new measure performs consistently across six microarray datasets. 
The newly developed measure could be used to assess the validity of fuzzy clusters produced by a correlation-based fuzzy c-means clustering algorithm.",2009,0, 3706,SleepMinder: An innovative contact-free device for the estimation of the apnoea-hypopnoea index,"We describe an innovative sensor technology (SleepMindertrade) for contact-less and convenient measurement of sleep and breathing in the home. The system is based on a novel non-contact biomotion sensor and proprietary automated analysis software. The biomotion sensor uses an ultra low-power radio-frequency transceiver to sense the movement and respiration of a subject. Proprietary software performs a variety of signal analysis tasks including respiration analysis, sleep quality measurement and sleep apnea assessment. This paper measures the performance of SleepMinder as a device for the monitoring of sleep-disordered breathing (SDB) and the provision of an estimate of the apnoea-hypopnoea index (AHI). The SleepMinder was tested against expert manually scored PSG data of patients gathered in an accredited sleep laboratory. The comparison of SleepMinder to this gold standard was performed across overnight recordings of 129 subjects with suspected SDB. The dataset had a wide demographic profile with the age ranging between 20 and 81 years. Body weight included subjects with normal weight through to the very obese (Body Mass Index: 21-44 kg/m2). SDB severity ranged from subjects free of SDB to those with severe SDB (AHI: 0.8-96 events/hours). SleepMinder's AHI estimation has a correlation of 91% and can detect clinically significant SDB (AHI>15) with a sensitivity of 89% and a specificity of 92%.",2009,0, 3707,Automatic detection of pathological myopia using variational level set,"Pathological myopia, the seventh leading cause of legal blindness in United States, is a condition caused by pathological axial elongation and eyes that deviates from the normal distribution curve of axial length, resulting in impaired vision. Studies have shown that ocular risks associated with myopia should not be underestimated, and there is a public health need to prevent the onset or progression of myopia. Peripapillary atrophy (PPA) is one of the clinical indicators for pathological myopia. In this paper, we introduce a novel method, to detect pathological myopia via peripapaillary atrophy feature by means of variational level set. This method is a core algorithm of our system, PAMELA, an automated system for the detection of pathological myopia. The proposed method has been tested on 40 images from Singapore Cohort study Of the Risk factors for Myopia (SCORM), producing a 95% accuracy of correct assessment, and a sensitivity and specificity of 0.9 and 1 respectively. The results highlight the potential of PAMELA as a possible clinical tool for objective mass screening of pathological myopia.",2009,0, 3708,Seizure prediction using cost-sensitive support vector machine,"Approximately 300,000 Americans suffer from epilepsy but no treatment currently exists. A device that could predict a seizure and notify the patient of the impending event or trigger an antiepileptic device would dramatically increase the quality of life for those patients. A patient-specific classification algorithm is proposed to distinguish between preictal and interictal features extracted from EEG recordings. 
It demonstrates that the classifier based on a cost-sensitive support vector machine (CSVM) can distinguish preictal from interictal with a high degree of sensitivity and specificity, when applied to linear features of power spectrum in 9 different frequency bands. The proposed algorithm was applied to EEG recordings of 9 patients in the Freiburg EEG database, totaling 45 seizures and 219-hour-long interictal, and it produced sensitivity of 77.8% (35 of 45 seizures) and the zero false positive rate using 5-minute-long window of preictal via double-cross validation. This approach is advantageous, for it can help an implantable device for seizure prediction consume less power by real-time analysis based on extraction of linear features and by offline optimization, which may be computationally intensive and by real-time analysis.",2009,0, 3709,Focal artifact removal from ongoing EEG a hybrid approach based on spatially-constrained ICA and wavelet de-noising,"Detecting artifacts produced in electroencephalographic (EEG) data by muscle activity, eye blinks and electrical noise, etc., is an important problem in EEG signal processing research. These artifacts must be corrected before further analysis because it renders subsequent analysis very error-prone. One solution is to reject the data segment if artifact is present during the observation interval, however, the rejected data segment could contain important information masked by the artifact. It has already been demonstrated that independent component analysis (ICA) can be an effective and applicable method for EEG de-noising. The goal of this paper is to propose a framework, based on ICA and wavelet denoising (WD), to improve the pre-processing of EEG signals. In particular we employ the concept of spatially-constrained ICA (SCICA) to extract artifact-only independent components (ICs) from the given EEG data, use WD to remove any brain activity from extracted artifacts, and finally project back the artifacts to be subtracted from EEG signals to get clean EEG data. The main advantage of the proposed approach is faster computation, as all ICs are not identified in the usual manner due to the square mixing assumption. Simulation results demonstrate the effectiveness of the proposed approach in removing focal artifacts that can be well separated by SCICA.",2009,0, 3710,A verification of fault tree for safety integrity level evaluation,"This study focuses on a novel approach which automatically proves the correctness and completeness of fault trees based on a formal model by model checking. This study represents that the model checking technique is useful when validating the correctness of informal safety analysis such as FTA. The benefits of this study are that it provides the probability of formally validating FTA by proving correctness and completeness of the fault trees. In addition to this benefit, it is possible that the CTL technique proves the FTA based SIL.",2009,0, 3711,Detection of tissue folds in whole slide images,"In whole slide imaging (WSI) the quality of scanned images is an interplay between the hardware specifications of the scanning device and the condition of the tissue slide itself. Tissue artifacts such as folds and bubbles have been known to affect the efficiency of a whole slide scanning system in selecting the focus points wherein the presence of the said artifacts have been found to produce blur or unfocused images. 
Thus, for a whole slide scanning device to produce the best image quality, even with the presence of tissue artifacts, information on the location of these artifacts should be known such that they can be avoided in the selection of the focus points. In this paper we introduced an enhancement method to emphasize and detect the location of the tissue folds from whole slide images. Results of the experiments that we conducted on various H&E stained images that were scanned using different scanners show the robustness of the method to detect tissue folds.",2009,0, 3712,Comparative study of two image space noise reduction methods for computed tomography: Bilateral filter and nonlocal means,"Optimal noise control is important for improving image quality and reducing radiation dose in computed tomography. Here we investigated two image space based nonlinear filters for noise reduction: the bilateral filter (BF) and the nonlocal means (NLM) algorithm. Images from both methods were compared against those from a commercially available weighted filtered backprojection (WFBP) method. A standard phantom for quality assurance testing was used to quantitatively compare noise and spatial resolution, as well as low contrast detectability (LCD). Additionally, an image dataset from a patient's abdominal CT exam was used to assess the effectiveness of the filters on full dose and simulated half dose acquisitions. We found that both the BF and NLM methods improve the tradeoff between noise and high contrast spatial resolution with no significant difference in LCD. Results from the patient dataset demonstrated the potential of dose reduction with the denoising methods. Care must be taken when choosing the NLM parameters in order to minimize the generation of artifacts that could possibly compromise diagnostic value.",2009,0, 3713,Mammogram enhancement using alpha weighted quadratic filter,Mammograms are widely used to detect breast cancer in women. The quality of the image may suffer from poor resolution or low contrast due to the limitations of the X-ray hardware systems. Image enhancement is a powerful tool to improve the visual quality of mammograms. This paper introduces a new powerful nonlinear filter called the alpha weighted quadratic filter for mammogram enhancement. The user has the flexibility to design the filter by selecting all of the parameters manually or using an existing quantitative measure to select the optimal enhancement parameters. Computer simulations show that excellent enhancement results can be obtained with no apriori knowledge of the mammogram contents. The filter can also be used for automatic segmentation.,2009,0, 3714,SoundView: An auditory guidance system based on environment understanding for the visually impaired people,"Without visual information, the blind people live in various hardships with shopping, reading, finding objects and etc. Therefore, we developed a portable auditory guide system, called SoundView, for visually impaired people. This prototype system consists of a mini-CCD camera, a digital signal processing unit and an earphone, working with built-in customizable auditory coding algorithms. Employing environment understanding techniques, SoundView processes the images from a camera and detects objects tagged with barcodes. The recognized objects in the environment are then encoded into stereo speech signals for the blind though an earphone. 
The user would be able to recognize the type, motion state and location of the interested objects with the help of SoundView. Compared with other visual assistant techniques, SoundView is object-oriented and has the advantages of cheap cost, smaller size, light weight, low power consumption and easy customization.",2009,0, 3715,Use of threat image projection (TIP) to enhance security performance,"Threat Image Projection (TIP) is a software system that is used at airports to project images of threat items amongst the passenger baggage being screened by X-ray. The use of TIP is becoming more widespread and is increasingly being included as part of security regulation. This is due to its purported benefits of improved attention and vigilance, and increased exposure to threat items that are linked to improvements in threat detection performance. Further, the data collected by the TIP system can be used to assess individual performance, provide feedback to screeners, and tailor training to specific performance weaknesses; which can generate further performance improvements. However, TIP will only be successful in enhancing security performance if it is used and managed effectively. In this paper the key areas of effective TIP use and management that enable security performance to be enhanced are highlighted. These include the optimisation of TIP settings, such as the TIP to bag ratio, and image library management. Appropriate setting of these components can lead to improved performance as a facet result of increasing exposure to a suitable range of threat images. The key elements of TIP training are highlighted including the importance of communicating TIP related information and the role of the supervisor in ensuring TIP is used appropriately. Finally, the use of TIP data are examined, including the effective use of TIP for performance assessment and screener feedback and in defining training. The provision of feedback regarding TIP scores has been shown to enhance performance in excess of that achieved by using TIP in isolation. To date, the vast majority of TIP research has been conducted in relation to the screening of carry-on baggage. In the final part of this presentation the use of TIP to enhance performance in other areas such as Hold Baggage Screening (HBS) and Cargo are considered. HBS TIP is associated with different challenges due to its alternative - method of operation (to present complete images of a bag and threat item) which imposes demands for the operational set-up and the construction of the image library. The use of TIP in Cargo is associated with a different set of challenges as a result of the diverse nature of items scanned and the screening environment. However, in both these domains, the use of TIP has been associated with the realisation of benefits in line with those achieved for carry-on baggage screening. Through understanding differences in the context in which TIP is used it is possible to understand the differing requirements for its use and management that will enable the benefits of TIP to be realised, enhancing security performance across locations and screening contexts.",2009,0, 3716,Fault-tolerant communication over Micronmesh NOC with Micron Message-Passing protocol,"In the future multi-processor system-on-chip (MPSoC) platforms are becoming more vulnerable to transient and intermittent faults due to physical level problems of VLSI technologies. 
This sets new requirements to the fault-tolerance of the messaging layer software which applications use for communication, because the faults make the operation of the Network-on-Chip (NoC) hardware of the MPSoCs less reliable. This paper presents Micron Message-Passing (MMP) Protocol which is a light-weight protocol designed for improving the fault tolerance of the messaging layer of the MPSoCs where Micronmesh NoC is used. Its fault-tolerance is implemented by watchdog timers and cyclic redundancy checks (CRC) which are usable for detecting packet losses, communication deadlocks, and bit errors. These three functionalities are necessary, because without them the software executed on the MPSoCs is not able to detect the faults and recover from them. This paper presents also how the MMP Protocol can be used for implementing applications which are able to recover from communication faults.",2009,0, 3717,Invited Talk: Rainbow: Engineering Support for Self-Healing Systems,"An increasingly important requirement of modern software-based systems is continuous operation even in the face of environmental changes, shifting user requirements, and unanticipated faults. One approach to address this requirement is to make systems self-adaptive: when problems are detected the system """"heals"""" itself. In this talk I describe the Rainbow System, which allows engineers to add self-healing capabilities to existing systems. The key ideas behind Rainbow are (a) the use of architectural models; (b) a new language for specifying self-healing strategies; and (c) the ability to customize the self-healing mechanisms to the domain.",2009,0, 3718,Applying Code Coverage Approach to an Infinite Failure Software Reliability Model,"An approach to software reliability modeling based on code coverage is used to derive the Infinite Failure software reliability Model Based on Code Coverage - IFMBC. Our aim was to verify the soundness of the approach under different assumptions. The IFMBC was assessed with test data from a real application, making use of the following structural testing criteria: all-nodes, all-edges, and potential-uses - a data-flow based family of criteria. The IFMBC was shown to be as good as the Geometric Model - GEO, found to be the best traditional time-based model that fits the data. Results from the analysis also show that the IFMBC is as good as the BMBC - Binomial software reliability Model Based on Coverage - a model previously derived using the code coverage approach, indicating it to be effective under different modeling assumptions.",2009,0, 3719,Exception Flows Made Explicit: An Exploratory Study,"Most of the exceptions exert a global design impact as they tend to flow through multiple module interfaces of a software system. Exception handling mechanisms in programming languages were originally proposed to improve the robustness and comprehension of error handling code. These mechanisms are traditionally based on the fundamental assumption that global exception flows should be always implicit. However, it has been empirically found that the implementation of global exception handling in real-life software projects tends to exhibit poor quality. This paper presents an exploratory study to assess the benefits and drawbacks of explicit exception flows (or exception channels), as opposed to implicit exception flows. The experiment design involved 15 participants using three alternative mechanisms for exception handling. 
Our analysis was driven by key indicators of software usability: (i) implementation time, (ii) number of uncaught exceptions, and (iii) number of incorrect answers by the participants.",2009,0, 3720,Finding Defective Software Modules by Means of Data Mining Techniques,"The characterization of defective modules in software engineering remains a challenge. In this work, we use data mining techniques to search for rules that indicate modules with a high probability of being defective. Using datasets from the PROMISE repository 1, we first applied feature selection to work only with those attributes from the datasets capable of predicting defective modules. Then, a genetic algorithm search for rules characterising subgroups with a high probability of being defective. This algorithm overcomes the problem of unbalanced datasets where the number of non-defective samples in the dataset highly outnumbers the defective ones.",2009,0, 3721,Proactive estimation of the video streaming reception quality in WiFi networks using a cross-layer technique,"The quality of service (QoS) in a Wireless Fidelity (WiFi) network can not be guaranteed due to intermittent disconnections, interferences and so forth because they reduce considerably the necessary QoS for multimedia applications. For that reason it is essential a software as the one we present in this paper that evaluates if there are the appropriate conditions to receive the streaming measuring the congestion, radio coverage, throughput, lost and received packets rate, and so on. Our software makes a proactive estimation using a bottom-up cross-layer technique measuring physical level parameters, Round Trip Time (RTT) and getting traffic statistics for the overall WiFi network and for each ongoing Real Time Streaming Protocol/Real Time Protocol (RTSP/RTP) streaming session. Experimental results show a high percentage of good decisions of the software to detect the wireless channel degradation and its recovery.",2009,0, 3722,Throughput and delay analysis of the IEEE 802.15.3 CSMA/CA mechanism considering the suspending events in unsaturated traffic conditions,"Unlike in IEEE 802.11, the CSMA/CA traffic conditions in IEEE 802.15.3 are typically unsaturated. This paper presents an extended analytical model based on Bianchi's model in IEEE 802.11, by taking into account the device suspending events, unsaturated traffic conditions, as well as the effects of error-prone channels. Based on this model we re-derive a closed form expression of the average service time. The accuracy of the model is validated through extensive simulations. The analysis is also instructional for IEEE 802.11 networks under limited load.",2009,0, 3723,A Flexible Data Warehousing Approach for One-Stop Querying on Heterogeneous Personal Information,"This paper presents a flexible data warehousing approach which allows one-stop querying on entire personal information residing at heterogeneous data sources. Different from previous work that requires expensive and error-prone semantic integration, our approach aims to construct personal dataspaces for users. In our approach, personal data are uniformly represented in a single data model proposed in this paper, and stored in a data warehousing system based on a storage model corresponding to the data model. 
Then, users are enabled to easily retrieve all their personal information by using keywords or a semi-structured query language.",2009,0, 3724,Specialized Embedded DBMS: Cell Based Approach,"Data management is fundamental to data-centric embedded systems with high resources scarcity and heterogeneity. Existing data management systems are very complex and provide a multitude of functionality. Due to complexity and their monolithic architecture, tailoring these data management systems for data-centric embedded systems is tedious, cost-intensive, and error-prone. In order to cope with complexity of data management in such systems, we propose a different approach to DBMS architecture, called Cellular DBMS, that is inspired by biological systems. Cellular DBMS is a compound of multiple simpler DBMS, called DBMS Cells, that typically provide differing functionality(e.g., persistence storage, indexes, transactions, etc.). We illustrate how the software product line approach is useful to generate different individual DBMS cells from a commonest of features and how generated atomic DBMS cells can be used together for data management on data-centric embedded systems.",2009,0, 3725,Mutation Sensitivity Testing,"Computational scientists often encounter code-testing challenges not typically faced by software engineers who develop testing techniques. Mutation sensitivity testing addresses these challenges, showing that a few well-designed tests can detect many code faults and that reducing error tolerances is often more effective than running additional tests.",2009,0, 3726,Network configuration in a box: towards end-to-end verification of network reachability and security,"Recent studies show that configurations of network access control is one of the most complex and error prone network management tasks. For this reason, network misconfiguration becomes the main source for network unreachablility and vulnerability problems. In this paper, we present a novel approach that models the global end-to-end behavior of access control configurations of the entire network including routers, IPSec, firewalls, and NAT for unicast and multicast packets. Our model represents the network as a state machine where the packet header and location determines the state. The transitions in this model are determined by packet header information, packet location, and policy semantics for the devices being modeled. We encode the semantics of access control policies with Boolean functions using binary decision diagrams (BDDs). We then use computation tree logic (CTL) and symbolic model checking to investigate all future and past states of this packet in the network and verify network reachability and security requirements. Thus, our contributions in this work is the global encoding for network configurations that allows for general reachability and security property-based verification using CTL model checking. We have implemented our approach in a tool called ConfigChecker. While evaluating ConfigChecker, we modeled and verified network configurations with thousands of devices and millions of configuration rules, thus demonstrating the scalability of this approach.",2009,0, 3727,A location-free Prediction-based Sleep Scheduling protocol for object tracking in sensor networks,"Sleep scheduling protocols are widely used in wireless sensor networks for saving energy in sensor nodes. 
However, without considering the special requirements of object tracking, conventional sleep scheduling protocols may lead to intolerable degradation of tracking qualities when they are used in object tracking applications. To handle this problem, sleep scheduling protocols tailored for object tracking have been proposed recently. For saving energy while maintaining satisfactory tracking qualities, these protocols proactively awaken sensors according to the prediction of objects' movement. Such sleep scheduling protocols are called prediction-based sleep scheduling protocols. Most existing prediction-based sleep scheduling protocols require sensor nodes to know their own locations, which may not always be available. In this paper we propose a Location-free Prediction-based Sleep Scheduling protocol (LPSS) for object tracking in sensor networks. LPSS guarantees the coverage level, an important tracking quality in most applications, which is defined as the number of sensors simultaneously detecting the object. In LPSS, when a sensor detects the object, it will emit a signal, namely the sensing stimulus. Sensors decide whether to wake up based only on the received sensing stimulus, the prediction models and the required coverage level, without the requirement of location information. We implement LPSS with the two most popular prediction models: the Circle-based and the Probability-based prediction models. Experimental results show that LPSS not only provides qualified coverage levels, but also saves about 40% to 70% energy compared with existing location-free protocols. Moreover, the energy cost of LPSS is close to the ideal approach using accurate location information in terms of the number of awakened nodes.",2009,0, 3728,Temporal Exception Prediction for Loops in Resource Constrained Concurrent Workflows,"Workflow management systems (WfMS) are widely used for improving business processes and providing better quality of services. However, rapid changes in the business environment can cause exceptions in WfMS, leading to deadline violations and other consequences. In these circumstances, one of the crucial tasks for a workflow administrator is to detect any potential exceptions as early as possible so that corrective measures can be taken. However, such detections can be extremely complex since a workflow process may consist of various control flow patterns and each pattern has its own way of influencing the temporal properties of a task. In this paper, we describe a novel approach for predicting temporal exceptions for loops in concurrent workflows which are required to share limited identical resources. Our approach is divided into two phases: a preparation phase and a prediction phase. In the preparation phase, temporal and resource constraints are calculated for each task within the workflow schema. In the prediction phase, an algorithm is used to predict potential deadline violations by taking into account constraints calculated from the preparation phase.",2009,0, 3729,RFID middleware as a service Enabling small and medium-sized enterprises to participate in the EPC network,"RFID technology has been adopted in the market. It is used in large enterprises for different approaches with great success. The rise of RFID technology is still at its beginning and there is a lot of unused potential, especially concerning the so-called long tail. To achieve the full value of supply chain networks, SMEs have to introduce RFID technology on their side, too.
The idea behind Software as a Service has already proved in different scenarios that it is able to reach the long tail. In this contribution we will present such a SaaS solution for an on-demand RFID middleware, especially for SMEs.",2009,0, 3730,Risk evaluation process modeling in software project investment based on Bayesian networks,"A risk evaluation model in software project investment based on Bayesian networks (BNs) is presented in this paper. The BNs parameter learning is applied to the modeling process based on sample data set, so that the BNs is more accordant with the project feature in the software project investment phase. In addition, the validity of the parameter learning is validated with algorithm precision and convergence. Practice proves that the risk evaluation model can provide the accurate risk information for decision-makers in the process of software project investment.",2009,0, 3731,Measuring the quality of interfaces using source code entropy,"Global enterprises face an increasingly high complexity of software systems. Although size and complexity are two different aspects of a software system, traditionally, various size metrics have been established to indicate their complexity. In fact, many developed software metrics correlate with the number of lines of code. Moreover, a combination of multiple metrics collected on bottom layers into one comprehensible and meaningful indicator for an entire system is not a trivial task. This paper proposes a novel interpretation of an entropy-based metric to assess the design of a software system in terms of interface quality and understandability. The proposed metric is independent of the system size and delivers one single value eliminating the unnecessary aggregation step. Further, an industrial case study has been conducted to illustrate the usefulness of this metric.",2009,0, 3732,Comparing methodologies for the transition between software requirements and architectures,"The transition from software requirements to software architectures has consistently been one of the main challenges during software development. Various methodologies that aim at helping with this transition have been proposed. However, no systematic approach for assessing such methodologies exists. Also, there is little consensus on the technical and non-technical issues that a transition methodology should address. Hence, we present a method for assessing and comparing methodologies for the transition from requirements to architectures. This method also helps validate newly proposed transition methodologies. The objective of such validations is to assess whether or not a methodology has the potential to lead to better architectures. For that reason, this paper discusses a set of commonly known but previously only informally described criteria for transition methodologies and organizes them into a schema. In the paper we also use our method to characterize a set of 14 current transition methodologies. This is done to illustrate the usefulness of our approach for comparing transition methodologies as well as for validating newly proposed methodologies. 
Characterizing these 14 methodologies also gives an overview of current transition methodologies and research.",2009,0, 3733,Performance evaluation of service-oriented architecture through stochastic Petri nets,"The service-oriented architecture (SOA) has become an unifying technical architecture that can be embodied with Web service technologies, in which the Web service is thought as a fundamental building block. This paper proposes a simulation modeling approach based on stochastic Petri nets to estimate the performance of SOA applications. Using the proposed model it is possible to predict resource consumption and service levels degradation in scenarios with different compositions and workloads. A case study was conducted to validate the approach and to compare the results against an existing analytical modeling approach.",2009,0, 3734,Application of a seeded hybrid genetic algorithm for user interface design,"Studies have established that computer user interface (UI) design is a primary contributor to people's experiences with modern technology; however, current UI development remains more art than quantifiable science. In this paper, we study the use of search algorithms to predict optimal display layouts early in system design. This has the potential to greatly reduce the cost and improve the quality of UI development. Specifically, we demonstrate a hybrid genetic algorithm and pattern search optimization process that makes use of human performance modeling to quantify known design principles. We show how this approach can be tailored by capturing contextual factors in order to properly seed and tune the genetic algorithm. Finally, we demonstrate the ability of this process to discover superior layouts as compared to manual qualitative methods.",2009,0, 3735,Discovering the best web service: A neural network-based solution,"Differentiating between Web services that share similar functionalities is becoming a major challenge into the discovery of Web services. In this paper we propose a framework for enabling the efficient discovery of Web services using artificial neural networks (ANN) best known for their generalization capabilities. The core of this framework is applying a novel neural network model to Web services to determine suitable Web services based on the notion of the quality of Web service (QWS). The main concept of QWS is to assess a Web service's behaviour and ability to deliver the requested functionality. Through the aggregation of QWS for Web services, the neural network is capable of identifying those services that belong to a variety of class objects. The overall performance of the proposed method shows a 95% success rate for discovering Web services of interest. To test the robustness and effectiveness of the neural network algorithm, some of the QWS features were excluded from the training set and results showed a significant impact in the overall performance of the system. Hence, discovering Web services through a wide selection of quality attributes can considerably be influenced with the selection of QWS features used to provide an overall assessment of Web services.",2009,0, 3736,Robustness of modular multi-layered software in the automotive domain: a wrapping-based approach,"New automotive modular multi-layered software organization particularly favors use and interoperability of components-off-the-shelf. However, the integration of software components is error-prone, if their coordination is not rigorously controlled. 
The risk of failure is increased by the possibility of multiplexing software components with heterogeneous levels of criticality and observability. Most dependability mechanisms today address errors locally within each component or report them to further diagnosis services. Instead, we consider a global wrapping-based approach to deal with multilevel properties to be checked on the complete multilayered system at runtime. In this paper, we introduce a framework to design robust software, from analysis to implementation issues, and we illustrate the methodology on a simple case study.",2009,0, 3737,Leveraging determinism in industrial control systems for advanced anomaly detection and reliable security configuration,"Industrial automation and control systems (IACS) today are often based on common IT technologies. However, they often lack security mechanisms, and those available in enterprise IT environments are often not suitable for IACS. Other mechanisms require significant manual maintenance, which is error prone. In this paper we present an approach that leverages the unique characteristics of IACS, in particular their deterministic behavior and often available formal system description, to reliably detect anomalies and reproducibly generate configurations for security mechanisms such as firewalls. In particular, we extend common IDS technology to also detect an IACS-specific anomaly: the missing of required traffic.",2009,0, 3738,Automated software diversity for hardware fault detection,"Software in dependable systems must be able to tolerate or detect faults in the underlying infrastructure, such as the hardware. This paper presents a cost-efficient automated method by which register faults in the microprocessor can be detected during execution. This is done by using compiler options to generate diverse binaries. The efficacy of this approach has been analyzed with the help of a CPU emulator, which was modified exactly for this purpose. The promising results show that, by using this approach, it is possible to automatically detect the vast majority of the injected register faults. In our simulations, two diverse versions have - despite experiencing the same fault during execution - never delivered the same incorrect result, so we could detect all injected faults.",2009,0, 3739,"Development of hardware and software for three-phase power quality disturbances detection, classification and diagnosis using Kalman Filter theory","The aim of this work is the development of a three-phase power quality disturbance detection, classification and diagnosis tool. The tool senses the electrical grid, and when a disturbance is detected, the voltage signals are acquired and analyzed. The result of the analysis is the classification of the disturbance and the diagnosis of its probable causes. The detection is done using a Kalman filter, while the classification and diagnosis are done using wavelets and fast Fourier transforms. The implementation involves hardware and software. The hardware is composed of voltage sensors, a signal conditioning circuit, a DSP320C6713 DSP board and an acquisition board. The software is responsible for the classification and diagnosis. Three cases of typical disturbances that affect electrical systems are presented.
The results are consistent showing the feasibility of the proposed tool.",2009,0, 3740,On line sensor planning for tracking in camera networks,"Sensor planning chiefly applies to optimizing surveillance tasks, such as persistent tracking by designing and utilizing camera placement strategies. Against substituting new optimized camera networks for those still in usage, online sensor planning hereby involves the design of vision algorithms that not only select cameras which yield the best results, but also improve the quality with which surveillance tasks are performed. In previous literatures about coverage problem in sensor planning, targets (e.g., persons) are justly simplified as a 2-D point. However in actual application scene, cameras are always heterogeneous such as fixed with different height and action radii, and the monitored objects has 3-D features (e.g., height). This paper presents a new sensor planning formulation addressing the efficiency enhancement of active visual tracking in camera networks that track and detect people traversing a region. The numerical results show that this online sensor planning approach can improve the active tracking performance of the system.",2009,0, 3741,Evaluation of sophisticated hardware architectures for safety applications,"Standards and guidelines give advice on the development of qualitative and quantitative criteria to evaluate safety related systems. Success of many modern applications is highly dependent on the correct functioning of complex computer based systems. In some cases, failures in these systems may cause serious consequences in terms of loss of human life. Systems in which failure could endanger human life are termed safety-critical. The SIS (Safety Instrumented System) should be designed to meet the required safety integrity level as defined in the safety requirement specification (safety requirement allocation). Moreover, the SIS design should be performed in a way that minimizes the potential for common mode or common cause failures (CCF). The purpose of this paper is to describe the calculation of MTTF-values for a 2004-architecture with the help of Markov-models. In the paper equations are indicated for PFD for normal and common-cause-failures. The results are high availability and a high reliability.",2009,0, 3742,Service Redundancy Strategies in Service-Oriented Architectures,"Redundancy can improve the availability of components in service-oriented systems. However, predicting and quantifying the effects of different redundancy strategies can be a complex task. In our work, we have taken an architecture based approach to the modeling, predicting and monitoring of properties in distributed software systems.This paper proposes redundancy strategies for service-oriented systems and models services with their associated protocols. We derive formal models from these high-level descriptions that are embedded in our fault-tolerance testing framework.We describe the general framework of our approach, develop two service redundancy strategies and report about the preliminary evaluation results in measuring performance and availability of such services. 
While the assumptions for the chosen case study are limiting, our evaluation is promising and encourages the extension of our testing framework to cater for more complex, hybrid, fault-tolerance strategies and architectural compositions.",2009,0, 3743,What Software Repositories Should Be Mined for Defect Predictors?,"The information about which modules in a software system's future version are potentially defective is a valuable aid for quality managers and testers. Defect prediction promises to indicate these defect-prone modules. Constructing effective defect prediction models in an industrial setting involves the decision from what data source the defect predictors should be derived. In this paper we compare defect prediction results based on three different data sources of a large industrial software system to answer the question what repositories to mine. In addition, we investigate whether a combination of different data sources improves the prediction results. The findings indicate that predictors derived from static code and design analysis provide slightly yet still significant better results than predictors derived from version control, while a combination of all data sources showed no further improvement.",2009,0, 3744,Synthetic Metrics for Evaluating Runtime Quality of Software Architectures with Complex Tradeoffs,"Runtime quality of software, such as availability and throughput, depends on architectural factors and execution environment characteristics (e.g. CPU speed, network latency). Although the specific properties of the underlying execution environment are unknown at design time, the software architecture can be used to assess the inherent impact of the adopted design decisions on runtime quality. However, the design decisions that arise in complex software architectures exhibit non trivial interdependences. This work introduces an approach that discovers the most influential factors, by exploiting the correlation structure of the analyzed metrics via factor analysis of simulation data. A synthetic performance metric is constructed for each group of correlated metrics. The variability of these metrics summarizes the combined factor effects hence it is easier to assess the impact of the analyzed architecture decisions on the runtime quality. The approach is applied on experimental results obtained with the ACID Sim Tools framework for simulating transaction processing architectures.",2009,0, 3745,Foundations for a Model-Driven Integration of Business Services in a Safety-Critical Application Domain,"Current architectures for systems integration provide means for forming agile business processes by manually or dynamically configuring the components. However, a major challenge in the safety-critical air traffic management (ATM) domain is to interconnect business services taking into account service level agreements regarding the underlying network infrastructures. In such domains, manual configuration is forbidden due to the resulting error-prone and time-consuming tasks, while dynamic configuration is not allowed due to nondeterministic decision making. In this paper we propose a model-driven system configuration approach (MDSC), which explicitly models the components of the network infrastructures and their capabilities to automatically generate a logical network configuration. 
Based on an industry application example, we show the feasibility of the proposed integration platform in the ATM domain and discuss the advantages and limitations.",2009,0, 3746,Model-Based System Testing Using Visual Contracts,"In system testing the system under test (SUT) is tested against high-level requirements which are captured at early phases of the development process. Logical test cases developed from these requirements must be translated to executable test cases by augmenting them with implementation details. If manually done these activities are error-prone and tedious. In this paper we introduce a model-based approach for system testing where we generate first logical test cases from use case diagrams which are partially formalized by visual contracts, and then we transform these to executable test cases using model transformation. We derive model transformation rules from the design decisions of developers.",2009,0, 3747,Fault Analysis in OSS Based on Program Slicing Metrics,"In this paper, we investigate the barcode OSS using two of Weiser's original slice-based metrics (tightness and overlap) as a basis, complemented with fault data extracted from multiple versions of the same system. We compared the values of the metrics in functions with at least one reported fault with fault-free modules to determine a) whether significant differences in the two metrics would be observed and b) whether those metrics might allow prediction of faulty functions. Results revealed some interesting traits of the tightness metric and, in particular, how low values of that metric seemed to indicate fault-prone functions. A significant difference was found between the tightness metric values for faulty functions when compared to fault-free functions suggesting that tightness is the `better' of the two metrics in this sense. The overlap metric seemed less sensitive to differences between the two types of function.",2009,0, 3748,A Framework for the Balanced Optimization of Quality Assurance Strategies Focusing on Small and Medium Sized Enterprises,"The Quality Improvement Paradigm (QIP) offers a general framework for systematically improving an organization's development processes in a continuous manner. In the context of the LifeCycleQM project, the general QIP framework was concretized for a specific application area, namely, the balanced improvement of quality assurance (QA) strategies, i.e., a set of systematically applied QA activities. Especially with respect to small and medium-sized enterprises, the encompassing QIP framework presents limited guidance for easy and concrete application. Therefore, individual activities within the QIP framework were refined focusing on QA strategies, and proven measurement procedures such as the defect flow model and knowledge from the area of process improvement (e.g., with regard to individual QA procedures) was integrated for reuse in a practice-oriented manner. The feasibility of the developed approach was initially explored by its application at IBS AG, a medium-sized enterprise, where improvement goals were defined, a corresponding measurement program was established, improvement potential was identified, and concrete improvement suggestions for the QA strategy were derived, assessed, and implemented.",2009,0, 3749,A Multi-Tier Provenance Model for Global Climate Research,"Global climate researchers rely upon many forms of sensor data and analytical methods to help profile subtle changes in climate conditions. The U.S. 
Department of Energy Atmospheric Radiation Measurement (ARM) program provides researchers with curated Value Added Products (VAPs) resulting from continuous instrumentation streams, data fusion, and analytical profiling. The ARM operational staff and software development teams (data producers) rely upon a number of techniques to ensure strict quality control (QC) and quality assurance (QA) standards are maintained. Climate researchers (data consumers) are highly interested in obtaining as much provenance evidence as possible to establish data trustworthiness. Currently, all the evidence is not easily attainable or identifiable without significant effort to extract and piece together information from configuration files, log files, codes, or status information on the ARM website. Our objective is to identify a provenance model that serves the needs of both the VAP producers and consumers. This paper shares our initial results - a comprehensive multi-tier provenance model. We describe how both ARM operations staff and the climate research community can greatly benefit from this approach to more effectively assess and quantify the data's historical record.",2009,0, 3750,"RAMS Analysis of a Bio-inspired Traffic Data Sensor (""""Smart Eye"""")","The Austrian Research Centers have developed a compact low-power embedded vision system """"Smart Eye TDS"""", capable of detecting, counting and measuring the velocity of passing vehicles simultaneously on up to four lanes of a motorway. The system is based on an entirely new bio-inspired wide dynamic ""silicon retina"" optical sensor. Each of the 128 x 128 pixels operates autonomously and delivers asynchronous events representing relative changes in illumination with low latency, high temporal resolution and independence of scene illumination. The resulting data rate is significantly lower and the reaction significantly faster than for conventional vision systems. In ADOSE, an FP7 project started in 2008 (see acknowledgment at the end of the paper), the sensor will be tested on-board for pre-crash warning and pedestrian protection systems. For safety-related control applications, it is evident that dependability issues are important. Therefore a RAMS analysis was performed with the goal of improving the quality of this new traffic data sensor technology, in particular with respect to reliability and availability. This paper describes the methods used and the results found by applying a RAMS analysis to this specific case of a vision system.",2009,0, 3751,A Hardware-Scheduler for Fault Detection in RTOS-Based Embedded Systems,"Nowadays, Real-Time Operating Systems (RTOSs) are often adopted in order to simplify the design of safety-critical applications. However, real-time embedded systems are sensitive to transient faults that can affect the system, causing scheduling dysfunctions and consequently changing the correct system behavior. In this context, we propose a new hardware-based approach able to detect faults that change the tasks' execution time and/or the tasks' execution flow in embedded systems based on an RTOS. To demonstrate the effectiveness and benefits of using the proposed approach, we implemented a hardware prototype named Hardware-Scheduler (Hw-S) that provides real-time monitoring of the Plasma Microprocessor's RTOS in order to detect the above-mentioned types of faults.
The Hw-S has been evaluated in terms of the introduced area overhead and fault detection capability.",2009,0, 3752,Simulation of Target Range Measurement Process by Passive Optoelectronic Rangefinder,"Active rangefinders used for measuring longer distances to objects (targets), e.g. pulsed laser rangefinders, emit radiant energy, which conflicts with hygienic restrictions in various applications. When applied in the security and military area there is a serious defect: the irradiation can be detected by the target. Passive optoelectronic rangefinders (POERF) can fully eliminate the above-mentioned defects. The development started initially in a department of the Military Academy in Brno (since 2004 the University of Defence) in the year 2001 in cooperation with the company OPROX, a.s., Brno. The POERF development resulted in a special tool able to test the individual algorithms required for target range measurement. It is the Test POERF simulation program, which is continuously developed by the authors of this contribution. Its 3rd version has just been finalized. This contribution deals with the simulation software and covers short comments on the results of the simulation experiments.",2009,0, 3753,Improvement of a Worker's Motion Trace System Using a Terrestrial Magnetism Sensor,"Guaranteeing the quality of products is very important for company activities, so many companies consider guaranteeing the quality of their products essential. For the quality of industrial products, not only the designs, parts and materials are important but also the production process itself. In the assembly process, this means it is important that production work is done according to the correct procedure. Generally, most assembly processes have steps to confirm the results and working quality of former steps. However, there are some cases in which it is impossible to confirm the quality of a former step's work. For example, in a task to attach and fix a part using several screws, there is a certain order in which to tighten the screws to guarantee the accuracy of attaching the part. However, if this order is not obeyed, the violation cannot be detected by appearance, since the screws are sufficiently tightened and the part is fixed. Therefore, we have been developing a worker's motion trace system using terrestrial magnetism sensors and acceleration sensors to confirm whether the worker's motion accords with the correct assembly procedure. Up to now, our motion trace system cannot judge whether a worker's motion is correct or not with 100% accuracy. So we tried to improve the method to judge the worker's motion. In this paper, we describe this new method based on LOF, and its evaluation.",2009,0, 3754,Instruction Precomputation for Fault Detection,"Fault tolerance (FT) is becoming increasingly important in computing systems. This work proposes and evaluates the instruction precomputation technique to detect hardware faults. Applications are profiled off-line, and the most frequent instruction instances with their operands and results are loaded into the precomputation table when executing. The precomputation-based error detection technique is used in conjunction with another method that duplicates all instructions and compares the results. In the precomputation-enabled version, whenever possible, the instruction compares its result with a precomputed value, rather than executing twice. Another precomputation-based scheme does not execute the precomputed instructions at all, assuming that precomputation provides sufficient reliability.
Precomputation improves the fault coverage (including permanent and some other faults) and performance of the duplication method. The proposed method is compared to an instruction memoization-based technique. The performance improvements of the precomputation- and memoization-based schemes are comparable, while precomputation has a better long-lasting fault coverage and is considerably cheaper.",2009,0, 3755,"Predicting Groupware Use from the Perspectives of Workflow, Information and Coordination","Although groupware design aims to improve organizational effectiveness in cooperative work, unintended consequences often persist and even cause failure in the implementation. This study explores groupware implementation in computer-supported cooperative work, and finds that those unintended consequences could be explained in terms of workflow, information and coordination in prior cases. This study adopts workflow routinization, information integration and coordination fit to assess the outcome quality of groupware use in the workplace. Users are categorized into two groups based on their self-assessment of groupware use in cooperative work. Empirical findings from a survey of 200 experienced groupware users show that the group with high outcome quality of groupware use had stronger workflow routinization, information integration and coordination fit in cooperative work than the other group with low outcome quality. Limitations of this study are discussed accordingly.",2009,0, 3756,Full coverage location of logic resource faults in A SOC co-verification technology based FPGA functional test environment,"Full coverage location of logic resource faults is vital for FPGA design and fabrication, rather than only detecting whether there are faults or not. Taking advantage of the flexibility and observability of software in conjunction with the high-speed simulation of hardware, a SOC co-verification technology based in-house FPGA functional test environment, embedded with an in-house computerized tool, ConPlacement, can locate logic resources automatically, exhaustively and repeatedly. The approach to implementing full coverage location of configurable logic block (CLB) faults by the FPGA functional test environment is presented in the paper. Experimental results on the XC4010E demonstrate that full coverage location of logic resource faults as well as multi-fault positioning can be realized.",2009,0, 3757,Safety criteria and development methodology for the safety critical railway software,"The main part of a system is operated by software. Moreover, safety-critical systems such as railways, airplanes, and nuclear power plants also rely on software. Software can perform more varied and highly complex functions efficiently because it can be flexibly designed and implemented. But this flexible design makes it difficult to predict software failures. We need to show that safety-critical railway software is developed to ensure safety. This paper suggests safety criteria and a software development methodology to enhance safety for safety-critical railway systems.",2009,0, 3758,ADEM: Automating deployment and management of application software on the Open Science Grid,"In grid environments, the deployment and management of application software presents a major practical challenge for end users. Performing these tasks manually is error-prone and not scalable to large grids.
In this work, we propose an automation tool, ADEM, for grid application software deployment and management, and demonstrate and evaluate the tool on the Open Science Grid. ADEM uses Globus for basic grid services, and integrates the grid software installer Pacman. It supports both centralized ""prebuild"" and on-site ""dynamic-build"" approaches to software compilation, using the NMI Build and Test system to perform central prebuilds for specific target platforms. ADEM's parallel workflow automatically determines available grid sites and their platform ""signatures"", checks for and integrates dependencies, and performs software build, installation, and testing. ADEM's tracking log of build and installation activities is helpful for troubleshooting potential exceptions. Experimental results on the Open Science Grid show that ADEM is easy to use and more productive for users than manual operation.",2009,0, 3759,The ARGOS project,"Radiation, such as alpha particles and cosmic rays, can cause transient faults in electronic systems. Such faults cause errors called single-event upsets (SEUs). SEUs are a major source of errors in electronics used in space applications. There is also a growing concern about SEUs at ground level for deep submicron technologies. Radiation hardening is an effective yet costly solution to this problem. Commercial off-the-shelf (COTS) components have been considered as a low-cost alternative to radiation-hardened parts. In the ARGOS project, these two approaches were compared in an actual space experiment. We assessed the effectiveness of software-implemented hardware fault tolerance (SIHFT) techniques in enhancing the reliability of COTS.",2009,0, 3760,Assessing combinatorial interaction strategy for reverse engineering of combinational circuits,"T-way test data generators play an immensely important role for both hardware and software configuration testing. Earlier work concludes that a t-way test data generator can achieve 100% coverage without having to consider more than 6-way interactions. In this paper, we investigate whether or not such a conclusion is applicable to reverse engineering of combinational circuits. In this case, we reverse engineer a faulty commercial eight-segment display controller using our t-way test data generator in order to redesign the replacement unit. We believe that our application of t-way generators for circuit identification is novel. The results demonstrate the need for more than 6-way parameter interactions as well as suggest the effectiveness of cumulative test data for reverse engineering applications.",2009,0, 3761,Identifying Fragments to be Extracted from Long Methods,"Long and complex methods are hard to read or maintain, and thus are usually treated as a bad smell, known as Long Method. On the contrary, short and well-named methods are much easier to read, maintain, and extend. In order to divide long methods into short ones, the refactoring Extract Method was proposed and has been widely used. However, extracting methods manually is time-consuming and error-prone. Though existing refactoring tools can automatically extract a selected fragment from its enclosing method, which fragment within a long method should be extracted has to be determined manually. In order to facilitate the decision-making, we propose an approach to recommend fragments within long methods for extraction. The approach is implemented as a prototype, called AutoMeD. With the tool, we evaluate the approach on a nontrivial open source project.
The evaluation results suggest that refactoring cost of long methods can be reduced by nearly 40%. The main contribution of this paper is an approach to recommending fragments within long methods to be extracted, as well as an initial evaluation of the approach.",2009,0, 3762,Improve Analogy-Based Software Effort Estimation Using Principal Components Analysis and Correlation Weighting,"Software development cost overruns often induce project managers to cut down manpower cost at the expense of software quality. Accurate effort estimation is beneficial to the prevention of cost overruns. Analogy-based effort estimation predicts the effort of a new project by using the information of its similar historical projects, where the similarity is measured via Euclidean distance. To calculate the Euclidean distance, traditional analogy-based effort estimation methods usually adopt the original project features and assign uniform weights to them. However, it would lead to inappropriate similarity measure and result in inaccurate effort estimate if the original features are interdependent or have unequal impacts on the project effort. In this paper, we propose to use principal components analysis (PCA) to extract independent features, and then use Pearson correlation coefficients between the extracted features and the project effort as the weights for Euclidean distance calculation in similarity measure. Extensive experiments were further conducted on three benchmark datasets: COCOMO, Desharnais, and NASA. The experimental results show that our approach significantly improves prediction accuracy and reliability over the traditional method, either by using correlation weighting alone or by using PCA combined with correlation weighting. The comparison of our approach with other approaches reported in literature also suggests that our approach is competitive.",2009,0, 3763,An Effective Path Selection Strategy for Mutation Testing,"Mutation testing has been identified as one of the most effective techniques, in detecting faults. However, because of the large number of test elements that it introduces, it is regarded as rather expensive for practical use. Therefore, there is a need for testing strategies that will alleviate this drawback by selecting effective test data that will make the technique more practical. Such a strategy based on path selection is reported in this paper. A significant influence on the efficiency associated with path selection strategies is the number of test paths that must be generated in order to achieve a specified level of coverage, and it is determined by the number of paths that are found to be feasible. Specifically, a path selection strategy is proposed that aims at reducing the effects of infeasible paths and conversely developing effective and efficient mutation based tests. The results obtained from applying the method to a set of program units are reported and analysed presenting the flexibility, feasibility and practicality of the proposed approach.",2009,0, 3764,Prioritizing Use Cases to Aid Ordering of Scenarios,"Models are used as the basis for design and testing of software. The unified modeling language (UML) is used to capture and model the requirements of a software system. One of the major requirements of a development process is to detect defects as early as possible. Effective prioritization of scenarios helps in early detection of defects as well maximize effort and utilization of resources. 
Use case diagrams are used to represent the requirements of a software system. In this paper, we propose using data captured from the primitives of the use case diagrams to aid in prioritization of scenarios generated from activity diagrams. Interactions among the primitives in the diagrams are used to guide prioritization. Customer prioritization of use cases is taken as one of the factors. Preliminary results on a case study indicate that the technique is effective in prioritization of test scenarios.",2009,0, 3765,An Automatic Compliance Checking Approach for Software Processes,"A lot of knowledge has been accumulated and documented in the form of process models, standards, best practices, etc. The knowledge tells how a high quality software process should look like, in other words, which constrains should be fulfilled by a software process to assure high quality software products. Compliance checking for a predefined process against proper constrains is helpful to quality assurance. Checking the compliance of an actual performed process against some constrains is also helpful to process improvement. Manual compliance checking is time-consuming and error-prone, especially for large and complex processes. In this paper, we record the process knowledge by means of process pattern. We provide an automatic compliance checking approach for process models against constrains defined in process patterns. Checking results indicate where and which constrains are violated, and therefore suggests the focuses of future process improvement. We have applied this approach in three real projects and the experimental results are also presented.",2009,0, 3766,Hierarchical Understandability Assessment Model for Large-Scale OO System,"Understanding software, especially in large-scale, is an important issue for software modification. In large-scale software systems, modularization provides help for understanding them. But, even if a system has a well-modularized design, the modular design can be deteriorated by system change over time. Therefore it is needed to assess and manage modularization in the view of understandability. However, there are rarely studies of a quality assessment model for understandability in the module-level. In this paper, we propose a hierarchical model to assess understandability of modularization in large-scale object-oriented software. To assess understandability, we define several design properties, which capture the characteristics influencing on understandability, and design metrics based on the properties, which are used to quantitatively assess understandability. We validate our model and its usefulness by applying the model to an open-source software system.",2009,0, 3767,Assuring Information Quality for Designing a Web Service-Based Disaster Management System,"This paper aims to describe development of a web-based spatial data sharing platform for disaster management. Alongside functionality, information quality (IQ) is basic to successful sharing of distributed geo-services. In this article, we present a web service-based architecture in the context of disaster management by introducing an IQ broker module between service clients and providers (servers). The functions of the IQ broker module include assessing IQ about servers, making selection decisions for clients, and negotiating with servers to get IQ agreements. We study a quality of service (QoS) model aimed at measuring the information quality of Web services used by IQ brokers acting as the front-end of servers. 
This methodology is composed of two main components: a QoS model to analyze the information quality of Web services and a fuzzy computing component to generate the linguistic recommendations.",2009,0, 3768,LEC: Log Event Correlation Architecture Based on Continuous Query,"In our rapidly evolving society, every corporation is trying to improve its competitiveness by refactoring and improving some - if not all - of its industrial software infrastructure. This goes from mainframe applications that actually handle the company's profit-generating material, to the internal desktop applications used to manage these application servers. These applications often have extended activity logging features that notify the administrators of every event encountered at runtime. Unfortunately, the standalone nature of the event logging sources renders the correlation of the log event infrastructure prone to ""continuous queries"". This paper describes an approach that ""adapts and employs continuous queries"" for distributed log event correlation with the aim of solving problems that face present log event management systems. It will present the LEC architecture, which analyzes a set of distributed log events that follow a set of correlation rules; the main output is a stream of correlated log events.",2009,0, 3769,An Application of Data Mining to Identify Data Quality Problems,"Modern information systems consist of many distributed computer and database systems. The integration of such distributed data into a single data warehouse system is confronted with the well-known problem of low data quality. In this paper we present an approach that facilitates a dynamic identification of spurious and error-prone data stored in a large data warehouse. The identification of data quality problems is based on data mining techniques, such as clustering, subspace clustering and classification. Furthermore, we present via a case study the applicability of our approach on real data. The experimental results show that our approach efficiently identifies data quality problems.",2009,0, 3770,Web Service QoS Prediction Based on Multi Agents,"With the rapid growth of functionally similar Web services over the Web, quality of service (QoS) has become a significant concern for many researchers. Web service QoS management techniques capable of selecting and monitoring Web services are still not mature enough. Using multi-agents we overcome these shortcomings; hence we propose a Web service QoS prediction architecture capable of predicting the Web service quality level during the Web service selection and monitoring phases. In this paper we applied a double quantization time series forecasting method based on SOM neural networks in order to predict the Web service QoS level.",2009,0, 3771,Automated Model Checking of Stochastic Graph Transformation Systems,"Non-functional requirements like performance and reliability play a prominent role in distributed and dynamic systems. To measure and predict such properties using stochastic formal methods is crucial. At the same time, graph transformation systems are a suitable formalism to formally model distributed and dynamic systems. Already, to address these two issues, Stochastic Graph Transformation Systems (SGTS) have been introduced to model dynamic distributed systems. However, most research so far has concentrated on SGTS as a modeling means without considering the need for suitable analysis tools.
In this paper, we present an approach to verify this kind of graph transformation systems using PRISM (a stochastic model checker). We translate the SGTS to the input language of PRISM and then PRISM performs the model checking and returns the results back to the designers.",2009,0, 3772,Considering Faults in Service-Oriented Architecture: A Graph Transformation-Based Approach,"Nowadays, using Service-Oriented Architectures (SOA) is spreading as a flexible architecture for developing dynamic enterprise systems. Due to the increasing need of high quality services in SOA, it is desirable to consider different Quality of Service (QoS) aspects in this architecture as security, availability, reliability, fault tolerance, etc. In this paper we investigate fault tolerance mechanisms for modeling services in service-oriented architecture. We propose a metamodel (formalized by a type graph) and some graph rules for monitoring services and their communications to detect faults. By defining additional graph rules as reconfiguration mechanisms, service requesters can be dynamically switched to a new service (with similar descriptions). To validate our proposal, we use our previous approach to model checking graph transformation using the Bogor model checker.",2009,0, 3773,A Computer Vision-Based Classification Method for Pearl Quality Assessment,"Pearl's color is an important feature to assess its value, including the hue and its color depth. A method for pearl color classification was investigated in this paper. Computer Vision is used to process the pearl image after transforming it from RGB to HSV color model, which can show the hue and color depth information of pearl. According to the histogram of V (Value) weight, the bright area is extracted by Ostu Segmentation and the average value of H (Hue) and S (saturation) are obtained. Aiming at the standards of hue classification, the artificial neural network method based on RPROP Algorithm is adopted; Aiming at the color depth`s difference, Fuzzy C-means Clustering Algorithm is adopted to classify the average value of S. The proposed method can be used for the first classification according to the surface color of pearl and further classification according to the saturation of pearl in the same color series and realizing the standard classification of pearl quality.",2009,0, 3774,Real Time Optical Network Monitoring and Surveillance System,"Optical diagnosis, performance monitoring, and characterization are essential for achieving high performance and ensuring high quality of services (QoS) of any efficient and reliable optical access network. In this paper, we proposed a practical in-service transmission surveillance scheme for passive optical network (PON). The proposed scheme is designed and simulated by using the OptiSystem CAD program with the system sensitivity - 35 dBm. A real time optical network monitoring software tool named smart access network _ testing, analyzing and database (SANTAD) is introduced for providing remote controlling, centralized monitoring, system analyzing, and fault detection features in large scale network infrastructure management and fault diagnostic. 1625 nm light source is assigned to carry the troubleshooting signal for line status monitoring purposes in the live network system. Three acquisition parameters of optical pulse: distance range, pulse width, and acquisition time, is contributed to obtain the high accuracy in determining the exact failure location. 
The lab prototype of SANTAD was implemented in the proposed scheme for analyzing the network performance. The experimental results showed SANTAD is able to detect any occurrence of fault and address the failure location within 30 seconds. The main advantages of this work are to manage network efficiently, reduce hands on workload, minimize network downtime and rapidly restore failed services when problems are detected and diagnosed.",2009,0, 3775,Development of Smart Drop Restoration Scheme for Customer's Ease in the i-FTTH EPON Network Solution,"This paper addresses a new restoration mechanism in drop region of EPON network by using the optical switch device to increase the efficiency, survivability and reliability of fiber to the home (FTTH) customer access network. The developed device will be installed at the drop section to detect the failure line that occurs in the multi-line drop section of FTTH network downwardly from passive optical splitter to the customer premises (ONU). The drop section restoration mechanism called smart drop restoration scheme (SDRS) will contribute for tree based and bus based architecture. Conventionally, the failure line of FTTH network can be measured using Optical Time-Domain Reflectometer (OTDR) upwardly from customer premises to the central office. The optical switches will be controlled by Access Control System (ACS). If the breakdown is the detected in drop section, ACS will recognize the related access line by the 3% tapped signal that is connected to every access line. The activation signal is then sent to activate the dedicated protection scheme. But if fault is still not restored, the shared protection scheme will be activated. Results from Optisystem software simulation will be presented so as to prove the solution feasibility.",2009,0, 3776,A Semi-supervised Framework for Simultaneous Classification and Regression of Zero-Inflated Time Series Data with Application to Precipitation Prediction,"Time series data with abundant number of zeros are common in many applications, including climate and ecological modeling, disease monitoring, manufacturing defect detection, and traffic accident monitoring. Classical regression models are inappropriate to handle data with such skewed distribution because they tend to underestimate the frequency of zeros and the magnitude of non-zero values in the data. This paper presents a hybrid framework that simultaneously perform classification and regression to accurately predict future values of a zero-inflated time series. A classifier is initially used to determine whether the value at a given time step is zero while a regression model is invoked to estimate its magnitude only if the predicted value has been classified as nonzero. The proposed framework is extended to a semi-supervised learning setting via graph regularization. The effectiveness of the framework is demonstrated via its application to the precipitation prediction problem for climate impact assessment studies.",2009,0, 3777,Online Constrained Pattern Detection over Streams,"Online pattern detection poses a challenge in many data-intensive applications, including network traffic management, trend analysis, intrusion detection, and various intelligent sensor networks. These applications have to be time and space efficient while providing high quality answers. Meanwhile, far less attention has been paid for detecting constrained patterns, that cannot be simply matched because there is no available pattern for prediction. 
This paper presents our research effort in efficient pattern detection with constraint. We propose a new method named Online Pattern Detection with Constraint (OPDC) to detect constrained patterns over evolving data stream, taking into account various user-defined constraints. To ensure that the constrained patterns are representative, we extend regular expression in a simple but powerful way. Our experimental results on real data sets demonstrate the feasibility and effectiveness of the proposed scheme.",2009,0, 3778,A Complexity Reliability Model,"A model of software complexity and reliability is developed. It uses an evolutionary process to transition from one software system to the next, while complexity metrics are used to predict the reliability for each system. Our approach is experimental, using data pertinent to the NASA satellite systems application environment. We do not use sophisticated mathematical models that may have little relevance for the application environment. Rather, we tailor our approach to the software characteristics of the software to yield important defect-related predictors of quality. Systems are tested until the software passes defect presence criteria and is released. Testing criteria are based on defect count, defect density, and testing efficiency predictions exceeding specified thresholds. In addition, another type of testing efficiency - a directed graph representing the complexity of the software and defects embedded in the code - is used to evaluate the efficiency of defect detection in NASA satellite system software. Complexity metrics were found to be good predictors of defects and testing efficiency in this evolutionary process.",2009,0, 3779,Harnessing Web-Based Application Similarities to Aid in Regression Testing,"Web-based applications are growing in complexity and criticality, increasing the need for their precise validation. Regression testing is an established approach for providing information about the quality of an application in the face of recurring updates that dominate the web. We present techniques to address a key challenge of the automated regression testing of web-based applications. Innocuous program evolutions often appear to fail tests and must be manually inspected. We rely on inherent similarities between independent web-based applications to provide fully automated solutions for reducing the number of false positives associated with regression testing such applications, simultaneously focusing on returning all true positives. Our approach predicts which test cases merit human inspection by applying a model derived from regression testing other programs. We are 2.5 to 50 times as accurate as current industrial practice, but require no user annotations.",2009,0, 3780,Approximating Deployment Metrics to Predict Field Defects and Plan Corrective Maintenance Activities,"Corrective maintenance activities are a common cause of schedule delays in software development projects. Organizations frequently fail to properly plan the effort required to fix field defects. This study aims to provide relevant guidance to software development organizations on planning for these corrective maintenance activities by correlating metrics that are available prior to release with parameters of the selected software reliability model that has historically best fit the product's field defect data. Many organizations do not have adequate historical data, especially historical deployment and field usage information. 
The study identifies a set of metrics calculable from available data to approximate these missing predictor categories. Two key metrics estimable prior to release surfaced with potentially useful correlations, (1) the number of periods until the next release and (2) the peak deployment percentage. Finally, these metrics were used in a case study to plan corrective maintenance efforts on current development releases.",2009,0, 3781,Putting It All Together: Using Socio-technical Networks to Predict Failures,"Studies have shown that social factors in development organizations have a dramatic effect on software quality. Separately, program dependency information has also been used successfully to predict which software components are more fault prone. Interestingly, the influence of these two phenomena have only been studied separately. Intuition and practical experience suggests,however, that task assignment (i.e. who worked on which components and how much) and dependency structure (which components have dependencies on others)together interact to influence the quality of the resulting software. We study the influence of combined socio-technical software networks on the fault-proneness of individual software components within a system. The network properties of a software component in this combined network are able to predict if an entity is failure prone with greater accuracy than prior methods which use dependency or contribution information in isolation. We evaluate our approach in different settings by using it on Windows Vista and across six releases of the Eclipse development environment including using models built from one release to predict failure prone components in the next release. We compare this to previous work. In every case, our method performs as well or better and is able to more accurately identify those software components that have more post-release failures, with precision and recall rates as high as 85%.",2009,0, 3782,Looking at Web Security Vulnerabilities from the Programming Language Perspective: A Field Study,"This paper presents a field study on Web security vulnerabilities from the programming language type system perspective. Security patches reported for a set of 11 widely used Web applications written in strongly typed languages (Java, C#, VB.NET) were analyzed in order to understand the fault types that are responsible for the vulnerabilities observed (SQL injection and XSS). The results are analyzed and compared with a similar work on Web applications written using a weakly typed language (PHP). This comparison points out that some of the types of defects that lead to vulnerabilities are programming language independent, while others are strongly related to the language used. Strongly typed languages do reduce the frequency of vulnerabilities, as expected, but there still is a considerable number of vulnerabilities observed in the field. The characterization of those vulnerabilities shows that they are caused by a small number of fault types. This result is relevant to train programmers and code inspectors in the manual detection of such faults, and to improve static code analyzers to automatically detect the most frequent vulnerable program structures found in the field.",2009,0, 3783,Fault Tree Analysis of Software-Controlled Component Systems Based on Second-Order Probabilities,"Software is still mostly regarded as a black box in the development process, and its safety-related quality ensured primarily by process measures. 
For systems whose lion's share of service is delivered by (embedded) software, process-centred methods are seen to be no longer sufficient. Recent safety norms (for example, ISO 26262) thus prescribe the use of safety models for both hardware and software. However, failure rates or probabilities for software are difficult to justify. Only if developers take good design decisions from the outset will they achieve safety goals efficiently. To support safety-oriented navigation of the design space and to bridge the existing gap between qualitative analyses for software and quantitative ones for hardware, we propose a fault-tree-based approach to the safety analysis of software-controlled systems. Assigning intervals instead of fixed values to events and using Monte-Carlo sampling, probability mass functions of failure probabilities are derived. Further analysis of the PMFs leads to estimates of system quality that enable safety managers to make an optimal choice between design alternatives and to target cost-efficient solutions in every phase of the design process.",2009,0, 3784,Approximate Shortest Path Queries in Graphs Using Voronoi Duals,"We propose an approximation method to answer point-to-point shortest path queries in undirected graphs, based on random sampling and Voronoi duals. We compute a simplification of the graph by selecting nodes independently at random with probability p. Edges are generated as the Voronoi dual of the original graph, using the selected nodes as Voronoi sites. This overlay graph allows for fast computation of approximate shortest paths for general, undirected graphs. The time-quality tradeoff decision can be made at query time. We provide bounds on the approximation ratio of the path lengths as well as experimental results. The theoretical worst-case approximation ratio is bounded by a logarithmic factor. Experiments show that our approximation method based on Voronoi duals has extremely fast preprocessing time and efficiently computes reasonably short paths.",2009,0, 3785,Fisher Discriminance of Fault Predict for Decision-Making Systems,"A new technology of fault prediction was presented based on the neural network and Fisher discriminance in statistics. First, a sufficient number of characteristics of the running situation of decision-making were extracted from the real-time observation data. Secondly, the FP software system was designed and the algorithm for FP of decision-making systems was presented. Finally, a simple example indicated that the algorithm is effective.",2009,0, 3786,A Fault Detection Mechanism for SOA-Based Applications Based on Gauss Distribution,"Service-oriented architecture (SOA) is an ideal solution to build application systems with low cost and high efficiency, but fault detection is not supported in most SOA-based applications. Based on Gauss distribution, a fault detection mechanism for SOA-based applications is proposed. Faults in SOA can be detected by comparing the calculated confidence interval with the predefined parameters at runtime according to the descriptor. Based on the fault detection algorithm, the reference service model is improved to support the proposed algorithm by adding some suitable components.",2009,0, 3787,Grain Quality Evaluation Method Based on Combination of FNN Neural Networks with D-S Evidence Theory,"The output of the fuzzy neural network was adopted as BPAF (basic probability assignment function) in this paper.
By training the fuzzy neural network, the massive fuzzy linguistic information and the experience of the experts concerned were integrated into the decision process, which is advantageous in enhancing the accuracy, reliability and objectivity of the BPAF. Therefore, by using the superiority of D-S evidence theory in processing uncertainty and analyzing the grain situation with the combination of FNN and D-S evidence theory, the uncertainty of the system can be greatly decreased.",2009,0, 3788,Metrics for Evaluating Coupling and Service Granularity in Service Oriented Architecture,"Service oriented architecture (SOA) is becoming an increasingly popular architectural style for many organizations due to the promised agility and flexibility benefits. Although the concept of SOA has been described in research and industry literature, there are currently few SOA metrics designed to measure the appropriateness of service granularity and service coupling between services and clients. This paper defines metrics centered around service design principles concerning loose coupling and well-chosen granularity. The metrics are based on information-theoretic principles, and are used to predict the quality of the final software product. The usefulness of the metrics is illustrated through a case study.",2009,0, 3789,Deep Web Databases Sampling Approach Based on Probability Selection and Rule Mining,"A great portion of data on the Web lies in the hidden databases of the Deep Web. These databases can only be accessed through the query interfaces. The data in these databases can only be obtained by data sampling. An efficient and uniform data sampling approach is very important to other research work, such as data source selection and ranking, since the data samples can give insight into the data quality, freshness and coverage of the databases. However, the existing hidden database samplers are very inefficient, because lots of queries are wasted in the sampling walks. In this paper, we propose a probability selection and rule mining based sampling approach to solve this problem. First, we leverage the historical valid walks to calculate the valid probability of the attribute values. Based on the valid probability, we give priority to sampling with the attribute values that have the largest valid probability and guide the sampler to find a valid sampling path earlier. Meanwhile, we save the underflow walk paths to mine underflow rules, which are used in the sampling process to guide the sampler to avoid underflow walks. The experimental results indicate that our approach can improve the sampling efficiency by detecting valid paths earlier and avoiding many underflow queries.",2009,0, 3790,Analyzing Checkpointing Trends for Applications on the IBM Blue Gene/P System,"Current petascale systems have tens of thousands of hardware components and complex system software stacks, which increase the probability of faults occurring during the lifetime of a process. Checkpointing has been a popular method of providing fault tolerance in high-end systems. While considerable research has been done to optimize checkpointing, in practice the method still involves a high-cost overhead for users. In this paper, we study the checkpointing overhead seen by applications running on leadership-class machines such as the IBM Blue Gene/P at Argonne National Laboratory.
We study various applications and design a methodology to assist users in understanding and choosing checkpointing frequency and reducing the overhead incurred. In particular, we study three popular applications (the Grid-Based Projector-Augmented Wave application, the Carr-Parrinello Molecular Dynamics application, and a Nek5000 computational fluid dynamics application) and analyze their memory usage and possible checkpointing trends on 32,768 processors of the Blue Gene/P system.",2009,0, 3791,Wood Nondestructive Test Based on Artificial Neural Network,"It is important to detect defects in wood, since they reduce its performance. Data and signal processing technologies provide researchers with more ideas and methods for solving damage identification problems. This article explores wavelet analysis and artificial neural networks for non-destructive testing of wood defects, and builds an artificial neural network model for wood non-destructive testing technology. Wavelet packet decomposition is used to extract the energy of the different frequency bands as the characteristics of the signal, which serve as the input samples for training the neural network. The trained BP network model can automatically recognize defects at different locations, with a recognition rate of more than 90% for defects in the middle and over 80% for defects on the left and right sides.",2009,0, 3792,A Dynamic Probability Fault Localization Algorithm Using Digraph,"Analyzed here is a probability learning fault localization algorithm based on directed graphs and set-covering. The digraph is constructed as follows: obtain the deployment graph of the managed business from the topology of the network and software environment; generate the adjacency matrix (Ma); compute the transitive matrix (Ma^2) and transitive closure (Mt); and obtain the dependency matrix (R). When faults occur, the possible symptoms will be reflected in R with high probability for the fault itself, less probability in Ma, much less in Ma^2 and least in Mt. MCA+ is a probabilistic max-covering algorithm taking lost and spurious symptoms into account. DMCA+ is a dynamic probability updating algorithm that learns from run-time fault localization experience. When fault localization fails, the probabilities of the real faults are updated with an increment. The simulation results show the validity and efficiency of DMCA+ under complex networks. In order to improve the detection rate, a multi-recommendation strategy is also investigated in MCA+ and DMCA+.",2009,0, 3793,Neural Network Appraisal of SMB Operation Capability,"On the basis of analyzing the significance of assessing operation capability in SMB, an appraisal-index system of operation capability for SMB is built, and an appraisal model is established using a BP neural network. The conjunction weights of the neural network are continuously modified layer by layer from the output layer to the input layer in the process of neural network training to reduce the errors between the anticipated and actual outputs. The capability and feasibility of this method were proved by a case study.",2009,0, 3794,Anomaly Detection with Self-Organizing Maps and Effects of Principal Component Analysis on Feature Vectors,"Network anomaly detection is the problem of detecting unauthorized use of computer systems over a network. In the literature there are many different methods for detecting network anomalies, and anomaly detection is one of the major topics that computer science is working on.
In this work, a classification method is introduced to perform this discrimination based on a self-organizing map (SOM) classifier. Also, rather than proving the well-known abilities of SOM for classification, our main concern in this work was investigating the effects of principal component analysis on the quality of the feature vectors. In order to demonstrate the power of the approach, the KDD Cup 1999 dataset is used. The KDD Cup dataset is a common benchmark for the evaluation of intrusion detection techniques. The dataset consists of several components, and here the `10% corrected' test dataset is used. Since the feature vectors obtained from the dataset have a prominent impact on the success of the method, the usage of PCA and a method of choosing reliable components are introduced. At the end it is shown that the success of decisions made by the proposed method has been improved. In order to clarify this improvement, a detailed comparison of the effect of changing the number of principal components on the success of the decision mechanism is given.",2009,0, 3795,Evaluation of Text Clustering Based on Iterative Classification,"Text clustering is a useful and inexpensive way to organize vast text repositories into meaningful topic categories. Although text clustering can be seen as an alternative to supervised text categorization, the question remains of how to determine if the resulting clusters are of sufficient quality in a real-life application. However, it is difficult to evaluate a given clustering of documents. Furthermore, the existing quality measures rely on a manually labeled standard, which is difficult and time-consuming to obtain. The need for fair methods that can assess the validity of clustering results is becoming more and more critical. In this paper, we propose and evaluate an innovative evaluation measure that allows one to effectively and correctly assess the clustering results.",2009,0, 3796,Dynamically Discovering Functional Likely Program Invariants Based on Relational Database Theory,"Dynamic detection of likely program invariants is a useful instrument for discovering contracts from large programs in non-formal descriptions. It helps contract technology exert more influence on program quality assurance. Since research on invariant detection technology has just started, rough detection usually uses a hypothesis-verification approach which relies on the experience of the detector and his degree of understanding of the program under analysis, so that there is a serious lack of accuracy and efficiency. This paper attempts to divide invariants into two kinds, functional invariants and non-functional ones, based on relational database theory, before starting invariant detection. The paper focuses on the approach of detecting likely functional invariants, which detects their existence by first discovering the functional dependence set of the program variables and then detecting the forms of the existing invariants after deducing the functional dependence set. Experiments demonstrate that this approach not only solves the problem of blind detection and improves efficiency, but also reduces the possibility of missing important functional invariants compared with traditional hypothesis-verification approaches such as Daikon.",2009,0, 3797,Software Reliability Prediction Based on Discrete Wavelet Transform and Neural Network,"Effective prediction of software reliability is one of the active areas of software engineering.
This paper proposes a novel approach based on the wavelet transform and neural networks (NNs). Using this approach, the time series of software faults can be decomposed into four components of information, which are then predicted by NNs respectively. The experimental results show that the performance of the novel software reliability prediction approach is satisfactory.",2009,0, 3798,Exploring Software Quality Classification with a Wrapper-Based Feature Ranking Technique,"Feature selection is a process of selecting a subset of relevant features for building learning models. It is an important activity for data preprocessing used in software quality modeling and other data mining problems. Feature selection algorithms can be divided into two categories, feature ranking and feature subset selection. Feature ranking orders the features by a criterion and a user selects some of the features that are appropriate for a given scenario. Feature subset selection techniques search the space of possible feature subsets and evaluate the suitability of each. This paper investigates performance metric based feature ranking techniques by using the multilayer perceptron (MLP) learner with nine different performance metrics. The nine performance metrics include overall accuracy (OA), default F-measure (DFM), default geometric mean (DGM), default arithmetic mean (DAM), area under ROC (AUC), area under PRC (PRC), best F-measure (BFM), best geometric mean (BGM) and best arithmetic mean (BAM). The goal of the paper is to study the effect of the different performance metrics on the feature ranking results, which in turn influences the classification performance. We assessed the performance of the classification models constructed on those selected feature subsets through an empirical case study that was carried out on six data sets of real-world software systems. The results demonstrate that AUC, PRC, BFM, BGM and BAM as performance metrics for feature ranking outperformed the other performance metrics, OA, DFM, DGM and DAM, unanimously across all the data sets and therefore are recommended based on this study. In addition, the performances of the classification models were maintained or even improved when over 85 percent of the features were eliminated from the original data sets.",2009,0, 3799,A Scalable Parallel Approach for Peptide Identification from Large-Scale Mass Spectrometry Data,"Identifying peptides, which are short polymeric chains of amino acid residues in a protein sequence, is of fundamental importance in systems biology research. The most popular approach to identify peptides is through database search. In this approach, an experimental spectrum (`query') generated from fragments of a target peptide using mass spectrometry is computationally compared with a database of already known protein sequences. The goal is to detect database peptides that are most likely to have generated the target peptide. The exponential growth rates and overwhelming sizes of biomolecular databases make this an ideal application to benefit from parallel computing. However, the present generation of software tools is not expected to scale to the magnitudes and complexities of data that will be generated in the next few years. This is because they are all either serial algorithms or parallel strategies that have been designed over inherently serial methods, thereby incurring high space and time requirements. In this paper, we present an efficient parallel approach for peptide identification through database search.
Three key factors distinguish our approach from that of existing solutions: (i) (space) Given p processors and a database with N residues, we provide the first space-optimal algorithm (O(N/p)) under the distributed memory machine model; (ii) (time) Our algorithm uses a combination of parallel techniques such as one-sided communication and masking of communication with computation to ensure that the overhead introduced due to parallelism is minimal; and (iii) (quality) The run-time savings achieved using parallel processing have allowed us to incorporate highly accurate statistical models that have previously been demonstrated to ensure high quality prediction, albeit on smaller scale data. We present the design and evaluation of two different algorithms to implement our approach. Experimental results using 2.65 million microbial proteins show linear scaling up to 128 processors of a Linux commodity cluster, with parallel efficiency at ~50%. We expect that this new approach will be critical to meet the data-intensive and qualitative demands stemming from this important application domain.",2009,0, 3800,A Multi-instance Model for Software Quality Estimation in OO Systems,"In this paper, a problem of object-oriented (OO) software quality estimation is investigated from a multi-instance (MI) perspective. In detail, each set of classes that have an inheritance relation, named a `class hierarchy', is regarded as a bag in training, while each class in the bag is regarded as an instance. The task of the software quality estimation in this study is to predict the labels of unseen bags, i.e. the fault-proneness of untested class hierarchies. It is stipulated that a fault-prone class hierarchy contains at least one fault-prone (negative) class, while a not fault-prone (positive) one has no negative class. Based on the modification records (MR) of previous project releases and OO software metrics, the fault-proneness of an untested class hierarchy can be predicted. An MI kernel specifically designed for MI data was utilized to build the OO software quality prediction model. This model was evaluated on five datasets collected from an industrial optical communication software project. Among the MI learning algorithms applied in our empirical study, the support vector algorithms combined with the dedicated MI kernel led the others in accurately and correctly predicting the fault-proneness of the class hierarchy.",2009,0, 3801,Research on Multi-Sensor Information Fusion for the Detection of Surface Defects in Copper Strip,"Addressing the detection of defects on the surface of copper strips, this paper studies how to enhance system stability with a multi-sensor information fusion method. This method combines infrared, visible light and laser sensors to deal with defect detection, utilizes fuzzy logic and neural networks to carry out sensor management, and uses the wavelet transform in image fusion. Experimental results show that this method can effectively detect surface defects in copper strips. Furthermore, it enhances the accuracy of recognition and classification, and makes the overall system more automatic and intelligent.",2009,0, 3802,Ventilator Fault Diagnosis Based on Fuzzy Theory,"Fault diagnosis has been a research hotspot in industrial fields. It is of practical significance to discuss effective fault diagnosis methods.
Aiming at the fuzzy and random features of the occurrence probabilities, this paper presents a hybrid method that combines the fault tree with fuzzy set theory. In this approach, fuzzy aggregation and defuzzification are adopted, and the method is applied to ventilator fault diagnosis. The research shows that this method is feasible and effective and can be applied to the fault diagnosis of other rotating machinery.",2009,0, 3803,Performance Appraisal System for Academic Staff in the Context of Digital Campus of Higer Education Institutions: Design and Implementation,"Academic staff at higher education institutions are crucial for quality education and research, the core competitiveness of a top research-oriented university. Efforts have been made at many universities to work on a system to effectively and accurately assess the performance of academic staff to promote the overall teaching and research activities. The paper presents an appraisal information system developed for the School of Economics and Management of Beihang University, which has taken into consideration the contextual factors of research-oriented universities. The appraisal information system, based on a service-oriented architecture, is characterized by a multi-layer system in the digital campus. The implementation of the system has significantly promoted the overall teaching and research quality at the school.",2009,0, 3804,Software Reliability Growth Models Based on Non-Homogeneous Poisson Process,"The non-homogeneous Poisson process (NHPP) model with typical reliability growth patterns is an important technique for evaluating software reliability. Two parameters that affect software reliability are the original number of failures and the failure-detection rate. The paper first defines the software failure distribution and discusses several kinds of software reliability models. The non-homogeneous Poisson process model is presented. Models with different parameters are discussed and analyzed. The models' restriction conditions and some of the parameters, the original number of failures and the failure-detection ratio, are extrapolated. An assessment methodology for the key parameters is given.",2009,0, 3805,PSMM: A Plug-In Based Software Monitoring Method,"Addressing the problem of security and reliability in modern software systems, we put forward a plug-in based software monitoring method, PSMM. Firstly, we propose the model of the software monitoring method. Secondly, we study in depth the two main parts of PSMM, the monitoring method construction platform and the monitoring information collection platform. Lastly, we validate the monitoring method by constructing a Nuclear Power Control Simulation System. Results indicate that the monitoring method can obtain internal information of the software, which is used to judge whether the system is in an acceptable state; the method can improve the running quality of software, reduce the probability of failure, and improve the reliability of software.",2009,0, 3806,Dual-Slices Algorithm for Software Fault Localization,"After a software fault is detected by a runtime monitor, fault localization is always very difficult. A new method for fault localization based on a dual-slices algorithm is proposed. The algorithm reduces the software fault area by first slicing the faulty trace into segments and then slicing the trace segments based on trace slices.
It mainly includes two steps: Firstly, the faulty run trace is divided into segments by analyzing the differences between the correct run and the faulty run, and only the segments that induce the differences between the dual traces are regarded as the suspicious fault area; Secondly, the suspicious fault area is further sliced by trace slicing to reduce the fault area, and a more accurate fault area is finally obtained. This method can overcome some drawbacks of manual debugging, and increase the efficiency of fault localization.",2009,0, 3807,Post-Forecast OT: A Novel Method of 3D Model Compression,"In this paper, we present Post-Forecast OT (Octree), a novel method of 3D model compression codec based on the Octree. Vertices of 3D meshes are re-classified according to the Octree rule. All the nodes of the Octree are statistically analyzed to identify the type of nodes with the largest proportion, which are encoded with fewer bits. The vertices' positions are predicted and recorded, which can effectively reduce the error between the decoded vertices and the corresponding vertices in the original 3D model. Compared with prior 3D model progressive codec methods with severe distortion at low bit rates, Post-Forecast OT has better performance while providing a pleasant visual quality. We also encode the topology and attribute information corresponding to the geometry information, which enables progressive transmission of all encoded 3D data over the Internet.",2009,0, 3808,Fingerprint Chromatogram and Fuzzy Calculation for Quality Control of Shenrong Tonic Wine,"With computer software and fuzzy calculation, we determined several key compounds and established a fingerprint to control the stability of food products. High performance liquid chromatography (HPLC) was used to establish the fingerprint chromatogram of Shenrong tonic wine to control its quality and stability, and to detect possible counterfeits. We used a reverse phase C-18 column, equivalent elution and a detection wavelength of 259 nm. The chromatographic fingerprint was established by using sample chromatography of 10 different production batches to calculate the relative retention time A and the relative area Ar of each peak respectively, and 11 peaks of common characteristics were found. The cosine method was used to calculate the similarity by which the comparative study was done on various batches. The results showed that the method was convenient and applicable for the quality evaluation of Chinese herb tonic wine.",2009,0, 3809,An Improved RBF Network for Predicting Location in Mobile Network,"In mobile networks, quality of service (QoS) is difficult to guarantee due to the particularity of the mobile network. If the system knows, prior to the mobile subscriber's movement, the exact trajectory it will follow, the QoS can be guaranteed. Thus, location prediction is the key issue in providing quality of service to mobile subscribers. In the present paper, the RBF network, a neural network technique, was used to predict the mobile user's next location based on his current location as well as time. The software Matlab 6.5 was used to determine the parameters of the RBF network, and, on the same training data, a detailed comparison with resilient propagation BP and standard BP was made in terms of learning time and learning steps.
Experimental results show that the locations predicted with RBF are more effective and accurate than those predicted with resilient BP.",2009,0, 3810,Blocking Probability Simulation Based on SDMA Systems,"Space division multiple access (SDMA) is a technique that can be superimposed on traditional multiple access schemes to increase the system capacity, such as frequency division multiple access (FDMA), time division multiple access (TDMA) or code division multiple access (CDMA). SDMA technology employs antenna arrays and multidimensional nonlinear signal processing techniques to provide significant increases in the capacity and quality of many wireless communication systems. Several works have been carried out to examine the improvement in system capacity provided by SDMA. Based on theoretical analysis and a simulation model, the blocking probability of the proposed scheme is simulated and compared in this paper.",2009,0, 3811,Automatic Test Data Generation Based on Ant Colony Optimization,Software testing is a crucial measure used to assure the quality of software. Path testing can detect bugs earlier because it provides higher error coverage. This paper presents a model for generating test data based on an improved ant colony optimization and path coverage criteria. Experiments show that the algorithm performs better than two other algorithms and improves the efficiency of test data generation notably.,2009,0, 3812,A Method for Detecting Behavioral Mismatching Web Services,"Service composition is becoming a central aspect in service-oriented computing. In practice, most Web services cannot be integrated directly into an application-to-be because they are incompatible. How to ensure that Web services are compatible at the behavioral level is an important issue for Web services integration and collaboration in a seamless way. Based on the proposed formal model for Web service interfaces, the semantics of composite Web services is defined under the assumption of synchronous communications; furthermore, the condition for detecting behavioral mismatches among multiple Web services is derived, which relies on an abstract notation based on labeled transition systems. Our method is supported by an algorithm that can automatically build the synchronous product for a set of Web service behavioral interfaces. Meanwhile, we illustrate the method on a simple example.",2009,0, 3813,Anchoring the Consistency Dimension of Data Quality Using Ontology in Data Integration,"Data quality is crucial for data integration, and the consistency dimension is an important issue in data quality. Traditional methods of data consistency focus on the conflict or inconsistency that occurs within the same concept. However, it is sometimes insufficient to ensure data consistency using only these methods. In this paper, we divide the conflicts among different data sources into the traditional intra-concept conflict and the neglected inter-concept conflict based on ontology, and then we propose a detection model for these conflicts. Ontology mapping, including concept mapping and restriction verification, is the key issue in our model. We analyze the consistency dimension of data quality using the model. Both the classification and the model help us ensure data consistency in data integration efficiently.
Data from third parties and the business processes of the applications can be used to resolve the inconsistency when conflicts are detected.",2009,0, 3814,Property-Driven Scenario Integration,"Scenario-based specifications have gained wide acceptance in requirements engineering. However, scenarios are not appropriate to describe global, system-wide invariants. Thus, a specification often consists of scenarios and universal properties. In order to obtain a consistent specification, the scenarios must be integrated in a way which does not violate the properties. However, manual integration of scenarios is an error-prone and laborious process. In this paper we suggest a synthesis algorithm for automatic integration of system scenarios into an overall specification with guaranteed satisfaction of system-wide safety properties. The main idea is to compute inter-scenario priorities, which disable certain scenarios if they violate a property.",2009,0, 3815,Using Probabilistic Model Checking to Evaluate GUI Testing Techniques,"Different testing techniques are being proposed in software testing to improve system quality and increase development productivity. However, it is difficult to determine, from a given set of testing techniques, which is the most effective testing technique for a certain domain, particularly if they are random-based. We propose a strategy and a framework that can evaluate such testing techniques. Our framework is defined compositionally and parametrically. This allows us to characterize different aspects of systems in an incremental way as well as test specific hypotheses about the system under test. In this paper we focus on GUI-based systems. That is, the specific internal behavior of the system is unknown but it can be approximated by probabilistic behaviors. The empirical evaluation is based on the probabilistic model checker PRISM.",2009,0, 3816,Evaluating the Use of Reference Run Models in Fault Injection Analysis,"Fault injection (FI) has been shown to be an effective approach to assessing the dependability of software systems. To determine the impact of faults injected during FI, a given oracle is needed. Oracles can take a variety of forms, including (i) specifications, (ii) error detection mechanisms and (iii) golden runs. Focusing on golden runs, in this paper we show that there are classes of software that a golden run based approach cannot be used to analyse. Specifically, we demonstrate that a golden run based approach cannot be used in the analysis of systems which employ a main control loop with an irregular period. Further, we show how a simple model, which has been refined using FI experiments, can be employed as an oracle in the analysis of such a system.",2009,0, 3817,Prioritized Test Generation Strategy for Pair-Wise Testing,"Pair-wise testing is widely used to detect faults in software systems. In many applications where pair-wise testing is needed, the whole test set cannot be run completely due to time or budget constraints. In these situations, it is essential to prioritize the tests. In this paper, we derive a weight for each value of each parameter, and adapt the UWA algorithm to generate an ordered pair-wise coverage test suite. The UWA algorithm assigns weights to each value of each parameter of the system, and then produces an ordered pair-wise coverage test set from a generated but unordered one.
Finally, a greedy algorithm is adopted to prioritize the generated pair-wise coverage test set with the derived weights, so that whenever testing is interrupted, the interactions deemed most important have already been tested.",2009,0, 3818,Information Visualization Analysis of the Hot Research Topics and the Research Fronts of Information Resources Management (IRM),"This article takes the citations of all 3925 documents on information resources management (IRM) published in Web of Science (SCI-EXPANDED, SSCI, A&HCI) from 1986 to 2008 as the data sample, identifies the hot research topics and the research fronts by using word frequency analysis and by detecting keywords whose term frequency changed notably, and draws their knowledge maps by using CiteSpace, an information visualization software. We hope this can benefit research on information resources management (IRM).",2009,0, 3819,Mining Rank-Correlated Associations for Recommendation Systems,"Recommendation systems, best known for their use in e-commerce or social network applications, predict users' preferences and output item suggestions. Modern recommenders are often faced with many challenges, such as covering a high volume of volatile information, dealing with data sparsity, and producing high-quality results. Therefore, while there are already several strategies in this category, some of them can still be refined. Association rules mining is one of the widely applied techniques for recommender implementation. In this paper, we propose a tuned method, trying to overcome some defects of existing association rules based recommendation systems by exploring rank correlations. It builds a model for preference prediction with the help of rank correlated associations on numerical values, where traditional algorithms of this kind would choose to do discretization. An empirical study is then conducted to assess the efficiency of our method.",2009,0, 3820,Trustworthy Evaluation of a Safe Driver Machine Interface through Software-Implemented Fault Injection,"Experimental evaluation is aimed at providing useful insights and results that constitute a confident representation of the system under evaluation. Although guidelines and good practices exist and are often applied, the uncertainty of results and the quality of the measuring system are rarely discussed. To complement such guidelines and good practices in experimental evaluation, metrology principles can contribute to improving experimental evaluation activities by assessing the measuring systems and the results achieved. In this paper we present the experimental evaluation by software-implemented fault injection of a safe train-borne driver machine interface (DMI), to evaluate its behavior in the presence of faults. The measuring system built for the purpose and the results obtained on the assessment of the DMI are scrutinized along basic principles of metrology and good practices of fault injection. Trustworthiness of the results has been estimated to be satisfactory, and the experimental campaign has shown that the safety mechanisms of the DMI correctly identify the faults injected and that a proper reaction is executed.",2009,0, 3821,Magnetic Circuit Design Based on Circumferential Excitation in Oil-Gas Pipeline Magnetic Flux Leakage Detection,"In oil-gas pipeline magnetic flux leakage (MFL) detection, qualitative and quantitative analysis can be achieved.
Circumferential excitation technology is an effective way to detect and size axially oriented defects reliably; magnetic circuit design has become the key part of this technology to meet the magnetization requirements of circumferential excitation. This paper studies the magnetizing section of the MFL detector for oil pipelines. With a combination of the analytical method and the magnetic segmentation method, the Kirchhoff magnetic equation was established with MATLAB to build a magnetic circuit calculation model. Taking the A325A8 pipeline as an example, calculations are carried out to choose suitable geometry parameters and material properties for the magnetic circuit design. A simulation with the large-scale finite element software ANSYS is carried out to verify the correctness of the magnetic circuit design. The results verify that the magnetic circuit design completely meets the requirements of circumferential magnetization.",2009,0, 3822,A New Algorithm for Fabric Defect Detection Based on Image Distance Difference,"This paper puts forward a new method for fabric defect detection, namely an image distance difference algorithm. The system permits the user to set appropriate control parameters for fabric defect detection based on the type of fabric. It can detect more than 30 kinds of common defects, with the advantages of high identification correctness and fast inspection speed. Finally, image processing technology is used to grade each piece, so as to satisfy quality requirements and raise the finished-product ratio.",2009,0, 3823,A Performance Monitoring Tool for Predicting Degradation in Distributed Systems,"Continuous performance monitoring is critical for detecting software aging and enabling performance tuning. In this paper we design and develop a performance monitoring system called PerfMon. It makes use of the /proc virtual file system's kernel-level mechanisms and abstractions in Linux-based operating systems, which provide the building blocks for the implementation of efficient, scalable and multi-level performance monitoring. Using PerfMon, we show that (1) monitoring functionality can be customized according to clients' requirements, (2) by filtering of monitoring information, trade-offs can be attained between the quality of the information monitored and the associated overheads, and (3) by performing monitoring at application-level, we can predict software aging by taking into account the multiple resources used by applications. Finally, we evaluate PerfMon by experiments.",2009,0, 3824,The Application of CI-Based Virtual Instrument in Gear Fault Diagnosis,"A gear fault detection system, the Virtual Instrument Diagnostic System, is developed by combining the advantages of VC++ and MATLAB using a hybrid programming method. The interface is designed in VC++ and the calculation of test data, signal processing and graphical display are completed in MATLAB. After the conversion of the *.m files into VC++ programs is completed by the interface software, a versatile multi-functional gear fault diagnosis software system is successfully designed.
The software system possesses various functions, including the input of gear vibration signals, signal processing, graphics display, fault detection and diagnosis, and monitoring; it can diagnose gear faults well and has considerable application prospects in the field of fault diagnosis.",2009,0, 3825,An Improved Spatial Error Concealment Algorithm Based on H.264,"Packet losses are inevitable when video is transported over error-prone networks. Error concealment methods can reduce the quality degradation of the received video by masking the effects of such errors. This paper presents a novel spatial error concealment algorithm based on the directional entropy in the available neighboring Macro Blocks (MBs), which can adaptively switch between the weighted pixel average (WPA) adopted in H.264 and an improved directional interpolation algorithm to recover the lost MBs. In this work, the proposed algorithm was evaluated on the H.264 reference software JM8.6. The illustrative examples demonstrate that the proposed method can achieve better Peak Signal-to-Noise Ratio (PSNR) performance and visual quality, compared with WPA and the conventional directional interpolation algorithm respectively.",2009,0, 3826,Monitoring Workflow Applications in Large Scale Distributed Systems,"This paper presents the design, implementation and testing of the monitoring solution created for integration with a workflow execution platform. The monitoring solution constantly checks the system evolution in order to facilitate performance tuning and improvement. Monitoring is accomplished at the application level, by monitoring each job from each workflow, and at the system level, by aggregating state information from each processing node. The solution also computes aggregated statistics that allow an improvement to the scheduling component of the system, with which it will interact. The improvement in the performance of distributed applications is obtained by using the real-time information to compute estimates of runtime, which are used to improve scheduling. Another contribution is an automated error detection system, which can improve the robustness of the grid by enabling fault recovery mechanisms to be used. These aspects can benefit from the particularization of the monitoring system for a workflow-based application: the scheduling performance can be improved through better runtime estimation, and the error detection can automatically detect several types of errors. The proposed monitoring solution could be used in the SEEGRID project as a part of the satellite image processing engine that is being built.",2009,0, 3827,On the Functional Qualification of a Platform Model,"This work focuses on the use of functional qualification for measuring the quality of co-verification environments for hardware/software (HW/SW) platform models. Modeling and verifying complex embedded platforms requires co-simulating one or more CPUs running embedded applications on top of an operating system, and connected to some hardware devices. The paper first describes a HW/SW co-simulation framework which supports all mechanisms used by software, in particular by device drivers, to access hardware devices so that the target CPU's machine code can be simulated. In particular, synchronization between hardware and software is performed by the co-simulation framework and, therefore, no adaptation is required in device drivers and hardware models to handle synchronization messages.
Then, Certitude, a flexible functional qualification tool, is introduced. Functional qualification is based on the theory of mutation analysis, but it is extended by considering a mutation to be killed only if a testcase fails. Certitude automatically inserts mutants into the HW/SW models and determines if the verification environment can detect these mutations. A known mutant that cannot be detected points to a verification weakness. If a mutant cannot be detected, there is evidence that actual design errors would also not be detected by the co-verification environment. This is an iterative process, and the functional qualification solution provides the verifier with information to improve the co-verification environment quality. The proposed approach has been successfully applied to an industrial platform, as shown in the experimental results section.",2009,0, 3828,Comparing Commercial Tools and State-of-the-Art Methods for Generating Text Summaries,"Nowadays, there are commercial tools that allow automatic generation of text summaries. However, the quality of the generated summaries and the methods these commercial tools use to generate them are not known. This paper provides a study of commercial tools such as Copernic Summarizer, Microsoft Office Word Summarizer 2003 and Microsoft Office Word Summarizer 2007, with the objective of detecting which of them gives summaries most similar to those made by a human. Furthermore, a comparison between the commercial tools and state-of-the-art methods is carried out. The experiments were carried out using the DUC-2002 standard collection, which contains 567 news articles in English.",2009,0, 3829,An Adaptive Service Composition Performance Guarantee Mechanism in Peer-to-Peer Network,"Service composition has become a hot research topic in recent years, especially in distributed peer-to-peer (P2P) networks. However, how to guarantee high-quality composite service performance is a key technology and difficulty in P2P networks, and it draws a lot of attention in the computing research field. According to the characteristics of distributed networks, an adaptive service composition performance guarantee mechanism for P2P networks is presented. Firstly, the composition services are divided into a preferred service and a backup one; secondly, a BP neural network is introduced to detect and evaluate the service performance exactly; finally, the backup service adaptively replaces the preferred one when the performance of the preferred composition service is less than the threshold value. In addition, a half-time window punishment mechanism is introduced to increase the punishment of low-quality services and encourage users to provide more high-quality ones, which further improves the service composition performance in P2P networks. The simulations verify our method to be effective and enrich the proposed theory.",2009,0, 3830,Research on the Audit Fraud Grid,"Information technology has brought great new challenges to auditing: dynamic and timely audits are required to predict audit fraud in advance; the data from different applications and locations of the enterprise is not easy to audit; the original documents are destroyed after they are input into the computer system, which leads to the loss of audit clues; and much experience cannot be taught to auditors directly. This paper presents a Web service-based audit fraud grid (AFG) to fulfill the new requirements.
In the AFG, the heterogeneous data coming from different locations of an enterprise can be audited universally, globally and dynamically; audit fraud warnings can be issued in advance based on the auditors' experience and data fusion technology, which improves the audit quality greatly; and content-based image retrieval technology is utilized to store and audit the original accounting documents.",2009,0, 3831,An Up-Sampling Based Texture Synthesis Scheme for Rapid Motion in High Resolution Video Coding,"A fast-moving object or a long exposure of a moving object will cause a 'streak' of the moving object in the recorded video sequence. In this case, it is hard to recognize the details of objects, and it is not necessary to encode the details of motion, which usually consume lots of bits, especially for high resolution sequences. In this paper, we propose an up-sampling based encoding method for high resolution sequences with rapid motion. When rapid motion is detected in the current sequence, only a down-sampled version is encoded and sent to the decoder to save bit-rate. The decoder recovers the high resolution sequence by up-sampling the received low resolution bit stream. To avoid visible artifacts and merge the synthesis into today's I, P, B frame based codecs, a synthetic region selection strategy is employed, which can help achieve a smooth transition between the synthetic frames and the adjacent non-synthetic frames. This scheme has been integrated into H.264, and experimental results show that a large amount of bit-rate is saved at similar visual quality levels compared with H.264.",2009,0, 3832,Forming simulation system of 3-roll continual tube rolling PQF based on finite element method,"Based on the characteristics of the PQF deformation process, a thermal-mechanical coupled model of this process is established by the three-dimensional elasto-plastic finite element method, and the visual simulation software of this process is generated with Visual Basic. Some important parameters such as the finished size of the rolled product, the temperature distribution of the tube, and the rolling force of each roller are predicted by utilizing this program. Comparison between simulation solutions and experimental results shows a good agreement, which means that the visual simulation software is capable of simulating the PQF deformation process as well as forecasting product quality.",2009,0, 3833,mSWAT: Low-cost hardware fault detection and diagnosis for multicore systems,"Continued technology scaling is resulting in systems with billions of devices. Unfortunately, these devices are prone to failures from various sources, resulting in even commodity systems being affected by the growing reliability threat. Thus, traditional solutions involving high redundancy or piecemeal solutions targeting specific failure modes will no longer be viable owing to their high overheads. Recent reliability solutions have explored using low-cost monitors that watch for anomalous software behavior as a symptom of hardware faults. We previously proposed the SWAT system that uses such low-cost detectors to detect hardware faults, and a higher cost mechanism for diagnosis. However, all of the prior work in this context, including SWAT, assumes single-threaded applications and has not been demonstrated for multithreaded applications running on multicore systems.
This paper presents mSWAT, the first work to apply symptom-based detection and diagnosis for faults in multicore architectures running multithreaded software. For detection, we extend the symptom-based detectors in SWAT and show that they result in a very low silent data corruption (SDC) rate for both permanent and transient hardware faults. For diagnosis, the multicore environment poses significant new challenges. First, the deterministic replay required for SWAT's single-threaded diagnosis incurs higher overheads for multithreaded workloads. Second, the fault may propagate to fault-free cores, resulting in symptoms from fault-free cores and no available known-good core, breaking fundamental assumptions of SWAT's diagnosis algorithm. We propose a novel permanent fault diagnosis algorithm for multithreaded applications running on multicore systems that uses a lightweight isolated deterministic replay to diagnose the faulty core with no prior knowledge of a known good core. Our results show that this technique successfully diagnoses over 95% of the detected permanent faults while incurring low hardware overheads. mSWAT thus offers an affordable solution to protect future multicore systems from hardware faults.",2009,0, 3834,An Investigation of the Effect of Discretization on Defect Prediction Using Static Measures,"Software repositories with defect logs are a main resource for defect prediction. In recent years, researchers have used the vast amount of data contained in software repositories to predict the location of defects in the code that caused problems. In this paper we evaluate the effectiveness of software fault prediction with the Naive-Bayes and J48 classifiers by integrating them with the supervised discretization algorithm developed by Fayyad and Irani. Public datasets from the PROMISE repository have been explored for this purpose. The repository contains software metric data and error data at the function/method level. Our experiment shows that integrating the discretization method with the Naive-Bayes and J48 classifiers improves software fault prediction accuracy.",2009,0, 3835,Software Operational Profile Reduction and Classification Testing Based on Entropy Theory,"An algorithm based on conditional entropy for profile reduction in information systems is proposed, in which profile reduction is defined from the perspective of probability of occurrence. The test case algorithm is then applied to the reduced test profile so that the efficiency of software testing can be improved.",2009,0, 3836,An Approach for Analyzing Infrequent Software Faults Based on Outlier Detection,"Fault analysis is a critical process in software security systems. However, identifying outliers in software faults has not been well addressed. In this paper, we define WCFPOF (weighted closed frequent pattern outlier factor) to measure complete transactions, and propose a novel approach for detecting closed frequent pattern based outliers. Through discovering and maintaining closed frequent patterns, the outlier measure of each transaction is computed to generate outliers. The outliers are the data that contain relatively few closed frequent itemsets. To describe the reasons why detected outlier transactions are infrequent, the contradictive closed frequent patterns for each outlier are identified.
Experimental results show that our algorithm has lower time consumption and better scalability.",2009,0, 3837,From dynamic reconfiguration to self-reconfiguration: Invasive algorithms and architectures,"In the first part of this invited keynote talk, highlights of a research initiative on dynamically reconfigurable computing systems that has been funded by the German Research Foundation (DFG) within its Priority Programme (Schwerpunktprogramm) 1148 from 2003 to 2009 will be presented. To make dynamically reconfigurable computing become a reality, this joint nation-wide research initiative bundled multiple projects and involved at times up to 50 researchers working on the topic, covering fine-grain as well as coarse-grain reconfigurable computing. Here, we try to summarize the golden fruits, major achievements and biggest milestones of this joint research initiative, which has enabled more than 100 person years of research work and allowed more than 20 students to defend PhD theses based on research performed in this initiative on making dynamically reconfigurable computing become a reality. Whereas this first part of the talk reflects our research achievements on reconfigurable computing systems of the past, the second part of the talk is rather visionary and tries to foresee needs and applications for reconfigurability in architectures we might see in ten years from now: If computing platforms may exploit dynamic resource reconfigurations efficiently, how can such capabilities be used to solve problems encountered when designing future multi-billion transistor devices which have enough chip area to integrate even 100-1000 full processor cores as basic blocks? One remedy to cope with the predicted problems of increasing probabilities of and susceptibility to faults, leakage and power management, resource efficiency, timing, application concurrency, and mapping complexity might be self-organization and self-configuration: With the terms invasive algorithms and invasive architectures, we envision that applications mapped to a reconfigurable SoC platform might map and configure themselves to a certain extent based on the temporal state and availability of resources, computing demands during the execution and other state information of the resources (e.g., temperature, faultiness, resource usage, permissions, etc.). We will show that invasive computing, however, also has to lead to a new way of application development including algorithm design, language implementation and compilation tools. The expected benefits of such architectures, allowing applications to spread their computations on resources and later free them again decentrally by themselves at run-time, sound promising, but the overheads will need to be evaluated and traded off. In particular, such a computing paradigm would require the development of new reconfigurable system architectures hosting a mixture of fine (e.g., field-programmable) and coarse (i.e., software-programmable) grain cores in future SoC devices.",2009,0, 3838,EVM measurement techniques for MUOS,"Physical layer simulations and analysis techniques were used to develop the error vector magnitude (EVM) metric specifying transmitter signal quality. These tools also proved to be very useful in specifying lower level hardware unit performance and predicting mobile user objective system (MUOS) satellite 8-PSK transmitter performance before the hardware was built. However, the verification of EVM compliance at Ka frequencies posed challenges.
Initial measurements showed unacceptably high levels of EVM which exceeded the specification. Attempts to remove the contribution of the test equipment distortion and isolate the device-under-test distortion using commercial oscilloscope VSA software were unsuccessful. In this paper, we describe the methods used to develop an accurate EVM measurement. The transmitted modulated signal was first recorded using a digitizing scope. In-house system identification, equalization, demodulation and analysis algorithms were then used to remove signal distortion due to the test equipment. Results from EVM measurements on MUOS single-channel hardware are given, and performance is shown to be consistent with estimates made three years earlier. The results reduce technical risk and verify the transmitter design by demonstrating signal quality.",2009,0, 3839,Behavioral Study of UNIX Commands in a Faulty Environment,"In this paper we propose the use of two approaches to tackle the detection and comprehension of how a faulty environment affects process behavior. We model processes as state-transition models with associated transition probabilities. The study employs SWIFI (software implemented fault injection) in commonly used Unix tools to analyze behavioral changes. The approaches complement each other, as one allows the detection of dissimilar behaviors and the other the analysis of how relationships among process states evolve over time. The obtained results are promising, as they allow a clear characterization of process execution in a normal and a faulty environment.",2009,0, 3840,Building Reliable Data Pipelines for Managing Community Data Using Scientific Workflows,"The growing amount of scientific data from sensors and field observations is posing a challenge to the 'data valets' responsible for managing them in data repositories. These repositories built on commodity clusters need to reliably ingest data continuously and ensure its availability to a wide user community. Workflows provide several benefits for modeling data-intensive science applications, and many of these benefits can help manage the data ingest pipelines too. But using workflows is not a panacea in itself, and data valets need to consider several issues when designing workflows that behave reliably on fault prone hardware while retaining the consistency of the scientific data. In this paper, we propose workflow designs for reliable data ingest in a distributed environment and identify workflow framework features to support resilience. We illustrate these using the data pipeline for the Pan-STARRS repository, one of the largest digital surveys, which accumulates 100TB of data annually to support 300 astronomers.",2009,0, 3841,A New Fault Tolerance Heuristic for Scientific Workflows in Highly Distributed Environments Based on Resubmission Impact,"Even though highly distributed environments such as Clouds and Grids are increasingly used for e-science high performance applications, they still cannot deliver the robustness and reliability needed for widespread acceptance as ubiquitous scientific tools. To overcome this problem, existing systems resort to fault tolerance mechanisms such as task replication and task resubmission. In this paper we propose a new heuristic called resubmission impact to enhance the fault tolerance support for scientific workflows in highly distributed systems. In contrast to related approaches, our method can be used effectively on systems even in the absence of historic failure trace data.
Simulated experiments of three real scientific workflows in the Austrian Grid environment show that our algorithm drastically reduces the resource waste compared to conservative task replication and resubmission techniques, while having a comparable execution performance and only a slight decrease in the success probability.",2009,0, 3842,Early Software Fault Prediction Using Real Time Defect Data,"The quality of a software component can be measured in terms of the fault proneness of data. Quality estimations are made using fault proneness data available from previously developed projects of a similar type and the training data consisting of software measurements. To predict faulty modules in software data, different techniques have been proposed, which include statistical methods, machine learning methods, neural network techniques and clustering techniques. The aim of the proposed approach is to investigate whether metrics available in the early lifecycle (i.e. requirement metrics), metrics available in the late lifecycle (i.e. code metrics) and metrics available in the early lifecycle (i.e. requirement metrics) combined with metrics available in the late lifecycle (i.e. code metrics) can be used to identify fault prone modules by using clustering techniques. This approach has been tested with three real time defect datasets of NASA software projects, JM1, PC1 and CM1. Predicting faults early in the software life cycle can be used to improve software process control and achieve high software reliability. The results show that when all the prediction techniques are evaluated, the best prediction model is found to be the fusion of the requirement and code metric models.",2009,0, 3843,Derivation of Railway Software Safety Criteria and Management Procedure,The Ariane 5 rocket, on a test flight for the European Space Agency, exploded 37 seconds after launch due to a malfunction in the control software. The loss amounted to more than US$370 million. The accident brought attention to the criticality of software safety. Nowadays software is widely used in safety-critical systems because of the merit of its flexibility, but this flexibility makes it difficult to predict software failures. This paper suggests software safety criteria for the safety-critical railway system and describes the quality management procedure of the railway software.,2009,0, 3844,Quality of the Source Code for Design and Architecture Recovery Techniques: Utilities are the Problem,"Software maintenance is perhaps one of the most difficult activities in software engineering, especially for systems that have undergone several years of ad hoc maintenance. The problem is that, for such systems, the gap between the system implementation and its design models tends to be considerably large. Reverse engineering techniques, particularly the ones that focus on design and architecture recovery, aim to reduce this gap by recovering high-level design views from the source code. The source code then becomes the data on which these techniques operate. In this paper, we argue that the quality of a design and architecture recovery approach depends significantly on the ability to detect and eliminate the unwanted noise in the source code. We characterize this noise as being the system utility components that tend to encumber the system structure and hinder the ability to effectively recover adequate design views of the system.
We support our argument by presenting various design and architecture recovery studies that have been shown to be successful because of their ability to filter out utility components. We also present existing automatic utility detection techniques along with the challenges that remain unaddressed.",2009,0, 3845,Non-homogeneous Inverse Gaussian Software Reliability Models,"In this paper we consider a novel software reliability modeling framework based on non-homogeneous inverse Gaussian processes (NHIGPs). Although these models are derived in a different way from the well-known non-homogeneous Poisson processes (NHPPs), they can be regarded as interesting stochastic point processes with both arbitrary time non-stationary properties and the inverse Gaussian probability law. In numerical examples with two real software fault data sets, it is shown that the NHIGP-based software reliability models can outperform the existing NHPP-based models in both goodness-of-fit and predictive performance.",2009,0, 3846,An Approach to Measure Value-Based Productivity in Software Projects,"Nowadays, after a lot of evolution in the software engineering area, there is not yet a simple and direct answer to the question: What is the best software productivity metric? The simplest and most commonly used metric is the SLOC and its derivations, but these are admittedly problematic for this purpose. On the other hand, there are some indications of maturation in this topic: new studies point to the use of more elaborate models, which are based on multiple dimensions and on the idea of produced value. Based on this tendency and on the evidence that each organization must define its own way to assess its productivity, we defined a process to support the definition of productivity measurement models in software organizations. We also discuss, in this paper, some issues about the difficulties related to the adoption of the process in real software organizations.",2009,0, 3847,Localizing Software Faults Simultaneously,"Current automatic diagnosis techniques are predominantly of a statistical nature and, despite typical defect densities, do not explicitly consider multiple faults, as also demonstrated by the popularity of the single-fault Siemens set. We present a logic reasoning approach, called Zoltar-M(ultiple fault), that yields multiple-fault diagnoses, ranked in order of their probability. Although application of Zoltar-M to programs with many faults requires further research into heuristics to reduce computational complexity, theory as well as experiments on synthetic program models and two multiple-fault program versions from the Siemens set show that for multiple-fault programs this approach can outperform statistical techniques, notably spectrum-based fault localization (SFL). As a side-effect of this research, we present a new SFL variant, called Zoltar-S(ingle fault), that is provably optimal for single-fault programs, outperforming all other variants known to date.",2009,0, 3848,A Bayesian Approach for the Detection of Code and Design Smells,"The presence of code and design smells can have a severe impact on the quality of a program. Consequently, their detection and correction have drawn the attention of both researchers and practitioners who have proposed various approaches to detect code and design smells in programs. However, none of these approaches handle the inherent uncertainty of the detection process. We propose a Bayesian approach to manage this uncertainty.
First, we present a systematic process to convert existing state-of-the-art detection rules into a probabilistic model. We illustrate this process by generating a model to detect occurrences of the Blob antipattern. Second, we present results of the validation of the model: we built this model on two open-source programs, GanttProject v1.10.2 and Xerces v2.7.0, and measured its accuracy. Third, we compare our model with another approach to show that it returns the same candidate classes while ordering them to minimise the quality analysts' effort. Finally, we show that when past detection results are available, our model can be calibrated using machine learning techniques to offer an improved, context-specific detection.",2009,0, 3849,Improving Software Testing Cost-Effectiveness through Dynamic Partitioning,"We present a dynamic partitioning strategy that selects test cases using online feedback information. The presented strategy differs from conventional approaches. Firstly, the partitioning is carried out online rather than off-line. Secondly, the partitioning is not based on program code or specifications; instead, it is simply based on the fail or pass information of previously executed test cases and, hence, can be implemented in the absence of the source code or specification of the program under test. The cost-effectiveness of the proposed strategy has been empirically investigated with three programs, namely SPACE, SED, and GREP. The results show that the proposed strategy achieves a significant saving in terms of total number of test cases executed to detect all faults.",2009,0, 3850,Quality Assessment of Mission Critical Middleware System Using MEMS,"Architecture evaluation methods provide general guidelines to assess quality attributes of systems, which are not necessarily straightforward to practice with. With COTS middleware based systems, this assessment process is further complicated by the complexity of middleware technology and a number of design and deployment options. Efficient assessment is key to produce accurate evaluation results for stakeholders to ensure good decisions are made on system acquisition. In this paper, a systematic evaluation method called MEMS is developed to provide some structure to this assessment process. MEMS produces the evaluation plan with thorough design of experiments, definition of metrics and development of techniques for measurement. This paper presents MEMS and its application to a mission critical middleware system.",2009,0, 3851,Are Fault Failure Rates Good Estimators of Adequate Test Set Size?,"Test set size in terms of the number of test cases is an important consideration when testing software systems. Using too few test cases might result in poor fault detection and using too many might be very expensive and suffer from redundancy. For a given fault, the ratio of the number of failure causing inputs to the number of possible inputs is referred to as the failure rate. Assuming a test set represents the input domain uniformly, the failure rate can be re-defined as the fraction of failed test cases in the test set. This paper investigates the relationship between fault failure rates and the number of test cases required to detect the faults. 
Our experiments suggest that an accurate estimation of failure rates of potential fault(s) in a program can provide a reliable estimate of an adequate test set size with respect to fault detection (a test set of size sufficient to detect all of the faults) and therefore should be one of the factors kept in mind during test set generation.",2009,0, 3852,Interactive Specification and Verification of Behavioural Adaptation Contracts,"Adaptation is a crucial issue when building new applications by reusing existing software services which were not initially designed to interoperate with each other. Adaptation contracts describe composition constraints and adaptation requirements among these services. The writing of this specification by a designer is a difficult and error-prone task, especially when service protocol needs to be considered and service functionality accessed through behavioural interfaces. In this paper, we propose an interactive approach to support the contract design process, and more specifically: (i) a graphical notation to define port bindings, and an interface similarity measure to compare protocols and suggest some port connections to the designer, (ii) compositional and hierarchical techniques to facilitate the specification of adaptation contracts by building them incrementally, (iii) validation and verification techniques to check that the contract will make the involved services work correctly and as expected by the designer. Our approach is fully supported by a prototype tool we have implemented.",2009,0, 3853,Tag-Based Techniques for Black-Box Test Case Prioritization for Service Testing,"A web service may evolve autonomously, making peer web services in the same service composition uncertain as to whether the evolved behaviors may still be compatible to its originally collaborative agreement. Although peer services may wish to conduct regression testing to verify the original collaboration, the source code of the former service can be inaccessible to them. Traditional code-based regression testing strategies are inapplicable. The rich interface specifications of a web service, however, provide peer services with a means to formulate black-box testing strategies. In this paper, we formulate new test case prioritization strategies using tags embedded in XML messages to reorder regression test cases, and reveal how the test cases use the interface specifications of services. We evaluate experimentally their effectiveness on revealing regression faults in modified WS-BPEL programs. The results show that the new techniques can have a high probability of outperforming random ordering.",2009,0, 3854,Towards Selecting Test Data Using Topological Structure of Boolean Expressions,"Boolean expressions can be used in programs and specifications to describe the complex logic decisions in mission-critical, safety-critical and Web services applications. We define a topological model (T-model) to represent Boolean expressions and characterize the test data. This paper provides proofs of relevant T-model properties, employs the combinatorial design approach, and proposes a family of strategies and techniques to detect a variety of faults associated with Boolean expressions. 
We compare our strategies with MC/DC, MUMCUT, MANY-A, MANY-B, MAX-A and MAX-B, and conclude that the T-model based approach detects more types of faults than MC/DC, MUMCUT, MANY-A and MAX-A, and detects the same types but more instances of faults than MANY-B and MAX-B with a much smaller test data set.",2009,0, 3855,A Hybrid Approach to Detecting Security Defects in Programs,"Static analysis works well at checking defects that clearly map to source code constructs. Model checking can find defects such as deadlocks and routing loops that are not easily detected by static analysis, but faces the problem of state explosion. This paper proposes a hybrid approach to detecting security defects in programs. A fuzzy inference system is used to infer the selection between the two detection approaches. A clustering algorithm is developed to divide a large system into several clusters in order to apply model checking. Ontology based static analysis employs logic reasoning to intelligently detect the defects. We also put forward strategies to improve the performance of the static analysis. Finally, we perform experiments to evaluate the accuracy and performance of the hybrid approach.",2009,0, 3856,SmartClean: An Incremental Data Cleaning Tool,"This paper presents the SmartClean tool. The purpose of this tool is to detect and correct data quality problems (DQPs). Compared with existing tools, SmartClean has the following main advantage: the user does not need to specify the execution sequence of the data cleaning operations. For that, an execution sequence was developed. The problems are manipulated (i.e., detected and corrected) following that sequence. The sequence also supports the incremental execution of the operations. In this paper, the underlying architecture of the tool is presented and its components are described in detail. The validity of the tool and, consequently, of the architecture is demonstrated through the presentation of a case study. Although SmartClean has cleaning capabilities at all other levels, this paper only describes those related to the attribute value level.",2009,0, 3857,Tree Topology Based Fault Diagnosis in Wireless Sensor Networks,"In order to improve the energy efficiency of fault diagnosis in wireless sensor networks, we propose a tree topology based distributed fault diagnosis algorithm. The algorithm maintains a high node fault detection rate and a low false alarm rate in wireless sensor networks under low node distribution density. First, the algorithm finds a good node with a multi-layer detection method; it then detects the status of other nodes by means of their status relation with the good node, which is inferred from the parent-child relation in the tree topology, and thus achieves fault detection for the whole network. In a ZigBee tree network, the simulation results show that the algorithm performs better in energy efficiency and identifies the faulty sensors with high accuracy and robustness.",2009,0, 3858,What Makes Testing Work: Nine Case Studies of Software Development Teams,Recently there has been a focus on test first and test driven development; several empirical studies have tried to assess the advantage that these methods give over testing after development. The results have been mixed. In this paper we investigate nine teams who tested during coding to examine the effect it had on the external quality of their code. Of the top three performing teams, two used a documented testing strategy and the other an ad-hoc approach to testing.
We conclude that their success appears to be related to a testing culture where the teams proactively test rather than carry out only what is required in a mechanical fashion.,2009,0, 3859,Using Dependency Information to Select the Test Focus in the Integration Testing Process,"Existing software systems consist of thousands of software components realizing countless requirements. To fulfill these requirements, components have to interact with or depend on each other. The goal of the integration testing process is to test that the interactions between these components are correctly realized. However, it is impossible to test all dependencies because of time and budget constraints. Therefore, error-prone dependencies have to be selected as tests. This paper presents an approach to select the test focus in the integration test process. It uses dependency and error information of previous versions of the system under test. Error-prone dependency properties are identified by statistical approaches and used to select dependencies in the current version of the system. The results of two case studies with real software systems are presented.",2009,0, 3860,Using TTCN-3 in Performance Test for Service Application,"Service applications provide services for user requests from the network. Because they have to endure a large number of concurrent requests, the performance of service applications running under a specific arrival rate of requests should be assessed. To measure the performance of a service application, a multi-party testing context is needed to simulate a number of concurrent requests and collect the responses. TTCN-3 is a test description language; it provides basic language elements for a multi-party testing context that can be used in performance tests. This paper proposes a general approach to using TTCN-3 in multi-party performance testing of service applications. To this aim, a model of service applications is presented, and a performance testing framework for service applications is discussed. This testing framework is realized for a typical application by developing a reusable TTCN-3 abstract test suite.",2009,0, 3861,Hierarchical Stability-Based Model Selection for Clustering Algorithms,"We present an algorithm called HS-means which is able to learn the number of clusters in a mixture model. Our method extends the concept of clustering stability to a concept of hierarchical stability. The method chooses a model for the data based on analysis of clustering stability; it then analyzes the stability of each component in the estimated model and chooses a stable model for this component. It continues this recursive stability analysis until all the estimated components are unimodal. In so doing, the method is able to handle hierarchical and symmetric data that existing stability-based algorithms have difficulty with. We test our algorithm on both synthetic datasets and real world datasets. The results show that HS-means outperforms a popular stability-based model selection algorithm, both in terms of handling symmetric data and finding high-quality clusterings in the task of predicting CPU performance.",2009,0, 3862,On the Reliability of Wireless Sensors with Software-Based Attestation for Intrusion Detection,"Wireless sensor nodes are widely used in many areas, including military operation surveillance, natural phenomenon monitoring, and medical diagnosis data collection.
These applications need to store and transmit sensitive or secret data, which requires that intrusion detection mechanisms be deployed to ensure sensor node health, as well as to maintain quality of service and survivability. Because wireless sensors have inherent resource constraints, it is crucial to reduce energy consumption due to intrusion detection activities. In this paper, by means of a probability model, we analyze the best frequency at which intrusion detection based on probabilistic code attestation on the sensor node should be performed so that the sensor reliability is maximized by exploiting the trade-off between the energy consumption and intrusion detection effectiveness. When given a set of parameter values characterizing the operational and networking conditions, a sensor can dynamically set its intrusion detection rate identified by the mathematical model to maximize its reliability and the expected sensor lifetime.",2009,0, 3863,dIP: A Non-intrusive Debugging IP for Dynamic Data Race Detection in Many-Core,"Traditional debug facilities are limited in meeting the debugging requirements of multicore parallel programming. Synchronization problems or bugs due to race conditions are particularly difficult to detect with software debugging tools. This work presents a fast and feasible hardware-assisted solution for many-core non-intrusive debugging. The key idea is to keep track of data accesses to shared memory areas and their lock synchronization activities using the proposed data structures in the proposed debugging IP (dIP). A page-based shared variable cache is provided to keep shared variables as long as possible, and an inexpensive pluggable off-chip RAM can eliminate the false-positive rate efficiently. To decrease the debugging traffic blocking, this work provides a thread library to specify shared memory/lock events and transmit those events to the dIP through a small hardware co-processor (eXtend dIP) on each core. Our experimental results show the worst-case debugging traffic blocking as the number of cores increases, and adding tolerance buffers in the XdIP can efficiently ease it off. Moreover, real workloads (SPLASH-2, MPEG-4, and H.264) are executed with the dIP non-intrusive race detection with only a 4.7%~12.2% slowdown on average. Finally, the hardware cost of the dIP remains low as the number of cores grows.",2009,0, 3864,An abstraction-aware compiler for VHDL models,"Safety-critical hard real-time systems such as the flight control computer in avionics or airbag control software in the automotive industry need to be validated for their correct behavior. Besides the functional correctness, timely task completion is essential, i.e. the worst-case execution time (WCET) of each task in the system has to be determined. Saarland University and AbsInt GmbH have successfully developed the aiT WCET analyzer for computing safe upper bounds on the WCET of a task. The computation is mainly based on abstract interpretation of timing models of the processor and its periphery. Such timing models are currently hand-crafted by human experts. Therefore their implementation is a time-consuming and error-prone process.
This paper presents an abstraction-aware compiler for automatically generating efficient pipeline analyses out of abstracted timing models that could be derived from formal VHDL specifications.",2009,0, 3865,An efficient error concealment method for mobile TV broadcasting,"Nowadays, TV broadcasting has found its application in mobile terminals; however, due to the prediction structure of video coding standards, compressed video bitstreams are vulnerable to wireless channel disturbances for real-time transmission. In this paper, we propose a novel temporal error concealment method for mobile TV sequences. The proposed ordering method utilizes the continuity feature among adjacent frames, so that both inter and intra error propagation are alleviated. Combined with our proposed fuzzy metric based boundary matching algorithm (FBMA), which provides a more accurate distortion function, experimental results show our proposal achieves better performance under error-prone channels, compared with existing error concealment algorithms.",2009,0, 3866,Genome-Wide Association study for glaucoma,"The genome-wide association (GWA) study is the latest approach in the development of genetic studies and is renowned for its widespread success in identifying disease variants within the genome for various common diseases. It is a highly popular study amongst geneticists worldwide, evident from the numerous GWA studies conducted in laboratories all over the world. This paper introduces various GWA study designs currently recognized, as well as other aspects such as the software tools and its progress thus far. In particular, the paper reviews the genetic studies for glaucoma, an ocular disease which can lead to irreversible and permanent vision loss. Glaucomatous progression can be slowed or even halted if detected early; however, genetic information on glaucoma has not been well established yet. Therefore, by conducting a GWA study on glaucoma to find comprehensive associated genetic variants, the early detection of glaucoma through GWA may finally be seen as a possibility.",2009,0, 3867,Application of Fault Tree in Software Safety Analysis,"Along with the development of information technology, computer applications are increasing and software reliability and safety are receiving more and more attention. We therefore make use of fault tree analysis to analyze the probability of failure of every module of a system and thereby find the key modules that have the greatest impact on system safety, and also make use of the structure importance coefficient in a quantitative analysis of the importance of every module in the system.",2009,0, 3868,A Novel Parallel Ant Colony Optimization Algorithm with Dynamic Transition Probability,"Parallel implementation of ant colony optimization (ACO) can considerably reduce the computational time for large scale combinatorial optimization problems. A novel parallel ACO algorithm is proposed in this paper, which uses dynamic transition probability to enlarge the search space by stimulating more ants to choose new paths at an early stage of the algorithm, and uses new parallel strategies to improve the parallel efficiency. We implement the algorithm on the Dawn 400L parallel computer using MPI and the C language.
The numerical results indicate that: (1) the algorithm proposed in this paper can effectively improve convergence speed with better solution quality; (2) more computational nodes can considerably reduce the computational time and obtain significant speedup; (3) the algorithm is more efficient for the large scale traveling salesman problem while maintaining fine solution quality.",2009,0, 3869,Towards Semantic Event-Driven Systems,"One of the critical success factors of event-driven systems is the capability of detecting complex events from simple and ordinary event notifications. Complex events which trigger or terminate actionable situations can be inferred from large event clouds or event streams based on their event instance sequence, their syntax and semantics. Using semantics of event algebra patterns defined on top of event instance sequences for event detection is one of the promising approaches for detection of complex events. The developments and successes in building standards and tools for semantic technologies such as declarative rules and ontologies are opening novel research and application areas in event processing. One of these promising application areas is semantic event processing. In this paper we contribute with a conceptual approach which supports the implementation of the vision of semantic event-driven systems: using Semantic Web technologies, benefiting from complex event processing, and ensuring quality through trust and reputation management. All of these novel technologies lead to more intelligent decision support systems.",2009,0, 3870,Application of Fuzzy Data Mining Algorithm in Performance Evaluation of Human Resource,"The assessment of human resource performance objectively, thoroughly, and reasonably is critical to choosing managerial personnel suited for organizational development. Therefore, an efficient tool should be able to deal with various employees' data and assist managers in making decisions and strategic plans. As an effective mathematical tool for dealing with vagueness and uncertainty, fuzzy data mining is considered a highly desirable tool and has been applied to many application areas. In this paper we applied the fuzzy data mining technique to the assessment and selection of human resources in an enterprise. We present and justify the capabilities of fuzzy data mining technology in the evaluation of human resources in an enterprise by proposing a practical model for improving the efficiency and effectiveness of human resource management. Firstly, the paper briefly explained the basic fuzzy data mining theory and proposed the fuzzy data mining algorithm in detail. We gave the process steps and the flow chart of the algorithm in this part. Secondly, we used the human resource management data as an illustration to implement the algorithm. We used the maximal tree to cluster the human resources. Then the raw data of human resource management are compared with each cluster and the proximal values are calculated based on the equation in the fuzzy data mining algorithm. At last we determined the evaluation of the human resources. The whole process was easy to complete. The results of this study indicated that the methodology was practical and feasible.
It could help managers in enterprises assess the performance of human resources swiftly and effectively.",2009,0, 3871,The Estimation of the Best R Factor of the Path and the Application in the Multipath Transmission,"In VoIP systems, research on using the multipath transmission method to enhance the speech quality of the system has been reported, but there are few reports on how to estimate the link and how to choose the optimal path. In this paper, to address this problem, research on the ITU-T E-model (the model for predicting voice quality) is carried out. The relationship among the R factor, the delay and the packet-loss rate is derived from the simulation curves and formulas. A method, together with its realization steps, for estimating the best R factor of each path based on the characteristics of the link delay and the E-model is proposed. Firstly, we obtain the network delay sequence TN by means of RTP; secondly, the statistical characteristics of the link are estimated by means of the maximum likelihood method; lastly, the best path is chosen by means of RTCP and the best buffer delay is set up to get the best quality of speech on each single path.",2009,0, 3872,A Fast High-Dimensional Tool for Detecting Anomalistic Nodes in Large Scale Systems (LSAND),"Today, large scale computer systems have become an important component in production and scientific computing and lead to rapid advances in many disciplines. However, the size and complexity of these systems make it very difficult to detect unusual nodes automatically, and traditional host monitoring tools are not capable of dealing with the need for anomaly detection across a large number of nodes. In this paper, we introduce a novel tool, LSAND, which can detect anomalistic nodes in a horizontal view of machines with comparable configurations and running tasks. We evaluated LSAND in a cluster environment, tabulate the results of our experiment, discuss its effect, and show that LSAND is both effective and efficient for detecting anomalistic nodes with high-dimensional features.",2009,0, 3873,An Improved Similarity Algorithm for Personalized Recommendation,"Recommendation systems represent personalized services that aim at predicting users' interest in information items available in the application domain. The computation of the neighbor set of users or resources is the most important step of the personalized recommendation system, and the key to this step is the calculation of similarity. This paper analyzes three main similarity algorithms and finds deficiencies in these algorithms, which affect the quality of the recommendation system. Then the paper proposes a new similarity algorithm, Simi-Huang, which effectively overcomes the above-mentioned drawbacks. Experiments show that the Simi-Huang algorithm is better than the three main similarity algorithms in accuracy, especially when the data is sparse. Under different training models, Simi-Huang has the best accuracy of all the algorithms; the smaller the training model is, the more accurate the algorithm is.",2009,0, 3874,Reputation-Aware Scheduling for Storage Systems in Data Grids,"Data grids provide such data-intensive applications with a large virtual storage framework with unlimited power. However, conventional scheduling algorithms for data grids are unable to meet the reputation service requirements of data-intensive applications.
In this paper we address the problem of scheduling data-intensive jobs on data grids subject to reputation service constraints. Using the reputation-aware technique, a dynamic scheduling strategy is proposed to improve the capability of predicting the reliability and credibility of data-intensive applications. To incorporate reputation service into job scheduling, we introduce a new performance metric, degree of reputation sufficiency, to quantitatively measure the quality of the reputation service provided by data grids. Experimental results based on a simulated grid show that the proposed scheduling strategy is capable of significantly satisfying the reputation service requirements and guaranteeing the desired response times.",2009,0, 3875,A Heuristic Approach with Branch Cut to Service Substitution in Service Orchestration,"With the rapidly growing number of Web services throughout the Internet, Service Oriented Architecture (SOA) enables a multitude of service providers (SP) to provide loosely coupled and inter-operable services at different Quality of Service (QoS) levels. This paper considers services that are published to a QoS-aware registry. The structure of a composite service is described as a Service Orchestration that allows atomic services to be brought together into one business process. This paper considers the problem of finding a set of substitution atomic services to make the Service Orchestration re-satisfy the given multi-QoS constraints when one QoS metric becomes unsatisfied at runtime. This paper leverages hypothesis testing to detect possibly faulty atomic services, and proposes heuristic algorithms with different levels of branch cut to determine the Service Orchestration substitutions. Experiments are given to verify that the algorithms are effective and efficient, and the probability cut algorithm reaches a cut/search ratio of 137.04% without losing solutions.",2009,0, 3876,Using Hessian Locally Linear Embedding for autonomic failure prediction,"The increasing complexity of modern distributed systems makes conventional fault tolerance and recovery prohibitively expensive. One of the promising approaches is online failure prediction. However, the process of feature extraction depends on experienced administrators and their domain knowledge for filtering and compressing error events into a form that is suitable for failure prediction. In this paper, we present a novel performance-centric approach to automate failure prediction with Manifold Learning techniques. More specifically, we focus on methods that use the Supervised Hessian Locally Linear Embedding algorithm to achieve autonomic failure prediction. In our experimental work we found that our method can automatically predict more than 60% of the CPU and memory failures, and around 70% of the network failures, based on the runtime monitoring of the performance metrics.",2009,0, 3877,Distributed event processing for fault management of Web Services,"Within service orientation (SO), web services (WS) are the de facto standard for implementing service-oriented systems. While consumers of WS want to get uninterrupted and reliable service from the service providers, WS providers cannot always provide services at the expected level due to faults and failures in the system. As a result, the fault management of these systems is becoming crucial. This work presents a distributed event-driven architecture for fault management of Web Services. According to the architecture, managed WS report different events to event databases.
From the event databases these events are sent to event processors. The event processors are distributed over the network. They process the events, detect fault scenarios in the event stream and manage faults in the WS.",2009,0, 3878,An integrated web-based platform for the provision of personalized advice in people at high risk for CVD,"The aim of the manuscript is to present an integrated web-based platform which is able to assess a person's risk of developing Cardiovascular Disease (CVD) using the Body Mass Index (BMI) as an independent risk factor based on genetic and lifestyle information and, in parallel, to provide personalized advice in order to reduce this risk. A subject fills out a web-available questionnaire regarding his/her lifestyle in terms of nutrition and food habits, while his/her biological material is sent for DNA analysis. Data regarding lifestyle and genetic information are sent to a web server, in order to be used for assessing the subject's risk of developing a high BMI. The assessment is based on an artificial intelligence based system. The result of risk assessment is fed to a remote server where it is integrated with all values corresponding to the answers of the subject to the questionnaire. All values are transferred through the platform in an .xml file. Then, through an appropriate mechanism, a report is generated as a document file (a pdf Acrobat file) which includes the result of risk assessment and the corresponding advice on lifestyle habits. Appropriate quality control actions are taken into consideration during the various processes, while access to the platform is permitted only to authenticated personnel. The latter is ensured by an authentication procedure in the user interface of the software and appropriate usernames/passwords.",2009,0, 3879,"Economics of malware: Epidemic risks model, network externalities and incentives","Malicious software, or malware for short, has become a major security threat. While originating in criminal behavior, its impact is also influenced by the decisions of legitimate end users. Getting agents in the Internet, and in networks in general, to invest in and deploy security features and protocols is a challenge, in particular because of economic reasons arising from the presence of network externalities. Our goal in this paper is to model and quantify the impact of such externalities on the investment in security features in a network. We study a network of interconnected agents, which are subject to epidemic risks such as those caused by propagating viruses and worms. Each agent can decide whether or not to invest some amount in self-protection and deploy security solutions, which decreases the probability of contagion. Borrowing ideas from random graph theory, we explicitly solve this 'micro'-model and compute the fulfilled expectations equilibria. We are able to compute the network externalities as a function of the parameters of the epidemic. We show that the network externalities have a public part and a private one. As a result of this separation, some counter-intuitive phenomena can occur: there are situations where the incentive to invest in self-protection decreases as the fraction of the population investing in self-protection increases. In a situation where the protection is strong and ensures that the protected agent cannot be harmed by the decision of others, we show that the situation is similar to a free-rider problem. In a situation where the protection is weaker, we show that the network can exhibit critical mass.
We also look at interaction with the security supplier. In the case where security is provided by a monopolist, we show that the monopolist is taking advantage of these positive network externalities by providing low-quality protection.",2009,0, 3880,Prediction-Based Prefetching to Support VCR-like Operations in Gossip-Based P2P VoD Systems,"Supporting free VCR-like operations in P2P VoD streaming systems is challenging. The uncertainty of frequent VCR operations makes it difficult to provide high quality realtime streaming services over distributed self-organized P2P overlay networks. Recently, prefetching has emerged as a promising approach to smooth the streaming quality. However, how to efficiently and effectively prefetch suitable segments is still an open issue. In this paper, we propose PREP, a PREdiction-based Prefetching scheme to support VCR-like operations over gossip-based P2P on-demand streaming systems. By employing the reinforcement learning technique, PREP transforms users' streaming service procedure into a set of abstract states and presents an online prediction model to predict a user's VCR behavior via analyzing the large volumes of user viewing logs collected on the tracker. We further present a distributed data scheduling algorithm to proactively prefetch segments according to the predicted VCR behavior. Moreover, PREP takes advantage of the inherent peer collaboration of the gossip protocol to optimize the response latency. Through comprehensive simulations, we demonstrate the efficiency of PREP by achieving an accumulated hit ratio of close to 75% while reducing the response latency by close to 70% with less than 15% extra stress on the server side.",2009,0, 3881,Reliable Software Distributed Shared Memory Using Page Migration,"Reliability has recently become an important issue in PC cluster technology. This research proposes a software distributed shared memory system, named SCASH-FT, as an execution platform for high performance and highly reliable parallel systems on commodity PC clusters. To achieve fault tolerance, each node has redundant page data that allows recovery from node failure using SCASH-FT. All page data is checkpointed and duplicated to another node when a user explicitly calls the checkpoint function. When failure occurs, SCASH-FT invokes the rollback function by restarting execution from the last checkpoint data. SCASH-FT takes charge of processes such as detecting failure and restarting execution. So, all you have to do is add checkpoint function calls in the source code to determine the timing of each checkpoint. Evaluation results show that the checkpoint cost and the rollback penalty depend on the data access pattern and the checkpoint frequency. Thus, users can control their application performance by adjusting checkpoint frequency.",2009,0, 3882,A naive Bayesian Belief Network model for predicting effort deviation rate in software testing,"The cost of most projects exceeds 10% of the corporation's yearly turnover; a major factor contributing to this loss is the cost overrun of software testing. Many events during software Quality Assurance (QA) cycles, the main execution part of the testing process, lead to the loss. Therefore, there is a great potential benefit in finding a way to predict the loss when the risk events arise or when we know they will happen during the QA cycles.
In this paper, a model is proposed to solve the above problem via a Bayesian Belief Network (BBN). In this model, five independent factors, which may lead to loss in software testing, are extracted by exploring historical documents and questionnaires returned from QA managers; they are used to classify the loss of the QA effort and to predict the probability distribution of the loss, and the mean of the distribution is defined as the predicted loss. The model is shown to be effective according to the data collected from 45 delayed QA cycles.",2009,0, 3883,An approach to automatic verification of stochastic graph transformations,"Non-functional requirements like performance and reliability play a prominent role in distributed and dynamic systems. To measure and predict such properties using stochastic formal methods is crucial. At the same time, graph transformation systems are a suitable formalism to formally model distributed and dynamic systems. Already, to address these two issues, stochastic graph transformation systems (SGTS) have been introduced to model dynamic distributed systems. But most of the research so far has concentrated on SGTS as a modelling means without considering the need for suitable analysis tools. In this paper, we present an approach to verify this kind of graph transformation system using PRISM (a stochastic model checker). We translate the SGTS to the input language of PRISM and then PRISM performs the model checking and returns the results back to the designers.",2009,0, 3884,Simulation of fault injection of microprocessor system using VLSI architecture system,Evaluating and possibly improving the fault tolerance and error detecting mechanisms is becoming a key issue when designing safety-critical electronic systems. The proposed approach is based on simulation-based fault injection and allows the analysis of the system behavior when faults occur. The paper describes how a microprocessor board employed in an automated light-metro control system has been modeled in VHDL and a Fault Injection Environment has been set up using a commercial simulator. Preliminary results about the effectiveness of the hardware fault-detection mechanisms are also reported. Such results will address the activity of experimental evaluation in subsequent phases of the validation process.,2009,0, 3885,A multi-scaling based admission control approach for network traffic flows,"In this paper, we propose an analytical expression for estimating byte loss probability at a single server queue with multi-scale traffic arrivals. We extend our investigation to the application potential of the estimation method and its possible quality in connection admission control mechanisms. Extensive experimental tests validate the efficiency and accuracy of the proposed loss probability estimation approach and its superior performance for admission control applications in network connections with respect to some well-known approaches suggested in the literature.",2009,0, 3886,Multimedia and communication technologies in Digital Ecosystems,"A Digital Ecosystem is an evolving computer based system that uses communication and networking technologies to provide a solution for a specific application domain. This paper investigates how the latest multimedia and communication technologies can be used in the development of Digital Ecosystems. The high-level models of two ongoing research projects for developing Digital Ecosystems are presented as examples.
The first system aims to maximize the Utilisation of Harvested Rain Water by considering factors such as stored water level, predicted rainfall and the watering needs of different plants. The second system uses a wireless Sensor Network for bushfire detection and other weather parameters for generating early warnings. The choice of type of input and output information for these Digital Ecosystems needs to consider Quality of Service parameters, particularly delay and jitter.",2009,0, 3887,Fuzzy reliability of gracefully degradable computing systems,"Conventional reliability analysis rely on the probability model, which is often inappropriate for handling complex systems due to the lack of sufficient probabilistic information. For large complex systems made up of many components, the uncertainty of each individual parameter amplifies the uncertainty of the total system reliability. To overcome this problem, the concept of fuzzy approach has been used in the evaluation of the reliability of a system. In this paper, we use a gracefully degradable redundant computing system to capture the effect of coverage factor and repair on its reliability. Markov models on the other hand can handle degradation, imperfect fault coverage, complex repair policies, multi-operational-state components, dependent failures, and other sequence dependent events. In order to have the advantages of both fuzzy and Markov approaches, an attempt has been made to develop a fuzzy based Markov model.",2009,0, 3888,Diagnostic models for sensor measurements in rocket engine tests,"This paper presents our ongoing work in the area of using virtual reality (VR) environments for the Integrated Systems Health Management (ISHM) of rocket engine test stands. Specifically, this paper focuses on the development of an intelligent valve model that integrates into the control center at NASA Stennis Space Center. The intelligent valve model integrates diagnostic algorithms and 3D visualizations in order to diagnose and predict failures of a large linear actuator valve (LLAV). The diagnostic algorithm uses auto-associative neural networks to predict expected values of sensor data based on the current readings. The predicted values are compared with the actual values and drift is detected in order to predict failures before they occur. The data is then visualized in a VR environment using proven methods of graphical, measurement, and health visualization. The data is also integrated into the control software using an ActiveX plug-in.",2009,0, 3889,Control-oriented multirate LPV modelling of virtualized service center environments,"To reduce the operating and energy costs of IT systems, nowadays software components are executed in virtualized servers, where a varying fraction of the physical CPU capacity is shared among running applications. Further, in service oriented environments, providers need to comply with the Service Level Objectives (SLO) stipulated in contracts with their customers. The trade-off between energy consumption and quality of service guarantees can be formalized in terms of a constrained control problem. To effectively solve it, accurate models of the virtualized server dynamics are needed. 
In this paper, an LPV multirate modeling approach is proposed, and its suitability for the considered application is assessed on experimental data.",2009,0, 3890,[Title page i],"The following topics are dealt with: verifier-based password-authenticated key exchange, elliptic curve cryptography, electricity load forecasting, notion of information carrying, DDoS attacks, linguistic multiple attribute decision making, ontology directory services, semantic Web, defect prone software modules, sequential pattern mining, text clustering algorithm, knowledge management system, Internet, speech extraction, speech emotion recognition, emulating process, billing system, bidirectional auction algorithm multirobot coordination, process algebra, image watermarking, photovoltaic system based on FPGA, wavelet threshold de-noising application, workflow process mining system.",2009,0, 3891,Predicting Defect-Prone Software Modules at Different Logical Levels,"Effective software defect estimation can bring cost reduction and efficient resource allocation in software development and testing. Usually, estimation of defect-prone modules is based on the supervised learning of the modules at the same logical level. Various practical issues may limit the availability or quality of the attribute-value vectors extracted from the high-level modules by software metrics. In this paper, the problem of estimating the defects in high-level software modules is investigated from a multi-instance learning (MIL) perspective. In detail, each high-level module is regarded as a bag of its low-level components, and the learning task is to estimate the defect-proneness of the bags. Several typical supervised learning and MIL algorithms are evaluated on a mission critical project from NASA. Compared to the selected supervised schemes, the MIL methods improve the performance of the software defect estimation models.",2009,0, 3892,Improving lesion detectability of a PEM system with post-reconstruction filtering,"We present a method to quantify the image quality of a positron emission mammography (PEM) imaging system through the metric of lesion detectability. For a customized image quality phantom, we assess the impact of different post-reconstruction filters on the acquired PEM image. We acquired six image quality phantom images on a Naviscan PEM scanner using different scan durations which gave differing amounts of background noise. The image quality phantom has dimensions of 130 mm × 130 mm × 66 mm and consists of 15 hot rod inserts with diameters of approximately 10 mm, 5 mm, 4 mm, 3 mm, and 2 mm filled with activity ratios of 3.5, 6.8 and 12.7 times the background activity. One region of the phantom had no inserts so as to measure the uniformity of the background noise. Lesion detectability was determined for each background uniformity and each activity ratio by extrapolating a fit of the recovery coefficients to the point where the lesion would be lost in the noise of the background (defined as 3 times the background's standard deviation). The data were reconstructed by the system's standard clinical software using an MLEM algorithm with 5 iterations. We compare the lesion detectability of an unfiltered image to the image after applying one of five common post-reconstruction filters. Two of the filters were found to improve lesion detectability: a bilateral filter (9% improvement) and a Perona-Malik filter (8% improvement).
One filter was found to have negligible effect: a Gaussian filter showed a 1% decrease in lesion detectability. The other two filters tested were found to worsen lesion detectability: a median filter (8% decrease) and a Stick filter (7% decrease).",2009,0, 3893,The Utah PET lesion detection database,"Task-based assessment of image quality is a challenging but necessary step in evaluating advancements in PET instrumentation, algorithms, and processing. We have been developing methods of evaluating observer performance for detecting and localizing focal warm lesions using experimentally-acquired whole-body phantom data designed to mimic oncologic FDG PET imaging. This work describes a new resource of experimental phantom data that is being developed to facilitate lesion detection studies for the evaluation of PET reconstruction algorithms and related developments. A new large custom-designed thorax phantom has been constructed to complement our existing medium thorax phantom, providing two whole-body setups for lesion detection experiments. The new phantom is ~50% larger and has a removable spine/rib-cage attenuating structure that is held in place with low water resistance open cell foam. Several series of experiments have been acquired, with more ongoing, including both 2D and fully-3D acquisitions on tomographs from multiple vendors, various phantom configurations and lesion distributions. All raw data, normalizations, and calibrations are collected and offloaded to the database, enabling subsequent retrospective offline reconstruction with research software for various applications. The offloaded data are further processed to identify the true lesion locations in preparation for use with both human observers and numerical studies using the channelized non-prewhitened observer. These data have been used to study the impact of improved statistical algorithms, point spread function modeling, and time-of-flight measurements upon focal lesion detection performance, and studies on the effects of accelerated block-iterative algorithms and advanced regularization techniques are currently ongoing. Interested researchers are encouraged to contact the author regarding potential collaboration and application of database experiments to their projects.",2009,0, 3894,Analyzing Feasibility of Requirement Driven Service Composition,"In a service-oriented architecture, how to analyze the feasibility of service composition according to the requirements of service consumers has become a problem that must be solved in service composition. A method for analyzing the feasibility of requirement-driven service composition is proposed. Based on the support function at different stages in the lifecycle of service composition, the composition process is divided into a number of independent function modules, and colored Petri nets are used to model them; meanwhile, the QoS properties of services such as price and success probability are considered, and the task failure and processing strategies are characterized. Based on this, combined with the requirements of the service consumer, a definition of feasibility and a service schedulability strategy are advanced. ASK-CTL is introduced to describe and verify basic properties of the model, and the corresponding analysis algorithm is also given.
Moreover, at the end of this paper, an example is given to simulate the analysis process using CPN Tools.",2009,0, 3895,Full coverage manufacturing testing for SRAM-based FPGA,"Full coverage detection of faults is required for FPGA manufacturing testing. Much research on FPGA testing focuses only on algorithms for reducing the number of configurations for configurable logic blocks (CLBs) or interconnect routings, without application to manufacturing testing. Taking advantage of the flexibility and observability of software in conjunction with high-speed simulation of hardware, an in-house FPGA functional test environment based on SoC co-verification technology, embedded with an in-house computerized tool, ConPlacement, can test the CLBs and interconnect routing of an FPGA automatically, exhaustively and repeatedly. The approach to implementing full coverage detection of CLB and interconnect routing faults with the FPGA functional test environment is presented in the paper. Experimental results on the XC4010E demonstrate that 17 configurations are required to achieve 100% coverage for an FPGA under test.",2009,0, 3896,Time-sensitive access control model in P2P networks,"Peer-to-peer (P2P) networks have attracted more and more attention in recent years. The access control model is very important for a network environment, especially for dynamic networks such as P2P networks. In this paper, a novel time-sensitive access control model is proposed for P2P networks. Firstly, time points and time spans described by the OCL (object constraint language) are introduced. Secondly, an access control model is developed that considers the on-line time probability of each peer and selects the neighbor peer based on the weighted value of on-line time and trust value. Thirdly, two different types of weighting are introduced, i.e. static weighting and adaptive weighting. Then, the proposed time-sensitive access control model is implemented on the well-known query cycle simulator. It is concluded that our proposed access control model is superior to the current access control model in terms of file sharing quality and quantity.",2009,0, 3897,Quality prediction model of object-oriented software system using computational intelligence,"Effective prediction of fault-proneness plays a very important role in the analysis of software quality and the balancing of software cost, and it is also an important problem in software engineering. The importance of software quality is increasing, leading to the development of new sophisticated techniques which can be used in constructing models for predicting quality attributes. In this paper, we use fuzzy c-means clustering (FCM) and a radial basis function neural network (RBFNN) to construct a prediction model of fault-proneness; the RBFNN is used as a classifier, and FCM as a clusterer. Object-oriented software metrics are used as input variables of the fault prediction model. Experimental results confirm that the designed model is very effective for predicting a class's fault-proneness; it has high accuracy, and its implementation requires neither extra cost nor expert knowledge. It is also automated. Therefore, the proposed model is very useful in predicting software quality and classifying fault-proneness.",2009,0, 3898,Whisper500 wind turbine failure mode post-analysis and simulation,"Wind power is a promising green energy source as the energy crisis gradually grows.
The key factors for wind power are energy transfer efficiency and the reliability of the wind turbine system: without efficiency, the power output merely wastes energy; without reliability, the wind turbine system is not only useless but a danger to its surroundings. This paper uses stress and dynamic simulation to analyze a blade-falling accident of the Whisper500 wind turbine. We use ADAMS software to build a model of the Whisper500 wind turbine and simulate its structure, consider the thrust exerted on the blade by the flow field to analyze the force taken by the screws when the blades are rotating, and compare the model with the real Whisper500 wind turbine to find the optimum electricity output under environmental variation and improve the structural design of the wind turbine. The purpose is to reduce the probability of wind turbine accidents so as to achieve more efficiency and more reliability for the wind turbine system.",2009,0, 3899,Adaptation of ATAMSM to software architectural design practices for organically growing small software companies,"The architecture of a software application determines the degree of success of both operation and development of the software. Adopted architectural options not only affect the functionality and performance of the software, but they also affect delivery-related factors such as cost, time, changeability, scalability, and maintainability. It is thus very important to find appropriate means of assessing the benefits as well as the liabilities of different architectural options to maximize the life-time benefit and reduce the overall cost of ownership of a software application. The Architecture Tradeoff Analysis Method (ATAMSM) developed by the Software Engineering Institute (SEI) is that kind of tool. This is, however, a very large framework for dealing with architectural tradeoff issues faced by large companies developing large as well as complex software applications. Practicing full-blown ATAM without taking into consideration the diverse forces affecting the value added by its practice does not maximize the benefits of its adoption. The related forces faced by small software companies are significantly different from those faced by large software companies. Therefore, ATAM should be adapted to make it suitable for practice by small software companies. This paper presents information about the architectural practice level of organically grown small software companies within the context of ATAM, followed by a gap analysis between the industry practices and ATAM, and adaptation recommendations. Both a literature review and a field investigation based on key informant interviews have been performed for this purpose. Based on the findings of this study, an adaptation process of ATAM for small companies has been proposed.
Upon analysis of the review outputs and field-level investigation findings, a set of recommendations for practicing market research methodology in small software companies has been derived for achieving systematic reuse-based sustained capability improvement in delivering customized software applications in attractive market segments in an increasingly profitable manner.",2009,0, 3901,Workflow composition through design suggestions using design-time provenance information,"With the increasing complexity of Grid-based application workflows, the workflow design process is also getting more and more complex. Many workflow design tools provide mechanisms to ease the workflow design process and to make life easier for the workflow designer. In this paper, we present a provenance-based workflow design suggestion system for quick and easy creation of error-free workflows. In our approach, the provenance system intercepts the users' actions, processes and stores these actions in the provenance store, and provides suggestions about possible subsequent actions for the workflow design. These suggested actions are based on the current user actions and are calculated using the provenance information available in the provenance store. These design suggestions partially automate the design process, providing ease of use, reliability and correctness during the workflow design process. Creating error-free workflows is of pivotal importance in distributed execution environments. Increasing complexity in the design of these complex workflows is making the design process more error-prone and tedious. Taking into account the significance of the correctness of Grid-based workflows and realizing the importance of design time in the life of a workflow-based application, we present a novel approach to using recorded provenance information.",2009,0, 3902,Study on performance testing of index server developed as ISAPI Extension,"A major concern of most businesses is their ability to meet customers' performance requirements. Correspondingly, in order to ensure that an information system provides high-quality service, it is necessary to test its performance before the information system is issued. This paper proposes an approach to performance testing of an Index Server, which is developed as an ISAPI Extension. The paper mainly focuses on a case study that demonstrates the approach on a security-update index server. With Avalanche and testing in the test lab, we assessed the performance of the system under both current workloads and those likely to be encountered in the future. In addition, this led us to find the bottleneck in its performance.",2009,0, 3903,A two-stage safety analysis model for railway level crossing surveillance systems,"In many circumstances, the application of safety analysis tools may not give satisfactory results because the safety-related data are incomplete or there is a high level of uncertainty involved in the safety-related data. This paper presents a two-stage safety analysis model for railway level crossing surveillance systems using fuzzy Petri nets, fault tree analysis and a Markov model.
An empirical study is also conducted to assess the safety status of the most feasible level crossing surveillance system for the Taiwan Railway Administration.",2009,0, 3904,Advance reservations for distributed real-time workflows with probabilistic service guarantees,"This paper addresses the problem of optimum allocation of distributed real-time workflows with probabilistic service guarantees over a grid of physical resources made available by a provider. The discussion focuses on how such a problem may be mathematically formalised, both in terms of constraints and objective function to be optimized, which also accounts for possible business rules for regulating the deployment of the workflows. The presented formal problem constitutes a probabilistic admission control test that may be run by a provider in order to decide whether or not it is worthwhile to admit new workflows into the system, and to decide what the optimum allocation of the workflow to the available resources is. Various options are presented which may be plugged into the formal problem description, depending on the specific needs of individual workflows.",2009,0, 3905,Business-oriented fault localization based on probabilistic neural networks,"Analyzed here is a business-oriented fault localization algorithm based on a transitive closure fault propagation model and probabilistic neural networks (PNN). Business-oriented fault localization constructs a fault propagation model for each large, complex software business. This strategy focuses on the availability of key business rather than on scattered fault information. Because of the complex dependency relations between software, hardware, and middleware, a fault in one component may propagate into correlated components and produce multiple alarms (symptoms). The transitive closure is the domain of possible symptoms of faults. Fault diagnosis can thus be transformed into a classification problem. In practice, PNN is often an excellent pattern classifier, outperforming other classifiers including back propagation (BP). It trains quickly since the training is done in one pass of each training vector, rather than several iterations. In our fault localization algorithm FLPNN, the conditional probability of a symptom is used as the weight of the hidden layer, and the probability of a fault is used as the weight of the output layer. The input of FLPNN is a binary vector which represents whether or not each symptom has occurred. In order to adapt to changes in the fault pattern, an incremental learning algorithm, DFLPNN, is also investigated. The simulation results show the validity and efficiency of FLPNN compared with MCA+ under lost and spurious symptom circumstances.",2009,0, 3906,Optical Fault Attacks on AES: A Threat in Violet,"Microprocessors are the heart of the devices we rely on every day. However, their non-volatile memory, which often contains sensitive information, can be manipulated by ultraviolet (UV) irradiation. This paper gives practical results demonstrating that the non-volatile memory can be erased with UV light by investigating the effects of UV-C light with a wavelength of 254 nm on four different depackaged microcontrollers. We demonstrate that an adversary can use this effect to attack an AES software implementation by manipulating the 256-bit S-box table. We show that if only a single byte of the table is changed, 2 500 pairs of correct and faulty encrypted inputs are sufficient to recover the key with a probability of 90%, in the case that the key schedule is not modified by the attack.
Furthermore, we emphasize this by presenting a practical attack on an AES implementation running on an 8-bit microcontroller. Our attack involves only a standard decapsulation procedure and the use of a low-cost UV lamp.",2009,0, 3907,Application of pre-function information in software testing based on defect patterns,"In order to improve the precision of static software testing based on defect patterns, inter-function information was extended and applied in static software testing. Pre-function information includes two parts, the effect of the context on the invoked function and the constraint of the invoked function on the context, which can be used to detect common defects such as null pointer defects, uninitialized-variable defects, dangling pointer defects, illegal operation defects, out-of-bounds defects and so on. Experiments show that pre-function information can effectively reduce false negatives in static software testing.",2009,0, 3908,Computer simulation of the grey system prediction method and its application in water quality prediction,"Water quality prediction is an important basis for implementing water pollution control programs. In this paper, the grey system method is used to build a mathematical model of water quality prediction for the Yangtze River, and computer simulation is used to implement the method for solving the model. Finally, we find a method to solve the problem of predicting the water quality of the Yangtze River. The key problems focus on the following two issues. (1) If we do not take more effective control measures, we make a prediction analysis of the future development trend based on past data. (2) According to the prediction analysis, we use computer simulation to determine how much sewage we need to address each year.",2009,0, 3909,Adaptive online testing for efficient hard fault detection,"With growing semiconductor integration, the reliability of individual transistors is expected to rapidly decline in future technology generations. In such a scenario, processors would need to be equipped with fault tolerance mechanisms to tolerate in-field silicon defects. Periodic online testing is a popular technique to detect such failures; however, it tends to impose a heavy testing penalty. In this paper, we propose an adaptive online testing framework to significantly reduce the testing overhead. The proposed approach is unique in its ability to assess the hardware health and apply suitably detailed tests. Thus, a significant chunk of the testing time can be saved for the healthy components. We further extend the framework to work with the StageNet CMP fabric, which provides the flexibility to group together pipeline stages with similar health conditions, thereby reducing the overall testing burden. For a modest 2.6% sensor area overhead, the proposed scheme was able to achieve an 80% reduction in software test instructions over the lifetime of a 16-core CMP.",2009,0, 3910,Semantic keyword extraction via adaptive text binarization of unstructured unsourced video,"We propose a fully automatic method for summarizing and indexing unstructured presentation videos based on text extracted from the projected slides. We use changes of text in the slides as a means to segment the video into semantic shots. Unlike previous approaches, our method does not depend on the availability of the electronic source of the slides, but rather extracts and recognizes the text directly from the video.
Once text regions are detected within keyframes, a novel binarization algorithm, Local Adaptive Otsu (LOA), is employed to deal with the low quality of video scene text, before feeding the regions to the open source Tesseract OCR engine for recognition. We tested our system on a corpus of 8 presentation videos for a total of 1 hour and 45 minutes, achieving 0.5343 Precision and 0.7446 Recall Character recognition rates, and 0.4947 Precision and 0.6651 Recall Word recognition rates. Besides being used for multimedia documents, topic indexing, and cross referencing, our system can be integrated into summarization and presentation tools such as the VAST MultiMedia browser.",2009,0, 3911,A no-reference perceptual blur metric using histogram of gradient profile sharpness,"No-reference measurement of blurring artifacts in images is a challenging problem in the image quality assessment field. One of the difficulties is that the inherently blurry regions in some natural images may disturb the evaluation of blurring artifacts. In this paper, we study the image gradients along local image structures and propose a new perceptual blur metric to deal with the above problem. The gradient profile sharpness of image edges is efficiently calculated along the horizontal or vertical direction. Then the sharpness distribution histogram rectified by a just noticeable distortion (JND) threshold is used to evaluate the blurring artifacts and assess the image quality. Experimental results show that the proposed method can achieve good image quality prediction performance.",2009,0, 3912,A majority voter for intrusion tolerant software based on N-version programming techniques,"One of the drawbacks of the existing majority voters, which are widely used in the N-version programming (NVP) technique, is the high probability of agreement on incorrect results generated by variants. Therefore, to propose an intrusion-tolerant software architecture based on NVP for hostile environments and to consider possible attacks, a new voting scheme is required. In this paper, we propose a voting scheme to improve the correctness of binary majority voters in hostile environments, handling situations in which more than half of the variants may have been compromised. We have used stochastic activity networks (SANs) to model the scheme for a triple-version programming (3VP) system and to measure the probability of the voter detecting the correct outputs. The evaluation results showed that the proposed scheme can improve the correctness of the classic majority voting algorithms in detecting the correct output, especially when intrusion detection mechanisms are used in the scheme.",2009,0, 3913,New H.264 intra-rate estimation and inter-rate control driven by improved MAD-based Contrast Sensitivity,"This paper aims to improve H.264 bit-rate control. The proposed algorithm is based on a new and efficient rate-quantization (R-Q) model for the intra frame. For the inter frame, we propose to replace the current use of MAD by a new MAD-based human contrast sensitivity (MAD-CS) measure, which is a more accurate complexity measure. The R-Q model for the intra frame results from extensive experiments. The optimal initial quantization parameter QP is based on both the target bit-rate and the complexity of the I-frame. The I-frame target bit-rate is derived from the global target bit-rate by using a new non-linear model. MAD-CS includes the contrast sensitivity of the human visual system and weights the absolute differences by the probability of their occurrence.
Extensive simulation results show that the use of MAD-CS and the proposed R-Q model achieves better rate control for intra frames, reduces the bit-rates when compared to the H.264 rate control adopted in the JM reference software, minimizes the peak signal-to-noise ratio variations among encoded pictures and significantly increases both subjective visual quality (measured by psychovisual experiments) and objective quality.",2009,0, 3914,Resource prediction and quality control for parallel execution of heterogeneous medical imaging tasks,"We have established a novel control system for combining the parallel execution of deterministic and non-deterministic medical imaging applications on a single platform, sharing the same constrained resources. The control system aims at avoiding resource overload and ensuring throughput and latency of critical applications, by means of accurate resource-usage prediction. Our approach is based on modeling the required computation tasks, by employing a combination of weighted moving-average filtering and scenario-based Markov chains to predict the execution. Experimental validation on medical image processing shows an accuracy of 97%. As a result, the latency variation within non-deterministic analysis applications is reduced by 70% by adaptive splitting/merging of tasks. Furthermore, the parallel execution of a deterministic live-viewing application features constant throughput and latency by dynamically switching between quality modes. Interestingly, our solution can successfully be reused for alternative applications with several parallel streams, like in surveillance.",2009,0, 3915,RAW tool identification through detected demosaicing regularity,"RAW tools are PC software tools that develop the RAWs, i.e. the camera sensor data, into full-color photos. In this paper, we propose to study the internal processing characteristics of these RAW tools using 3 heterogeneous sets of demosaicing features. Through feature-level fusion, normalization and an Eigen-space regularization technique, we derive a compact set of discriminant features. Experimentally, we find that the compact feature set can be used to accurately distinguish 40 RAW-tool classes. A dissimilarity study also shows that the cropped image blocks from different RAW-tool or positional classes have a great deal of dissimilarity in our extracted demosaicing features.",2009,0, 3916,Development of a Petri net-based fault diagnostic system for industrial processes,"For the improvement of the reliability and safety of industrial processes, a fault detection and tracing approach has been proposed. In this paper, the P-invariant of Petri nets (PN) is applied to discover sequence faults, while both sensor faults and actuator faults are detected using exclusive logic functions. For industrial applications, the proposed fault detector has been implemented within a programmable logic controller (PLC) by converting the fault detection logic functions into ladder logic diagrams (LLD). Moreover, a fault tracer has been modeled by an AND/OR tree and a tracing procedure is provided to locate the faults. A mark stamping process is demonstrated as an example to illustrate the proposed diagnostic approach.",2009,0, 3917,Development of an Electrical Power Quality Monitor based on a PC,"This paper describes an electric power quality monitor developed at the University of Minho.
The hardware of the monitor consists of four current sensors based on the Rogowski effect, four voltage sensors based on the Hall effect, a signal conditioning board and a computer. The software of the monitor consists of several applications, and it is based on LabVIEW. The developed applications allow the equipment to function as a digital scope, analyze harmonic content, detect and record disturbances in the voltage (wave shapes, sags, swells, and interruptions), measure energy, power, unbalances and power factor, register and visualize strip charts, record a large amount of data on the hard drive and generate reports. This article also depicts an electrical power quality monitor integrated into active power filters developed at the University of Minho.",2009,0, 3918,Chaos Immune Particle Swarm Optimization Algorithm with Hybrid Discrete Variables and its Application to Mechanical Optimization,"During the iterative process of standard particle swarm optimization (PSO), the premature convergence of particles decreases the algorithm's searching ability. Through analyzing the reason for premature particle convergence during the update process, and by introducing a selection strategy based on antibody density and initialization based on equal-probability chaos, a chaos immune particle swarm optimization (CIPSO) algorithm with a hybrid discrete variables model was proposed, and its program CIPSO1.0 was developed with Matlab software. Chaos-based initialization gives the initial particles good performance, and the selection strategy based on antibody density makes the particles of CIPSO maintain their diversity during the iterative process, thus overcoming the defect of premature convergence. An example of mechanical optimization indicates that, compared with existing algorithms, CIPSO obtains better results, thus confirming the improvement of the algorithm's searching ability by the immunity mechanism and chaos initialization.",2009,0, 3919,"A multi-tenant oriented performance monitoring, detecting and scheduling architecture based on SLA","Software as a Service (SaaS) is thriving as a new mode of service delivery and operation with the development of network technology and the maturity of application software. SaaS application providers offer services for multiple tenants through the """"single-instance multi-tenancy"""" model, which can effectively reduce service costs due to the scale effect. Meanwhile, the providers allocate resources according to the SLA signed with tenants to meet their different service quality needs. However, the service quality of some tenants will be affected by abnormal consumption of system resources since both hardware and software resources are shared by tenants. In order to deal with this issue, we propose a multi-tenant oriented monitoring, detecting and scheduling architecture based on SLA for performance isolation. It monitors the service quality of each tenant, discovers abnormal status and dynamically adjusts the use of resources based on the quantization of SLA parameters to ensure the full realization of SLA tasks.",2009,0, 3920,An Airborne imaging Multispectral Polarimeter (AROSS-MSP),"Transport of sediment and organisms in rivers, estuaries and the near-shore ocean is dependent on the dynamics of waves, tides, turbulence, and the currents associated with these interacting bodies of water.
We present measurements of waves, currents and turbulence from color and polarization remote sensing in these regions using our Airborne Remote Optical Spotlight System-Multispectral Polarimeter (AROSS-MSP). AROSS-MSP is a 12-channel sensor system that measures 4 color bands (RGB-NTR) and 3 polarization states for the full linear polarization response of the imaged scene. Color and polarimetry, from airborne remotely-sensed time-series imagery, provide unique information for retrieving dynamic environmental parameters relating to sediment transport processes over a larger area than is possible with typical in situ measurements. Typical image footprints provide area coverage on the water surface on the order of 2 square kilometers with 2 m ground sample distance. A significant first step, in advanced sensing systems supporting a wide range of missions for organic UAVs, has been made by the successful development of the Airborne Remote Optical Spotlight System (AROSS) family of sensors. These sensors, in combination with advanced algorithms developed in the Littoral Remote Sensing (LRS) and Tactical Littoral Sensing (TLS) Programs, have exhibited a wide range of important environmental assessment products. An important and unique aspect of this combination of hardware and software has been the collection and processing of time-series imaging data from militarily-relevant standoff ranges that enable characterization of riverine, estuarine and nearshore ocean areas. However, an optimal EO sensor would further split the visible and near-infrared light into its polarimetric components, while simultaneously retaining the spectral components. AROSS-MSP represents the third generation of sophistication in the AROSS series, after AROSS-MultiChannel (AROSS-MC) which was developed to collect and combine time-series image data from a 4-camera sensor package. AROSS-MSP extends the use of color or polarization filters on four panchromatic cameras that was provided by AROSS-MC to 12 simultaneous color and polarization data channels. This particular field of optical remote sensing is developing rapidly, and data of this much more general form is expected to enable the development of a number of additional important environmental data products. Important examples that are presently being researched are: minimizing surface reflections to image the sub-surface water column at greater depth, detecting objects in higher environmental clutter, improving ability to image through marine haze and maximizing wave contrast to improve oceanographic parameter retrievals such as wave spectra and water depth and currents. These important capabilities can be supported using AROSS-MSP. The AROSS-MSP design approach utilizes a yoke-style positioner, digital framing cameras, and integrated Global Positioning System/Inertial Measurement Unit (GPS/IMU), with a computer-based data acquisition and control system. Attitude and position information are provided by the GPS/IMU, which is mounted on the sensor payload rather than on the airframe. The control system uses this information to calculate the camera pointing direction and maintain the intended geodetic location of the aim point in close proximity to the center of the image while maintaining a standoff range suitable for military applications. To produce high quality images for use in quantitative analysis, robust individual camera and inter-camera calibrations are necessary.
AROSS-MSP is optimally focused and imagery is corrected for lens vignetting, non-uniform pixel response, relative radiometry and geometric distortion. The cameras are aligned with each other to sub-pixel accuracy for production of multichannel imagery products and with the IMU for mapping to a geodetic surface. The mapped, corrected",2009,0, 3921,Real-time and long-term monitoring of phosphate using the in-situ CYCLE sensor,"Dissolved nutrient dynamics broadly affect issues related to public health, ecosystem status and resource sustainability. Modeling ecosystem dynamics and predicting changes in normal variability due to potentially adverse impacts requires sustained and accurate information on nutrient availability. On site sampling is often resource limited which results in sparse data sets with low temporal and spatial density. For nutrient dynamics, sparse data sets will bias analyses because critical time scales for the relevant biogeochemical processes are often far shorter and spatially limited than sampling regimes. While data on an areal basis will always be constrained economically, an in-situ instrument that provides coherent data at a sub-tidal temporal scale can provide a significant improvement in the understanding of nutrient dynamics and biogeochemical cycles. WET Labs has developed an autonomous in-situ phosphate analyzer which is able to monitor variability in the dissolved reactive phosphate concentration (orthophosphate) for months with a sub-tidal sampling regime. The CYCLE phosphate sensor is designed to meet the nutrient monitoring needs of the community using a standard wet chemical method (heteropoly blue) and minimal user expertise. The heteropoly blue method for the determination of soluble reactive phosphate in natural waters is based on the reaction of phosphate ions with an acidified molybdate reagent to yield molybdophosphoric acid, which is then reduced with ascorbic acid to a highly colored blue phosphomolybdate complex. This method is selective, insensitive to most environmental changes (e.g., pH, salinity, temperature), and can provide detection limits in the nM range. The CYCLE sensor uses four micropumps that deliver the two reagents (ascorbic acid and acidified molybdate), ambient water, and a phosphate standard. The flow system incorporates an integrated pump manifold and fluidics housing that includes controller and mixing assemblies virtually insensitive to bubble interference. A 5-cm pathlength reflective tube absorption meter measures the absorption at 880 nm associated with reactive phosphate concentration. Reagents and an on-board phosphate standard for quality assurance are delivered using a novel and simple-to-use cartridge system that eliminates the user's interaction with the reagents. The reagent cartridges are sufficient for more than 1000 samples. The precision of the CYCLE sensor is ~50 nM phosphate, with a dynamic range from ~0 to 10 μM. The CYCLE sensor operates using 12 VDC input, and has a low current draw (milliamps). CYCLE also has 1 GB on-board data storage capacity, and communicates using a serial interface. The host software for the CYCLE sensor includes a variety of features, including deployment planning and sensor configuration, data processing, plotting of raw and processed data, tracking of reagent usage and a pre and post deployment calibration utility. The instrument has been deployed in a variety of sampling situations: freshwater, estuarine, and ocean.
Deployments are typically for over 1000 samples' worth of continuous run time without maintenance (4-12 wks). Using the CYCLE phosphate sensor, a sufficient sampling rate (~20-30 minutes per sample) is realized to monitor in-situ nutrient variability over a broad range of time scales including tidal cycles, runoff events, and phytoplankton bloom dynamics. We present a time series of phosphate data collected in Yaquina Bay, Oregon. Combining these data with complementary measurements, the CYCLE phosphate sensor provides a missing link in understanding nutrient dynamics in Yaquina Bay. We demonstrate that by correlating phosphate variability with nitrate, chlorophyll, dissolved oxygen, turbidity, CDOM, conductivity, and temperature, a greater understanding of the factors influencing nutrient flux in the bay is possible. What nutrients limit production and whether anthropogenic or oceanic sources of nutrients dominate bloom dynamics can",2009,0, 3922,Implementations of the Navy Coupled Ocean Data Assimilation system at the Naval Oceanographic Office,"The Naval Oceanographic Office uses the Navy Coupled Ocean Data Assimilation (NCODA) system to perform data assimilation for ocean modeling. Currently the system uses a 3D multivariate optimum interpolation (3D MVOI) algorithm to produce outputs of temperature, salinity, geopotential, and u/v velocity. NCODA is run in a standalone mode to support automated ocean data quality control (NCODA OcnQC) and to test software updates. NCODA is also coupled with the Regional/Global Navy Coastal Ocean Model (RNCOM/GNCOM). The RNCOM/NCODA system is being used as part of an Adaptive Sampling and Prediction (ASAP) pre-operational project that makes use of the Ensemble Transform (ET) and Ensemble Transform Kalman Filter (ET KF) applied to ensemble runs of the RNCOM. The ET KF is used to predict the posterior error covariances resulting from possible profile measurements. These results aid in predicting the impact of ocean observations on the future analysis, and thus allow the direction of limited assets to areas that will have the maximum gain (for applications such as ocean acoustics). A review of these systems will be given as well as examples of the metrics used for the RNCOM/NCODA system, ensemble modeling, and ASAP.",2009,0, 3923,Instrumentation for continuous monitoring in marine environments,"Continuous monitoring data are a useful source of information for the understanding of seasonal chemical and biological changes in marine environments. They are useful to estimate nutrient dynamics, primary and secondary production as well as to assess C, N, P fluxes associated with biogeochemical cycling. More and better water quality data are needed to calculate the Maximum Permissible Loading of coastal waters, and we need better data to assess trends, to determine current status and impairments, and to test water quality models. For a long time these requirements were not met satisfactorily due to the absence of suitable instrumentation in the market. SYSTEA has tried to bridge this gap for ten years with the development of several field analyzers (NPA, NPA Plus and Pro, the DPA series and, most recently, the WIZ probe) and, by participating in several R&D European projects (EXOCET/D, WARMER), we have proven our ability to build reliable and efficient in-situ probes which are now commercially available.
SYSTEA chose very early to work in collaboration with scientific institutions specialized in marine ecosystem study, and it is currently the only company able to offer a complete range of in-situ probes for continuous nutrient analysis, using its exclusive μ-LFA technology fully developed by SYSTEA in collaboration with Sysmedia S.r.l., with remote management capabilities. These innovative technical solutions allow deploying their DPA probe down to 1500 m depth, maintaining a high level of accuracy and robustness as proved during the European project EXOCET/D in 2006. The WIZ probe is the latest development of SYSTEA, a state-of-the-art portable """"in-situ"""" probe able to measure up to four chemical parameters continuously in surface waters or marine environments. The innovative design allows easy handling and field deployment by the user. The WIZ probe allows, in the standard configuration, the detection of four nutrient parameters (orthophosphate, ammonia, nitrite and nitrate) in low concentrations while autonomously managing the well-tested spectrophotometric wet chemistries and an advanced fluorimetric method for ammonia measurement. Analytical methods have been developed for several other parameters including silicates, iron and trace metals. Results are directly recorded in concentration units; all measured values are stored with date, time and sample optical density (O.D.). The same data are remotely available through a serial communication port, which allows complete probe configuration and remote control using the external Windows-based Wiz Control Panel software.
However, many commercial H.264 players cannot handle FMO. We have developed a new method to remove the FMO structure, thereby allowing the video to be decoded on any commercial player. We also present a model that accurately predicts the average overheads incurred by our scheme. At the same time, we developed a new error concealment method for I-frames to enhance video quality without relying on channel feedback. This method is shown to be superior to existing methods, including that from the JM reference software.",2009,0, 3927,On Supporting P2P-Based VoD Services over Mesh Overlay Networks,"Due to their ability to overcome many shortcomings associated with the contemporary client-server paradigm, Peer-to-Peer (P2P) networks have attracted phenomenal interest from researchers in both academia and industry. Interactive and multimedia streaming applications using P2P networks are, however, often prone to long startup delays, which disrupt smooth playback and undermine users' perceived quality of service. In addition, P2P networks must be able to support a potentially large number of users while ensuring that the resources are efficiently utilized. In this paper, by addressing these shortcomings in the traditional P2P framework, we envision a novel scheme to effectively provide Video-on-Demand (VoD) services using P2P-based mesh overlay networks. The proposed scheme covers two main phases, namely the requesting and scheduling modes. The former aims at dynamically selecting the required contents from the available peers. On the other hand, in the scheduling mode, the incoming requests are scheduled in a priority-based manner for minimizing the startup latency and sustaining the playback rate at an acceptable level. Computer simulations have been conducted to verify the effectiveness of the proposed scheme. The obtained results demonstrate the scalability of our envisioned scheme in addition to its capability to reduce the startup delay and provide a sustainable playback rate.",2009,0, 3928,Application of single pole auto reclosing in distribution networks with high penetration of DGs,"Due to the continued penetration of DG into existing distribution networks, the tripping of DG during network fault conditions may affect network stability and reliability. Operation of DG should remain during temporary network faults as far as possible in order to maximise the benefits of interconnection of DG and increase the reliability of power supply service to consumers. Conventional three pole auto reclosers in distribution networks become incompatible with the presence of DG because they interrupt the operation of DG during temporary single phase to earth fault events and cause unnecessary disconnection of DG. Therefore the application of the single pole auto reclosing scheme (SPAR), which is widely used in transmission networks, should be considered in distribution networks with DG. A literature survey shows that no investigation has been carried out on the application of fault identification and phase selection techniques to single pole auto reclosing schemes in distribution networks with high penetration of DG. This paper presents the development of an adaptive fault identification and phase selection scheme to be used in the implementation of SPAR in power distribution networks with DG. The proposed method uses only three line current measurements at the relay point.
The value of the line current during the prefault condition and the transient period of the fault condition is processed using IF-THEN condition rules in order to determine the faulty phase and to initiate single pole auto reclosure. The analysis of the proposed method is performed using PSCAD/EMTDC power system software. Test results show that the proposed method can correctly detect the faulty phase within one cycle. The validity of the proposed method has been tested for different fault locations and network operating modes.",2009,0, 3929,Reverse engineering as a means of improving and adapting legacy finite element code,"The development of code for finite element-based field computation has been going on at a pace since the 1970s, yielding code that was not put through the software lifecycle - where code is developed through a sequential process of requirements elicitation from the user/client to design, analysis, implementation and testing (with loops going back from the second stage onwards as dissatisfactions are identified or questions arise) and release and maintenance. As a result, today we have legacy code running into millions of lines, implemented without planning and not using proper state-of-the-art software design tools. It is necessary to redo this code to exploit object oriented facilities and make corrections or run on the web with Java. Object oriented code's principal advantage is reusability. It is ideal for describing autonomous agents so that values inside a method are private unless otherwise so provided - that is, encapsulation makes programming neat and less error-prone in unexpected situations. Recent advances in software make such reverse engineering/reengineering of this code into object oriented form possible. The purpose of this paper is to show how existing finite element code can be reverse/re-engineered to improve it. Taking sections of working finite element code, especially matrix computation for equation solution, as examples, we put them through reverse engineering to arrive at the effective UML design by which development was done and then translate them to Java. This then is the starting point for analyzing the design and improving it without having to throw away any of the old code.",2009,0, 3930,A Tool Suite for the Generation and Validation of Configurations for Software Availability,"The Availability Management Framework (AMF) is a service responsible for managing the availability of services provided by applications that run under its control. Standardized by the Service Availability Forum (SAF), AMF requires for its operations a complete and compliant AMF configuration of the applications to be managed. In this paper, we describe two complementary and integrated tools for AMF configuration generation and validation. Indeed, manually writing an AMF configuration is a tedious and error-prone task as a large number of requirements defined in the standard have to be taken into consideration during the process. One solution for ensuring compliance with the standard is the validation of the configurations against all the AMF requirements. For this, we have designed and implemented a domain model for AMF configurations and use it as a basis for an AMF configuration validator.
To further ease the task of a configuration designer, we have devised and implemented a method for generating automatically AMF configurations.",2009,0, 3931,Zoltar: A Toolset for Automatic Fault Localization,"Locating software components which are responsible for observed failures is the most expensive, error-prone phase in the software development life cycle. Automated diagnosis of software faults can improve the efficiency of the debugging process, and is therefore an important process for the development of dependable software. In this paper we present a toolset for automatic fault localization, dubbed Zoltar, which hosts a range of spectrum-based fault localization techniques featuring BARINEL, our latest algorithm. The toolset provides the infrastructure to automatically instrument the source code of software programs to produce runtime data, which is subsequently analyzed to return a ranked list of diagnosis candidates. Aimed at total automation (e.g., for runtime fault diagnosis), Zoltar has the capability of instrumenting the program under analysis with fault screeners as a run-time replacement for design-time test oracles.",2009,0, 3932,A Case for Automated Debugging Using Data Structure Repair,"Automated debugging is becoming increasingly important as the size and complexity of software increases. This paper makes a case for using constraint-based data structure repair, a recently developed technique for fault recovery, as a basis for automated debugging. Data structure repair uses given structural integrity constraints for key data structures to monitor their correctness during the execution of a program. If a constraint violation is detected, repair performs mutations on the data structures, i.e., corrupt program state, and transforms it into another state, which satisfies the desired constraints. The primary goal of data structure repair is to transform an erroneous state into an acceptable state. Therefore, the mutations performed by repair actions provide a basis of debugging faults in code (assuming the errors are due to bugs). A key challenge to embodying this insight into a mechanical technique arises due to the difference in the concrete level of the program states and the abstract level of the program code: repair actions apply to concrete data structures that exist at runtime, whereas debugging applies to code. We observe that static structures (program variables) hold handles to dynamic structures (heap-allocated data), which allows bridging the gap between the abstract and concrete levels. We envision a tool-chain where a data structure repair tool generates repair logs that are used by a fault localization tool and a repair abstraction tool that apply in synergy to not only identify the location of fault(s) in code but also to synthesize debugging suggestions. An embodiment of our vision can significantly reduce the cost of developing reliable software.",2009,0, 3933,Enhanced Automation for Managing Model and Metamodel Inconsistency,"Model-driven engineering (MDE) introduces additional challenges for managing evolution. For example, a metamodel change may affect instance models. Existing tool supported approaches for updating models in response to a metamodel change assume extra effort from metamodel developers. When no existing approach is applicable, metamodel users must update their models manually, an error prone and tedious task. 
In this paper, we describe the technical challenges faced when using the eclipse modeling framework (EMF) and existing approaches for updating models in response to a metamodel change. We then motivate and describe alternative techniques, including: a mechanism for loading, storing and manipulating inconsistent models; a mapping of inconsistent models to a human-usable notation for semi-automated and collaborative co-evolution; and integration with an inter-model reference manager, achieving automatic consistency checking as part of metamodel distribution.",2009,0, 3934,EA-Analyzer: Automating Conflict Detection in Aspect-Oriented Requirements,"One of the aims of aspect-oriented requirements engineering is to address the composability and subsequent analysis of crosscutting and non-crosscutting concerns during requirements engineering. Composing concerns may help to reveal conflicting dependencies that need to be identified and resolved. However, detecting conflicts in a large set of textual aspect-oriented requirements is an error-prone and time-consuming task. This paper presents EA-analyzer, the first automated tool for identifying conflicts in aspect-oriented requirements specified in natural-language text. The tool is based on a novel application of a Bayesian learning method that has been effective at classifying text. We present an empirical evaluation of the tool with three industrial-strength requirements documents from different real-life domains. We show that the tool achieves up to 92.97% accuracy when one of the case study documents is used as a training set and the other two as a validation set.",2009,0, 3935,Automatically Recommending Triage Decisions for Pragmatic Reuse Tasks,"Planning a complex software modification task imposes a high cognitive burden on developers, who must juggle navigating the software, understanding what they see with respect to their task, and deciding how their task should be performed given what they have discovered. Pragmatic reuse tasks, where source code is reused in a white-box fashion, is an example of a complex and error-prone modification task: the developer must plan out which portions of a system to reuse, extract the code, and integrate it into their own system. In this paper we present a recommendation system that automates some aspects of the planning process undertaken by developers during pragmatic reuse tasks. In a retroactive evaluation, we demonstrate that our technique was able to provide the correct recommendation 64% of the time and was incorrect 25% of the time. Our case study suggests that developer investigative behaviour is positively influenced by the use of the recommendation system.",2009,0, 3936,Automatic Generation of Object Usage Specifications from Large Method Traces,"Formal specifications are used to identify programming errors, verify the correctness of programs, and as documentation. Unfortunately, producing them is error-prone and time-consuming, so they are rarely used in practice. Inferring specifications from a running application is a promising solution. However, to be practical, such an approach requires special techniques to treat large amounts of runtime data. We present a scalable dynamic analysis that infers specifications of correct method call sequences on multiple related objects. It preprocesses method traces to identify small sets of related objects and method calls which can be analyzed separately. 
We implemented our approach and applied the analysis to eleven real-world applications and more than 240 million runtime events. The experiments show the scalability of our approach. Moreover, the generated specifications describe correct and typical behavior, and match existing API usage documentation.",2009,0, 3937,Alattin: Mining Alternative Patterns for Detecting Neglected Conditions,"To improve software quality, static or dynamic verification tools accept programming rules as input and detect their violations in software as defects. As these programming rules are often not well documented in practice, previous work developed various approaches that mine programming rules as frequent patterns from program source code. Then these approaches use static defect-detection techniques to detect pattern violations in source code under analysis. These existing approaches often produce many false positives due to various factors. To reduce false positives produced by these mining approaches, we develop a novel approach, called Alattin, that includes a new mining algorithm and a technique for detecting neglected conditions based on our mining algorithm. Our new mining algorithm mines alternative patterns in example form """"P1 or P2"""", where P1 and P2 are alternative rules such as condition checks on method arguments or return values related to the same API method. We conduct two evaluations to show the effectiveness of our Alattin approach. Our evaluation results show that (1) alternative patterns reach more than 40% of all mined patterns for APIs provided by six open source libraries; (2) the mining of alternative patterns helps reduce nearly 28% of false positives among detected violations.",2009,0, 3938,A Divergence-Oriented Approach to Adaptive Random Testing of Java Programs,"Adaptive Random Testing (ART) is a testing technique which is based on an observation that a test input usually has the same potential as its neighbors in detection of a specific program defect. ART helps to improve the efficiency of random testing in that test inputs are selected evenly across the input spaces. However, the application of ART to object-oriented programs (e.g., C++ and Java) still faces a strong challenge in that the input spaces of object-oriented programs are usually high dimensional, and therefore an even distribution of test inputs in a space as such is difficult to achieve. In this paper, we propose a divergence-oriented approach to adaptive random testing of Java programs to address this challenge. The essential idea of this approach is to prepare for the tested program a pool of test inputs each of which is of significant difference from the others, and then to use the ART technique to select test inputs from the pool for the tested program. We also develop a tool called ARTGen to support this testing approach, and conduct experiment to test several popular open-source Java packages to assess the effectiveness of the approach. The experimental result shows that our approach can generate test cases with high quality.",2009,0, 3939,Adaptive Random Test Case Prioritization,"Regression testing assures changed programs against unintended amendments. Rearranging the execution order of test cases is a key idea to improve their effectiveness. Paradoxically, many test case prioritization techniques resolve tie cases using the random selection approach, and yet random ordering of test cases has been considered as ineffective. 
Existing unit testing research unveils that adaptive random testing (ART) is a promising candidate that may replace random testing (RT). In this paper, we not only propose a new family of coverage-based ART techniques, but also show empirically that they are statistically superior to the RT-based technique in detecting faults. Furthermore, one of the ART prioritization techniques is consistently comparable to some of the best coverage-based prioritization techniques (namely, the ""additional"" techniques) and yet involves much less time cost.",2009,0, 3940,Improving API Usage through Automatic Detection of Redundant Code,"Software projects often rely on third-party libraries made accessible through Application Programming Interfaces (APIs). We have observed many cases where APIs are used in ways that are not the most effective. We developed a technique and tool support to automatically detect such patterns of API usage in software projects. The main hypothesis underlying our technique is that client code imitating the behavior of an API method without calling it may not be using the API effectively because it could instead call the method it imitates. Our technique involves analyzing software systems to detect cases of API method imitations. In addition to warning developers of potentially re-implemented API methods, we also indicate how to improve the use of the API. Applying our approach on 10 Java systems revealed over 400 actual cases of potentially suboptimal API usage, leading to many improvements to the quality of the code we studied.",2009,0, 3941,Spectrum-Based Multiple Fault Localization,"Fault diagnosis approaches can generally be categorized into spectrum-based fault localization (SFL, correlating failures with abstractions of program traces), and model-based diagnosis (MBD, logic reasoning over a behavioral model). Although MBD approaches are inherently more accurate than SFL, their high computational complexity prohibits application to large programs. We present a framework to combine the best of both worlds, coined BARINEL. The program is modeled using abstractions of program traces (as in SFL) while Bayesian reasoning is used to deduce multiple-fault candidates and their probabilities (as in MBD). A particular feature of BARINEL is the usage of a probabilistic component model that accounts for the fact that faulty components may fail intermittently. Experimental results on both synthetic and real software programs show that BARINEL typically outperforms current SFL approaches at a cost complexity that is only marginally higher. In the context of single faults this superiority is established by formal proof.",2009,0, 3942,Type Inference for Soft-Error Fault-Tolerance Prediction,"Software systems are becoming increasingly vulnerable to a new class of soft errors, originating from voltage spikes produced by cosmic radiation. The standard technique for assessing the source-level impact of these soft errors, fault injection - essentially a black-box testing technique - provides limited high-level information. Since soft errors can occur anywhere, even control-structured white-box techniques offer little insight. We propose a type-based approach, founded on data-flow structure, to classify the usage pattern of registers and memory cells. To capture all soft errors, the type system is defined at the assembly level, close to the hardware, and allows inferring types in the untyped assembly representation.
In a case study, we apply our type inference scheme to a prototype brake-by-wire controller, developed by Volvo Technology, and identify a high correlation between types and fault-injection results. The case study confirms that the inferred types are good predictors for soft-error impact.",2009,0, 3943,Instant-X: Towards a generic API for multimedia middleware,"The globalisation of our society leads to an increasing need for spontaneous communication. However, the development of such applications is a tedious and error-prone process. This results from the fact that in general only basic functionality is available in terms of protocol implementations and encoders/decoders. This leads to inflexible proprietary software systems implementing unavailable functionality on their own. In this work we introduce Instant-X, a novel component-based middleware platform for multimedia applications. Unlike related work, Instant-X provides a generic programming model with an API for essential tasks of multimedia applications with respect to signalling and data transmission. This API abstracts from concrete component implementations and thus allows replacing specific protocol implementations without changing the application code. Furthermore, Instant-X supports dynamic deployment, i.e., unavailable components can be automatically loaded at runtime. To show the feasibility of our approach we evaluated our Instant-X prototype regarding code complexity and performance.",2009,0, 3944,Web service reputation-based search agent,"The semantic web service is an emerging technology which automates discovery and execution of web services on web. Software agent is a main component in building semantic web and acts as middleware between users and web services to find the service that best fits the consumer's requirements. The UDDI registry is used to find service providers and web services published by service providers. The problem, however, is that the current UDDI registries do not provide information on quality or reputation of web services. The user has to perform intensive search till the best service is found. We propose a reputation based search agent which performs enhanced search by executing and ranking the list of web services based on QoS requirements of users. The ranking is performed based on reputation of each web service which is regularly and dynamically updated. This model searches registered web services based on keyword taken from user, executes each web service, ranks them according to the QoS requirements of the user, and displays the ranked list as the end result. The model thus provides users with high probability of quick and successful discovery of web services.",2009,0, 3945,A transdisciplinary approach to oppressive cityscapes and the role of greenery as key factors in sustainable urban development,"Through the recent process of urban development, characterized by urban expansion and redevelopment, industrialized countries have witnessed a surge in the number, scale and complexity of urban structures. However, it has become difficult to keep urban space adaptable to environmental realities and our cities don't completely meet the demands of society. These demands include the sustainable upgrading of social infrastructure and the regeneration of attractive urban space that is not only safe and highly efficient, but also consciously takes into account psychological influence. In this research ""oppressive""
refers to cityscape featuring high-rise buildings that cause negative psychological pressure on residents. Oppression is a barrier to achieving sustainable urban development and current research is a step towards addressing this barrier. This paper tries to bring the research of oppression to the international scientific society to present parts of years of Japanese research in this field. Through various methodologies researchers have proved that cities have oppressive and depressive effects on residents but the influencing factors are not completely measured. This research discusses the key parameters of psychological health by assessing the impact of trees' effect on a real oppressive urban environment. This paper also compares the magnitude and quality of trees' effect against other physical factors in the city environment. Two experiments were conducted, one in the real Tokyo urban environment - as a mega city and the other utilizing 3-dimensional computer software to simulate the real urban environment in an experiment room. In total, 60 participants from the field of architecture looked at specific images and responded by filling in a pre-designed questionnaire. Results indicate that oppression which increases as building's solid angle increases is significantly influenced by the existence of trees and the sky factor. The placement of trees or planting design in the urban area is important.",2009,0, 3946,A biosignal analysis system applied for developing an algorithm predicting critical situations of high risk cardiac patients by hemodynamic monitoring,"A software system for efficient development of high quality biosignal processing algorithms and its application for the Computers in Cardiology Challenge 2009 Predicting Acute Hypotensive Episodes is described. The system is part of a medical research network, and supports import of several standard and non-standard data formats, a modular and therefore extremely flexible signal processing architecture, remote and parallel processing, different data viewers and a framework for annotating signals and for validating and optimizing algorithms. Already in 2001, 2004 and 2006 the system was used for implementing algorithms that successfully took part in the Challenges, respectively. In 2009 we received a perfect score of 10 out of 10 in event 1 and a score of 33 out of 40 in event 2 of the Challenge.",2009,0, 3947,The impact of virtualization on the performance of Massively Multiplayer Online Games,"Today's highly successful Massively Multiplayer Online Games (MMOGs) have millions of registered users and hundreds of thousands of active concurrent users. As a result of the highly dynamic MMOG usage patterns, the MMOG operators pre-provision and then maintain throughout the lifetime of the game tens of thousands of compute resources in data centers located across the world. Until recently, the difficulty of porting the MMOG software services to different platforms made it impractical to dynamically provision resources external to the MMOG operators' data centers. However, virtualization is a new technology that promises to alleviate this problem by providing a uniform computing platform with minimal overhead. To investigate the potential of this new technology, in this paper we propose a new hybrid resource provisioning model that uses a smaller and less expensive set of self-owned data centers, complemented by virtualized cloud computing resources during peak hours.
Using real traces from RuneScape, one of the most successful contemporary MMOGs, we evaluate with simulations the effectiveness of the on-demand cloud resource provisioning strategy for MMOGs. We assess the impact of provisioning of virtualized cloud resources, analyze the components of virtualization overhead, and compare provisioning of virtualized resources with direct provisioning of data center resources.",2009,0, 3948,Effectiveness and Cost of Verification Techniques: Preliminary Conclusions on Five Techniques,"A group of 17 students applied 5 unit verification techniques in a simple Java program as training for a formal experiment. The verification techniques applied are desktop inspection, equivalence partitioning and boundary-value analysis, decision table, linearly independent path, and multiple condition coverage. The first one is a static technique, while the others are dynamic. JUnit test cases are generated when dynamic techniques are applied. Both the defects and the execution time are registered. Execution time is considered as a cost measure for the techniques. Preliminary results yield three relevant conclusions. As a first conclusion, performance defects are not easily found. Secondly, unit verification is rather costly and the percentage of defects it detects is low. Finally, desktop inspection detects a greater variety of defects than the other techniques.",2009,0, 3949,CF Improvement Based on Probabilistic Analysis of Discrete Explicit Rating Vector,"Collaborative Filter (CF) is one of the important algorithms of Recommendation Systems; the sparsity problem is a significant impediment to real use of the CF technique. In this paper, based on probabilistic analysis of users' discrete explicit rating vectors, an All-Average improved algorithm is proposed to solve the problem of CF sparsity and other practical problems. Experimental results show this method improves the precision and quality of CF prediction.",2009,0, 3950,Detecting Corner Using Fuzzy Similarity Matrix,"In real-time video stream applications, a feature point detector is necessary. Stable feature points are useful in machine vision applications. Feature point detectors such as SIFT, Harris and SUSAN are good methods which extract high quality features, but neither the Harris detector nor the detection stage of SIFT can operate at real-time frame rate. In this paper, we present a new corner detection algorithm which uses a fuzzy similarity matrix to calculate the average similarity of contiguous pixels. It can be used in real-time applications of any complexity. In our experiments to evaluate our corner detector, different image transformations are computed and the repeatability of feature points between a reference image and each of the transformed images is computed. Experimental results show that our corner detector gives good results.",2009,0, 3951,An Improved Award-Penalty Mechanism Based on the Principal-Agent Theory in Build-Operate-Transfer Projects,"BOT is the primary financing model used in China to meet the capital needs of infrastructure construction; however, the current award-penalty mechanism cannot satisfy the development of the BOT model, and several problems concerning quality and management arise during the authorized period.
In view of this situation, the paper introduces a new parameter A based on the principal-agent theory to create a new drive contract s(x) in BOT projects, thereby improving the current award-penalty mechanism. The parameter A depends on how hard the enterprises work, the quality parameter of the infrastructures detected by the authorities, the managing situation, and some exogenous variables that are unpredictable and uncontrollable. The improved mechanism can provide guidance and a foundation for effective and rational supervision of the agents during infrastructure construction and operation.",2009,0, 3952,Predicting Object-Oriented Software Maintainability Using Projection Pursuit Regression,"This paper presents ongoing work on using a projection pursuit regression (PPR) model to predict object-oriented software maintainability. The maintainability is measured as the number of changes made to code during a maintenance period by means of object-oriented software metrics. To evaluate the benefits of using PPR over nonlinear modeling techniques, we also build an artificial neural network model and a multivariate adaptive regression splines model. The models' performance is evaluated and compared using leave-one-out cross-validation with RMSE. The results suggest that PPR can predict more accurately than the other two modeling techniques. The study also provides useful information on how to construct software quality models.",2009,0, 3953,Automatic configuration of spectral dimensionality reduction methods for 3D human pose estimation,"In this paper, our main contribution is a framework for the automatic configuration of any spectral dimensionality reduction method. This is achieved, first, by introducing the mutual information measure to assess the quality of discovered embedded spaces. Secondly, we overcome the deficiency of mapping function in spectral dimensionality reduction approaches by proposing data projection between spaces based on a fully automatic and dynamically adjustable Radial Basis Function network. Finally, this automatic framework is evaluated in the context of 3D human pose estimation. We demonstrate that the mutual information measure outperforms all current space assessment metrics. Moreover, experiments show the mapping associated to the induced embedded space displays good generalization properties. In particular, it allows improvement of accuracy by around 30% when refining 3D pose estimates of a walking sequence produced by an activity independent method.",2009,0, 3954,Conflict Resolution in Collective Ubiquitous Context-Aware Applications,"Context-aware computing is a research field that defines systems capable of adapting their behavior according to any relevant information about entities of interest. Ubiquitous computing is closely related to the use of contexts, since it aims to provide personalized, transparent and on-demand services. Ubiquitous systems are frequently shared among multiple users, which may lead to conflicts that occur due to individual profiles divergences and/or environment resources incompatibility. In such situations it is interesting to detect and solve those conflicts, considering what is better for the group but also being fair enough with each individual demand, whenever possible. This work presents the important concepts on the collective ubiquitous context-aware applications field.
Furthermore, it proposes a novel methodology for conflict detection and resolution that considers the trade-off between quality of service and resource consumption. A case study based on a collective tourist guide was implemented as a proof of concept for the proposed methodology.",2009,0, 3955,Mixing Simulated and Actual Hardware Devices to Validate Device Drivers in a Complex Embedded Platform,"The structure and the functionalities of a device driver are strongly influenced by the target platform architecture, as well as by the device communication protocol. This makes the generation of device drivers designed for complex embedded platforms a very time consuming and error prone activity. Validation then becomes a nodal point in the design flow. The aim of this paper is to present a co-simulation framework that allows validation of device drivers. The proposed framework supports all mechanisms used by device drivers to communicate with HW devices so that both modeled and actual components can be included in the simulated embedded platform. In this way, the generated code can be tested and validated even if the final platform is not ready yet. The framework has been applied to some examples to highlight the performance and effectiveness of this approach.",2009,0, 3956,Performance and scalability of M/M/c based queuing model of the SIP Proxy Server - a practical approach,"In recent years, Session Initiation Protocol (SIP) based Voice over IP (VoIP) applications have become an alternative to the traditional Public Switched Telephone Networks (PSTN) because of their flexibility in the implementation of new features and services. The Session Initiation Protocol (SIP) is becoming a popular signaling protocol for Voice over IP (VoIP) based applications. The SIP Proxy server is a software application that provides call routing services by parsing and forwarding all the incoming SIP packets in an IP telephony network. The efficiency of this process can create large scale, highly reliable packet voice networks for service providers and enterprises. Since SIP Proxy server performance can be characterized by the transaction states of each SIP session, we proposed the M/M/c performance model of the SIP Proxy Server and studied some of the key performance benchmarks such as server utilization, queue size and memory utilization. We provide comparative results between the predicted results and the experimental results obtained in a lab environment.",2009,0, 3957,Vehicle speed detection system,"This research intends to develop a vehicle speed detection system using image processing techniques. The overall work is the software development of a system that requires a video scene, which consists of the following components: moving vehicle, starting reference point and ending reference point. The system is designed to detect the position of the moving vehicle in the scene and the position of the reference points and calculate the speed of each static image frame from the detected positions. The vehicle speed detection from a video frame system consists of six major components: 1) Image Acquisition, for collecting a series of single images from the video scene and storing them in the temporary storage. 2) Image Enhancement, to improve some characteristics of the single image in order to provide more accuracy and better future performance. 3) Image Segmentation, to perform the vehicle position detection using image differentiation.
4) Image Analysis, to analyze the position of the reference starting point and the reference ending point, using a threshold technique. 5) Speed Detection, to calculate the speed of each vehicle in the single image frame using the detected vehicle position and the reference point positions, and 6) Report, to convey the information to the end user as readable information. The experimentation has been made in order to assess three qualities: 1) Usability, to prove that the system can determine vehicle speed under the specific conditions laid out. 2) Performance, and 3) Effectiveness. The results show that the system works with highest performance at a resolution of 320×240. It takes around 70 seconds to detect a moving vehicle in a video scene.",2009,0, 3958,"Integrated Detection of Attacks Against Browsers, Web Applications and Databases","Anomaly-based techniques were exploited successfully to implement protection mechanisms for various systems. Recently, these approaches have been ported to the web domain under the name of ""web application anomaly detectors"" (or firewalls) with promising results. In particular, those capable of automatically building specifications, or models, of the protected application by observing its traffic (e.g., network packets, system calls, or HTTP requests and responses) are particularly interesting, since they can be deployed with little effort. Typically, the detection accuracy of these systems is significantly influenced by the model building phase (often called training), which clearly depends upon the quality of the observed traffic, which should resemble the normal activity of the protected application and must also be free from attacks. Otherwise, detection may result in significant amounts of false positives (i.e., benign events flagged as anomalous) and negatives (i.e., undetected threats). In this work we describe Masibty, a web application anomaly detector that has some interesting properties. First, it requires the training data not to be attack-free. Secondly, not only does it protect the monitored application, it also detects and blocks malicious client-side threats before they are sent to the browser. Third, Masibty intercepts the queries before they are sent to the database, correlates them with the corresponding HTTP requests and blocks those deemed anomalous. Both the accuracy and the performance have been evaluated on real-world web applications with interesting results. The system is almost not influenced by the presence of attacks in the training data and shows only a negligible amount of false positives, although this is paid in terms of a slight performance overhead.",2009,0, 3959,Estimation of program reverse semantic traceability influence at program reliability with assistance of object-oriented metrics,"In this article, an approach to estimating the influence of program reverse semantic traceability (RST) on program reliability with the assistance of object-oriented metrics is proposed. In order to estimate the reasonability of RST usage, it is natural to define how it influences the major objectives of a software development project: project cost, quality of the developed application, etc. At present, the object-oriented metrics of Chidamber and Kemerer are widely used for predictive estimation of software reliability at an early stage of the life cycle. In a number of works, for example, it is proposed to use logistic regression for estimating the probability that a module will have a fault.
The parameters of this model are found by the maximum likelihood method with calculation of object-oriented metrics. The paper shows how to change the software reliability model parameters, which were obtained using logistic regression, in order to estimate the influence of program RST on program reliability.",2009,0, 3960,Goal-based safety standards and cots software selection,"In this paper we examine some of the challenges associated with adequately demonstrating the safety of COTS products as required by goal-based safety standards. The safety evidence available for COTS products - if any - is sometimes of questionable quality and applicability. This paper introduces a framework for assessing the applicability of the available evidence when selecting a COTS product for purchase. Use of this framework enables the purchase of a particular COTS product to be justified from a safety perspective, as well as identifying where further post-purchase analysis of the software will be required to support a safety argument.",2009,0, 3961,Probabilistic transient stability assessment using two-point estimate method,"With many uncertainties encountered in power system operation, the proper basis for assessing transient stability should be formed in terms of the solution of a stochastic problem. In this paper, the two-point estimate method is used to find the maximum relative rotor angles' probability distribution functions for a given fault with uncertain load demands and clearing time. The low computing time requirement of the two-point estimate method allows online applications, and the use of a detailed power systems dynamic model for time-domain simulation which offers high accuracy. The two-point estimate method is integrated in a straightforward manner with the existing transient stability analysis tools. The integrated software facility has potential applications in control rooms to assist the system operator in the decision-making process based on instability risks. The software system when implemented on a cluster of processors also makes it feasible to re-assess online transient stability for any change in system configuration arising from switching control. The method proposed has been tested on a 39-bus IEEE test system and validated using the Monte Carlo simulation.",2009,0, 3962,Defect location in traditional vs. Web applications - an empirical investigation,"So far, few attempts have been carried out in the literature to understand the specific nature of Web bugs and their distribution among the tiers of applications' architecture. In this paper we present an experimental investigation conducted with five pairs of homologous applications (Web and traditional) and 780 real bugs taken from SourceForge aimed at studying the distributions of bugs in Web and traditional applications. The investigation follows a rigorous experimental procedure and it was conducted in the context of three bachelor theses. The study results, although preliminary, provide clear-cut empirical evidence that the presentation layer in Web applications is more defect-prone when compared to analogous traditional applications.",2009,0, 3963,A dynamic reconfiguration approach for accelerating highly defective processors,"The advances on the scaling process have brought several challenges concerning fault-tolerance of new technologies. At nano-scale basis, the contacts and wires defect rate is predicted to be around 1% to 15%. At this point, it will be inevitable that designs in future technologies embed some defect tolerance scheme.
The desired solution at the processor level should allow the computer architecture to continue to execute software, even with the high level of defects that new technologies should introduce. This paper presents an adaptive approach that is capable of guaranteeing not only software execution but also acceleration, even under aggressive defect densities. We propose the use of an on-line binary translation mechanism implemented in a dynamically reconfigurable fabric, exploiting regularity of the reconfigurable fabric as intrinsic spare-parts, trading a small acceleration penalty for quality assurance.",2009,0, 3964,Trade-off between safety and normal-case control performance based on probabilistic safety management of control laws,"This paper presents a probabilistic safety management framework for control laws to provide a balance between normal-case performance, safety and fault-case performance according to the international standard on safety, IEC 61508. It is based on multiobjective design for simultaneous problems for each context to optimize only normal-case performance out of the whole including fault-case performance. Also the framework establishes the existence of trade-off between them quantitatively for the first time ever.",2009,0, 3965,Architecture for Embedding Audiovisual Feature Extraction Tools in Archives,"In the near future, it will no longer be sufficient that only archivists annotate audiovisual material. Not only is the number of archivists limited, the time they can spend on annotating one item is insufficient to create time-based and detailed descriptions about the content to make fully optimized video search possible. Furthermore, we observe an accelerated increase in newly created audiovisual material that must be described due to introduction of file-based production methods. Fortunately, more and more high-quality feature extraction tools are being developed by research institutes. These tools examine the audiovisual essence and return particular information about the analyzed video and/or audio streams. For example, tools can automatically detect shot boundaries, detect and recognize faces and objects, segment audio streams, etc. As a result, they quickly and cheaply generate metadata that can be used for indexing and searching. On top of that, it relieves archivists of performing tedious and repetitive, but necessary low-added value tasks, for example identifying within an audio stream the speech and music segments. Although most tools are currently not yet commercially offered, it is to be expected that these solutions soon will become available for broadcasters and media companies alike. In this paper, we describe a solution on how to integrate such feature extraction tools within the annotation workflow of a media company. This solution, in the form of an architecture and workflow, is scalable, extensible, loosely coupled, and has clear and easy to implement interfaces. As such, our architecture allows one to plug in additional tools irrespective of the software and hardware used by the media company. 
By integrating feature extraction tools within the workflow of annotating audiovisual essence, more and better metadata can be created allowing other tools to improve indexing, search and retrieval of media material within audiovisual archives.",2009,0, 3966,Application of SQuaRE and Generalized Nets for extended validation of CE systems,"The need to develop robust, quality software architectures is more critical than ever today due to the significant complexity, size and interoperability requirements typical of modern systems. Rather than wait until the architecture, design and potentially implementation phases have been completed, this paper proposes that evaluation and testing methods should be applied during the architectural phase itself, and should be rooted in standard methodologies and processes, and then complementarily checked through the use of Generalized Nets (GN) theory by a simulation platform. To guarantee the quality of the architecture, the ISO/IEC CD 2504n of the SQuaRE series of standards (which describes a process for evaluating the quality of software products and also describes the requirements for the components of an architecture) is adopted as a reference methodology. GN is a tool for Discrete Event Simulation (DES), which is equally well suited for modelling simple and large, complex systems. For a complete assessment of quality, GNs seem to be a proper complement for the validation of the dynamics of the interoperable system after it is tested by the quality procedure of SQuaRE. With these, a complete system can be validated through visualization, specification, simulation, analysis, development, and report-out of the test and evaluation procedures that are applied to the architecture and its components. By applying these quality assessment techniques earlier in the software development lifecycle, it is predicted that churn of both code and the architecture itself can be reduced. This would deliver improvements in the quality, reliability, consistency and indeed sustainability of the architecture and its implementations, compared with when either SQuaRE or GNs are used separately.",2009,0, 3967,On the scientific basis of Enterprise Interoperability,"The focus of this paper is to provide a methodological framework to support the exploration of the scientific basis of Enterprise Interoperability (EI). EI is an engineering discipline and, as such, requires a sound scientific foundation. But it is a young discipline for which boundaries and key problems are still under scrutiny. Furthermore, it is based on three more established engineering disciplines and the reciprocal support is also under careful study. Therefore, a simple listing of the most relevant scientific methods that may be useful is not helpful for our purpose. For this reason, the paper proposes a vision based on a Stratified Dependency Graph as a possible method to trace the relevance of the science base elements, and a system of Quality Features to assess their actual effectiveness. This is just a first proposal, and much work is still ahead of us to achieve a comprehensive, well founded framework.",2009,0, 3968,Models for Systems and Software Engineering,"This chapter contains sections titled:
Objectives
Learning Curve Models
Learning Curve Exponential Model
Software Production Time Model
Software Production Regression Model
Assessing the Effect of Defects and Complexity of Learning
Queuing Analysis
Single-Server Fault Detection and Correction Model with Exponentially Distributed Time between Arrivals and Service Times
Multiple-Server Fault Detection and Correction Model with Exponentially Distributed Time between Arrivals and Service Times, and Finite Server Capacity [HIL01]
Assessing Effectiveness of Fault Correction
Summary
References",2009,0, 3969,"Quantitative Methods to Ensure the Reliability, Maintainability, and Availability of Computer Hardware and Software","This chapter contains sections titled:
Objectives
Probability and Statistics
Design of Experiments: ANOVA Randomized Block Model [LEV01]
ANOVA Model
Design of Experiments: One-way ANOVA
Chebyshev's Theorem: The Rarity of Outliers
Reliability and Failure Analysis
Normal Distribution
Multiple Component Reliability Analysis
Computer System Availability and Maintenance
Fault Tree Analysis
Confidence Intervals Model
Summary
References",2009,0, 3970,Internet Fault Tree Analysis for Reliability Estimation,"This chapter contains sections titled:
Objectives
Introduction
Fault Tree Analysis
Model of FTA for Internet Services
Event Failure Analysis
Fault Tree for Analyzing Internet Service Failures
Predicting Failure Rates with Fault Correction
Summary
References",2009,0, 3971,Semiotic Engineering Methods for Scientific Research in HCI,"Semiotic engineering was originally proposed as a semiotic approach to designing user interface languages. Over the years, with research done at the Department of Informatics of the Pontifical Catholic University of Rio de Janeiro, it evolved into a semiotic theory of human-computer interaction (HCI). It views HCI as computer-mediated communication between designers and users at interaction time. The system speaks for its designers in various types of conversations specified at design time. These conversations communicate the designers' understanding of who the users are, what they know the users want or need to do, in which preferred ways, and why. The designers' message to users includes even the interactive language in which users will have to communicate back with the system in order to achieve their specific goals. Hence, the process is, in fact, one of communication about communication, or metacommunication. Semiotic engineering has two methods to evaluate the quality of metacommunication in HCI: the semiotic inspection method (SIM) and the communicability evaluation method (CEM). Up to now, they have been mainly used and discussed in technical contexts, focusing on how to detect problems and how to improve the metacommunication of specific systems. In this book, Clarisse de Souza and Carla Leitao discuss how SIM and CEM, which are both qualitative methods, can also be used in scientific contexts to generate new knowledge about HCI. The discussion goes into deep considerations about scientific methodology, calling the reader's attention to the essence of qualitative methods in research and the kinds of results they can produce. To illustrate their points, the authors present an extensive case study with a free open-source digital audio editor called Audacity. They show how the results obtained with a triangulation of SIM and CEM point at new research avenues not only for semiotic engineering and HCI but also for other areas of computer science such as software engineering and programming. Table of Contents: Introduction / Essence of Semiotic Engineering / Semiotic Engineering Methods / Case Study with Audacity / Lessons Learned with Semiotic Engineering Methods / The Near Future of Semiotic Engineering",2009,0, 3972,Empirical Case Studies in Attribute Noise Detection,"The quality of data is an important issue in any domain-specific data mining and knowledge discovery initiative. The validity of solutions produced by data-driven algorithms can be diminished if the data being analyzed are of low quality. The quality of data is often realized in terms of data noise present in the given dataset and can include noisy attributes or labeling errors. Hence, tools for improving the quality of data are important to the data mining analyst. We present a comprehensive empirical investigation of our new and innovative technique for ranking attributes in a given dataset from most to least noisy. Upon identifying the noisy attributes, specific treatments can be applied depending on how the data are to be used. In a classification setting, for example, if the class label is determined to contain the most noise, processes to cleanse this important attribute may be undertaken. Independent variables or predictors that have a low correlation to the class attribute and appear noisy may be eliminated from the analysis. Several case studies using both real-world and synthetic datasets are presented in this study.
The noise detection performance is evaluated by injecting noise into multiple attributes at different noise levels. The empirical results demonstrate conclusively that our technique provides a very accurate and useful ranking of noisy attributes in a given dataset.",2009,1, 3973,Feature Selection with Imbalanced Data for Software Defect Prediction,"In this paper, we study the learning impact of data sampling followed by attribute selection on the classification models built with binary class imbalanced data within the scenario of software quality engineering. We use a wrapper-based attribute ranking technique to select a subset of attributes, and the random undersampling technique (RUS) on the majority class to alleviate the negative effects of imbalanced data on the prediction models. The datasets used in the empirical study were collected from numerous software projects. Five data preprocessing scenarios were explored in these experiments, including: (1) training on the original, unaltered fit dataset, (2) training on a sampled version of the fit dataset, (3) training on an unsampled version of the fit dataset using only the attributes chosen by feature selection based on the unsampled fit dataset, (4) training on an unsampled version of the fit dataset using only the attributes chosen by feature selection based on a sampled version of the fit dataset, and (5) training on a sampled version of the fit dataset using only the attributes chosen by feature selection based on the sampled version of the fit dataset. We compared the performances of the classification models constructed over these five different scenarios. The results demonstrate that the classification models constructed on the sampled fit data with or without feature selection (case 2 and case 5) significantly outperformed the classification models built with the other cases (unsampled fit data). Moreover, the two scenarios using sampled data (case 2 and case 5) showed very similar performances, but the subset of attributes (case 5) is only around 15% or 30% of the complete set of attributes (case 2).",2009,1, 3974,"Investigating the effect of dataset size, metrics sets, and feature selection techniques on software fault prediction problem","Software quality engineering comprises several quality assurance activities such as testing, formal verification, inspection, fault tolerance, and software fault prediction. Until now, many researchers have developed and validated several fault prediction models by using machine learning and statistical techniques. Different kinds of software metrics and diverse feature reduction techniques have been used in order to improve the models' performance. However, these studies did not investigate the effect of dataset size, metrics set, and feature selection techniques for software fault prediction. This study is focused on the high-performance fault predictors based on machine learning such as Random Forests and the algorithms based on a new computational intelligence approach called Artificial Immune Systems. We used public NASA datasets from the PROMISE repository to make our predictive models repeatable, refutable, and verifiable. The research questions were based on the effects of dataset size, metrics set, and feature selection techniques. In order to answer these questions, seven test groups were defined. Additionally, nine classifiers were examined for each of the five public NASA datasets.
According to this study, Random Forests provides the best prediction performance for large datasets and Naive Bayes is the best prediction algorithm for small datasets in terms of the Area Under Receiver Operating Characteristics Curve (AUC) evaluation parameter. The parallel implementation of Artificial Immune Recognition Systems (AIRS2Parallel) algorithm is the best Artificial Immune Systems paradigm-based algorithm when the method-level metrics are used.",2009,1, 3975,On the relative value of cross-company and within-company data for defect prediction,"We propose a practical defect prediction approach for companies that do not track defect related data. Specifically, we investigate the applicability of cross-company (CC) data for building localized defect predictors using static code features. Firstly, we analyze the conditions, where CC data can be used as is. These conditions turn out to be quite few. Then we apply principles of analogy-based learning (i.e. nearest neighbor (NN) filtering) to CC data, in order to fine tune these models for localization. We compare the performance of these models with that of defect predictors learned from within-company (WC) data. As expected, we observe that defect predictors learned from WC data outperform the ones learned from CC data. However, our analyses also yield defect predictors learned from NN-filtered CC data, with performance close to, but still not better than, WC data. Therefore, we perform a final analysis for determining the minimum number of local defect reports in order to learn WC defect predictors. We demonstrate in this paper that the minimum number of data samples required to build effective defect predictors can be quite small and can be collected quickly within a few months. Hence, for companies with no local defect data, we recommend a two-phase approach that allows them to employ the defect prediction process instantaneously. In phase one, companies should use NN-filtered CC data to initiate the defect prediction process and simultaneously start collecting WC (local) data. Once enough WC data is collected (i.e. after a few months), organizations should switch to phase two and use predictors learned from WC data.",2009,1, 3976,Data mining source code for locating software bugs: A case study in telecommunication industry,"In a large software system knowing which files are most likely to be fault-prone is valuable information for project managers. They can use such information in prioritizing software testing and allocating resources accordingly. However, our experience shows that it is difficult to collect and analyze fine-grained test defects in a large and complex software system. On the other hand, previous research has shown that companies can safely use cross-company data with nearest neighbor sampling to predict their defects in case they are unable to collect local data. In this study we analyzed 25 projects of a large telecommunication system. To predict defect proneness of modules we trained models on publicly available Nasa MDP data. In our experiments we used static call graph based ranking (CGBR) as well as nearest neighbor sampling for constructing method level defect predictors. 
Our results suggest that, for the analyzed projects, at least 70% of the defects can be detected by inspecting only (i) 6% of the code using a Naive Bayes model, (ii) 3% of the code using CGBR framework.",2009,1, 3977,Proactive Detection of Computer Worms Using Model Checking,"Although recent estimates are speaking of 200,000 different viruses, worms, and Trojan horses, the majority of them are variants of previously existing malware. As these variants mostly differ in their binary representation rather than their functionality, they can be recognized by analyzing the program behavior, even though they are not covered by the signature databases of current antivirus tools. Proactive malware detectors mitigate this risk by detection procedures that use a single signature to detect whole classes of functionally related malware without signature updates. It is evident that the quality of proactive detection procedures depends on their ability to analyze the semantics of the binary. In this paper, we propose the use of model checking-a well-established software verification technique-for proactive malware detection. We describe a tool that extracts an annotated control flow graph from the binary and automatically verifies it against a formal malware specification. To this end, we introduce the new specification language CTPL, which balances the high expressive power needed for malware signatures with efficient model checking algorithms. Our experiments demonstrate that our technique indeed is able to recognize variants of existing malware with a low risk of false positives.",2010,0, 3978,Wavelet Codes for Algorithm-Based Fault Tolerance Applications,"Algorithm-based fault tolerance (ABFT) methods, which use real number parity values computed in two separate comparable ways to detect computer-induced errors in numerical processing operations, can employ wavelet codes for establishing the necessary redundancy. Wavelet codes, one form of real number convolutional codes, determine the required parity values in a continuous fashion and can be intertwined naturally with normal data processing. Such codes are the transform coefficients associated with an analysis uniform filter bank which employs downsampling, while parity-checking operations are performed by a syndrome synthesis filter bank that includes upsampling. The data processing operations are merged effectively with the parity generating function to provide one set of parity values. Good wavelet codes can be designed starting from standard convolutional codes over finite fields by relating the field elements with the integers in the real number space. ABFT techniques are most efficient when employing a systematic form and methods for developing systematic codes are detailed. Bounds on the ABFT overhead computations are given and ABFT protection methods for processing that contains feedback are outlined. Analyzing syndromes' variances guide the selection of thresholds for syndrome comparisons. Simulations demonstrate the detection and miss probabilities for some high-rate wavelet codes.",2010,0, 3979,Highly Available Intrusion-Tolerant Services with Proactive-Reactive Recovery,"In the past, some research has been done on how to use proactive recovery to build intrusion-tolerant replicated systems that are resilient to any number of faults, as long as recoveries are faster than an upper bound on fault production assumed at system deployment time. 
In this paper, we propose a complementary approach that enhances proactive recovery with additional reactive mechanisms giving correct replicas the capability of recovering other replicas that are detected or suspected of being compromised. One key feature of our proactive-reactive recovery approach is that, despite recoveries, it guarantees the availability of a minimum number of system replicas necessary to sustain correct operation of the system. We design a proactive-reactive recovery service based on a hybrid distributed system model and show, as a case study, how this service can effectively be used to increase the resilience of an intrusion-tolerant firewall adequate for the protection of critical infrastructures.",2010,0, 3980,On the Quality of Service of Crash-Recovery Failure Detectors,We model the probabilistic behavior of a system comprising a failure detector and a monitored crash-recovery target. We extend failure detectors to take account of failure recovery in the target system. This involves extending QoS measures to include the recovery detection speed and proportion of failures detected. We also extend estimating the parameters of the failure detector to achieve a required QoS to configuring the crash-recovery failure detector. We investigate the impact of the dependability of the monitored process on the QoS of our failure detector. Our analysis indicates that variation in the MTTF and MTTR of the monitored process can have a significant impact on the QoS of our failure detector. Our analysis is supported by simulations that validate our theoretical results.,2010,0, 3981,Fisher Information-Based Evaluation of Image Quality for Time-of-Flight PET,"The use of time-of-flight (TOF) information during positron emission tomography (PET) reconstruction has been found to improve the image quality. In this work we quantified this improvement using two existing methods: 1) a very simple analytical expression only valid for a central point in a large uniform disk source and 2) efficient analytical approximations for post-filtered maximum likelihood expectation maximization (MLEM) reconstruction with a fixed target resolution, predicting the image quality in a pixel or in a small region of interest based on the Fisher information matrix. Using this latter method the weighting function for filtered backprojection reconstruction of TOF PET data proposed by C. Watson can be derived. The image quality was investigated at different locations in various software phantoms. Simplified as well as realistic phantoms, measured both with TOF PET systems and with a conventional PET system, were simulated. Since the time resolution of the system is not always accurately known, the effect on the image quality of using an inaccurate kernel during reconstruction was also examined with the Fisher information-based method. First, we confirmed with this method that the variance improvement in the center of a large uniform disk source is proportional to the disk diameter and inversely proportional to the time resolution. Next, image quality improvement was observed in all pixels, but in eccentric and high-count regions the contrast-to-noise ratio (CNR) increased less than in central and low- or medium-count regions. Finally, the CNR was seen to decrease when the time resolution was inaccurately modeled (too narrow or too wide) during reconstruction. 
Although the maximum CNR is not very sensitive to the time resolution error, using an inaccurate TOF kernel tends to introduce artifacts in the reconstructed image.",2010,0, 3982,Bayesian Approaches to Matching Architectural Diagrams,"IT system architectures and many other kinds of structured artifacts are often described by formal models or informal diagrams. In practice, there are often a number of versions of a model or diagram, such as a series of revisions, divergent variants, or multiple views of a system. Understanding how versions correspond or differ is crucial, and thus, automated assistance for matching models and diagrams is essential. We have designed a framework for finding these correspondences automatically based on Bayesian methods. We represent models and diagrams as graphs whose nodes have attributes such as name, type, connections to other nodes, and containment relations, and we have developed probabilistic models for rating the quality of candidate correspondences based on various features of the nodes in the graphs. Given the probabilistic models, we can find high-quality correspondences using search algorithms. Preliminary experiments focusing on architectural models suggest that the technique is promising.",2010,0, 3983,Recursive Pseudo-Exhaustive Two-Pattern Generation,"Pseudo-exhaustive pattern generators for built-in self-test (BIST) provide high fault coverage of detectable combinational faults with much fewer test vectors than exhaustive generation. In (n, k)-adjacent bit pseudo-exhaustive test sets, all 2^k binary combinations appear to all adjacent k-bit groups of inputs. With recursive pseudoexhaustive generation, all (n, k)-adjacent bit pseudoexhaustive tests are generated for k ≤ n and more than one module can be pseudo-exhaustively tested in parallel. In order to detect sequential (e.g., stuck-open) faults that occur in current CMOS circuits, two-pattern tests are exercised. Also, delay testing, commonly used to assure correct circuit operation at clock speed, requires two-pattern tests. In this paper a pseudoexhaustive two-pattern generator is presented that recursively generates all two-pattern (n, k)-adjacent bit pseudoexhaustive tests for all k ≤ n. To the best of our knowledge, this is the first time in the open literature that the subject of recursive pseudoexhaustive two-pattern testing is being dealt with. A software-based implementation with no hardware overhead is also presented.",2010,0, 3984,Generating Event Sequence-Based Test Cases Using GUI Runtime State Feedback,"This paper presents a fully automatic model-driven technique to generate test cases for graphical user interface (GUI)-based applications. The technique uses feedback from the execution of a ""seed test suite,"" which is generated automatically using an existing structural event interaction graph model of the GUI. During its execution, the runtime effect of each GUI event on all other events pinpoints event semantic interaction (ESI) relationships, which are used to automatically generate new test cases.
Two studies on eight applications demonstrate that the feedback-based technique 1) is able to significantly improve existing techniques and helps identify serious problems in the software and 2) the ESI relationships captured via GUI state yield test suites that most often detect more faults than their code, event, and event-interaction-coverage equivalent counterparts.",2010,0, 3985,An Integrated Data-Driven Framework for Computing System Management,"With advancement in science and technology, computing systems are becoming increasingly more complex with a growing number of heterogeneous software and hardware components. They are thus becoming more difficult to monitor, manage, and maintain. Traditional approaches to system management have been largely based on domain experts through a knowledge acquisition solution that translates domain knowledge into operating rules and policies. This process has been well known as cumbersome, labor intensive, and error prone. In addition, traditional approaches for system management are difficult to keep up with the rapidly changing environments. There is a pressing need for automatic and efficient approaches to monitor and manage complex computing systems. In this paper, we propose an integrated data-driven framework for computing system management by acquiring the needed knowledge automatically from a large amount of historical log data. Specifically, we apply text mining techniques to automatically categorize the log messages into a set of canonical categories, incorporate temporal information to improve categorization performance, develop temporal mining techniques to discover the relationships between different events, and take a novel approach called event summarization to provide a concise interpretation of the temporal patterns.",2010,0, 3986,Learning a Metric for Code Readability,"In this paper, we explore the concept of code readability and investigate its relation to software quality. With data collected from 120 human annotators, we derive associations between a simple set of local code features and human notions of readability. Using those features, we construct an automated readability measure and show that it can be 80 percent effective and better than a human, on average, at predicting readability judgments. Furthermore, we show that this metric correlates strongly with three measures of software quality: code changes, automated defect reports, and defect log messages. We measure these correlations on over 2.2 million lines of code, as well as longitudinally, over many releases of selected projects. Finally, we discuss the implications of this study on programming language design and engineering practice. For example, our data suggest that comments, in and of themselves, are less important than simple blank lines to local judgments of readability.",2010,0, 3987,Architectural Enhancement and System Software Support for Program Code Integrity Monitoring in Application-Specific Instruction-Set Processors,"Program code in a computer system can be altered either by malicious security attacks or by various faults in microprocessors. At the instruction level, all code modifications are manifested as bit flips. In this paper, we present a generalized methodology for monitoring code integrity at run-time in application-specific instruction-set processors. We embed monitoring microoperations in machine instructions, so the processor is augmented with a hardware monitor automatically. 
The monitor observes the processor's execution trace at run-time, checks whether it aligns with the expected program behavior, and signals any mismatches. Since the monitor works at a level below the instructions, the monitoring mechanism cannot be bypassed by software or compromised by malicious users. We discuss the ability and limitation of such monitoring mechanism for detecting both soft errors and code injection attacks. We propose two different schemes for managing the monitor, the operating system (OS) managed and application controlled, and design the constituent components within the monitoring architecture. Experimental results show that with an effective hash function implementation, our microarchitectural support can detect program code integrity compromises at a high probability with small area overhead and little performance degradation.",2010,0, 3988,The Future of Integrated Circuits: A Survey of Nanoelectronics,"While most of the electronics industry is dependent on the ever-decreasing size of lithographic transistors, this scaling cannot continue indefinitely. Nanoelectronics (circuits built with components on the scale of 10 nm) seem to be the most promising successor to lithographic based ICs. Molecular-scale devices including diodes, bistable switches, carbon nanotubes, and nanowires have been fabricated and characterized in chemistry labs. Techniques for self-assembling these devices into different architectures have also been demonstrated and used to build small-scale prototypes. While these devices and assembly techniques will lead to nanoscale electronics, they also have the drawback of being prone to defects and transient faults. Fault-tolerance techniques will be crucial to the use of nanoelectronics. Lastly, changes to the software tools that support the fabrication and use of ICs will be needed to extend them to support nanoelectronics. This paper introduces nanoelectronics and reviews the current progress made in research in the areas of technologies, architectures, fault tolerance, and software tools.",2010,0, 3989,Low-Complexity Transcoding of JPEG Images With Near-Optimal Quality Using a Predictive Quality Factor and Scaling Parameters,"A common transcoding operation consists of reducing the file size of a JPEG image to meet bandwidth or device constraints. This can be achieved by reducing its quality factor (QF) or reducing its resolution, or both. In this paper, using the structural similarity (SSIM) index as the quality metric, we present a system capable of estimating the QF and scaling parameters to achieve optimal quality while meeting a device's constraints. We then propose a novel low-complexity JPEG transcoding system which delivers near-optimal quality. The system is capable of predicting the best combination of QF and scaling parameters for a wide range of device constraints and viewing conditions. Although its computational complexity is an order of magnitude smaller than the system providing optimal quality, the proposed system yields quality results very similar to those of the optimal system.",2010,0, 3990,Modulation Quality Measurement in WiMAX Systems Through a Fully Digital Signal Processing Approach,"The performance assessment of worldwide interoperability for microwave access (WiMAX) systems is dealt with. A fully digital signal processing approach for modulation quality measurement is proposed, which is particularly addressed to transmitters based on orthogonal frequency-division multiplexing (OFDM) modulation. 
WiMAX technology deployment is rapidly increasing. To aid researchers, manufacturers, and technicians in designing, realizing, and installing devices and apparatuses, some measurement solutions are already available, and new ones are being released on the market. All of them are arranged to complement ad hoc digital signal processing software with an existing specialized measurement instrument such as a real-time spectrum analyzer or a vector signal analyzer. Furthermore, they strictly rely on a preliminary analog downconversion of the radio-frequency input signal, which is a basic front-end function provided by the cited instruments, to suitably digitize and digitally process the acquired samples. In the same way as the aforementioned solutions, the proposed approach takes advantage of existing instruments, but different from them, it provides for a direct digitization of the radio-frequency input signal. No downconversion is needed, and the use of general-purpose measurement hardware such as digital scopes or data acquisition systems is thus possible. A proper digital signal processing algorithm, which was designed and implemented by the authors, then demodulates the digitized signal, extracts the desired measurement information from its baseband components, and assesses its modulation quality. The results of several experiments conducted on laboratory WiMAX signals show the effectiveness and reliability of the approach with respect to the major competitive solutions; its superior performance in special physical-layer conditions is also highlighted.",2010,0, 3991,Hybrid Simulated Annealing and Its Application to Optimization of Hidden Markov Models for Visual Speech Recognition,"We propose a novel stochastic optimization algorithm, hybrid simulated annealing (SA), to train hidden Markov models (HMMs) for visual speech recognition. In our algorithm, SA is combined with a local optimization operator that substitutes a better solution for the current one to improve the convergence speed and the quality of solutions. We mathematically prove that the sequence of the objective values converges in probability to the global optimum in the algorithm. The algorithm is applied to train HMMs that are used as visual speech recognizers. While the popular training method of HMMs, the expectation-maximization algorithm, achieves only local optima in the parameter space, the proposed method can perform global optimization of the parameters of HMMs and thereby obtain solutions yielding improved recognition performance. The superiority of the proposed algorithm to the conventional ones is demonstrated via isolated word recognition experiments.",2010,0, 3992,Performability Analysis of Multistate Computing Systems Using Multivalued Decision Diagrams,"A distinct characteristic of multistate systems (MSS) is that the systems and/or their components may exhibit multiple performance levels (or states) varying from perfect operation to complete failure. MSS can model behaviors such as shared loads, performance degradation, imperfect fault coverage, standby redundancy, limited repair resources, and limited link capacities. The nonbinary state property of MSS and their components as well as dependencies existing among different states of the same component make the analysis of MSS difficult. This paper proposes efficient algorithms for analyzing MSS using multivalued decision diagrams (MDD). 
Various reliability, availability, and performability measures based on state probabilities or failure frequencies are considered. The application and advantages of the proposed algorithms are demonstrated through two examples. Furthermore, experimental results on a set of benchmark examples are presented to illustrate the advantages of the proposed MDD-based method for the performability analysis of MSS, as compared to the existing methods.",2010,0, 3993,The Probabilistic Program Dependence Graph and Its Application to Fault Diagnosis,"This paper presents an innovative model of a program's internal behavior over a set of test inputs, called the probabilistic program dependence graph (PPDG), which facilitates probabilistic analysis and reasoning about uncertain program behavior, particularly that associated with faults. The PPDG construction augments the structural dependences represented by a program dependence graph with estimates of statistical dependences between node states, which are computed from the test set. The PPDG is based on the established framework of probabilistic graphical models, which are used widely in a variety of applications. This paper presents algorithms for constructing PPDGs and applying them to fault diagnosis. The paper also presents preliminary evidence indicating that a PPDG-based fault localization technique compares favorably with existing techniques. The paper also presents evidence indicating that PPDGs can be useful for fault comprehension.",2010,0, 3994,Vulnerability Discovery with Attack Injection,"The increasing reliance put on networked computer systems demands higher levels of dependability. This is even more relevant as new threats and forms of attack are constantly being revealed, compromising the security of systems. This paper addresses this problem by presenting an attack injection methodology for the automatic discovery of vulnerabilities in software components. The proposed methodology, implemented in AJECT, follows an approach similar to hackers and security analysts to discover vulnerabilities in network-connected servers. AJECT uses a specification of the server's communication protocol and predefined test case generation algorithms to automatically create a large number of attacks. Then, while it injects these attacks through the network, it monitors the execution of the server in the target system and the responses returned to the clients. The observation of an unexpected behavior suggests the presence of a vulnerability that was triggered by some particular attack (or group of attacks). This attack can then be used to reproduce the anomaly and to assist the removal of the error. To assess the usefulness of this approach, several attack injection campaigns were performed with 16 publicly available POP and IMAP servers. The results show that AJECT could effectively be used to locate vulnerabilities, even on well-known servers tested throughout the years.",2010,0, 3995,Evaluation of Accuracy in Design Pattern Occurrence Detection,"Detection of design pattern occurrences is part of several solutions to software engineering problems, and high accuracy of detection is important to help solve the actual problems. The improvement in accuracy of design pattern occurrence detection requires some way of evaluating various approaches. Currently, there are several different methods used in the community to evaluate accuracy. 
We show that these differences may greatly influence the accuracy results, which makes it nearly impossible to compare the quality of different techniques. We propose a benchmark suite to improve the situation and a community effort to contribute to, and evolve, the benchmark suite. Also, we propose fine-grained metrics assessing the accuracy of various approaches in the benchmark suite. This allows comparing the detection techniques and helps improve the accuracy of detecting design pattern occurrences.",2010,0, 3996,Assessing Software Service Quality and Trustworthiness at Selection Time,"The integration of external software in project development is challenging and risky, notably because the execution quality of the software and the trustworthiness of the software provider may be unknown at integration time. This is a timely problem and of increasing importance with the advent of the SaaS model of service delivery. Therefore, in choosing the SaaS service to utilize, project managers must identify and evaluate the level of risk associated with each candidate. Trust is commonly assessed through reputation systems; however, existing systems rely on ratings provided by consumers. This raises numerous issues involving the subjectivity and unfairness of the service ratings. This paper describes a framework for reputation-aware software service selection and rating. A selection algorithm is devised for service recommendation, providing SaaS consumers with the best possible choices based on quality, cost, and trust. An automated rating model, based on the expectancy-disconfirmation theory from market science, is also defined to overcome feedback subjectivity issues. The proposed rating and selection models are validated through simulations, demonstrating that the system can effectively capture service behavior and recommend the best possible choices.",2010,0, 3997,Exception Handling for Repair in Service-Based Processes,"This paper proposes a self-healing approach to handle exceptions in service-based processes and to repair the faulty activities with a model-based approach. In particular, a set of repair actions is defined in the process model, and repairability of the process is assessed by analyzing the process structure and the available repair actions. During execution, when an exception arises, repair plans are generated by taking into account constraints posed by the process structure, dependencies among data, and available repair actions. The paper also describes the main features of the prototype developed to validate the proposed repair approach for composed Web services; the self-healing architecture for repair handling and the experimental results are illustrated.",2010,0, 3998,Using Launch-on-Capture for Testing BIST Designs Containing Synchronous and Asynchronous Clock Domains,"This paper presents a new at-speed logic built-in self-test (BIST) architecture supporting two launch-on-capture schemes, namely aligned double-capture and staggered double-capture, for testing multi-frequency synchronous and asynchronous clock domains in a scan-based BIST design. The proposed architecture also includes BIST debug and diagnosis circuitry to help locate BIST failures. The aligned scheme detects and allows diagnosis of structural and delay faults among all synchronous clock domains, whereas the staggered scheme detects and allows diagnosis of structural and delay faults among all asynchronous clock domains. 
Both schemes solve the long-standing problem of using the conventional one-hot scheme, which requires testing each clock domain one at a time, or the simultaneous scheme, which requires adding isolation logic to normal functional paths across interacting clock domains. Physical implementation is easily achieved by the proposed solution due to the use of a slow-speed, global scan enable signal and reduced timing-critical design requirements. Application results for industrial designs demonstrate the effectiveness of the proposed architecture.",2010,0, 3999,A Traveling-Wave-Based Protection Technique Using Wavelet/PCA Analysis,"This paper proposes a powerful high-speed traveling-wave-based technique for the protection of power transmission lines. The proposed technique uses principal component analysis to identify the dominant pattern of the signals preprocessed by wavelet transform. The proposed protection algorithm presents a discriminating method based on the polarity, magnitude, and time interval between the detected traveling waves at the relay location. A supplemental algorithm consisting of a high-set overcurrent relay as well as an impedance-based relay is also proposed. This is done to overcome the well-known shortcomings of traveling-wave-based protection techniques for the detection of very close-in faults and single-phase-to-ground faults occurring at small voltage magnitudes. The proposed technique is evaluated for the protection of a two-terminal transmission line. Extensive simulation studies using PSCAD/EMTDC software indicate that the proposed approach is reliable for rapid and correct identification of various fault cases. It identifies most of the internal faults very rapidly in less than 2 ms. In addition, the proposed technique presents high noise immunity.",2010,0, 4000,Efficient Software Verification: Statistical Testing Using Automated Search,"Statistical testing has been shown to be more efficient at detecting faults in software than other methods of dynamic testing such as random and structural testing. Test data are generated by sampling from a probability distribution chosen so that each element of the software's structure is exercised with a high probability. However, deriving a suitable distribution is difficult for all but the simplest of programs. This paper demonstrates that automated search is a practical method of finding near-optimal probability distributions for real-world programs, and that test sets generated from these distributions continue to show superior efficiency in detecting faults in the software.",2010,0, 4001,Application of Prognostic and Health Management technology on aircraft fuel system,"Prognostic and health management (PHM), which could provide the ability of fault detection (FD), fault isolation (FI) and estimation of remaining useful life (RUL), has been applied to detect and diagnose device faults and assess its health status, aiming to enhance device reliability and safety, and reduce its maintenance costs. In this paper, taking an aircraft fuel system as an example, with virtual instrument technology and computer simulation technology, an integrated approach of signal processing method and model-based method is introduced to build the virtual simulation software of aircraft fuel PHM system for overcoming the difficulty in obtaining the failure information from the real fuel system. 
During the process of constructing the aircraft fuel PHM system, the first step is to analyze the fuel system failure modes and status parameters that can identify the failure modes. The main failure modes are determined as joint looseness, pipe breakage, nozzle clogging, and fuel tank leakage. The status parameters are fuel pressure and fuel flow. Then, the status parameter model is constructed to imitate the behavior of the sensor which detects the fuel system status. On this basis, utilizing the signal processing module provided by Labview software, the outputs from the virtual sensors, which collect the failure data, are processed to realize the simulation of failure detection and failure diagnosis. All the results show that the virtual simulation software well accomplishes the task of the aircraft fuel system failure detection and diagnosis.",2010,0, 4002,A stochastic filtering based data driven approach for residual life prediction and condition based maintenance decision making support,"As an efficient means of detecting potential plant failure, condition monitoring is growing popular in industry, with millions spent on condition monitoring hardware and software. The use of condition monitoring techniques will generally increase plant availability and reduce downtime costs, but in some cases it will also tend to over-maintain the plant in question. There is obviously a need for appropriate decision support in plant maintenance planning utilising available condition monitoring information, but compared to the extensive literature on diagnosis, relatively little research has been done on the prognosis side of condition based maintenance. In plant prognosis, a key, but often uncertain quantity to be modelled is the residual life prediction based on available condition information to date. This paper shall focus upon such a residual life prediction of the monitored items in condition based maintenance and review the recent developments in modelling residual life prediction using stochastic filtering. We first demonstrate the role of residual life prediction in condition based maintenance decision making, which highlights the need for such a prediction. We then discuss in detail the basic filtering model we used for residual life prediction and the extensions we made. We finally present briefly the result of the comparative studies between the filtering based model and other models using empirical data. The results show that the filtering based approach is the best in terms of prediction accuracy and cost effectiveness.",2010,0, 4003,Improving interpretation of component-based systems quality through visualisation techniques,"Component-based software development is increasingly more commonplace and is widely used in the development of commercial software systems. This has led to the existence of several research works focusing on software component-based systems quality. The majority of this research proposes quality models focused on component-based systems in which different measures are proposed. In general, the result of assessing the measures is a number, which is necessary to determine the component-based system quality level. However, understanding and interpreting the data set is not an easy task. In order to facilitate the interpretation of results, this study selects and adapts a specific visual metaphor with which to show component-based systems quality. A tool has additionally been developed which permits the automatic assessment of the measures to be carried out. 
The tool also shows the results visually and proposes corrective actions through which to improve the level of quality. A case study is used to assess and to show the quality of a real-world component-based software system in a graphic manner.",2010,0, 4004,Computer-aided recoding for multi-core systems,"The design of embedded computing systems faces a serious productivity gap due to the increasing complexity of their hardware and software components. One solution to address this problem is the modeling at higher levels of abstraction. However, manually writing proper executable system models is challenging, error-prone, and very time-consuming. We aim to automate critical coding tasks in the creation of system models. This paper outlines a novel modeling technique called computer-aided recoding which automates the process of writing abstract models of embedded systems by use of advanced computer-aided design (CAD) techniques. Using an interactive, designer-controlled approach with automated source code transformations, our computer-aided recoding technique derives an executable parallel system model directly from available sequential reference code. Specifically, we describe three sets of source code transformations that create structural hierarchy, expose potential parallelism, and create explicit communication and synchronization. As a result, system modeling is significantly streamlined. Our experimental results demonstrate the shortened design time and higher productivity.",2010,0, 4005,The Theory of Relative Dependency: Higher Coupling Concentration in Smaller Modules,"Our observations on several large-scale software products have consistently shown that smaller modules are proportionally more defect prone. These findings challenge the common recommendations from the literature suggesting that quality assurance (QA) and quality control (QC) resources should focus on larger modules. Those recommendations are based on the unfounded assumption that a monotonically increasing linear relationship exists between module size and defects. Given that complexity is correlated with the size.",2010,0, 4006,Assess Content Comprehensiveness of Ontologies,"This paper proposes a novel method to assess and evaluate content comprehensiveness of ontologies. Compared to other researchers' methods, which just count the number of classes and properties, the method concerns the actual content coverage of ontologies. By applying statistical analysis to a corpus, we assign different weights to different terms chosen from the corpus. These terms are then used for evaluating ontologies. Afterwards, a score is generated for each ontology to mark its content comprehensiveness. Experiments are then appropriately designed to evaluate the qualities of typical ontologies to show the effectiveness of the proposed evaluation method.",2010,0, 4007,The end of the blur,"This paper describes the application of NASA's software that calculates optical aberrations (wavefront errors). The power of this software lies in its ability to use an optical system's existing camera as a sensor to detect its own error, without installing any separate devices. 
This software is expected to sharpen images from space and improve the image quality of astronomical telescopes; it could also redefine perfect vision for humans.",2010,0, 4008,Condition Monitoring of the Power Output of Wind Turbine Generators Using Wavelets,"With an increasing number of wind turbines being erected offshore, there is a need for cost-effective, predictive, and proactive maintenance. A large fraction of wind turbine downtime is due to bearing failures, particularly in the generator and gearbox. One way of assessing impending problems is to install vibration sensors in key positions on these subassemblies. Such equipment can be costly and requires sophisticated software for analysis of the data. An alternative approach, which does not require extra sensors, is investigated in this paper. This involves monitoring the power output of a variable-speed wind turbine generator and processing the data using a wavelet in order to extract the strength of particular frequency components, characteristic of faults. This has been done for doubly fed induction generators (DFIGs), commonly used in modern variable-speed wind turbines. The technique is first validated on a test rig under controlled fault conditions and then is applied to two operational wind turbine DFIGs where generator shaft misalignment was detected. For one of these turbines, the technique detected a problem 3 months before a bearing failure was recorded.",2010,0, 4009,Predictive data mining model for software bug estimation using average weighted similarity,"Software bug estimation is a very essential activity for effective and proper software project planning. All the software bug related data are kept in software bug repositories. Software bug (defect) repositories contain a lot of useful information related to the development of a project. Data mining techniques can be applied on these repositories to discover useful interesting patterns. In this paper a prediction data mining technique is proposed to predict the software bug estimation from a software bug repository. A two-step prediction model is proposed. In the first step, for the bug for which an estimate is required, its summary and description are matched against the summary and description of bugs available in bug repositories. A weighted similarity model is suggested to match the summary and description for a pair of software bugs. In the second step, the fix durations of all the similar bugs are calculated and stored, and their average is calculated, which indicates the predicted estimation of a bug. The proposed model is implemented using open source technologies and is explained with the help of an illustrative example.",2010,0, 4010,Towards Enhanced User Interaction to Qualify Web Resources for Higher-Layered Applications,"The Web offers autonomous and frequently useful resources in a growing manner. User Generated Content (UGC) like Wikis, Weblogs or Webfeeds often do not have one responsible authorship or declared experts who checked the created content for e.g. accuracy, availability, objectivity or reputation. The user is not easily able to control the quality of the content he receives. If we want to utilize the distributed information flood as a linked knowledge base for higher-layered applications - e.g. for knowledge transfer and learning - information quality (iq) is a very important and complex aspect to analyze, personalize and annotate resources. In general, low information quality is one of the main discriminators of data sources on the Web. 
Assessing information quality with measurable terms can offer a personalized and smart view on a broad, global knowledge base. We developed the qKAI application framework to utilize available, distributed data sets in a practical manner. In the following we present our adaptation of information quality aspects to qualify Web resources based on a three-level assessment model. We deploy knowledge-related iq-criteria as a tool to implement iq-mechanisms stepwise into the qKAI framework. Here, we exemplify selected criteria of information quality in qKAI like relevance or accuracy. We derived assessment methods for certain iq-criteria enabling rich, game-based user interaction and semantic resource annotation. Open Content is embedded into knowledge games to increase the users' access and learning motivation. As a side effect, the resources' quality is enhanced stepwise by ongoing user interaction.",2010,0, 4011,Transparent Fault Tolerance of Device Drivers for Virtual Machines,"In a consolidated server system using virtualization, physical device accesses from guest virtual machines (VMs) need to be coordinated. In this environment, a separate driver VM is usually assigned to this task to enhance reliability and to reuse existing device drivers. This driver VM needs to be highly reliable, since it handles all the I/O requests. This paper describes a mechanism to detect and recover the driver VM from faults to enhance the reliability of the whole system. The proposed mechanism is transparent in that guest VMs cannot recognize the fault and the driver VM can recover and continue its I/O operations. Our mechanism provides a progress monitoring-based fault detection that is isolated from fault contamination with low monitoring overhead. When a fault occurs, the system recovers by switching the faulted driver VM to another one. The recovery is performed without service disconnection or data loss and with negligible delay by fully exploiting the I/O structure of the virtualized system.",2010,0, 4012,Context-Aware Adaptive Applications: Fault Patterns and Their Automated Identification,"Applications running on mobile devices are intensely context-aware and adaptive. Streams of context values continuously drive these applications, making them very powerful but, at the same time, susceptible to undesired configurations. Such configurations are not easily exposed by existing validation techniques, thereby leading to new analysis and testing challenges. In this paper, we address some of these challenges by defining and applying a new model of adaptive behavior called an Adaptation Finite-State Machine (A-FSM) to enable the detection of faults caused by both erroneous adaptation logic and asynchronous updating of context information, with the latter leading to inconsistencies between the external physical context and its internal representation within an application. We identify a number of adaptation fault patterns, each describing a class of faulty behaviors. Finally, we describe three classes of algorithms to detect such faults automatically via analysis of the A-FSM. We evaluate our approach and the trade-offs between the classes of algorithms on a set of synthetically generated Context-Aware Adaptive Applications (CAAAs) and on a simple but realistic application in which a cell phone's configuration profile changes automatically as a result of changes to the user's location, speed, and surrounding environment. 
Our evaluation describes the faults our algorithms are able to detect and compares the algorithms in terms of their performance and storage requirements.",2010,0, 4013,Low Overhead Incremental Checkpointing and Rollback Recovery Scheme on Windows Operating System,"Implementation of a low overhead incremental checkpointing and rollback recovery scheme that combines incremental checkpointing with the copy-on-write technique and an optimal checkpointing interval is addressed in this article. The checkpointing permits saving the process state periodically during failure-free execution, and the recovery scheme maintains normal execution of the task when a failure occurs in a PC-based computer-controlled system employing the Windows Operating System. Excess size of capturing state and arbitrary checkpointing results in either performance degradation or expensive recovery cost. For the objective of minimizing overhead, the checkpointing and recovery scheme is designed using Win32 API interception associated with incremental checkpointing and the copy-on-write technique. Instead of saving the entire process space, it only needs to save the modified pages and uses a buffer to save state temporarily in the process of checkpointing so that the checkpointing overhead is reduced. When the system encounters a failure, the minimum expected time of the total overhead to complete a task is calculated by using probability to find the optimal checkpointing interval. From simulation results, the proposed checkpointing and rollback recovery scheme not only enhances the capability of normal task execution but also reduces the overhead of checkpointing and recovery.",2010,0, 4014,Unified Approach for Next Generation Multimedia Based Communication Components Integration with Signaling and Media Processing Framework,"Modern digital technology makes it possible to manipulate multi-dimensional signals with systems that range from simple digital circuits to advanced parallel computers. It allows the user to modify and gives the excellent results for user experience as well as other commercial and security applications. Communication Technologies like Circuit Switched Video Telephony, IMS based multimedia Applications like Video Share, VoIP, VVoIP, Video on Demand etc are Multimedia based method provides real time Audio, Video and Data, it is growing in the current mobile and Broadband technologies. Current Multimedia based communication technologies are supporting low bandwidth error prone communication mechanism between terminals. Video Telephony supports signaling and the data transmission from peer to peer over low data rate flow in the network. Media control components are required to integrate the communication signaling and media processing. This paper proposes a generic approach across various multimedia frameworks and signaling modules. A new media control interface layer enables seamless data and signaling flow between various multimedia data processing frameworks and signaling modules and improves media communication processing. This layer increases the data flow and improves the quality of media data processing. It helps to integrate the communication signaling module with media processing. This paper analyzes the media control interface based media communication based mechanisms like CS Video Telephony, IP based Multimedia Technologies like Video Share, VOIP etc. 
The Media control Interface layer helps to easily integrate various Multimedia applications such as CS Video Telephony, Violet into various media processing multimedia frameworks such as DirectShow, GStreamer, Opencore etc. It also helps to integrate multimedia middleware stacks such as Media Transfer protocol [MTP], Digital Living Network Alliance [DLNA], Image Processing Pipeline Algorithms with the Application and Native Layer.",2010,0, 4015,Evaluation of WS-* Standards Based Interoperability of SOA Products for the Hungarian e-Government Infrastructure,"The proposed architecture of the Hungarian e-government framework, mandating the functional co-operation of independent organizations, puts special emphasis on interoperability. WS-* standards have been created to reach uniformity and interoperability in the common middleware tasks for Web services such as security, reliable messaging and transactions. These standards, however, while existing for some time, have implementations slightly different in quality. In order to assess implementations, thorough tests should be performed, and relevant test cases ought to be accepted. For selecting mature SOA products for e-government applications, a methodology of such an assessment is needed. We have defined a flexible and extensible test bed and a set of test cases for SOA products considering three aspects: compliance with standards, interoperability and development support.",2010,0, 4016,Performance Evaluation of Handoff Queuing Schemes,"One of the main advantages of new wireless systems is the freedom to make and receive calls anywhere and at any time; handovers are considered a key element for providing this mobility. This paper presents the handoff queuing problem in cellular networks. We propose a model to study three handoff queuing schemes, and provide a performance evaluation of these schemes. The paper begins with a presentation of the different handoff queuing schemes to evaluate. Then, it gives an evaluation model with the different assumptions considered in the simulations. The evaluation concerns the blocking probability for handoff and original calls. These simulations are conducted for each scheme, according to different offered loads, size of call (original and handoff) queue, and number of voice channels. A model is proposed and introduced in this paper for the study of three channel assignment schemes; namely, they are the non prioritized schemes (NPS), the original call queuing schemes, and the handoff call queuing schemes.",2010,0, 4017,A Hermite Interpolation Based Motion Vector Recovery Algorithm for H.264/AVC,"Error Concealment is a very useful method to improve the video quality. It aims at using the maximum received video data to recover the lost information of the picture at the decoder. Lagrange interpolation algorithm is the most effective interpolation for error concealment while it loses some detail information. Hermite interpolation algorithm considers the change rate of the motion vector as well as the motion vector itself, which is more accurate. In this paper, we propose a novel method which uses hermite interpolation to predict the lost motion vectors. We take the change rate (derivative) of motion vector into account, and synthesize the horizontal and vertical recovered motion vectors adaptively by the minimum distance method. 
The experimental result shows that our method achieves higher PSNR values than Lagrange interpolation.",2010,0, 4018,Failure Detection and Localization in OTN Based on Optical Power Analysis,"In consideration of the new features of Optical Transport Networks (OTN), failure detection and localization have become a new challenging issue in the OTN management research area. This paper proposes a scheme to detect and locate the failures based on the optical power analysis. In the failure detection section of the scheme, this paper proposes a method to detect the performance degradation caused by possible failures based on real optical power analysis and builds a status matrix which demonstrates the current optical power deviation of the fiber port of each node in the OTN. In the failure localization section of the scheme, this paper proposes the multiple failures location algorithm (MFLA), which deals with both single-point and multi-point failures, to locate the multiple failures based on analyzing the status matrix and the switching relationship matrix. Then, an exemplary scenario is given to present the result of detecting and locating the fiber link failure and OXC device failure with the proposed scheme.",2010,0, 4019,New Handover Scheme Based on User Profile: A Comparative Study,In this paper we have analyzed the number of handovers required in different time stamps for a particular user on the quality of service parameters namely handover call blocking probability. This system model is based on the analysis of user mobility profile and assigns a weightage factor to each cell. The mathematical equation of the handover blocking probability derived from stochastic comparisons has been used to compute upper bounds on dropping handover blocking probability by using the Guard channel assignment scheme. A conceptual view is given to reduce the new call blocking probability. We use a Dynamic Channel Reservation Scheme that can assign handover-reserved channels to new calls depending on the handover weightage factor to reduce new call blocking probability.,2010,0, 4020,Priority-Based Service Differentiation Scheme for Medium and High Rate Sensor Networks,"In Medium and High Rate Wireless Sensor Networks (MHWSNs), a sensor node may gather different kinds of network traffic. To support quality of service (QoS) requirements for different kinds of network traffic, an effective and fair queuing and scheduling scheme is necessary. The paper presents a queuing model to distinguish different network traffic based on priority, and proposes a rate adjustment model to adjust the output rate at a node. Simulation results show that the proposed priority-based service differentiation queuing model can guarantee low average delay for high priority real time traffic. Using the rate adjustment model can achieve a low packet loss probability, therefore increasing network throughput.",2010,0, 4021,Security and Performance Aspects of an Agent-Based Link-Layer Vulnerability Discovery Mechanism,"The identification of vulnerable hosts and subsequent deployment of mitigation mechanisms such as service disabling or installation of patches is both time-critical and error-prone. This is in part owing to the fact that malicious worms can rapidly scan networks for vulnerable hosts, but is further exacerbated by the fact that network topologies are becoming more fluid and vulnerable hosts may only be visible intermittently for environments such as virtual machines or wireless edge networks. 
In this paper we therefore describe and evaluate an agent-based mechanism which uses the spanning tree protocol (STP) to gain knowledge of the underlying network topology to allow both rapid and resource-efficient traversal of the network by agents as well as residual scanning and mitigation techniques on edge nodes. We report performance results, comparing the mechanism against a random scanning worm and demonstrating that network immunity can be largely achieved despite a very limited warning interval. We also discuss mechanisms to protect the agent mechanism against subversion, noting that similar approaches are also increasingly deployed in case of malicious code.",2010,0, 4022,Identifying Security Relevant Warnings from Static Code Analysis Tools through Code Tainting,"Static code analysis tools are often used by developers as early vulnerability detectors. Due to their automation they are less time-consuming and error-prone than manual reviews. However, they produce large quantities of warnings that developers have to manually examine and understand. In this paper, we look at a solution that makes static code analysis tools more useful as an early vulnerability detector. We use flow-sensitive, interprocedural and context-sensitive data flow analysis to determine the point of user input and its migration through the source code to the actual exploit. By determining a vulnerability's point of entry we lower the number of warnings a tool produces and we provide the developer with more information on why this warning could be a real security threat. We use our approach in three different ways depending on what tool we examined. First, with the commercial static code analysis tool, Coverity, we reanalyze its results and create a set of warnings that are specifically relevant from a security perspective. Secondly, we altered the open source analysis tool Findbugs to only analyze code that has been tainted by user input. Third, we created our own analysis tool that focuses on XSS vulnerabilities in Java code.",2010,0, 4023,Estimating Error-probability and its Application for Optimizing Roll-back Recovery with Checkpointing,"The probability for errors to occur in electronic systems is not known in advance, but depends on many factors including influence from the environment where the system operates. In this paper, it is demonstrated that inaccurate estimates of the error probability lead to loss of performance in a well known fault tolerance technique, Roll-back Recovery with checkpointing (RRC). To regain the lost performance, a method for estimating the error probability along with an adjustment technique are proposed. Using a simulator tool that has been developed to enable experimentation, the proposed method is evaluated and the results show that the proposed method provides useful estimates of the error probability leading to near-optimal performance of the RRC fault-tolerant technique.",2010,0, 4024,A Smart CMOS Image Sensor with On-chip Hot Pixel Correcting Readout Circuit for Biomedical Applications,"One of the most recent and exciting applications for CMOS image sensors is in the biomedical field. In such applications, these sensors often operate in harsh environments (high intensity, high pressure, long time exposure), which increase the probability for the occurrence of hot pixel defects over their lifetime. This paper presents a novel smart CMOS image sensor integrating hot pixel correcting readout circuit to preserve the quality of the captured images. 
With this approach, no extra non-volatile memory is required in the sensor device to store the locations of the hot pixels. In addition, the reliability of the sensor is ensured by maintaining a real-time detection of hot pixels during image capture.",2010,0, 4025,Population-Based Algorithm Portfolios for Numerical Optimization,"In this paper, we consider the scenario that a population-based algorithm is applied to a numerical optimization problem and a solution needs to be presented within a given time budget. Although a wide range of population-based algorithms, such as evolutionary algorithms, particle swarm optimizers, and differential evolution, have been developed and studied under this scenario, the performance of an algorithm may vary significantly from problem to problem. This implies that there is an inherent risk associated with the selection of algorithms. We propose that, instead of choosing an existing algorithm and investing the entire time budget in it, it would be less risky to distribute the time among multiple different algorithms. A new approach named population-based algorithm portfolio (PAP), which takes multiple algorithms as its constituent algorithms, is proposed based upon this idea. PAP runs each constituent algorithm with a part of the given time budget and encourages interaction among the constituent algorithms with a migration scheme. As a general framework rather than a specific algorithm, PAP is easy to implement and can accommodate any existing population-based search algorithms. In addition, a metric is also proposed to compare the risks of any two algorithms on a problem set. We have comprehensively evaluated PAP via investigating 11 instantiations of it on 27 benchmark functions. Empirical results have shown that PAP outperforms its constituent algorithms in terms of solution quality, risk, and probability of finding the global optimum. Further analyses have revealed that the advantages of PAP are mostly credited to the synergy between constituent algorithms, which should complement each other either over a set of problems, or during different stages of an optimization process.",2010,0, 4026,Applying an effective model for VNPT CDN,"Most operations of a Content Distribution Network (CDN) are measured to evaluate its ability to serve users with the content or services they want. The activity measurement process provides the ability to predict, monitor and ensure activities throughout the CDN. Five parameters or regular measurement units are often used by content providers to evaluate the operation of a CDN, including Cache hit ratio, reserved bandwidth, latency, surrogate server utilization and reliability. There are many ways to measure CDN activities, one of which uses simulation tools. The CDN simulation is implemented using software tools that have value for research and development, internal testing and diagnosing CDN performance, because accessing real CDN traces and logs is not easy due to the proprietary nature of commercial CDNs. In this article, we will apply a CDN simulation model (based on [CDNSim; 2007]) to design a CDN based on the network infrastructure of Vietnam Posts and Telecommunications Group (VNPT Network).",2010,0, 4027,Trends in Firewall Configuration Errors: Measuring the Holes in Swiss Cheese,"The first quantitative evaluation of the quality of corporate firewall configurations appeared in 2004, based on Check Point Firewall-1 rule sets. In general, that survey indicated that corporate firewalls often enforced poorly written rule sets. 
This article revisits the first survey. In addition to being larger, the current study includes configurations from two major vendors. It also introduces a firewall complexity measure. The study's findings validate the 2004 study's main observations: firewalls are (still) poorly configured, and a rule-set's complexity is (still) positively correlated with the number of detected configuration errors. However, unlike the 2004 study, the current study doesn't suggest that later software versions have fewer errors.",2010,0, 4028,Design of fault tolerant system based on runtime behavior tracing,"Current research to improve the reliability of operating systems has been focusing on the evolution of kernel architecture or protecting against device driver errors. In particular, device driver errors are critical to most of the complementary operating systems that have a kernel level device driver. Especially on special purpose embedded systems, because of their limited resources and variety of devices, more serious problems are induced. Preventing data corruption or blocking the arrogation of operational level is not enough to cover all of the problems. For example, when using device drivers, the violation of a function's call sequence can cause a malfunction. Also, a violation of behavior rules at the system level involves the same problem. This type of error is difficult to detect with previous methods. Accordingly, we designed a system that traces system behavior at runtime and recovers optimally when errors are detected. We experimented with the Linux 2.6.24 kernel running on a GP2X-WIZ mobile game player.",2010,0, 4029,Assessing communication media richness in requirements negotiation,"A critical claim in software requirements negotiation regards the assertion that group performances improve when a medium with a different richness level is used. Accordingly, the authors have conducted a study to compare traditional face-to-face communication, the richest medium, and two less rich communication media, namely a distributed three-dimensional virtual environment and a text-based structured chat. This comparison has been performed with respect to the time needed to accomplish a negotiation. Furthermore, as the assessment of time alone could not be meaningful, the authors have also analysed the media effect on the issues arising in the negotiation process and the quality of the negotiated software requirements.",2010,0, 4030,Introducing Queuing Network-Based Performance Awareness in Autonomic Systems,"This paper advocates for the introduction of performance awareness in autonomic systems. The motivation is to be able to predict the performance of a target configuration when a self-* feature is planning a system reconfiguration. We propose a global and partially automated process based on queues and queuing network models. This process includes decomposing a distributed application into black boxes, identifying the queue model for each black box and assembling these models into a queuing network according to the candidate target configuration. Finally, performance prediction is performed either through simulation or analysis. This paper sketches the global process and focuses on the black box model identification step. This step is automated thanks to a load testing platform enhanced with a workload control loop. Model identification is then based on statistical tests. 
The model identification process is illustrated by experimental results.",2010,0, 4031,A Cubic 3-Axis Magnetic Sensor Array for Wirelessly Tracking Magnet Position and Orientation,"In medical diagnoses and treatments, e.g., endoscopy, dosage transition monitoring, it is often desirable to wirelessly track an object that moves through the human GI tract. In this paper, we propose a magnetic localization and orientation system for such applications. This system uses a small magnet enclosed in the object to serve as the excitation source, so it does not require the connection wire and power supply for the excitation signal. When the magnet moves, it establishes a static magnetic field around it, whose intensity is related to the magnet's position and orientation. With the magnetic sensors, the magnetic intensities in some predetermined spatial positions can be detected, and the magnet's position and orientation parameters can be computed based on an appropriate algorithm. Here, we propose a real-time tracking system developed by a cubic magnetic sensor array made of Honeywell 3-axis magnetic sensors, HMC1043. Using some efficient software modules and calibration methods, the system can achieve satisfactory tracking accuracy if the cubic sensor array has a sufficient number of 3-axis magnetic sensors. The experimental results show that the average localization error is 1.8 mm.",2010,0, 4032,An Efficient Duplicate Detection System for XML Documents,"Duplicate detection, which is an important subtask of data cleaning, is the task of identifying multiple representations of the same real-world object and is necessary to improve data quality. Numerous approaches both for relational and XML data exist. As XML becomes increasingly popular for data exchange and data publishing on the Web, algorithms to detect duplicates in XML documents are required. Previous domain independent solutions to this problem relied on standard textual similarity functions (e.g., edit distance, cosine metric) between objects. However, such approaches result in large numbers of false positives if we want to identify domain-specific abbreviations and conventions. In this paper, we present a duplicate detection process that includes three modules, namely a selector, a preprocessor and a duplicate identifier, which uses XML documents and a candidate definition as input and produces duplicate objects as output. The aim of this research is to develop an efficient algorithm for detecting duplicates in complex XML documents and to reduce the number of false positives by using the MD5 algorithm. We illustrate the efficiency of this approach on several real-world datasets.",2010,0, 4033,Performance Evaluation of the Judicial System in Taiwan Using Data Envelopment Analysis and Decision Trees,"A time-honored maxim says that the judicial system is the last line of defending justice. Its performance has a great impact on how citizens trust or distrust their state apparatus in a democracy. Technically speaking, the judicial process and its procedures are very complicated and the purpose of the whole system is to go through the law and due process to protect civil liberties and rights and to defend the public good of the nation. Therefore, it is worthwhile to assess the performance of judicial institutions in order to advance the efficiency and quality of judicial verdicts. This paper combines data envelopment analysis (DEA) and decision trees to achieve this objective. 
In particular, DEA is first of all used to evaluate the relative efficiency of 18 district courts in Taiwan. Then, the efficiency scores and the overall efficiency of each decision-making unit are used to train a decision tree model. Specifically, C5.0, CART, and CHAID decision trees are constructed for comparisons. The decision rules in the best decision tree model can be used to distinguish between efficient units and inefficient units and allow us to understand important factors affecting the efficiency of judicial institutions. The experimental result shows that C5.0 performs the best for predicting (in)efficient judicial institutions, which provides 80.37% average accuracy.",2010,0, 4034,RFOH: A New Fault Tolerant Job Scheduler in Grid Computing,"The goal of grid computing is to aggregate the power of widely distributed resources. Considering that the probability of failure is great in such systems, fault tolerance has become a crucial area in computational grids. In this paper, we propose a new strategy named RFOH for fault tolerant job scheduling in computational grids. This strategy maintains the history of fault occurrence of resources in the Grid Information Server (GIS). Whenever a resource broker has jobs to schedule, it uses this information in a Genetic Algorithm and finds a near optimal solution for the problem. Further, it increases the percentage of jobs executed within the specified deadline. The experimental result shows that we can have a combination of user satisfaction and reliability. Using checkpoint techniques, the proposed strategy can make grid scheduling more reliable and efficient.",2010,0, 4035,Model-based validation of safety-critical embedded systems,"Safety-critical systems have become increasingly software reliant and the current development process of 'build, then integrate' has become unaffordable. This paper examines two major contributors to today's exponential growth in cost: system-level faults that are not discovered until late in the development process; and multiple truths of analysis results when predicting system properties through model-based analysis and validating them against system implementations. We discuss the root causes of such system-level problems, and an architecture-centric model-based analysis approach of different operational quality aspects from an architecture model. A key technology is the SAE Architecture Analysis & Design Language (AADL) standard for embedded software-reliant systems. It supports a single source approach to analysis of operational qualities such as responsiveness, safety-criticality, security, and reliability through model annotations. The paper concludes with a summary of an industrial case study that demonstrates the feasibility of this approach.",2010,0, 4036,Development of fault detection and reporting for non-central maintenance aircraft,"This paper describes how real-time faults can be automatically detected in Boeing 737 airplanes without significant hardware or software modifications, or potentially expensive system re-certification by employing a novel approach to Airplane Conditioning and Monitoring System (ACMS) usage. The ACMS is a function of the Digital Flight Data Acquisition Unit (DFDAU), which also collects aircraft parameters and transmits them to the Flight Data Recorder (FDR). The DFDAU receives digital and analog data from various airplane subsystems, which is also available to the ACMS. 
Exploiting customized ACMS software allows airline operators to specify collection and processing of various aircraft parameters for flight data monitoring, maintenance, and operational efficiency trending. Employing a rigorous systems engineering approach with detailed signal analysis, fault detection algorithms are created for software implementation within the ACMS to support ground-based reporting systems. To date, over 160 algorithms are in development based upon the existing Fault Reporting and Fault Isolation Manual (FRM/FIM) structure and availability of system signals for individual faults. Following successful field-testing and implementation, 737 airplane customers have access to a state of fault detection automation not previously available on aircraft without central maintenance monitoring.",2010,0, 4037,A dual use fiber optic technology for enabling health management,"Advanced diagnostic and prognostic technology promises to reduce support costs and further improve safety of flight. The ability to detect fault precursors and predict remaining useful life can provide longer periods of uninterrupted operations as well as reduce support costs and improve crew response to degrading conditions. Two major barriers to realizing this potential include the cost of implementation in legacy and new aircraft and the lack of data to develop or mature algorithms. The cost of incorporating the additional sensing, data collection, processing and communications hardware into legacy aircraft is typically prohibitive, particularly if qualification/requalification of hardware and software are required. Boeing and Killdeer Mountain Manufacturing (KMM) developed a dual use technology to enable the low cost and footprint implementation of health management systems. The Chafing Protection System (CHAPS) uses optical fiber within wire bundles to detect and locate the wire chafing. An Optical Time Domain Reflectometry (OTDR) is used to locate the source of the chafing. Boeing has developed technology to also make use of the CHAPS fiber to communicate health management data onboard a vehicle. This technology includes a split ring connector which enables the implementation of a low cost, support critical, high bandwidth data network for health management.",2010,0, 4038,Cassini spacecraft's in-flight Fault Protection redesign for unexpected regulator malfunction,"After the launch of the Cassini 'Mission-to-Saturn' Spacecraft, the volume of subsequent mission design modifications was expected to be minimal due to the rigorous testing and verification of the Flight Hardware and Flight Software. For known areas of risk where faults could potentially occur, component redundancy and/or autonomous Fault Protection (FP) routines were implemented to ensure that the integrity of the mission was maintained. The goal of Cassini's FP strategy is to ensure that no credible Single Point Failure (SPF) prevents attainment of mission objectives or results in a significantly degraded mission, with the exception of the class of faults which are exempted due to low probability of occurrence. In the case of Cassini's Propulsion Module Subsystem (PMS) design, a waiver was approved prior to launch for failure of the prime regulator to properly close; a potentially mission catastrophic single point failure.
However, one month after Cassini's launch when the fuel & oxidizer tanks were pressurized for the first time, the prime regulator was determined to be leaking at a rate significant enough to require a considerable change in Main Engine (ME) burn strategy for the remainder of the mission. Crucial mission events such as the Saturn Orbit Insertion (SOI) burn task which required a characterization exercise for the PMS system 30 days before the maneuver were now impossible to achieve. This paper details the steps that were necessary to support the unexpected malfunction of the prime regulator, the introduction of new failure modes which required new FP design changes consisting of new/modified under-pressure & over-pressure algorithms; all which must be accomplished during the operation phase of the spacecraft, as a result of a presumed low probability, waived failure which occurred after launch.",2010,0, 4039,Resilient Critical Infrastructure Management Using Service Oriented Architecture,"The SERSCIS project aims to support the use of interconnected systems of services in Critical Infrastructure (CI) applications. The problem of system interconnectedness is aptly demonstrated by 'Airport Collaborative Decision Making' (A-CDM). Failure or underperformance of any of the interlinked ICT systems may compromise the ability of airports to plan their use of resources to sustain high levels of air traffic, or to provide accurate aircraft movement forecasts to the wider European air traffic management systems. The proposed solution is to introduce further SERSCIS ICT components to manage dependability and interdependency. These use semantic models of the critical infrastructure, including its ICT services, to identify faults and potential risks and to increase human awareness of them. Semantics allows information and services to be described in such a way that makes them understandable to computers. Thus when a failure (or a threat of failure) is detected, SERSCIS components can take action to manage the consequences, including changing the interdependency relationships between services. In some cases, the components will be able to take action autonomously -- e.g. to manage 'local' issues such as the allocation of CPU time to maintain service performance, or the selection of services where there are redundant sources available. In other cases the components will alert human operators so they can take action instead. The goal of this paper is to describe a Service Oriented Architecture (SOA) that can be used to address the management of ICT components and interdependencies in critical infrastructure systems.",2010,0, 4040,Improving the Effectiveness and the Efficiency of the Some Operations in Maintenance Processes Using Dynamic Taxonomies,"The purpose of this work is to increase the effectiveness and the efficiency of some operations carried out during the activities in the aeronautical maintenance and transformation processes. In particular, we examine the Non-Routine Card (NRC) Resolution and the Activity Planning Processes. An NRC is a fault, not expected, detected during the maintenance/transformation operations. The costs and the efforts of the NRC management are of the same order of magnitude of the a priori scheduled activities. The process requires that corrective actions come from applicable technical documentation. There are many kinds of defects and technical documentation does not cover all possible cases. Often, we observed the operators refer to similar NRC solved in the past. 
Therefore, the operators browse the set of NRCs. We found two major issues: the enormous number of NRCs and the lack of a shared and concrete definition of similarity between NRCs. The latter is due to a distinctive feature of the NRCs. The operators find 'similar' NRCs using their expertise. On the other hand, the Activity Planning process requires the periodic re-calculation of the optimal schedule of the maintenance activities due to the presence of NRCs. The upgrading of the schedule is a very time consuming task. NRC management and activity planning share common problems. Both of them require searching and browsing large sets of items by specialized technicians. We propose a common approach for the two problems. The application we developed relies on the concept of dynamic taxonomy.",2010,0, 4041,Wavelet Coherence and Fuzzy Subtractive Clustering for Defect Classification in Aeronautic CFRP,"Despite their high specific stiffness and strength, carbon fiber reinforced polymers, stacked at different fiber orientations, are susceptible to interlaminar damages. They may occur in the form of micro-cracks and voids, and lead to a loss of performance. Within this framework, ultrasonic tests can be exploited in order to detect and classify the kind of defect. The main object of this work is to develop the evolution of a previous heuristic approach, based on the use of Support Vector Machines, proposed in order to recognize and classify the defect starting from the measured ultrasonic echoes. In this context, a real-time approach could be exploited to solve real industrial problems with enough accuracy and realistic computational efforts. Particularly, we discuss the cross wavelet transform and wavelet coherence for examining relationships in time-frequency domains between. For our aim, a software package has been developed, allowing users to perform the cross wavelet transform, the wavelet coherence and the Fuzzy Inference System. Due to the ill-posedness of the inverse problem, Fuzzy Inference has been used to regularize the system, implementing a data-independent classifier. Obtained results assure good performances of the implemented classifier, with very interesting applications.",2010,0, 4042,Localized QoS Routing with Admission Control for Congestion Avoidance,"Localized Quality of Service (QoS) routing has been recently proposed for supporting the requirements of multimedia applications and satisfying QoS constraints. Localized algorithms avoid the problems associated with the maintenance of global network state by using statistics of flow blocking probabilities. Using local information for routing avoids the overheads of global information with other nodes. However, localized QoS routing algorithms perform routing decisions based on information updated from path request to path request. This paper proposes to tackle a combined localized routing and admission control in order to avoid congestion. We introduce a new Congestion Avoidance Routing algorithm (CAR) in localized QoS routing which makes a routing decision for each connection request using an admission control to route traffic away from congestion. Simulations of various network topologies are used to illustrate the performance of the CAR.
We compare the performance of the CAR algorithm against the Credit Based Routing (CBR) algorithm and the Quality Based Routing (QBR) under various ranges of traffic loads.",2010,0, 4043,Large Scale Disaster Information System Based on P2P Overlay Network,"When a large-scale disaster occurs, information sharing among administration, residents, and volunteers is indispensable. However, as reported, a lot of examples show that it is difficult to use well. The Disaster Information System is not built on the infrastructure which the system failure was considered at the disaster is nominated for the cause. In this study, we focus on the operation usability of the Disaster Information Sharing Systems works at each area, and share the resources of those systems with the P2P overlay network, by decentralizing and integrating the disaster information to realize the redundancy of the system. For the disorder between the nodes and the communication links, we propose a mechanism to detect the faults in order to improve the robustness of the system.",2010,0, 4044,Fault Tolerance and Recovery in Grid Workflow Management Systems,"Complex scientific workflows are now commonly executed on global grids. With the increasing scale complexity, heterogeneity and dynamism of grid environments the challenges of managing and scheduling these workflows are augmented by dependability issues due to the inherent unreliable nature of large-scale grid infrastructure. In addition to the traditional fault tolerance techniques, specific checkpoint-recovery schemes are needed in current grid workflow management systems to address these reliability challenges. Our research aims to design and develop mechanisms for building an autonomic workflow management system that will exhibit the ability to detect, diagnose, notify, react and recover automatically from failures of workflow execution. In this paper we present the development of a Fault Tolerance and Recovery component that extends the ActiveBPEL workflow engine. The detection mechanism relies on inspecting the messages exchanged between the workflow and the orchestrated Web Services in search of faults. The recovery of a process from a faulted state has been achieved by modifying the default behavior of ActiveBPEL and it basically represents a non-intrusive checkpointing mechanism. We present the results of several scenarios that demonstrate the functionality of the Fault Tolerance and Recovery component, outlining an increase in performance of about 50% in comparison to the traditional method of resubmitting the workflow.",2010,0, 4045,A Failure Detection System for Large Scale Distributed Systems,"Failure detection is a fundamental building block for ensuring fault tolerance in large scale distributed systems. In this paper we present an innovative solution to this problem. The approach is based on adaptive, decentralized failure detectors, capable of working asynchronous and independent on the application flow. The proposed failure detectors are based on clustering, the use of a gossip-based algorithm for detection at local level and the use of a hierarchical structure among clusters of detectors along which traffic is channeled. 
In this paper we present results proving that the system is able to scale to a large number of nodes, while still considering the QoS requirements of both applications and resources, and it includes the fault tolerance and system orchestration mechanisms, added in order to assess the reliability and availability of distributed systems in an autonomic manner.",2010,0, 4046,A Multidimensional Array Slicing DSL for Stream Programming,"Stream languages offer a simple multi-core programming model and achieve good performance. Yet expressing data rearrangement patterns (like a matrix block decomposition) in these languages is verbose and error prone. In this paper, we propose a high-level programming language to elegantly describe n-dimensional data reorganization patterns. We show how to compile it to stream languages.",2010,0, 4047,Fault Tolerance by Quartile Method in Wireless Sensor and Actor Networks,"Recent technological advances have led to the emergence of wireless sensor and actor networks (WSAN) in which sensors gather the information for an event and actors perform the appropriate actions. Since sensors are prone to failure due to energy depletion, hardware failure, and communication link errors, designing an efficient fault tolerance mechanism becomes an important issue in WSAN. However, most research focuses on communication link fault tolerance without considering sensing fault tolerance on paper survey. In this situation, an actor may perform an incorrect action by receiving erroneous sensing data. To solve this issue, fault tolerance by quartile method (FTQM) is proposed in this paper. In FTQM, it not only determines the correct data range but also sifts the correct sensors by data discreteness. Therefore, actors could perform the appropriate actions in FTQM. Moreover, FTQM also could be integrated with communication link fault tolerance mechanism. In the simulation results, it demonstrates FTQM has better predicted rate of correct data, the detected tolerance rate of temperature, and the detected temperature compared with the traditional sensing fault tolerance mechanism. Moreover, FTQM has better performance when the real correct data rate and the threshold value of failure are varied.",2010,0, 4048,Impact of disk corruption on open-source DBMS,"Despite the best intentions of disk and RAID manufacturers, on-disk data can still become corrupted. In this paper, we examine the effects of corruption on database management systems. Through injecting faults into the MySQL DBMS, we find that in certain cases, corruption can greatly harm the system, leading to untimely crashes, data loss, or even incorrect results. Overall, of 145 injected faults, 110 lead to serious problems. More detailed observations point us to three deficiencies: MySQL does not have the capability to detect some corruptions due to lack of redundant information, does not isolate corrupted data from valid data, and has inconsistent reactions to similar corruption scenarios. To detect and repair corruption, a DBMS is typically equipped with an offline checker. Unfortunately, the MySQL offline checker is not comprehensive in the checks it performs, misdiagnosing many corruption scenarios and missing others. Sometimes the checker itself crashes; more ominously, its incorrect checking can lead to incorrect repairs.
Overall, we find that the checker does not behave correctly in 18 of 145 injected corruptions, and thus can leave the DBMS vulnerable to the problems described above.",2010,0, 4049,Mini-Me: A min-repro system for database software,"Testing and debugging database software is often challenging and time consuming. A very arduous task for DB testers is finding a min-repro - the 'simplest possible setup' that reproduces the original problem. Currently, a great deal of searching for min-repros is carried out manually using non-database-specific tools, which is both slow and error-prone. We propose to demonstrate a system, called Mini-Me, designed to ease and speed-up the task of finding min-repros in database-related products. Mini-Me employs several effective tools, including: the novel simplification transformations, the high-level language for creating search scripts and automation, the 'record-and-replay' functionality, and the visualization of the search space and results. In addition to the standard application mode, the system can be interacted with in the game mode. The latter can provide an intrinsically motivating environment for developing successful search strategies by DB testers, which can be data-mined and recorded as patterns and used as recommendations for DB testers in the future. Potentially, a system like Mini-Me can save hours of time (for both customers and testers to isolate a problem), which could result in faster fixes and large cost savings to organizations.",2010,0, 4050,An improved monte carlo method in fault tree analysis,"The Monte Carlo (MC) method is one of the most general ones in system reliability analysis, because it reflects the statistical nature of the problem. It is not restricted by type of failure models of system components, allows to capture the dynamic relationship between events and estimate the accuracy of obtained results by calculating standard error. However, it is rarely used in Fault Tree (FT) software, because a huge number of trials are required to reach a tolerable precision if the value of system probability is relatively small. Regrettably, this is the most important practical case, because nowadays highly reliable systems are ubiquitous. In the present paper we study several enhancements of the raw simulation method: variance reduction, parallel computing, and improvements based on simple preliminary information about FT structure. They are efficiently developed both for static and dynamic FTs. The effectiveness and accuracy of the improved MC method is confirmed by numerous calculations of complex industrial benchmarks.",2010,0, 4051,Efficient analysis of imperfect coverage systems with functional dependence,"Traditional approaches to handling functional dependence in systems with imperfect fault coverage are based on Markov models, which are inefficient due to the well-known state space explosion problem. Also, the Markov-based methods typically assume exponential time-to-failure distributions for the system components. In this paper we propose a new combinatorial approach to handling functional dependence in the reliability analysis of imperfect coverage systems. Based on the total probability theorem and the divide-and-conquer strategy, the approach separates the effects of functional dependence and imperfect fault coverage from the combinatorics of the system solution. The proposed approach is efficient, accurate, and has no limitation on the type of time-to-failure distributions for the system components.
The application and advantages of the proposed approach are illustrated through analyses of two examples.",2010,0, 4052,Qualitative-Quantitative Bayesian Belief Networks for reliability and risk assessment,This paper presents an extension of Bayesian belief networks (BBN) enabling use of both qualitative and quantitative likelihood scales in inference. The proposed method is accordingly named QQBBN (Qualitative-Quantitative Bayesian Belief Networks). The inclusion of qualitative scales is especially useful when quantitative data for estimation of probabilities are lacking and experts are reluctant to express their opinions quantitatively. In reliability and risk analysis such situation occurs when for example human and organizational root causes of systems are modeled explicitly. Such causes are often not quantifiable due to limitations in the state of the art and lack of proper quantitative metrics. This paper describes the proposed QQBBN framework and demonstrates its uses through a simple example.,2010,0, 4053,A Novel Method of Fault Diagnosis in Wind Power Generation System,"Along with environmental consciousness enhancement, conventional energy depletion, wind energy exploitation is expanding gradually due to the renewable merit, clean without any pollution and vast reserve features. Therefore, wind power generation (WPG) system equipped with Doubly Fed Induction Generators is emerging as mushroom. Larger rated capacity of power unit, the higher tower and the variable pitch are the main scope in WPG system. However, latent trouble will bring about likewise. If a fault occurred, it will be catastrophic for WPG system. Consequently, the technology of fault detection will play a more important role in WPG system. Based on the present status, a novel method is proposed in this paper after summarizing and analyzing the lack of previous methods. Then an example for detecting the inverter fault is studied using PSCAD software. Results indicated that the proposed method is effective and feasible.",2010,0, 4054,Research on Online Static Risk Assessment for Urban Power System,"With the rapid development of urbanization, the importance of city power grids safety has been gradually recognized. Given a full consideration for the characteristics of city power grids, we design a complete set of risk evaluation index system based on probability theory by employing risk theory and analytic hierarchy process (AHP) in power system online static security risk assessment. Further, we have developed city grid online risk assessment software, and have provided a clear description of some key technologies of implementation and calculation process after installation. Finally, the test result shows the functionality and applicability of our software.",2010,0, 4055,Numerical Simulation of the Unsteady Flow and Power of Horizontal Axis Wind Turbine using Sliding Mesh,Horizontal axis wind turbine (hereafter HAWT) is the common equipment in wind turbine generator systems in recent years. The paper relates to the numerical simulation of the unsteady airflow around a HAWT of the type Phase VI with emphasis on the power output. The rotor diameter is 10-m and the rotating speed is 72 rpm. The simulation was undertaken with the Computational Fluid Dynamics (CFD) software FLUENT 6.2 using the sliding meshes controlled by user-defined functions (UDF). Entire mesh node number is about 1.8 million and it is generated by GAMBIT 2.2 to achieve better mesh quality. 
The numerical results were compared with the experimental data from NREL UAE wind tunnel test. The comparisons show that the numerical simulation using sliding meshes can accurately predict the aerodynamic performance of the wind turbine rotor.,2010,0, 4056,The Research of Power Quality Real Time Monitor for Coal Power System Based on Wavelet Analysis and ARM Chip,"In order to prevent coal power system fault for safety production, a novel power quality real time monitor is researched in this paper. According to the coal power system special characteristics and safety production standard, the harmonic of the coal power system is analyzed first based on the wavelet theory, and then the monitoring system is designed with ARM LPC2132 as the client computer to fulfill the common power parameter data acquisition. The whole monitoring system is composed of the signal transform module, data processing module, communication module, host computer interfaces and their function modules. The system software is designed with the platform of LabWindowns/CVI. The research result shows that the power quality monitor can detect the harmonic states in real time.",2010,0, 4057,Study and Realizing of Method of AC Locating Fault in Distribution System,"The idea of AC locating fault method in distribution system is that inject an AC signal into the fault phase after a single line to ground fault happened and then diagnose the fault along the transmission line with the handled AC signal detector utilizing dichotomy method until the fault is determined. The frequency of the injected AC signal used in this study is 60 Hz. Compared with the injected S signal technique, this method is called low frequency AC signal injection method. In this paper, the hardware of the signal source and the software design are introduced. The SCM managing pulse is used in the control section of the hardware, and the application of PWM control technique in this hardware is discussed in this reference; as the software design, the PWM signal is generated by coding based on the relation between the injected signal and PWM waveforms. The high frequency PWM signal excited a couple of breakers in the inversion source and then the output terminal will get high stable injected signals by filtering and generate AC signal with invariable frequency and adjustable voltage, based on which the signal detector could detect the required signal easily. The proposed signal source device reduces the difficulty of high impedance to ground detecting and improves the accuracy and reliability of locating fault. This technique, which allows the ground detecting and is convenient for engineers' operation, reduces the locating fault time, improves its efficiency and is proved by simulation and analysis on its validity.",2010,0, 4058,Notice of Retraction
Design of Synchronous Sampling System Based on ATT7022C,"Notice of Retraction

After careful and considered review of the content of this paper by a duly constituted expert committee, this paper has been found to be in violation of IEEE's Publication Principles.

We hereby retract the content of this paper. Reasonable effort should be made to remove all past references to this paper.

The presenting author of this paper has the option to appeal this decision by contacting TPII@ieee.org.

Ever higher demands are being placed on power quality detection, reflecting the growing need to understand power quality. In particular, how quickly and synchronously harmonics in the grid can be detected appears especially important. An approach to detect harmonics is presented in this paper. The design is mainly composed of a single ATT7022C chip and a microcontroller. The working principle is introduced first. Then the structure of the system is discussed. A software design approach of data acquisition, based on the chip ATT7022C and low cost DSP TMS320F2812 (F2812), is given in this paper. Tests of the design verify its fast speed, high detection accuracy and low computational burden.",2010,0, 4059,Model and implementation for runtime software monitoring system,"For complicated Software-Intensive System, it is always hard to guarantee the reliability and safety of software. Effective methods for detecting faults and isolating software fault from hardware fault are desiderated especially. In order to detect and isolate fault in SIS, a method called runtime software monitoring is studied, and a new kind of runtime software monitoring system (RSMS) is constructed in this paper. The RSMS can not only detect software fault by observing software behavior to determine whether it complies with its intended behavior, but also can assist to isolate software fault from hardware fault and to locate software fault based on fault symptoms acquired by our method. The software architecture of RSMS is presented from different views by using the '4+1' view model and layer architectural style. The RSMS prototype is implemented through architecture-based software development method. By applying the prototype in practice, it proved that the RSMS prototype is feasible and effective for detecting and diagnosing faults in SIS.",2010,0, 4060,A model for early prediction of faults in software systems,"Quality of a software component can be measured in terms of fault proneness of data. Quality estimations are made using fault proneness data available from previously developed similar type of projects and the training data consisting of software measurements. To predict faulty modules in software data different techniques have been proposed which includes statistical method, machine learning methods, neural network techniques and clustering techniques. Predicting faults early in the software life cycle can be used to improve software process control and achieve high software reliability. The aim of the proposed approach is to investigate whether metrics available in the early lifecycle (i.e. requirement metrics), metrics available in the late lifecycle (i.e. code metrics) and metrics available in the early lifecycle (i.e. requirement metrics) combined with metrics available in the late lifecycle (i.e. code metrics) can be used to identify fault prone modules using decision tree based Model in combination with K-means clustering as preprocessing technique. This approach has been tested with CM1 real time defect datasets of NASA software projects. The high accuracy of testing results show that the proposed Model can be used for the prediction of the fault proneness of software modules early in the software life cycle.",2010,0, 4061,Analysing the need for autonomic behaviour in grid computing,"With the introduction of Grid computing, complexity of large scale distributed systems has become unmanageable because of the manual system adopted for the management these days.
Due to dynamic nature of grid, manual management techniques are time-consuming, in-secure and more prone to errors. This leads to new paradigm of self-management through autonomic computing to pervade over the old manual system to begin the next generation of Grid computing. In this paper, we have discussed the basic concept of grid computing and the need for grid to be autonomic. A comparative analysis of different grid middleware has been provided to show the absence of autonomic behavior in current grid architecture. To conclude the discussion, we have mentioned the areas where research work has been lacking and what we believe the community should be considering.",2010,0, 4062,Performance-effective operation below Vcc-min,"Continuous circuit miniaturization and increased process variability point to a future with diminishing returns from dynamic voltage scaling. Operation below Vcc-min has been proposed recently as a mean to reverse this trend. The goal of this paper is to minimize the performance loss due to reduced cache capacity when operating below Vcc-min. A simple method is proposed: disable faulty blocks at low voltage. The method is based on observations regarding the distributions of faults in an array according to probability theory. The key lesson, from the probability analysis, is that as the number of uniformly distributed random faulty cells in an array increases the faults increasingly occur in already faulty blocks. The probability analysis is also shown to be useful for obtaining insight about the reliability implications of other cache techniques. For one configuration used in this paper, block disabling is shown to have on the average 6.6% and up to 29% better performance than a previously proposed scheme for low voltage cache operation. Furthermore, block-disabling is simple and less costly to implement and does not degrade performance at or above Vcc-min operation. Finally, it is shown that a victim-cache enables higher and more deterministic performance for a block-disabled cache.",2010,0, 4063,Fault-based attack of RSA authentication,"For any computing system to be secure, both hardware and software have to be trusted. If the hardware layer in a secure system is compromised, not only it would be possible to extract secret information about the software, but it would also be extremely hard for the software to detect that an attack is underway. In this work we detail a complete end-to-end fault-attack on a microprocessor system and practically demonstrate how hardware vulnerabilities can be exploited to target secure systems. We developed a theoretical attack to the RSA signature algorithm, and we realized it in practice against an FPGA implementation of the system under attack. To perpetrate the attack, we inject transient faults in the target machine by regulating the voltage supply of the system. Thus, our attack does not require access to the victim system's internal components, but simply proximity to it. The paper makes three important contributions: first, we develop a systematic fault-based attack on the modular exponentiation algorithm for RSA. Second, we expose and exploit a severe flaw on the implementation of the RSA signature algorithm on OpenSSL, a widely used package for SSL encryption and authentication. 
Third, we report on the first physical demonstration of a fault-based security attack of a complete microprocessor system running unmodified production software: we attack the original OpenSSL authentication library running on a SPARC Linux system implemented on FPGA, and extract the system's 1024-bit RSA private key in approximately 100 hours.",2010,0, 4064,HW/SW co-detection of transient and permanent faults with fast recovery in statically scheduled data paths,"This paper describes a hardware-/software-based technique to make the data path of a statically scheduled super scalar processor fault tolerant. The results of concurrently executed operations can be compared with little hardware overhead in order to detect a transient or permanent fault. Furthermore, the hardware extension allows to recover from a fault within one to two clock cycles and to distinguish between transient and permanent faults. If a permanent fault was detected, this fault is masked for the rest of the program execution such that no further time is needed for recovering from that fault. The proposed extensions were implemented in the data path of a simple VLIW processor in order to prove the feasibility and to determine the hardware overhead. Finally a reliability analysis is presented. It shows that for medium and large scaled data paths our extension provides an up to 98% better reliability than triple modular redundancy.",2010,0, 4065,ERSA: Error Resilient System Architecture for probabilistic applications,"There is a growing concern about the increasing vulnerability of future computing systems to errors in the underlying hardware. Traditional redundancy techniques are expensive for designing energy-efficient systems that are resilient to high error rates. We present Error Resilient System Architecture (ERSA), a low-cost robust system architecture for emerging killer probabilistic applications such as Recognition, Mining and Synthesis (RMS) applications. While resilience of such applications to errors in low-order bits of data is well-known, execution of such applications on error-prone hardware significantly degrades output quality (due to high-order bit errors and crashes). ERSA achieves high error resilience to high-order bit errors and control errors (in addition to low-order bit errors) using a judicious combination of 3 key ideas: (1) asymmetric reliability in many-core architectures, (2) error-resilient algorithms at the core of probabilistic applications, and (3) intelligent software optimizations. Error injection experiments on a multi-core ERSA hardware prototype demonstrate that, even at very high error rates of 20,000 errors/second/core or 2×10^-4 error/cycle/core (with errors injected in architecturally-visible registers), ERSA maintains 90% or better accuracy of output results, together with minimal impact on execution time, for probabilistic applications such as K-Means clustering, LDPC decoding and Bayesian networks. Moreover, we demonstrate the effectiveness of ERSA in tolerating high rates of static memory errors that are characteristic of emerging challenges such as Vccmin problems and erratic bit errors.
Using the concept of configurable reliability, ERSA platforms may also be adapted for general-purpose applications that are less resilient to errors (but at higher costs).",2010,0, 4066,Continuous Verification of Large Embedded Software Using SMT-Based Bounded Model Checking,"The complexity of software in embedded systems has increased significantly over the last years so that software verification now plays an important role in ensuring the overall product quality. In this context, bounded model checking has been successfully applied to discover subtle errors, but for larger applications, it often suffers from the state space explosion problem. This paper describes a new approach called continuous verification to detect design errors as quickly as possible by exploiting information from the software configuration management system and by combining dynamic and static verification to reduce the state space to be explored. We also give a set of encodings that provide accurate support for program verification and use different background theories in order to improve scalability and precision in a completely automatic way. A case study from the telecommunications domain shows that the proposed approach improves the error-detection capability and reduces the overall verification time by up to 50%.",2010,0, 4067,Generating Test Plans for Acceptance Tests from UML Activity Diagrams,"The Unified Modeling Language (UML) is the standard to specify the structure and behaviour of software systems. The created models are a constitutive part of the software specification that serves as guideline for the implementation and the test of software systems. In order to verify the functionality which is defined within the specification documents, the domain experts need to perform an acceptance test. Hence, they have to generate test cases for the acceptance test. Since domain experts usually have a low level of software engineering knowledge, the test case generation process is challenging and error-prone. In this paper we propose an approach to generate high-level acceptance test plans automatically from business processes. These processes are modeled as UML Activity Diagrams (ACD). Our method enables the application of an all-path coverage criterion to business processes for testing software systems.",2010,0, 4068,Dynamic Workflow Management and Monitoring Using DDS,"Large scientific computing data-centers require a distributed dependability subsystem that can provide fault isolation and recovery and is capable of learning and predicting failures to improve the reliability of scientific workflows. This paper extends our previous work on the autonomic scientific workflow management systems by presenting a hierarchical dynamic workflow management system that tracks the state of job execution using timed state machines. Workflow monitoring is achieved using a reliable distributed monitoring framework, which employs publish-subscribe middleware built upon OMG Data Distribution Service standard. Failure recovery is achieved by stopping and restarting the failed portions of workflow directed acyclic graph.",2010,0, 4069,Time Coordination of Distance Protections Using Probabilistic Fault Trees With Time Dependencies,"Distance protection of the electrical power system is analyzed in the paper. Electrical power transmission lines are divided into sections equipped with protective relaying system. 
Numerical protection relays use specialized digital signal processors as the computational hardware, together with the associated software tools. The input analogue signals are converted into a digital representation and processed according to the appropriate mathematical algorithms. The distance protection is based on local and remote relays. Hazard is the event: remote circuit breaker tripping provided the local circuit breaker can be opened. Coordination of operation of protection relays in time domain is an important and difficult problem. Incorrect values of time delays of protective relays can cause the hazard. In the paper, the time settings are performed using probabilistic fault trees with time dependencies (PFTTD). PFTTD is built for the above mentioned hazard. PFTTD are used in selection of time delays of primary (local) and backup (remote) protections. Results of computations of hazard probabilities as a function of time delay are given.",2010,0, 4070,Micro-Computed Tomography analysis methods to assess sheep cancellous bone preservation,"The goal of this study was to determine if mineral dissolution from cancellous bone specimens alters stereology parameters determined by Micro-Computed Tomography (Micro-CT). Sheep cancellous bone cores were excised from lumbar vertebrae and randomized for immediate storage in one of four 'bathing' solutions - Phosphate Buffered Saline (PBS), PBS supersaturated with Hydroxyapatite (HA) (PBS+HA), PBS supplemented with Protease Inhibitor cocktail solution (PBS+PI), and Supersaturated HA PBS supplemented with Protease Inhibitor Cocktail (PBS+HA+PI). An additional sample was stored in 70% Ethanol (EtOH) to provide reference values since samples are often stored and processed by micro-CT in EtOH prior to mechanical tests. Common micro-CT parameters used to examine bone structure and quality can assess vertebral cancellous bone changes due to storage and handling. Micro-CT data were collected while the samples were stored in their storage and bathing solutions at baseline (t=0 months). Following data collection, samples were stored at -20°C for 6 months. Micro-CT data were re-collected and degradation effects on the cancellous bone specimens were evaluated in stereology parameters between samples stored in different solutions and for both time points. Specimen geometry and selected volumes used for data analysis were assessed for possible computational differences due to the software algorithms used for stereology. No differences were seen at baseline due to the specimen size or the pre-determined analysis volumes. Stereology differences were seen between bathing media groups at 6 months. Additionally, threshold values used for processing were different between the bathing solutions, reflecting changes due to solution density differences.",2010,0, 4071,Notice of Retraction
Application of Homogeneous Markov Chain in Quantitative Analysis of Teaching Achievement Indicators in Physical Education,"Notice of Retraction

After careful and considered review of the content of this paper by a duly constituted expert committee, this paper has been found to be in violation of IEEE's Publication Principles.

We hereby retract the content of this paper. Reasonable effort should be made to remove all past references to this paper.

The presenting author of this paper has the option to appeal this decision by contacting TPII@ieee.org.

Markov chain is a statistical analysis method which is based on probability theory and uses random mathematical models to analyze the quantitative relationship in the course of development and changes of objects. This paper explores the application of the Markov chain analysis method in the assessment of the teaching effectiveness of physical education, based on the features and requirements of teaching activities in physical education. As a method to quantify the teaching achievement indicators in physical education, the limit distribution of the Markov process solves the problems which arise in the evaluation of the teaching quality by using students' score due to students at different physical levels. This provides a good method for scientific teaching evaluation in physical education.",2010,0, 4072,Study on Fault Tree Analysis of Fuel Cell Stack Malfunction,"In order to enhance the reliability and safety of Fuel Cell Engine (FCE), combined the composition of FCE developed by our group with the electrochemical reaction mechanism of fuel cell, the fault symptom of fuel cell stack malfunction was defined and analyzed from four aspects: hardware faults, software faults, environmental and man-made factors. Then its fault tree model was established, all the common fault causes were figured out qualitatively by Fussel Algorithm and were classified as 19 minimal cut sets. At last, the happening probability of top event, important degree of probability and key importance of each basic event were quantitatively calculated. Based on the study and analysis above, several effective rectification measures implied in practical work were put forward, which can provide helpful guiding significance to the control, management and maintenance of FCE in future.",2010,0, 4073,Notice of Retraction
Research and Application of the Data Mining Technology on the Quality of Teaching Evaluation,"Notice of Retraction

After careful and considered review of the content of this paper by a duly constituted expert committee, this paper has been found to be in violation of IEEE's Publication Principles.

We hereby retract the content of this paper. Reasonable effort should be made to remove all past references to this paper.

The presenting author of this paper has the option to appeal this decision by contacting TPII@ieee.org.

The assessment of the quality of teaching is in accordance with the purpose and principles of teaching. By use of the evaluation of technical feasibility of the teaching process and the expected results, the value of the judgement can be given to provide some information and make some kind of assessment on the subject which need to be assessed. On the teaching quality of teachers, there are many kinds of the evaluation criteria and different index systems. In this paper, Grey Clustering comprehensive assessment of teaching quality in connection with the computer program to assess the quality of teaching for a teacher is used. Compared with the traditional paper-assessment method, the assessment has more scientific, accurate and convincing.",2010,0, 4074,An Optimized Algorithm for Finding Approximate Tandem Repeats in DNA Sequences,"In gene analysis, finding approximate tandem repeats in DNA sequence is an important issue. MSATR is one of the latest methods for finding those repetitions, which suffers deficiencies of runtime cost and poor result quality. This paper proposes an optimized algorithm mMSATR for detecting approximate tandem repeats in genomic sequences more efficiently. By introducing the definition of CASM to reduce the searching scope and optimizing the original mechanism adopted by MSATR, mMSATR makes the detecting process more efficient and improves the result quality. The theoretical analysis and experiment results indicates that mMSATR is able to get more results within less runtime. Algorithm mMSATR is superior to other methods in finding results, and it greatly reduces the runtime cost, which is of benefit when the gene data becomes larger.",2010,0, 4075,Non-contact Discharge Detection System for High Voltage Equipment Based on Solar-Blind Ultraviolet Photomultiplier,"Optical radiation is a very important character signal of high-voltage equipments surface discharge, and it can be used to characterize the equipment insulation condition, but usually, the equipment surface discharge is very weak, and the light signal mainly distributes below 400 nm ultraviolet (UV) band. To detect the solar-blind band UV signal, it can not only help us to find the early discharge, but also to detect the discharge in the daytime. In this paper, a discharge UV pulse detection system was designed, first, the detection principle is introduced, then the key hardware and software parts are introduced in detail, and finally it is tested in laboratory. Experiment shows that this system has the characters of long detection distance, high-sensitivity, and can effectively find the surface discharge. So it provides a new non-contact method to inspect the discharge phenomenal of high-voltage equipment.",2010,0, 4076,A Data Mining Model to Predict Software Bug Complexity Using Bug Estimation and Clustering,"Software defect(bug) repositories are great source of knowledge. Data mining can be applied on these repositories to explore useful interesting patterns. Complexity of a bug helps the development team to plan future software build and releases. In this paper a prediction model is proposed to predict the bug's complexity. The proposed technique is a three step method. In the first step, fix duration for all the bugs stored in bug repository is calculated and complexity clusters are created based on the calculated bug fix duration. In second step, bug for which complexity is required its estimated fix time is calculated using bug estimation techniques. 
And in the third step based on the estimated fix time of bug it is mapped to a complexity cluster, which defines the complexity of the bug. The proposed model is implemented using open source technologies and is explained with the help of illustrative example.",2010,0, 4077,Tourism emergency data mining and intelligent prediction based on networking autonomic system,"This paper introduces the key technologies and tourism applications of networking autonomic system. The paper focuses especially in the theories, architectures and algorithms being used. It discusses the requirements for networking autonomic system in China and introduces a data mining and intelligent predicting system of tourism emergency based on networking autonomic system, which concentrates on the methods, such as quantum immune clone, multi-level and multi-scale prediction model and local/global coordination mechanism of agent, in tourism data mining process.",2010,0, 4078,Object oriented design metrics and tools a survey,"The most important measure that must be considered in any software product is its design quality. The design phase takes only 5-10 % of the total effort but a large part (up to 80%) of total effort goes into correcting bad design decisions. If bad design is not fixed, the cost for fixing it after software delivery is between 5 and 100 times or higher. Researches on object oriented design metrics have produced a large number of metrics that can be measured to identify design problems and assess design quality attributes. However the use of these design metrics is limited in practice due to the difficulty of measuring and using a large number of metrics. This paper presents a survey of object-oriented design metrics. The goal of this paper is to identify a limited set of metrics that have significant impact on design quality attributes. We adopt the notion of defining design metrics as independent variables that can be measured to assess their impact on design quality attributes as dependent variables. We also present survey of existing object oriented design metrics tools that can be used to automate the measurement process. We present our conclusions on the set of important object oriented design metrics that can be assessed using these tools.",2010,0, 4079,An approach to measure the Hurst parameter for the Dhaka University network traffic,"The main goal of this work was to analyze the network traffic of the University of Dhaka and find out the Hurst parameter to assess the degree of self similarity. For this verification a number of tests and analyses were performed on the data collected from the University Gateway router. The conclusions were supported by a rigorous statistical analysis of 7.5 millions of data packets of high quality Ethernet traffic measurements collected between Aug '07 and March'08 and the data were analyzed using both visual and statistical experimentation. Busy hour traffic and non-busy hour traffic, both were considered. All the software was coded using MATLAB and can be used as a tool to determine the inherent self similarity of a data traffic.",2010,0, 4080,Utilizing CK metrics suite to UML models: A case study of Microarray MIDAS software,"Software metrics provide essential means for software practitioners to assess its quality. However, to assess software quality, it is important to assess its UML models because of UML wide and recent usage as an object-oriented modeling language. But the issue is which type of software metrics can be utilized on UML models. 
One of the most important software metrics suite is Chidamber and Kemerer metrics suite, known by CK suite. In the current work, an automated tool is developed to compute the six CK metrics by gathering the required information from class diagrams, activity diagrams, and sequence diagrams. In addition, extra information is collected from system designer, such as the relation between methods and their corresponding activity diagrams and which attributes they use. The proposed automated tool operates on XMI standard file format to provide independence from a specific UML tool. To evaluate the applicability and quality of this tool, it has been applied to two examples: an online registration system and one of the bioinformatics Microarray tools (MIDAS).",2010,0, 4081,On Constructing Efficient Shared Decision Trees for Multiple Packet Filters,"Multiple packet filters serving different purposes (e.g., firewalling, QoS) and different virtual routers are often deployed on a single physical router. The HyperCuts decision tree is one efficient data structure for performing packet filter matching in software. Constructing a separate HyperCuts decision tree for each packet filter is not memory efficient. A natural alternative is to construct shared HyperCuts decision trees to more efficiently support multiple packet filters. However, we experimentally show that naively classifying packet filters into shared HyperCuts decision trees may significantly increase the memory consumption and the height of the trees. To help decide which subset of packet filters should share a HyperCuts decision tree, we first identify a number of important factors that collectively impact the efficiency of the resulted shared HyperCuts decision tree. Based on the identified factors, we then propose to use machine learning techniques to predict whether any pair of packet filters should share a tree. Given the pair-wise prediction matrix, a greedy heuristic algorithm is used to classify packets filters into a number of shared HyperCuts decision trees. Our experiments using both real packets filters and synthetic packet filters show that the shared HyperCuts decision trees consume considerably less memory.",2010,0, 4082,An extensive comparison of bug prediction approaches,"Reliably predicting software defects is one of software engineering's holy grails. Researchers have devised and implemented a plethora of bug prediction approaches varying in terms of accuracy, complexity and the input data they require. However, the absence of an established benchmark makes it hard, if not impossible, to compare approaches. We present a benchmark for defect prediction, in the form of a publicly available data set consisting of several software systems, and provide an extensive comparison of the explanative and predictive power of well-known bug prediction approaches, together with novel approaches we devised. Based on the results, we discuss the performance and stability of the approaches with respect to our benchmark and deduce a number of insights on bug prediction models.",2010,0, 4083,Assessing the precision of FindBugs by mining Java projects developed at a university,"Software repositories are analyzed to extract useful information on software characteristics. One of them is external quality. A technique used to increase software quality is automatic static analysis, by means of bug finding tools. 
These tools promise to speed up the verification of source code; anyway, there are still many problems, especially the high number of false positives, that hinder their large adoption in software development industry. We studied the capability of a popular bug-finding tool, FindBugs, for defect prediction purposes, analyzing the issues revealed on a repository of university Java projects. Particularly, we focused on the percentage of them that indicates actual defects with respect to their category and priority, and we ranked them. We found that a very limited set of issues have high precision and therefore have a positive impact on code external quality.",2010,0, 4084,Assessing UML design metrics for predicting fault-prone classes in a Java system,"Identifying and fixing software problems before implementation are believed to be much cheaper than after implementation. Hence, it follows that predicting fault-proneness of software modules based on early software artifacts like software design is beneficial as it allows software engineers to perform early predictions to anticipate and avoid faults early enough. Taking this motivation into consideration, in this paper we evaluate the usefulness of UML design metrics to predict fault-proneness of Java classes. We use historical data of a significant industrial Java system to build and validate a UML-based prediction model. Based on the case study we have found that level of detail of messages and import coupling-both measured from sequence diagrams, are significant predictors of class fault-proneness. We also learn that the prediction model built exclusively using the UML design metrics demonstrates a better accuracy than the one built exclusively using code metrics.",2010,0, 4085,Assessment of issue handling efficiency,"We mined the issue database of GNOME to assess how issues are handled. How many issues are submitted and resolved? Does the backlog grow or decrease? How fast are issues resolved? Does issue resolution speed increase or decrease over time? In which subproject are issues handled most efficiently? To answer such questions, we apply several visualization and quantification instruments to the raw issue data. In particular, we aggregate issues into four risk categories, based on their resolution time. These categories are the basis both for visualizing and ranking, which are used in concert for issue database exploration.",2010,0, 4086,Validity of network analyses in Open Source Projects,"Social network methods are frequently used to analyze networks derived from Open Source Project communication and collaboration data. Such studies typically discover patterns in the information flow between contributors or contributions in these projects. Social network metrics have also been used to predict defect occurrence. However, such studies often ignore or side-step the issue of whether (and in what way) the metrics and networks of study are influenced by inadequate or missing data. In previous studies email archives of OSS projects have provided a useful trace of the communication and co-ordination activities of the participants. These traces have been used to construct social networks that are then subject to various types of analysis. However, during the construction of these networks, some assumptions are made, that may not always hold; this leads to incomplete, and sometimes incorrect networks. The question then becomes, do these errors affect the validity of the ensuing analysis? 
In this paper we specifically examine the stability of network metrics in the presence of inadequate and missing data. The issues that we study are: 1) the effect of paths with broken information flow (i.e. consecutive edges which are out of temporal order) on measures of centrality of nodes in the network, and 2) the effect of missing links on such measures. We demonstrate on three different OSS projects that while these issues do change network topology, the metrics used in the analysis are stable with respect to such changes.",2010,0, 4087,Clones: What is that smell?,"Clones are generally considered bad programming practice in software engineering folklore. They are identified as a bad smell and a major contributor to project maintenance difficulties. Clones inherently cause code bloat, thus increasing project size and maintenance costs. In this work, we try to validate the conventional wisdom empirically to see whether cloning makes code more defect prone. This paper analyses the relationship between cloning and defect proneness. We find that, first, the great majority of bugs are not significantly associated with clones. Second, we find that clones may be less defect prone than non-cloned code. Finally, we find little evidence that clones with more copies are actually more error prone. Our findings do not support the claim that clones are really a 'bad smell'. Perhaps we can clone, and breathe easy, at the same time.",2010,0, 4088,THEX: Mining metapatterns from java,"Design patterns are codified solutions to common object-oriented design (OOD) problems in software development. One of the proclaimed benefits of the use of design patterns is that they decouple functionality and enable different parts of a system to change frequently without undue disruption throughout the system. These OOD patterns have received a wealth of attention in the research community since their introduction; however, identifying them in source code is a difficult problem. In contrast, metapatterns have similar effects on software design by enabling portions of the system to be extended or modified easily, but are purely structural in nature, and thus easier to detect. Our long-term goal is to evaluate the effects of different OOD patterns on coordination in software teams as well as outcomes such as developer productivity and software quality. We present THEX, a metapattern detector that scales to large codebases and works on any Java bytecode. We evaluate THEX by examining its performance on codebases with known design patterns (and therefore metapatterns) and find that it performs quite well, with recall of over 90%.",2010,0, 4089,Mutation Operators for Actor Systems,"Mutation testing is a well known technique for estimating and improving the quality of test suites. Given a test suite T for a system S, mutation testing systematically creates mutants of S and executes T to measure how many mutants T detects. If T does not detect some (non-equivalent) mutants, T can be improved by adding test cases that detect those mutants. Mutants are created by applying mutation operators. Mutation operators are important because they define the characteristics of the system that are tested as well as the characteristics that are improved in the test suite. While mutation operators are well defined for a number of programming paradigms such as sequential or multi-threaded, to the best of our knowledge, mutation operators have not been defined for the actor programming model.
In this paper, we define and classify mutation operators that can be used for mutation testing of actor programs.",2010,0, 4090,Test Coverage Analysis of UML State Machines,"Software testing is a very important activity of the software development process. To expedite the testing process and improve the quality of the tests, models are increasingly used as a basis to derive test cases automatically - a technique known as model-based testing (MBT). Given a system model and a test suite derived automatically from the model or created by other process, the coverage of the model achieved by the test suite is important to assess the quality and completeness of the test suite early in the software development process. This paper presents a novel tool that shows visually the coverage achieved by a test suite on a UML state machine model. The tool receives as input a UML state machine model represented in XMI and a test suite represented in a XML format, and produces a colored UML state machine model that shows the coverage result. Model test coverage is determined by simulating the execution of the test suite over the model. An example is presented in order to show the features of the tool.",2010,0, 4091,A Demo on Using Visualization to Aid Run-Time Verification of Dynamic Service Systems,"Future software systems will be dynamic service oriented systems. Service-Oriented Architecture (SOA) provides an extensible and dynamic architecture to be used, for example, in smart environments. In such an environment, software has to adapt its behaviour dynamically. Thus, there is a need for Verifying and Validating (V & V) the adaptations at run-time. This paper contributes to that by introducing a novel visualization tool to be used with traditional V & V techniques to aid the software analysts in the verification process of dynamic software systems. When Quality of Service (QoS) of dynamic software systems varies due to the changing environment the Interactive Quality Visualization (IQVis) tool detects these changes and provides analysts an easier way of understanding the changed behaviour of the system.",2010,0, 4092,A Comparison of Constraint-Based and Sequence-Based Generation of Complex Input Data Structures,"Generation of complex input data structures is one of the challenging tasks in testing. Manual generation of such structures is tedious and error-prone. Automated generation approaches include those based on constraints, which generate structures at the concrete representation level, and those based on sequences of operations, which generate structures at the abstract representation level by inserting or removing elements to or from the structure. In this paper, we compare these two approaches for five complex data structures used in previous research studies. Our experiments show several interesting results. First, constraint-based generation can generate more structures than sequence-based generation. Second, the extra structures can lead to false alarms in testing. Third, some concrete representations of structures cannot be generated only with sequences of insert operations. Fourth, slightly different implementations of the same data structure can behave differently in testing.",2010,0, 4093,Applications of Optimization to Logic Testing,"A tradeoff exists in software logic testing between test set size and fault detection. Testers may want to minimize test set size subject to guaranteeing fault detection or they may want to maximize faults detection subject to a test set size. 
One way to guarantee fault detection is to use heuristics to produce tests that satisfy logic criteria. Some logic criteria have the property that they are satisfied by a test set if detection of certain faults is guaranteed by that test set. An empirical study is conducted to compare test set size and computation time for heuristics and optimization for various faults and criteria. The results show that optimization is a better choice for applications where each test has significant cost, because for a small difference in computation time, optimization reduces test set size. A second empirical study examined the percentage of faults detected in a best, random, and worst case, first for a test set size of one and then again for a test set size of ten. This study showed that if you have a limited number of tests from which to choose, the exact tests you choose have a large impact on fault detection.",2010,0, 4094,Towards Security Vulnerability Detection by Source Code Model Checking,"Security in code level is an important aspect to achieve high quality software. Various security programming guidelines are defined to improve the quality of software code. At the same time, enforcing mechanisms of these guidelines are needed. In this paper, we use source code model checking technique to check whether some security programming guidelines are followed, and correspondingly to detect related security vulnerabilities. Two SAP security programming guidelines related to logging sensitive information and Cross-Site Scripting attack are used as examples. In the case studies, Bandera Tool Set is used as source code model checker, and minimizing programmers' additional effort is set as one of the goals.",2010,0, 4095,Modelling Requirements to Support Testing of Product Lines,"The trend towards constantly growing numbers of product variants and features in industry makes the improvement of analysis and specification techniques a key efficiency enabler. The development of a single generic functional specification applicable to a whole product family can help to save costs and time to market significantly. However, the introduction of a product-line approach into a system manufacturer's electronics development process is a challenging task, prone to human error, with the risk of spreading a single fault across a whole platform of product variants. In this contribution, a combined approach on variant-management and model-based requirements analysis and validation is presented. The approach, process and tool presented are generally applicable to functional requirements analysis and specification, since informal specifications or only an abstract idea of the required function are demanded as an input. It has been experienced in several industrial projects that the presented approach may help to reduce redundancies and inconsistencies and as a consequence it may ease and improve subsequent analysis, design and testing activities. Furthermore, the application of the presented variant management approach may benefit from model-based specifications, due to their improved analysability and changeability. In this contribution we present our experiences and results using model-based and variant-management concepts for requirements specification to support system testing. Additionally, we present an extension to integrate testing into the variant-management concept. 
The presented approach and process are supported by the MERAN tool-suite, which has been developed as an add-in to IBM RationalDOORS.",2010,0, 4096,A Measurement Framework for Assessing Model-Based Testing Quality,This paper proposes a measurement framework for assessing the relative quality of alternative approaches to system level model-based testing. The motivation is to investigate the types of measures that the MBT community should apply. The purpose of this paper is to provide a basis for discussion by proposing some initial ideas on where we should probe for MBT quality measurement. The centerpiece of the proposal offered here is the concept of an operational profile (OP) and its relevance to model-based testing.,2010,0, 4097,Generating Minimal Fault Detecting Test Suites for Boolean Expressions,"New coverage criteria for Boolean expressions are regularly introduced with two goals: to detect specific classes of realistic faults and to produce as small as possible test suites. In this paper we investigate whether an approach targeting specific fault classes using several reduction policies can achieve that less test cases are generated than by previously introduced testing criteria. In our approach, the problem of finding fault detecting test cases can be formalized as a logical satisfiability problem, which can be efficiently solved by a SAT algorithm. We compare this approach with respect to the well-known MUMCUT and Minimal-MUMCUT strategies by applying it to a series of case studies commonly used as benchmarks, and show that it can reduce the number of test cases further than Minimal-MUMCUT.",2010,0, 4098,Numerical simulations of thermo-mechanical stresses during the casting of multi-crystalline silicon ingots,"Silicon is an important semiconductor substrate for manufacturing solar cells. The mechanical and electrical properties of multi-crystalline silicon (mc-Si) are primarily influenced by the quality of the feedstock material and the crystallization process. In this work, numerical calculations, applying finite element analysis (FEA) and finite volume methods (FVM) are presented, in order to predict thermo-mechanical stresses during the solidification of industrial size mc-Si ingots. A two-dimensional global model of an industrial multi-crystallization furnace was created for thermal stationary and time-dependent calculations using the software tool CrysMAS. Subsequent thermo-mechanical analyses of the silica crucible and the ingot were performed with the FEA code ANSYS, allowing additional calculations to define mechanical boundary conditions as well as material models. Our results show that thermal analyses are in good agreement with experimental measurements. Furthermore we show that our approach is suitable to describe the generation of thermo-mechanical stress within the silicon ingot.",2010,0, 4099,Measurement and Analysis of Link Quality in Wireless Networks: An Application Perspective,"Estimating the quality of wireless link is vital to optimize several protocols and applications in wireless networks. In realistic wireless networks, link quality is generally predicted by measuring received signal strength and error rates. Understanding the temporal properties of these parameters is essential for the measured values to be representative, and for accurate prediction of performance of the system. 
In this paper, we analyze the received signal strength and error rates in an IEEE 802.11 indoor wireless mesh network, with special focus to understand its utility to measurement based protocols. We show that statistical distribution and memory properties vary across different links, but are predictable. Our experimental measurements also show that, due to the effect of fading, the packet error rates do not always monotonically decrease as the transmission rate is reduced. This has serious implications on many measurement-based protocols such as rate-adaptation algorithms. Finally, we describe real-time measurement framework that enables several applications on wireless testbed, and discuss the results from example applications that utilize measurement of signal strength and error rates.",2010,0, 4100,Evolutionary Optimization of Software Quality Modeling with Multiple Repositories,"A novel search-based approach to software quality modeling with multiple software project repositories is presented. Training a software quality model with only one software measurement and defect data set may not effectively encapsulate quality trends of the development organization. The inclusion of additional software projects during the training process can provide a cross-project perspective on software quality modeling and prediction. The genetic-programming-based approach includes three strategies for modeling with multiple software projects: Baseline Classifier, Validation Classifier, and Validation-and-Voting Classifier. The latter is shown to provide better generalization and more robust software quality models. This is based on a case study of software metrics and defect data from seven real-world systems. A second case study considers 17 different (nonevolutionary) machine learners for modeling with multiple software data sets. Both case studies use a similar majority-voting approach for predicting fault-proneness class of program modules. It is shown that the total cost of misclassification of the search-based software quality models is consistently lower than those of the non-search-based models. This study provides clear guidance to practitioners interested in exploiting their organization's software measurement data repositories for improved software quality modeling.",2010,1, 4101,A Framework for Clustering Categorical Time-Evolving Data,"A fundamental assumption often made in unsupervised learning is that the problem is static, i.e., the description of the classes does not change with time. However, many practical clustering tasks involve changing environments. It is hence recognized that the methods and techniques to analyze the evolving trends for changing environments are of increasing interest and importance. Although the problem of clustering numerical time-evolving data is well-explored, the problem of clustering categorical time-evolving data remains as a challenging issue. In this paper, we propose a generalized clustering framework for categorical time-evolving data, which is composed of three algorithms: a drifting-concept detecting algorithm that detects the difference between the current sliding window and the last sliding window, a data-labeling algorithm that decides the most-appropriate cluster label for each object of the current sliding window based on the clustering results of the last sliding window, and a cluster-relationship-analysis algorithm that analyzes the relationship between clustering results at different time stamps. 
The time-complexity analysis indicates that these proposed algorithms are effective for large datasets. Experiments on a real dataset show that the proposed framework not only accurately detects the drifting concepts but also attains clustering results of better quality. Furthermore, compared with the other framework, the proposed one needs fewer parameters, which is favorable for specific applications.",2010,0, 4102,Low-capture-power at-speed testing using partial launch-on-capture test scheme,"Most previous DFT-based techniques for low-capture-power broadside testing can only reduce test power in one of the two capture cycles, launch cycle and capture cycle. Even if some methods can reduce both of them, they may make some testable faults in standard broadside testing untestable. In this paper, a new test application scheme called partial launch-on-capture (PLOC) is proposed to solve the two problems. It allows only a part of scan flip-flops to be active in the launch cycle and capture cycle. In order to guarantee that all testable faults in the standard broadside testing can be detected in the new test scheme, extra efforts are required to check the overlapping part. In addition, calculation of the overlapping part is different from previous techniques for the stuck-at fault testing because broadside testing requires two consecutive capture cycles. Therefore, a new scan flip-flop partition algorithm is proposed to minimize the overlapping part. Sufficient experimental results are presented to demonstrate the efficiency of the proposed method.",2010,0, 4103,Pin-count-aware online testing of digital microfluidic biochips,"On-line testing offers a promising method for detecting defects, fluidic abnormalities, and bioassay malfunctions in microfluidic biochips. To reduce product cost for disposable biochips, testing steps and functional fluidic operations must be implemented on pin-constrained designs. However, previous testing methods for pin-constrained designs do not optimize test schedules to reduce the number of control pins and test/assay completion time. We propose a pin-count-aware online testing method for pin-constrained designs to support the execution of both fault testing and the target bioassay protocol. The proposed method interleaves fault testing with the target bioassay protocol for online testing. It is aimed at significantly reducing the completion time for testing and for the bioassay, while keeping the number of control pins small. Two practical applications, namely a multiplexed bioassay and an interpolation-based mixing protocol, are used to evaluate the effectiveness of the proposed method.",2010,0, 4104,Sustainability at Kluge Estate vineyard and winery,"Kluge Estate, a vineyard and winery in Charlottesville, Virginia, with one of the largest productions in the Commonwealth, is working to become a more sustainable business. Through implementing sustainable practices, Kluge Estate is seeking to benefit its business, the environment, and its community. However, due to a lack of relevant information about its environmental impact, Kluge Estate's decision-makers are unable to justify sustainable choices with quantified data. To resolve this problem, this paper focuses on assessing Kluge Estate's environmental impact. The Kluge Estate system is a complex combination of agriculture and manufacturing, making it difficult to assess the environmental impact throughout the life-cycle of its products. 
Life-cycle assessment (LCA) is a method that quantifies the environmental impact of a product or process; the life-cycle starts with the extraction of raw materials from the earth, continues through manufacturing, transportation, consumer use of the product, and concludes with disposal or recycling. To conduct the LCA, the team mapped the inputs, outputs and processes of each life-cycle stage of the Kluge Estate product Cru, an aperitif wine, with the goal of providing quantitative information about environmental impact to decision makers. SimaPro 7.1, an LCA software package, was used to perform the LCA for the production of Cru. SimaPro 7.1 utilizes databases containing comprehensive data and conversions gathered through research concerning the impacts of specific materials and processes that exist; specifically, the capstone group used the Ecoinvent Life Cycle Inventory database, which contains agricultural information. The comparison of LCA stages of a bottle of Cru shows that the disposal stage has the greatest contribution in human health (DALY) and ecosystem quality (PDF * m2 * yr), but extraction has the greatest contribution in resources (MJ surplus). Further investigations into the extraction stage, comparing product components, show that the glass bottle has the largest contribution in human health, due to the energy intensive process to generate new glass. The Cru has the largest impact in ecosystem quality, due to the processes needed to harvest and cut the wood as well as the generation of ethanol. The Foil has the largest contribution in resources due to the process to generate tin. A look into Kluge Estate's on-site operation shows the processes in the vineyard and winery have similar environmental impacts in human health. The major contributor in the vineyard is the Spraying of the crops due to heavy tractor and agrochemical use. The major contributors in the winery are Aging, Stabilization and Storing, which all require large amounts of electricity. Its operation measured in ecosystem quality, shows the greatest environmental impacts come from the vineyard processes, again due to the spraying.",2010,0, 4105,A high-performance fault-tolerant software framework for memory on commodity GPUs,"As GPUs are increasingly used to accelerate HPC applications by allowing more flexibility and programmability, their fault tolerance is becoming much more important than before when they were used only for graphics. The current generation of GPUs, however, does not have standard error detection and correction capabilities, such as SEC-DED ECC for DRAM, which is almost always exercised in HPC servers. We present a high-performance software framework to enhance commodity off-the-shelf GPUs with DRAM fault tolerance. It combines data coding for detecting bit-flip errors and checkpointing for recovering computations when such errors are detected. We analyze performance of data coding in GPUs and present optimizations geared toward memory-intensive GPU applications.
We present performance studies of the prototype implementation of the framework and show that the proposed framework can be realized with negligible overheads in compute intensive applications such as N-body problem and matrix multiplication, and as low as 35% in a highly-efficient memory intensive 3-D FFT kernel.",2010,0, 4106,On the resolution of conflicts for collective pervasive context-aware applications,"The main goal of this ongoing dissertation is to define and evaluate an efficient and flexible methodology to detect and solve collective conflicts for pervasive context-aware applications. The solution proposed is flexible enough to be used by applications with different characteristics and also considers the resource constraints, which are typical in pervasive systems. One of the basic motivations to the development of this work is the existence of a great number of collective context-aware applications, specially the ones related to the pervasive computing area. Besides, to the best of the author's knowledge, until now there is no work in literature that could be applied to many different applications, and that also considers systematically the trade-off between quality of services (QoS) and resource consumption. In this work, QoS means the users' satisfaction with the application's tasks. A user is considered satisfied when he/she can perform the tasks he/she has previously demanded.",2010,0, 4107,Resilient image sensor networks in lossy channels using compressed sensing,"Data loss in wireless communications greatly affects the reconstruction quality of wirelessly transmitted images. Conventionally, channel coding is performed at the encoder to enhance recovery of the image by adding known redundancy. While channel coding is effective, it can be very computationally expensive. For this reason, a new mechanism of handling data losses in wireless multimedia sensor networks (WMSN) using compressed sensing (CS) is introduced in this paper. This system uses compressed sensing to detect and compensate for data loss within a wireless network. A combination of oversampling and an adaptive parity (AP) scheme are used to determine which CS samples contain bit errors, remove these samples and transmit additional samples to maintain a target image quality. A study was done to test the combined use of adaptive parity and compressive oversampling to transmit and correctly recover image data in a lossy channel to maintain Quality of Information (QoI) of the resulting images. It is shown that by using the two components, an image can be correctly recovered even in a channel with very high loss rates of 10%. The AP portion of the system was also tested on a software defined radio testbed. It is shown that by transmitting images using a CS compression scheme with AP error detection, images can be successfully transmitted and received even in channels with very high bit error rates.",2010,0, 4108,Comparison of exact static and dynamic Bayesian context inference methods for activity recognition,"This paper compares the performance of inference in static and dynamic Bayesian Networks. For the comparison both kinds of Bayesian networks are created for the exemplary application activity recognition. Probability and structure of the Bayesian Networks have been learnt automatically from a recorded data set consisting of acceleration data observed from an inertial measurement unit. 
Whereas dynamic networks incorporate temporal dependencies which affect the quality of the activity recognition, inference is less complex for dynamic networks. As performance indicators recall, precision and processing time of the activity recognition are studied in detail. The results show that dynamic Bayesian Networks provide considerably higher quality in the recognition but entail longer processing times.",2010,0, 4109,Experimental responsiveness evaluation of decentralized service discovery,"Service discovery is a fundamental concept in service networks. It provides networks with the capability to publish, browse and locate service instances. Service discovery is thus the precondition for a service network to operate correctly and for the services to be available. In the last decade, decentralized service discovery mechanisms have become increasingly popular. Especially in ad-hoc scenarios - such as ad-hoc wireless networks - they are an integral part of auto-configuring service networks. Albeit the fact that auto-configuring networks are increasingly used in application domains where dependability is a major issue, these environments are inherently unreliable. In this paper, we examine the dependability of decentralized service discovery. We simulate service networks that are automatically configured by Zeroconf technologies. Since discovery is a time-critical operation, we evaluate responsiveness - the probability to perform some action on time even in the presence of faults - of domain name system (DNS) based service discovery under influence of packet loss. We show that responsiveness decreases significantly already with moderate packet loss and becomes practicably unacceptable with higher packet loss.",2010,0, 4110,Optimizing RAID for long term data archives,We present new methods to extend data reliability of disks in RAID systems for applications like long term data archival. The proposed solutions extend existing algorithms to detect and correct errors in RAID systems by preventing accumulation of undetected errors in rarely accessed disk segments. Furthermore we show how to change the parity layout of a RAID system in order to improve the performance and reliability in case of partially defect disks. All methods benefit of a hierarchical monitoring scheme that stores reliability related information. Our proposal focuses on methods that do not need significant hardware changes.,2010,0, 4111,An extension of GridSim for quality of service,GridSim is a well known and useful open software product through which users can simulate a Grid environment. At present Qualities of Service are not modeled in GridSim. When utilising a Grid a user may wish to make decisions about type of service to be contracted. For instance performance and security are two levels of service upon which different decisions may be made. Subsequently during operation a grid may not be able to fulfill its contractual obligations. In this case renegotiation is necessary. This paper describes an extension to GridSim that enables various Qualities of Service to be modeled together with Service Level Agreements and renegotiation of contract with associated costs. 
The extension is useful as it will allow users to make better estimates of potential costs and will also enable grid service suppliers to more accurately predict costs and thus provide better service to users.,2010,0, 4112,The Study on the Fixed End Wave in Magnetostrictive Position Sensor,"The magnetostrictive position sensor is a kind of displacement sensor utilizing the magnetostrictive effect and inverse effect of magnetostrictive material. This essay discussed the influence of the fixed end of the sensor system on the detected signal with the driver impulse. The fixed end waves (a kind of elastic wave) were described and defined in this essay. The mechanisms of torsional magnetic field on the long magnetostrictive material line and fixed end waves were discussed, and relative theory models were constructed in this paper. Experiments showed that the fixed end wave of the system could be generated and transmit along the line with impulse current, which is also a kind of noise wave should be removed or weakened. Consequently, this essay should provide the theory basis and data for promoting the signal quality of the signal detection of the sensor.",2010,0, 4113,Mining Frequent Patterns from Software Defect Repositories for Black-Box Testing,"Software defects are usually detected by inspection, black-box testing or white-box testing. Current software defect mining work focuses on mining frequent patterns without distinguishing these different kinds of defects, and mining with respect to defect type can only give limited guidance on software development due to overly broad classification of defect type. In this paper, we present four kinds of frequent patterns from defects detected by black-box testing (called black-box defect) based on a kind of detailed classification named ODC-BD (Orthogonal Defect Classification for Blackbox Defect). The frequent patterns include the top 10 conditions (data or operation) which most easily result in defects or severe defects, the top 10 defect phenomena which most frequently occur and have a great impact on users, association rules between function modules and defect types. We aim to help project managers, black-box testers and developers improve the efficiency of software defect detection and analysis using these frequent patterns. Our study is based on 5023 defect reports from 56 large industrial projects and 2 open source projects.",2010,0, 4114,Voice Quality in VoIP Networks Based on Random Neural Networks,"The growth of Internet has led to the development of many new applications and technologies. Voice over Internet Protocol (VoIP) is one of the fastest growing applications. Calculating the quality of calls has been a complex task. The ITU E-Model gives a framework to measure quality of VoIP calls but the MOS element is a subjective measure. In this paper, we discuss a novel method using Random Neural Network (RNN) to accurately predict the perceived quality of voice and more importantly to perform this on real-time traffic to overcome the drawbacks of available methods. The novelty of this model is that RNN model provides a non-intrusive method to accurately predict and monitor perceived voice quality for both listening and conversational voice. This method has learning capabilities and this makes it possible for it to adapt to any network changes without human interference. Our novel model uses three input variables (neurons) delay, jitter, and packet loss and the codec used was G711.a. 
Results show a good degree of accuracy in calculating Mean Opinion Score (MOS), compared to Perceptual Evaluation of Speech Quality (PESQ) algorithm. WAN emulation software WANem was used to generate different samples for testing and training the RNN.",2010,0, 4115,Designing Modular Hardware Accelerators in C with ROCCC 2.0,"While FPGA-based hardware accelerators have repeatedly been demonstrated as a viable option, their programmability remains a major barrier to their wider acceptance by application code developers. These platforms are typically programmed in a low level hardware description language, a skill not common among application developers and a process that is often tedious and error-prone. Programming FPGAs from high level languages would provide easier integration with software systems as well as open up hardware accelerators to a wider spectrum of application developers. In this paper, we present a major revision to the Riverside Optimizing Compiler for Configurable Circuits (ROCCC) designed to create hardware accelerators from C programs. Novel additions to ROCCC include (1) intuitive modular bottom-up design of circuits from C, and (2) separation of code generation from specific FPGA platforms. The additions we make do not introduce any new syntax to the C code and maintain the high level optimizations from the ROCCC system that generate efficient code. The modular code we support functions identically as software or hardware. Additionally, we enable user control of hardware optimizations such as systolic array generation and temporal common subexpression elimination. We evaluate the quality of the ROCCC 2.0 tool by comparing it to hand-written VHDL code. We show comparable clock frequencies and an 18% higher throughput. The productivity advantages of ROCCC 2.0 are evaluated using the metrics of lines of code and programming time showing an average of 15x improvement over hand-written VHDL.",2010,0, 4116,Codesign and Simulated Fault Injection of Safety-Critical Embedded Systems Using SystemC,"The international safety standard IEC-61508 highly recommends fault injection techniques in all steps of the development process of safety-critical embedded systems, in order to analyze the reaction of the system in a faulty environment and to validate the correct implementation of fault tolerance mechanisms. Simulated fault injection enables an early dependability assessment that reduces the risk of late discovery of safety related design pitfalls and enables the analysis of fault tolerance mechanisms at each design refinement step using techniques such as failure mode and effect analysis. This paper presents a SystemC based executable modeling approach for the codesign and early dependability assessment by means of simulated fault injection of safety-critical embedded systems, which reduces the gap between the abstractions at which the system is designed and assessed.
The effectiveness of this approach is examined in a train on-board safety-critical odometry example, which combines fault tolerance and sensor-fusion.",2010,0, 4117,Early Consensus in Message-Passing Systems Enriched with a Perfect Failure Detector and Its Application in the Theta Model,"While lots of consensus algorithms have been proposed for crash-prone asynchronous message-passing systems enriched with a failure detector of the class Ω (the class of eventual leader failure detectors), very few algorithms have been proposed for systems enriched with a failure detector of the class P (the class of perfect failure detectors). Moreover, (to the best of our knowledge) the early decision and stopping notion has not been investigated in such systems. This paper presents an early-deciding/stopping P-based consensus algorithm. A process that does not crash decides (and stops) in at most min(f+2, t+1) rounds, where t is the maximum number of processes that may crash, and f the actual number of crashes (0 ≤ f ≤ t). Differently from what occurs in a synchronous system, a perfect failure detector notifies failures asynchronously. This makes the design of an early deciding (and stopping) algorithm not trivial. Interestingly enough, the proposed algorithm meets the lower bound on the number of rounds for early decision in synchronous systems. In that sense, it is optimal. The paper then presents an original algorithm that implements a perfect failure detector in the Theta model, an interesting model that achieves some form of synchrony without relying on physical clocks. Hence, the stacking of these algorithms provides an algorithm that solves consensus in the Theta model in min(f+2, t+1) communication rounds, i.e., in two rounds when there are no failures, which is clearly optimal.",2010,0, 4118,Towards Understanding the Importance of Variables in Dependable Software,"A dependable software system contains two important components, namely, error detection mechanisms and error recovery mechanisms. An error detection mechanism attempts to detect the existence of an erroneous software state. If an erroneous state is detected, an error recovery mechanism will attempt to restore a correct state. This is done so that errors are not allowed to propagate throughout a software system, i.e., errors are contained. The design of these software artefacts is known to be very difficult. To detect and correct an erroneous state, the values held by some important variables must be ensured to be suitable. In this paper we develop an approach to capture the importance of variables in dependable software systems. We introduce a novel metric, called importance, which captures the impact a given variable has on the dependability of a software system. The importance metric enables the identification of critical variables whose values must be ensured to be correct.",2010,0, 4119,Emulation of Transient Software Faults for Dependability Assessment: A Case Study,"Fault Tolerance Mechanisms (FTMs) are extensively used in software systems to counteract software faults, in particular against faults that manifest transiently, namely Mandelbugs. In this scenario, Software Fault Injection (SFI) plays a key role for the verification and the improvement of FTMs. However, no previous work investigated whether SFI techniques are able to emulate Mandelbugs adequately. This is an important concern for assessing critical systems, since Mandelbugs are a major cause of failures, and FTMs are specifically tailored for this class of software faults.
In this paper, we analyze an existing state-of-the-art SFI technique, namely G-SWFIT, in the context of a real-world fault-tolerant system for Air Traffic Control (ATC). The analysis highlights limitations of G-SWFIT regarding its ability to emulate the transient nature of Mandelbugs, because most of injected faults are activated in the early phase of execution, and they deterministically affect process replicas in the system. We also notice that G-SWFIT leaves untested the 35% of states of the considered system. Moreover, by means of an experiment, we show how emulation of Mandelbugs is useful to improve SFI. In particular, we emulate concurrency faults, which are a critical sub-class of Mandelbugs, in a fully representative way. We show that proper fault triggering can increase the confidence in FTMs' testing, since it is possible to reduce the amount of untested states down to 5%.",2010,0, 4120,Software Fault Prediction Model Based on Adaptive Dynamical and Median Particle Swarm Optimization,"Software quality prediction can play a role of importance in software management, and thus improve the quality of software systems. By mining software with data mining technique, predictive models can be induced that give software managers the insights they need to tackle these quality problems in an efficient way. This paper deals with the adaptive dynamic and median particle swarm optimization (ADMPSO) based on the PSO classification technique. ADMPSO can act as a valid data mining technique to predict erroneous software modules. The predictive model in this paper extracts the relationship rules of software quality and metrics. Information entropy approach is applied to simplify the extraction rule set. The empirical result shows that this method set of rules can be streamlined and the forecast accuracy can be improved.",2010,0, 4121,Applications of Support Vector Machine and Unsupervised Learning for Predicting Maintainability Using Object-Oriented Metrics,"Importance of software maintainability is increasing leading to development of new sophisticated techniques. This paper presents the applications of support vector machine and unsupervised learning in software maintainability prediction using object-oriented metrics. In this paper, the software maintainability predictor is performed. The dependent variable was maintenance effort. The independent variables were five OO metrics decided by clustering technique. The results showed that the Mean Absolute Relative Error (MARE) was 0.218 of the predictor. Therefore, we found that SVM and clustering technique were useful in constructing software maintainability predictor. Novel predictor can be used in the similar software developed in the same environment.",2010,0, 4122,Wireless Intrusion Detection System Using a Lightweight Agent,"The exponential growth in wireless network faults, vulnerabilities, and attacks make the Wireless Local Area Network (WLAN) security management a challenging research area. Deficiencies of security methods like cryptography (e.g. WEP) and firewalls, causes the use of more complex security systems, such as Intrusion Detection Systems, to be crucial. In this paper, we present a hybrid wireless intrusion detection system (WIDS). To implement the WIDS, we designed a simple lightweight agent. The proposed agent detect the most destroying and serious attacks; Man-In-The-Middle and Denial-of-Service; with the minimum selected feature set.
To evaluate our proposed WIDS and its agent, we collect a complete data-set using open source attack generator softwares. Experimental results show that in comparison with similar systems, in addition of more simplicity, our WIDS provides high performance and precision.",2010,0, 4123,Exploiting Spectrum Usage Patterns for Efficient Spectrum Management in Cognitive Radio Networks,"A cognitive radio (CR) is very significant technology to use a spectrum dynamically in wireless communication networks. However, very little has been done on using the spectrum usage patterns to handle with the problem of spectrum allocation in dynamic spectrum access. We suggest a scheme by exploiting spectrum usage patterns for the efficient spectrum management and reduce the communication cost in cognitive radio networks (CRNs). We propose the following three factors into account: spectrum sensing scheme with a sleep mode, spectrum decision scheme with a probability of spectrum access and spectrum handoff scheme with back-off time. All factors make use of spectrum usage patterns based on the statistical information. The first factor reduces the number of spectrum sensing. The second increases the opportunity of spectrum access and the last decreases the number of spectrum handoff. First of all, our proposed spectrum management scheme considers the analysis of the spectrum usage patterns and various factors obtained from the analysis is applied to lessen the communication cost in CRNs. The simulation results show that our proposed scheme improve the efficiency of spectrum management in dynamic spectrum access.",2010,0, 4124,Active Data Selection for Sensor Networks with Faults and Changepoints,"We describe a Bayesian formalism for the intelligent selection of observations from sensor networks that may intermittently undergo faults or changepoints. Such active data selection is performed with the goal of taking as few observations as necessary in order to maintain a reasonable level of uncertainty about the variables of interest. The presence of faults/changepoints is not always obvious and therefore our algorithm must first detect their occurrence. Having done so, our selection of observations must be appropriately altered. Faults corrupt our observations, reducing their impact; changepoints (abrupt changes in the characteristics of data) may require the transition to an entirely different sampling schedule. Our solution is to employ a Gaussian process formalism that allows for sequential time-series prediction about variables of interest along with a decision theoretic approach to the problem of selecting observations.",2010,0, 4125,"A Quadratic, Complete, and Minimal Consistency Diagnosis Process for Firewall ACLs","Developing and managing firewall Access Control Lists (ACLs) are hard, time-consuming, and error-prone tasks for a variety of reasons. Complexity of networks is constantly increasing, as it is the size of firewall ACLs. Networks have different access control requirements which must be translated by a network administrator into firewall ACLs. During this task, inconsistent rules can be introduced in the ACL. Furthermore, each time a rule is modified (e.g. updated, corrected when a fault is found, etc.) a new inconsistency with other rules can be introduced. An inconsistent firewall ACL implies, in general, a design or development fault, and indicates that the firewall is accepting traffic that should be denied or vice versa. 
In this paper we propose a complete and minimal consistency diagnosis process which has worst-case quadratic time complexity with the number of rules in a set of inconsistent rules. There are other proposals of consistency diagnosis algorithms. However they have different problems which can prevent their use with big, real-life, ACLs: on the one hand, the minimal ones have exponential worst-case time complexity; on the other hand, the polynomial ones are not minimal.",2010,0, 4126,In Situ Software Visualisation,"Software engineers need to design, implement, comprehend and maintain large and complex software systems. Awareness of information about the properties and state of individual artifacts, and the process being enacted to produce them, can make these activities less error-prone and more efficient. In this paper we advocate the use of code colouring to augment development environments with rich information overlays. These in situ visualisations are delivered within the existing IDE interface and deliver valuable information with minimal overhead. We present CODERCHROME, a code colouring plug-in for Eclipse, and describe how it can be used to support and enhance software engineering activities.",2010,0, 4127,Managing Structure-Related Software Project Risk: A New Role for Project Governance,"This paper extends recent research on the risk implications of software project organization structures by considering how structure-related risk might be managed. Projects, and other organizations involved in projects, are usually structured according to common forms. These organizational entities interact with each other, creating an environment in which risks relating to their structural forms can impact the project and its performance. This source of risk has previously been overlooked in software project research. The nature of the phenomenon is examined and an approach to managing structure-related risk is proposed, responsibility for which is assigned as a new role for project governance. This assignment is necessary because, due to the structural and relational nature of these risks, the project is poorly placed to manage such threats. The paper argues that risk management practices need to be augmented with additional analyses to identify, analyze and assess structural risks to improve project outcomes and the delivery of quality software. The argument is illustrated and initially validated with two project case studies. Implications for research and practice are drawn and directions for future research are suggested, including extending the theory to apply to other organization structures.",2010,0, 4128,Identification and Analysis of Skype Peer-to-Peer Traffic,"More and more applications are adopting peer-to-peer (P2P) technology. Skype is a P2P based, popular VoIP software. The software works almost seamlessly across Network Address Translations (NATs) and firewalls and has better voice quality than most IM applications. The communication protocol and source code of Skype are undisclosed. It uses high strength encryption and random port number selection, which render the traditional flow identification solutions invalid. In this paper, we first obtain the Skype clients and super nodes by analyzing the process of login and calling in different network environments. Then we propose a method to identify Skype traffic based on Skype nodes and flow features. Our proposed method makes the previously hard-to-detect Skype traffic, especially voice service traffic, much easier to identify. 
We design an identification system utilizing the proposed method and implement the system in a LAN network. We also successfully identified Skype traffic in one of the largest Internet Providers over a period of 93 hours, during which over 30TB data were transmitted. Through experiments, we show that our proposed approach and implementations can indeed identify Skype traffic with higher accuracy and effectiveness.",2010,0, 4129,Towards Fully Automated Test Management for Large Complex Systems,"Development of large and complex software intensive systems with continuous builds typically generates large volumes of information with complex patterns and relations. Systematic and automated approaches are needed for efficient handling of such large quantities of data in a comprehensible way. In this paper we present an approach and tool enabling autonomous behavior in an automated test management tool to gain efficiency in concurrent software development and test. By capturing the required quality criteria in the test specifications and automating the test execution, test management can potentially be performed to a great extent without manual intervention. This work contributes towards a more autonomous behavior within a distributed remote test strategy based on metrics for decision making in automated testing. These metrics optimize management of fault corrections and retest, giving consideration to the impact of the identified weaknesses, such as fault-prone areas in software.",2010,0, 4130,Searching for a Needle in a Haystack: Predicting Security Vulnerabilities for Windows Vista,"Many factors are believed to increase the vulnerability of software system; for example, the more widely deployed or popular is a software system the more likely it is to be attacked. Early identification of defects has been a widely investigated topic in software engineering research. Early identification of software vulnerabilities can help mitigate these attacks to a large degree by focusing better security verification efforts in these components. Predicting vulnerabilities is complicated by the fact that vulnerabilities are, most often, few in number and introduce significant bias by creating a sparse dataset in the population. As a result, vulnerability prediction can be thought of as proverbially searching for a needle in a haystack. In this paper, we present a large-scale empirical study on Windows Vista, where we empirically evaluate the efficacy of classical metrics like complexity, churn, coverage, dependency measures, and organizational structure of the company to predict vulnerabilities and assess how well these software measures correlate with vulnerabilities. We observed in our experiments that classical software measures predict vulnerabilities with a high precision but low recall values. The actual dependencies, however, predict vulnerabilities with a lower precision but substantially higher recall.",2010,0, 4131,Fault Detection Likelihood of Test Sequence Length,"Testing of graphical user interfaces is important due to its potential to reveal faults in operation and performance of the system under consideration. Most existing test approaches generate test cases as sequences of events of different length. The cost of the test process depends on the number and total length of those test sequences. One of the problems to be encountered is the determination of the test sequence length. Widely accepted hypothesis is that the longer the test sequences, the higher the chances to detect faults.
However, there is no evidence that an increase of the test sequence length really affect the fault detection. This paper introduces a reliability theoretical approach to analyze the problem in the light of real-life case studies. Based on a reliability growth model the expected number of additional faults is predicted that will be detected when increasing the length of test sequences.",2010,0, 4132,A Counter-Example Testing Approach for Orchestrated Services,"Service oriented computing is based on a typical combination of features such as very late binding, run-time integration of software elements owned and managed by third parties, run-time changes. These characteristics generally make difficult both static and dynamic verification capabilities of service-centric systems. In this domain verification and testing research communities have to face new issues and revise existing solutions; possibly profiting of the new opportunities that the new paradigm makes available. In this paper, focusing on service orchestrations, we propose an approach to automatic test case generation aiming in particular at checking the behaviour of services participating in a given orchestration. The approach exploits the availability of a runnable model (the BPEL specification) and uses model checking techniques to derive test cases suitable to detect possible integration problems. The approach has been implemented in a plug-in for the Eclipse platform already released for public usage. In this way BPEL developers can easily derive, using a single environment, test suites for each participant service they would like to compose.",2010,0, 4133,Satisfying Test Preconditions through Guided Object Selection,"A random testing strategy can be effective at finding faults, but may leave some routines entirely untested if it never gets to call them on objects satisfying their preconditions. This limitation is particularly frustrating if the object pool does contain some precondition-satisfying objects but the strategy, which selects objects at random, does not use them. The extension of random testing described in this article addresses the problem. Experimentally, the resulting strategy succeeds in testing 56% of the routines that the pure random strategy missed; it tests hard routines 3.6 times more often; although it misses some of the faults detected by the original strategy, it finds 9.5% more faults overall; and it causes negligible overhead.",2010,0, 4134,"We're Finding Most of the Bugs, but What are We Missing?","We compare two types of model that have been used to predict software fault-proneness in the next release of a software system. Classification models make a binary prediction that a software entity such as a file or module is likely to be either faulty or not faulty in the next release. Ranking models order the entities according to their predicted number of faults. They are generally used to establish a priority for more intensive testing of the entities that occur early in the ranking. We investigate ways of assessing both classification models and ranking models, and the extent to which metrics appropriate for one type of model are also appropriate for the other. Previous work has shown that ranking models are capable of identifying relatively small sets of files that contain 75-95% of the faults detected in the next release of large legacy systems. 
In our studies of the rankings produced by these models, the faults not contained in the predicted most fault-prone files are nearly always distributed across many of the remaining files; i.e., a single file that is in the lower portion of the ranking virtually never contains a large number of faults.",2010,0, 4135,An Application of Six Sigma and Simulation in Software Testing Risk Assessment,"The conventional approach to Risk Assessment in Software Testing is based on analytic models and statistical analysis. The analytic models are static, so they don't account for the inherent variability and uncertainty of the testing process, which is an apparent deficiency. This paper presents an application of Six Sigma and Simulation in Software Testing. DMAIC and simulation are applied to a testing process to assess and mitigate the risk to deliver the product on time, achieving the quality goals. DMAIC is used to improve the process and achieve the required (higher) capability. Simulation is used to predict the quality (reliability) and considers the uncertainty and variability, which, in comparison with the analytic models, more accurately models the testing process. The presented experiments are applied to a real project using published data. The results are satisfactorily verified. This enhanced approach is compliant with CMMI and provides for substantial Software Testing performance-driven improvements.",2010,0, 4136,An Empirical Evaluation of Regression Testing Based on Fix-Cache Recommendations,"Background: The fix-cache approach to regression test selection was proposed to identify the most fault-prone files and corresponding test cases through analysis of fixed defect reports. Aim: The study aims at evaluating the efficiency of this approach, compared to the previous regression test selection strategy in a major corporation, developing embedded systems. Method: We launched a post-hoc case study applying the fix-cache selection method during six iterations of development of a multi-million LOC product. The test case execution was monitored through the test management and defect reporting systems of the company. Results: From the observations, we conclude that the fix-cache method is more efficient in four iterations. The difference is statistically significant at α = 0.05. Conclusions: The new method is significantly more efficient in our case study. The study will be replicated in an environment with better control of the test execution.",2010,0, 4137,(Un-)Covering Equivalent Mutants,"Mutation testing measures the adequacy of a test suite by seeding artificial defects (mutations) into a program. If a test suite fails to detect a mutation, it may also fail to detect real defects, and hence should be improved. However, there also are mutations which keep the program semantics unchanged and thus cannot be detected by any test suite. Such equivalent mutants must be weeded out manually, which is a tedious task. In this paper, we examine whether changes in coverage can be used to detect non-equivalent mutants: If a mutant changes the coverage of a run, it is more likely to be non-equivalent.
In a sample of 140 manually classified mutations of seven Java programs with 5,000 to 100,000 lines of code, we found that: (a) the problem is serious and widespread: about 45% of all undetected mutants turned out to be equivalent; (b) manual classification takes time: about 15 minutes per mutation; (c) coverage is a simple, efficient, and effective means to identify equivalent mutants, with a classification precision of 75% and a recall of 56%; and (d) coverage as an equivalence detector is superior to the state of the art, in particular violations of dynamic invariants. Our detectors have been released as part of the open source JAVALANCHE framework; the data set is publicly available for replication and extension of experiments.",2010,0, 4138,MuTMuT: Efficient Exploration for Mutation Testing of Multithreaded Code,"Mutation testing is a method for measuring the quality of test suites. Given a system under test and a test suite, mutations are systematically inserted into the system, and the test suite is executed to determine which mutants it detects. A major cost of mutation testing is the time required to execute the test suite on all the mutants. This cost is even greater when the system under test is multithreaded: not only are test cases from the test suite executed on many mutants, but also each test case is executed for multiple possible thread schedules. We introduce a general framework that can reduce the time for mutation testing of multithreaded code. We present four techniques within the general framework and implement two of them in a tool called MuTMuT. We evaluate MuTMuT on eight multithreaded programs. The results show that MuTMuT reduces the time for mutation testing, substantially over a straightforward mutant execution and up to 77% with the advanced technique over the basic technique.",2010,0, 4139,Automated Test Data Generation on the Analyses of Feature Models: A Metamorphic Testing Approach,"A Feature Model (FM) is a compact representation of all the products of a software product line. The automated extraction of information from FMs is a thriving research topic involving a number of analysis operations, algorithms, paradigms and tools. Implementing these operations is far from trivial and easily leads to errors and defects in analysis solutions. Current testing methods in this context mainly rely on the ability of the tester to decide whether the output of an analysis is correct. However, this is acknowledged to be time-consuming, error-prone and in most cases infeasible due to the combinatorial complexity of the analyses. In this paper, we present a set of relations (so-called metamorphic relations) between input FMs and their set of products and a test data generator relying on them. Given an FM and its known set of products, a set of neighbour FMs together with their corresponding set of products are automatically generated and used for testing different analyses. Complex FMs representing millions of products can be efficiently created applying this process iteratively. The evaluation of our approach using mutation testing as well as real faults and tools reveals that most faults can be automatically detected within a few seconds.",2010,0, 4140,Java code reviewer for verifying object-oriented design in class diagrams,"Verification and Validation (V&V) processes play an important role in quality control. The earlier defects are detected, the less rework is incurred.
According to findings from the literature, most defects occur during the design and coding phases. Automatic detection of these defects would alleviate the problem. This research therefore developed an automatic code reviewer to examine Java source files against the object-oriented design described in UML class diagrams. Prior to the review process, the class diagrams are converted into XML format so that the information on classes and relations can be extracted and used to generate the review checklists. The code reviewer then follows the checklist items to verify whether all defined classes exist in the code, whether the class structures with encapsulated methods and parameters are correctly implemented, and whether all relations of associated classes are valid. Finally, a summary report is generated to report the results.",2010,0, 4141,A software reliability prediction model based on benchmark measurement,Software reliability is a very important and active research field in software engineering. There have been one hundred prediction models since the first prediction model was published. But most of them are adopted after software testing and only a few of them can be used before testing. The paper proposes the idea of predicting software reliability by making use of measurement data from similar projects based on a software process benchmark. Its prediction uses benchmark measurements and software process data collected before software testing.,2010,0, 4142,Predict protein subnuclear location with ensemble adaboost classifier,"Protein function prediction with computational methods is becoming an important research field in protein science and bioinformatics. In eukaryotic cells, knowledge of subnuclear localization is essential for understanding the life function of the nucleus. In this study, a novel ensemble classifier is designed, incorporating three AdaBoost classifiers, to predict protein subnuclear localization. The base classifier algorithm in the AdaBoost classifiers is fuzzy K nearest neighbors (FKNN). Three parts of amino acid pair compositions with different spacings are computed to construct the feature vector representing a protein sample. Jackknife cross-validation tests are used to evaluate the performance of the proposed method on two benchmark datasets. Compared with prior work, the promising results obtained indicate that the proposed method is more effective and practical. The current approach may also be used to improve the prediction quality of other protein attributes. The software, written in Matlab, is freely available by contacting the corresponding author.",2010,0, 4143,A Profile Approach to Using UML Models for Rich Form Generation,"Model Driven Development (MDD) has provided a new way of engineering today's rapidly changing requirements into the implementation. However, the development of the user interface (UI) part of an application has not benefited much from MDD, although today's UIs are complex software components and they play an essential role in the usability of an application. As one of the most common UI examples, consider view forms that are used for collecting data from the user. View forms are usually generated with a lot of manual effort after the implementation. For example, in the case of Java 2 Enterprise Edition (Java EE) web applications, developers create all view forms manually by referring to entity beans to determine the content of forms, but such manual creation is tedious and error-prone and makes system maintenance difficult.
One promise of MDD is that we can generate code from UML models. Existing design models in MDD, however, cannot provide all class attributes that are required to generate the practical code of UI fragments. In this paper, we propose a UML profile for view form generation as an extension of the object relational mapping (ORM) profile. A profile form of the Hibernate Validator is also introduced to implement practical view form generation that includes user input validation.",2010,0, 4144,Software Maintenance Prediction Using Weighted Scenarios: An Architecture Perspective,"Software maintenance is considered one of the most important issues in software engineering, with serious implications in terms of cost and effort. It consumes an enormous amount of an organization's overall resources. On the other hand, the software architecture of an application has a considerable effect on quality factors such as maintainability, performance, reliability and flexibility. Using software architecture for the quantification of a certain quality factor will help organizations to plan resources accordingly. This paper is an attempt to predict software maintenance effort at the architecture level. The method takes requirements, domain knowledge and general software engineering knowledge as input in order to prescribe the application architecture. Once the application architecture is prescribed, weighted scenarios and certain factors (i.e. system novelty, turnover and maintenance staff ability, documentation quality, testing quality etc.) that affect software maintenance are applied to the application architecture to quantify the maintenance effort. The technique is illustrated and evaluated using a web content extraction application architecture.",2010,0, 4145,Improvement and its Evaluation of Worker's Motion Monitor System in Automobile Assembly Factory,"Warranty of quality in industrial products depends on confirming everything in their production process. In some cases in the assembly process, it is very difficult to confirm whether the process itself satisfies the regulation or not. For example, when fixing a part with several bolts, the screwing order is often regulated to guarantee the accuracy of attaching the part. However, once the part is fixed, if the procedure was violated but the screws are sufficiently fastened, the accuracy is not guaranteed and the violation cannot be detected. We are therefore developing a system that monitors workers' motions during routine work in the factory, employing terrestrial magnetism and acceleration sensors. In this system, the sensors are attached to tools, and the system judges whether the work is correctly done or not. In this paper, we describe a judgment method that uses the Local Outlier Factor (LOF), and we show some evaluations in an automobile assembly factory.",2010,0, 4146,NLAR: A New Approach to AQM,"The traditional adaptive Random Early Detection (RED) algorithm allows the network to achieve high throughput and low average delay. However, it uses a linear dropping probability function, which causes high jitter in the core router. To overcome the drawbacks of queue jitter in the traditional adaptive RED algorithm, this paper proposes a new Non-Linear Adaptive RED (NLAR) approach based on the Active Queue Management (AQM) scheme, which provides a non-linear adaptation of the dropping probability function of the adaptive RED. NLAR enables the gradient of the dropping probability to vary with the deviation between the average queue length and the target queue length, which contributes to a more stable algorithm.
Empirical simulations with various data analyses have demonstrated that the NLAR algorithm outperforms the adaptive RED algorithm in most scenarios.",2010,0, 4147,Intelligent Agents for Fault Tolerance: From Multi-agent Simulation to Cluster-Based Implementation,"Recent research in multi-agent systems incorporates fault tolerance concepts, but does not explore the extension and implementation of such ideas for large scale parallel computing systems. The work reported in this paper investigates a swarm array computing approach, namely 'Intelligent Agents'. A task to be executed on a parallel computing system is decomposed into sub-tasks and mapped onto agents that traverse an abstracted hardware layer. The agents intercommunicate across processors to share information in the event of a predicted core/processor failure and for successfully completing the task. The feasibility of the approach is validated by simulations on an FPGA using a multi-agent simulator, and by implementation of a parallel reduction algorithm on a computer cluster using the Message Passing Interface.",2010,0, 4148,Evaluation of Error Control Mechanisms Based on System Throughput and Video Playable Frame Rate on Wireless Channel,"Error control mechanisms are widely used in video communications over wireless channels. However, while improving end-to-end video quality, they consume extra bandwidth and reduce effective system throughput. In this paper, considering system throughput and playable frame rate as evaluation metrics, we investigate the efficiency of different error control mechanisms. We develop an analytical throughput model to express the effective system throughput of different error control mechanisms under different conditions. For a given packet loss probability, both the optimal retransmission times in adaptive ARQ and the optimal number of redundant packets in adaptive FEC for each type of frame are derived by keeping the system throughput at a constant value. Also, end-to-end playable frame rates for the two schemes are computed. We then conclude which error control scheme is most suitable for which application condition. Finally, empirical simulation results with various data analyses are presented.",2010,0, 4149,Reverse engineering legacy code for finite element field computation in magnetics,"The development of code for finite element-based field computation has been going on apace since the 1970s, yielding code that was not put through the formal software lifecycle. As a result, today we have legacy code running into millions of lines, implemented without planning and not using proper state-of-the-art software design tools. It is necessary to redo this code to exploit object oriented facilities and make corrections or run on the web in Java. Object oriented code's principal advantage is reusability. It is ideal for describing autonomous agents so that values inside a method are private unless otherwise provided; that is, encapsulation makes programming neat and less error-prone in unexpected situations. Recent advances in software make such reverse engineering/reengineering of this code into object oriented form possible.
In this paper we reverse engineer FORTRAN legacy code, written decades ago for the computation of magnetic fields by the finite element method, into the modern languages Java and C++.",2010,0, 4150,Proposal of a language for describing differentiable sizing models for electromagnetic devices design,"Sizing by optimization usually implies a model definition able to link the desired objective and constraint functions with the design parameters. When gradient-based optimization algorithms are used, highly accurate derivatives are required. Without special software solutions, this procedure easily becomes time-consuming or error-prone. In this paper, our goal is to first observe the modeling needs of electromagnetic devices in the optimization context. A new modeling language is then proposed in order to satisfy and formalize these needs. The concepts are validated by the modeling and optimization procedures of an electromagnetic actuator.",2010,0, 4151,"System and software assurance rationalizing governance, engineering practice, and engineering economics","This paper discusses rationalizing governance, engineering practice, and engineering economics to produce conformant systems that meet their quality attribute targets for system and software assurance in an optimal, cost-effective fashion. It begins with a description of the governance landscape and addresses defining and trading off system quality characteristics, models for assessing the cost and value of software assurance, addressing multi-dimensional risk, and the delivery of value to the organization, its customers, and its stakeholders.",2010,0, 4152,The Effects of Time Constraints on Test Case Prioritization: A Series of Controlled Experiments,"Regression testing is an expensive process used to validate modified software. Test case prioritization techniques improve the cost-effectiveness of regression testing by ordering test cases such that those that are more important are run earlier in the testing process. Many prioritization techniques have been proposed and evidence shows that they can be beneficial. It has been suggested, however, that the time constraints that can be imposed on regression testing by various software development processes can strongly affect the behavior of prioritization techniques. If this is correct, a better understanding of the effects of time constraints could lead to improved prioritization techniques and improved maintenance and testing processes. We therefore conducted a series of experiments to assess the effects of time constraints on the costs and benefits of prioritization techniques. Our first experiment manipulates time constraint levels and shows that time constraints do play a significant role in determining both the cost-effectiveness of prioritization and the relative cost-benefit trade-offs among techniques. Our second experiment replicates the first experiment, controlling for several threats to validity including numbers of faults present, and shows that the results generalize to this wider context. Our third experiment manipulates the number of faults present in programs to examine the effects of faultiness levels on prioritization and shows that faultiness level affects the relative cost-effectiveness of prioritization techniques. Taken together, these results have several implications for test engineers wishing to cost-effectively regression test their software systems.
These include suggestions about when and when not to prioritize, what techniques to employ, and how differences in testing processes may relate to prioritization cost-effectiveness.",2010,0, 4153,Reliability Analysis of Embedded Applications in Non-Uniform Fault Tolerant Processors,"Soft error analysis has been greatly aided by the concepts of Architectural Vulnerability Factor (AVF) and Architecturally Correct Execution (ACE). The AVF of a processor is defined as the probability that a bit flip in the processor architecture will result in a visible error in the final output of a program. In this work, we exploit the techniques of AVF analysis to introduce a software-level vulnerability analysis. This metric provides insight into the vulnerability of instructions and software to hardware faults using a micro-architectural fault injection method. The proposed metric can be used to make judgments about the reliability of different programs on different processors with regard to architectural and compiler guidelines for improving processor reliability.",2010,0, 4154,A fault section detection method using ZCT when a single phase to ground fault in ungrounded distribution system,"Detection of a single line to ground fault (SLG) in an ungrounded network is very difficult because the fault current magnitude is very small. The fault current is generated by a charging current between the distribution line and ground. As it is very small, it is not used for fault detection in the case of an SLG. Thus, an SLG has normally been detected by the switching sequence method, which makes customers experience blackouts. A new fault detection algorithm based on comparison of the zero-sequence current and line-to-line voltage phases is proposed. The algorithm uses ZCTs installed in the ungrounded distribution network. The algorithm proposed in this paper has the advantage that it can detect the faulted phase and distinguish the faulted section as well. Simulation tests of the proposed algorithm were performed using Matlab Simulink, and the results are presented in the paper.",2010,0, 4155,Adaptive random testing of mobile application,"Mobile applications are becoming more and more powerful yet also more complex. While mobile application users expect the application to be reliable and secure, the complexity of the mobile application makes it prone to faults. Mobile application engineers and testers use testing techniques to ensure the quality of mobile applications. However, the testing of mobile applications is time-consuming and hard to automate. In this paper, we model the mobile application from a black box view and propose a distance metric for the test cases of mobile software. We further propose an ART test case generation technique for mobile applications. Our experiment shows that our ART tool can reduce both the number of test cases and the time needed to expose the first fault when compared with the random technique.",2010,0, 4156,Evaluation of software testing process based on Bayesian networks,"In this paper, we introduce a Bayesian network (BN) approach as a probabilistic evaluation method for software quality assurance. We then present a method for transforming Fault Tree Analysis (FTA) into Bayesian networks and build an evaluation model based on Bayesian networks. Bayesian networks can perform forward risk prediction and backward diagnosis analysis by deduction on the model.
Finally, we illustrate the rationality and validity of the Bayesian networks through an example of evaluating a software testing process.",2010,0, 4157,Defect association and complexity prediction by mining association and clustering rules,"The number of defects remaining in a system provides an insight into the quality of the system. Software defect prediction focuses on classifying the modules of a system into fault-prone and non-fault-prone modules. This paper focuses on predicting the fault-prone modules as well as identifying the types of defects that occur in the fault-prone modules. Software defect prediction is combined with association rule mining to determine the associations that occur among the detected defects and the effort required for isolating and correcting these defects. Clustering rules are used to classify the defects into groups indicating their complexity: SIMPLE, MODERATE and COMPLEX. Moreover, the defects are used to predict the effect on the project schedules and the nature of risk concerning the completion of such projects.",2010,0, 4158,Keystroke identification with a genetic fuzzy classifier,"This paper proposes the use of fuzzy if-then rules for keystroke identification. The proposed methodology modifies Ishibuchi's genetic fuzzy classifier to handle high dimensional problems such as keystroke identification. The high-dimensional nature of the problem increases the number of rules with low fitness. To decrease them, rule initialization and coding are modified. Furthermore, a new heuristic method is developed for improving the population quality while running the GA. Experimental results demonstrate that we can achieve better running time, interpretability and accuracy with these modifications.",2010,0, 4159,The application of multi-function interface MVB NIC in distributed locomotive fault detecting and recording system,"The locomotive condition monitoring and fault diagnosis system is an important component of a modern locomotive; it needs a reliable, high-speed communication network to ensure the system's reliable operation in the complex locomotive environment. The Controller Area Network (CAN) used in the existing distributed locomotive fault detecting and recording system is not suitable as a vehicle bus, so this paper puts forward a scheme using the Multifunction Vehicle Bus (MVB). Firstly, the paper describes the alteration of the system structure, the operating principle and the key design concepts in detail; next, it designs the multi-function interface MVB NIC using SOPC (system on a programmable chip) technology and gives the realization of the hardware and software; ultimately, network tests in the lab verified the correctness and feasibility of the design. The improved network has a longer transmission distance, higher rates, better reliability and better real-time performance.",2010,0, 4160,Semantic consistency checking for model transformations,"Model transformation, as a key technique of MDA, is error-prone because of conceptual flaws in design and man-made errors in manual transformation rules. So the consistency checking of model transformations is of great importance for MDA. In this paper, a framework of semantic consistency checking for model transformation is proposed and discussed. In this framework, a graph representation is required to describe model languages, model transformation rules, and source code.
Then several semantic properties are selected to be studied, and algorithms based on critical pairs are given to check whether these properties are preserved by model transformations. Finally, a case study is performed to demonstrate the feasibility.",2010,0, 4161,Uplink array concept demonstration with the EPOXI spacecraft,"Uplink array technology is currently being developed for NASA's Deep Space Network (DSN), to provide greater range and data throughput for future NASA missions, including manned missions to Mars and exploratory missions to the outer planets, the Kuiper belt, and beyond. The DSN uplink arrays employ N microwave antennas transmitting at X-band to produce signals that add coherently at the spacecraft, thereby providing a power gain of N² over a single antenna. This gain can be traded off directly for an N² higher data rate at a given distance such as Mars, providing, for example, HD quality video broadcast from earth to a future manned mission, or it can provide a given data rate for commands and software uploads at a distance N times greater than possible with a single antenna. The uplink arraying concept has been recently demonstrated using the three operational 34-meter antennas of the Apollo complex at Goldstone, CA, which transmitted arrayed signals to the EPOXI spacecraft. Both two-element and three-element uplink arrays were configured, and the theoretical array gains of 6 dB and 9.5 dB, respectively, were demonstrated experimentally. This required initial phasing of the array elements, the generation of accurate frequency predicts to maintain phase from each antenna despite relative velocity components due to earth rotation and spacecraft trajectory, and monitoring of the ground system phase for possible drifts caused by thermal effects over the 16 km fiber-optic signal distribution network. This paper provides a description of the equipment and techniques used to demonstrate the uplink arraying concept in a relevant operational environment. Data collected from the EPOXI spacecraft were analyzed to verify array calibration, array gain, and system stability over the entire five hour duration of this experiment.",2010,0, 4162,A QoS prediction approach based on improved collaborative filtering,"Consumers need to make predictions about the quality of unused Web services before selecting from a large number of services. Usually, this prediction is made based on other consumers' experiences. Being aware of the similarity of consumers' assessments, this paper proposes a QoS prediction approach. This approach calculates the similarity among consumers, and then uses an improved collaborative filtering technique to predict the QoS of the unused Web services. Experimental results show that with this approach the precision of QoS prediction for Web services is higher than with other prediction approaches, and that it has good feasibility and effectiveness.",2010,0, 4163,An improved software reliability model incorporating debugging time lag,"Software reliability growth models (SRGMs) based on the non-homogeneous Poisson process (NHPP) are quite successful tools that have been proposed to assess software reliability. Most NHPP-SRGMs assume that detected faults are immediately corrected, but this is not the case in real environments. In this paper, incorporating testing coverage and considering imperfect debugging and the fault correction time lag, we propose an improved NHPP-SRGM.
Experimental results show that the proposed model fits the failure data quite well and has a fairly accurate prediction capability.",2010,0, 4164,An efficient experimental approach for the uncertainty estimation of QoS parameters in communication networks,"In communication network setup and tuning activities, a key issue is to assess the impact of a new service running on the network on the overall Quality of Service. To this aim, suitable figures of merit and test beds have to be adopted, and time-consuming measurement campaigns generally have to be carried out. A preliminary issue to be addressed is the metrological characterization of the test set-up, aimed at providing a confidence level and a variability interval for the measurement results. This allows identifying and evaluating the intrinsic uncertainty to be considered in the experimental measurement of Quality of Service parameters. This paper proposes an original experimental approach suitable for the purpose. The uncertainty components involved in the measurement process are identified and experimentally quantified by means of effective statistical analyses. The proposed approach takes into account the general characteristics of the network topology, the number and type of devices involved, the characteristics of the current services operating on the network and of the new services to be implemented, as well as the intrinsic uncertainties related to the set-up and to the measurement method. As an application example, the proposed approach has been applied to the measurement of packet jitter on a test bed involving a real computer network spread over several kilometers. The obtained results show the effectiveness of the proposal.",2010,0, 4165,Self adaptive BCI as service-oriented information system for patients with communication disabilities,"A new service-oriented information architecture is presented that can help the communication-disabled to socialize in their private and public environment. The service is designed for physically handicapped people who cannot communicate without expensive custom-made tools. Statistics show that, e.g., in Belgium 1:500 persons suffer from some form of motor or speech disability, mostly due to stroke (aphasia patients). Patients with severe motor or speech disabilities need expensive tailor-made devices and individualized protocols to communicate. About 1:6000 have problems with information exchange in their daily practice, such as patients with severe autistic disorders, and Amyotrophic Lateral Sclerosis (ALS), Locked-in Syndrome (LIS) and Speech and Language Impaired (SLI) patients, and their communication is often limited to caretakers and family, because the interaction with other people through electronic systems often fails. In fact, all these disabled people yearn to participate in our society. Increasing the number of adapted devices and personal caretakers has huge consequences and is mostly unfeasible due to firm limits on specialists, infrastructure and budget. The quality of life can be improved by a service-oriented information architecture that supports an on-line Mind Speller, i.e., a Brain-Computer Interface (BCI) that enables subjects to spell text on a computer screen, and potentially have it voiced, without muscular activity, to assist or enable patients to communicate, but also to provide speech revalidation, as in autism spectrum disorder patients. The Mind Speller operates non-invasively by detecting P300 signals in their EEG.
With the support of predictive text algorithms, the mind-spelled characters form words, sentences, and even stories (storytelling), enabling the communication-disabled to participate both in the physical environment and, via the Internet, in the global digital world.",2010,0, 4166,A runtime approach for software fault analysis based on interpolation,"In an application system, obtaining information about the system at runtime and analyzing it are important for system adjustment. Many runtime metrics can be collected from software systems, and some statistical relationships exist among these metrics. Extracting the information of these metrics from the monitoring data and then analyzing the relationships between these metrics is an effective way to detect failures and diagnose faults. This paper proposes a fault analysis approach for the system at runtime, which obtains information about the system by monitoring. We demonstrate this approach in a case study, which shows that our approach is effective and is beneficial for finding the relationship between the fault and the component.",2010,0, 4167,Trustable web services with dynamic confidence time interval,"One part of the trustworthiness of web service applications over the network is the confidence in services that providers can guarantee to their customers. Therefore, after the development process, web services developers must be sure that the delivered services are qualified for availability and reliability during their execution. However, there is a critical problem when errors occur during the execution time of the service agent, such as the infinite loop problem in the service process. This unexpected problem of the web services software can cause critical damage in various aspects, especially the loss of human life. Although various methods have been proposed to protect against unexpected errors, most of them are verification and validation procedures during the development process. Nevertheless, these methods cannot completely solve the infinite loop problem since this problem is usually caused by unexpected values obtained from the execution of request and response processes. Therefore, this paper proposes a system architecture that includes a protection mechanism to detect and protect against the unbounded loop problem of web services when the requested services of each requester are in a dynamic situation. The proposed solution can guarantee that users will be protected from critical losses caused by an unexpected infinite loop in the web services system. Consequently, all service agents with dynamic loop control conditions can be trusted.",2010,0, 4168,Interaction Testing: From Pairwise to Variable Strength Interaction,"Although desirable as an important activity for quality assurance and enhancing reliability, complete and exhaustive software testing is practically impossible due to resource as well as timing constraints. While earlier work has indicated that uniform pairwise testing (i.e. based on 2-way interaction of variables) can be effective in detecting most faults in a typical software system, a counter-argument suggests that such a conclusion cannot be generalized to all software system faults. In some systems, faults may also be non-uniform and caused by more than two parameters.
Considering these issues, this paper explores the issues pertaining to t-way testing from pairwise to variable strength interaction in order to highlight the state of the art as well as the current state of practice.",2010,0, 4169,Predicting Software Reliability with Support Vector Machines,"The support vector machine (SVM) is a new method based on statistical learning theory. It has been successfully used to solve nonlinear regression and time series problems. However, SVM has rarely been applied to software reliability prediction. In this study, an SVM-based model for software reliability forecasting is proposed. In addition, the parameters of the SVM are determined by a Genetic Algorithm (GA). Empirical results show that the proposed model is more precise in its reliability prediction and is less dependent on the size of the failure data compared with the other forecasting models.",2010,0, 4170,Enhance Fault Localization Using a 3D Surface Representation,"Debugging is a difficult and time-consuming task in software engineering. To locate faults in programs, a statistical fault localization technique makes use of program execution statistics and employs a suspiciousness function to assess the relation between program elements and faults. In this paper, we develop a novel localization technique by using a 3D surface to visualize previous suspiciousness functions and using fault patterns to enhance such a 3D surface. By clustering realistic faults, we determine various fault patterns and use 3D points to represent them. We employ a spline method to construct a 3D surface from those 3D points and build our suspiciousness function. Empirical evaluation on a common data set, the Siemens suite, shows that our technique is more effective than four existing representative techniques.",2010,0, 4171,Study of ERP Test-Suite Reduction Based on Modified Condition/Decision Coverage,"Enterprise Resource Planning (ERP) systems represent a huge market in the commercial arena. Products from suppliers such as SAP, Oracle and, more recently, Microsoft dominate the software market. Testing in these projects is a significant effort but is hardly supported by methods and tools other than those provided by the suppliers themselves. Experience shows that testing in these projects is critical, but often neglected. Recent 'lessons learned' work by Paul Gerrard indicates that a benefit-, risk- and coverage-based test approach could significantly reduce the risk of failures. There is evidence that modified condition/decision coverage (MC/DC) is an effective verification method and can help to detect safety faults despite its high cost. In regression testing, it is quite costly to rerun all of the test cases in a test suite because new test cases are added to the test suite as the software evolves. Therefore, it is necessary to reduce the test suite to improve test efficiency and save test cost. Many existing test-suite reduction techniques are not effective at reducing MC/DC test suites. This paper proposes a new test-suite reduction technique for MC/DC: a bi-objective model that considers both the coverage degree of test cases for test requirements and the capability of test cases to reveal errors.
Our experimental results show that the technique both reduces the size of the test suite and better preserves the effectiveness of the test suite in revealing errors.",2010,0, 4172,An Ontology-Based Framework for Designing a Sensor Quality of Information Software Library,"Assessing the quality of sensor-originated information is key for the effective and predictable operation of sensor-enabled computerized applications. However, with increasing uncertainties due to alternative deployment scenarios, operational realities, and sensing resource use, it becomes very challenging to design replicable software solutions that can be easily reused on a number of occasions with minimal customization. Leveraging semantic sensor web technologies, this paper presents an ontology-based design framework for organizing a library of quality of information (QoI) analysis algorithms specific to a data source, and for interfacing to a library containing computational algorithms assessing quality of information. The ontology-based framework is broad enough to allow easy accommodation of new computational algorithms that domain experts may provide to the library as needed to reflect specific deployment, operational, and sensing realities.",2010,0, 4173,Detecting patterns and antipatterns in software using Prolog rules,"Program comprehension is a key prerequisite for the maintenance and analysis of legacy software systems. Knowing about the presence of design patterns or antipatterns in a software system can significantly improve program comprehension. Unfortunately, in many cases the usage of certain patterns is seldom explicitly described in the software documentation, while antipatterns are never described as such in the documentation. Since manual inspection of the code of large software systems is difficult, automatic or semi-automatic procedures for discovering patterns and antipatterns from source code can be very helpful. In this article we propose detection methods for a set of patterns and antipatterns, using a logic-based approach. We define, with the help of Prolog predicates, both structural and behavioural aspects of patterns and antipatterns. The detection results obtained for a number of test systems are also presented.",2010,0, 4174,Portable artificial nose system for assessing air quality in swine buildings,"To practice efficient air quality management in livestock facilities, a standardized measurement technology has long been requested to assess odor, whose results are acceptable to every party involved, i.e., the owner, the state and the public. This paper reports on a prototype of a portable electronic nose (e-nose) designed specially to assess malodors in the atmosphere of swine buildings in a pig farm. The briefcase-formed e-nose consists of eight chemical gas sensors that are sensitive to gases usually present in pig farms, such as ammonia, hydrogen sulfide and hydrocarbons. The system contains a gas flow controller, a measurement circuit and a data acquisition unit, all of which are automated and controlled by in-house software on a notebook PC via a USB port. We have tested the functionality of this e-nose in a pig farm under a real project aimed specifically at reducing the odor emission from swine buildings in the pig farm. The e-nose was used to assess the air quality inside sampled swine buildings.
Based on the results given in this paper, recommendations on an appropriate feeding menu, building cleaning schedule and emission control program have been made.",2010,0, 4175,Synthesizing simulators for model checking microcontroller binary code,"Model checking of binary code is recognized as a promising tool for the verification of embedded software. Our approach, which is implemented in the [MC]SQUARE model checker, uses tailored simulators to build state spaces for model checking. Previously, these simulators have been generated by hand in a time-consuming and error-prone process. This paper proposes a method for synthesizing these simulators from a description of the hardware in an architecture description language in order to tackle these drawbacks. The application of this approach to the Atmel ATmega16 microcontroller is detailed in a case study.",2010,0, 4176,SLA-Driven Dynamic Resource Management for Multi-tier Web Applications in a Cloud,"Current service-level agreements (SLAs) offered by cloud providers do not make guarantees about the response time of Web applications hosted on the cloud. Satisfying a maximum average response time guarantee for Web applications is difficult due to unpredictable traffic patterns. The complex nature of multi-tier Web applications increases the difficulty of identifying bottlenecks and resolving them automatically. It may be possible to minimize the probability that tiers (hosted on virtual machines) become bottlenecks by optimizing the placement of the virtual machines in a cloud. This research focuses on enabling clouds to offer multi-tier Web application owners maximum response time guarantees while minimizing resource utilization. We present our basic approach, preliminary experiments, and results on a EUCALYPTUS-based testbed cloud. Our preliminary results show that dynamic bottleneck detection and resolution for multi-tier Web applications hosted on the cloud will help to offer SLAs with response time guarantees.",2010,0, 4177,Region-Based Prefetch Techniques for Software Distributed Shared Memory Systems,"Although shared memory programming models show good programmability compared to message passing programming models, their implementation by page-based software distributed shared memory systems usually suffers from high memory consistency costs. The major part of these costs is inter-node data transfer for keeping virtual shared memory consistent. A good prefetch strategy can reduce this cost. We develop two prefetch techniques, TReP and HReP, which are based on the execution history of each parallel region. These techniques are evaluated using offline simulations with the NAS Parallel Benchmarks and the LINPACK benchmark. On average, TReP achieves an efficiency (ratio of pages prefetched that were subsequently accessed) of 96% and a coverage (ratio of access faults avoided by prefetches) of 65%. HReP achieves an efficiency of 91% but has a coverage of 79%. Treating the cost of an incorrectly prefetched page to be equivalent to that of a miss, these techniques have an effective page miss rate of 63% and 71% respectively. Additionally, these two techniques are compared with two well-known software distributed shared memory (sDSM) prefetch techniques, Adaptive++ and TODFCM. TReP effectively reduces the page miss rate by 53% and 34% more, and HReP effectively reduces the page miss rate by 62% and 43% more, compared to Adaptive++ and TODFCM respectively.
As with Adaptive++, these techniques also permit bulk prefetching for pages predicted using temporal locality, amortizing network communication costs and permitting bandwidth improvement from multi-rail network interfaces.",2010,0, 4178,Body sets and lines: A reliable representation of images,"This paper proposes a novel definition of color lines and sets, based on the dichromatic model for Lambertian objects. The ends of the body vectors are robustly detected, from the clearest to the darkest, through a multi-level 2D histogram analysis. Finally, instead of classically defining the topographic map along one sole luminance direction, our body lines are designed along each body vector. Compared to existing topographic maps, our method is more compact while better preserving the color quality. Furthermore, it is faster to compute.",2010,0, 4179,An effective nonparametric quickest detection procedure based on Q-Q distance,"Quickest detection schemes are geared toward detecting a change in the state of a data stream or a real-time process. Classical quickest detection schemes invariably assume knowledge of the pre-change and post-change distributions that may not be available in many applications. In this paper, we present a distribution-free nonparametric quickest detection procedure based on a novel distance measure, referred to as the Q-Q distance calculated from the Q-Q plot, for detection of distribution changes. Through experimental study, we show that the Q-Q distance-based detection procedure presents comparable or better performance compared to classical parametric and other nonparametric procedures. The proposed procedure is most effective when detecting small changes.",2010,0, 4180,Automatic synthesis of OSCI TLM-2.0 models into RTL bus-based IPs,"Transaction-level modeling (TLM) is the most promising technique to deal with the increasing complexity of modern embedded systems. TLM provides designers with high-level interfaces and communication protocols for abstract modeling and efficient simulation of system platforms. The Open SystemC Initiative (OSCI) has recently released the TLM-2.0 standard, to standardize the interface between component models for bus-based systems. The TLM standard aims at facilitating the interchange of models between suppliers and users, and thus encouraging the use of virtual platforms for fast simulation prior to the availability of register-transfer level (RTL) code. On the other hand, because a TLM IP description does not include the implementation details that must be added at the RTL, the process to synthesize TLM designs into RTL implementations is still manual, time-consuming and error-prone. In this context, this paper presents a methodology for automating the TLM-to-RTL synthesis by applying the theory of high-level synthesis (HLS) to TLM, and proposes a protocol synthesis technique based on the extended finite state machine (EFSM) model for generating the RTL IP interface compliant with any RTL bus-based protocol.",2010,0, 4181,Application of ANN in food safety early warning,"In recent years, the frequent occurrence of food safety crises has seriously affected people's health, causing widespread concern around the world. To effectively track and trace food has become an extremely urgent global issue. Early warning of food safety problems can prevent food safety crises. However, there are still very few automatic tracking systems for the entire food supply chain.
In this paper we propose a data mining technique to predict food quality using a back-propagation (BP) neural network. Some prediction errors could occur when predicted data are near threshold values. To reduce errors, data near the threshold values are selected to train our system. Special treatment of threshold values and the performance of our proposed algorithm are discussed in the paper.",2010,0, 4182,A novel resilient multi-path establishment scheme for sensor networks,"With the development of wireless sensor networks, secure communication among sensors by establishing pair-wise keys becomes a critical issue for many sensor network applications. Due to sensors' special uses, limited capabilities and declining connection probability, random key pre-distribution schemes are no longer suitable for large networks. Multi-path key establishment schemes are a valuable addition to these existing schemes. In this paper, we propose a novel resilient establishment scheme for sensor networks. In our scheme, the sender sensor first partitions the pair-wise key into many sub-keys, computes the hash values of all sub-keys, and produces a hash checkout tree. It then transmits each sub-key and its hash path value via different node-disjoint paths. The receiver first checks the integrity of the sub-keys by using the hash path values, identifies faulty key establishment paths, and then recovers the original pair-wise key. The salient features of the scheme are its flexibility in trading off transmission, efficient checking and low information disclosure. Resilience and information leakage analyses are conducted to show that the proposed scheme is agile and efficient compared with the existing schemes. Another nice feature of the proposed scheme is that the parameters of the scheme can be set up to meet the requirements of different sensor network applications.",2010,0, 4183,Ultrasonic Waveguides Detection-based approach to locate defect on workpiece,"Conventional ultrasonic techniques, such as pulse-echo, have been limited to testing relatively simple geometries or interrogating the region in the immediate vicinity of the transducer. A novel, efficient methodology uses ultrasonic waveguides to examine structural components. The advantages of this technique include its ability to inspect the entire structure in a single measurement over long distances with little attenuation, and its capacity to test inaccessible regions of complex components. However, in practical work, this technique suffers from dispersion and mode conversion phenomena, which result in a poor signal-to-noise ratio and thereby hinder its actual application. In order to solve this problem, simulation combined with experiments can not only verify the feasibility of this technique but also provide guidance for actual work. This paper reports on a novel approach to simplifying the simulation of ultrasonic waveguide detection. The first step is the selection of the signal frequency that has the fastest group velocity and relatively small dispersion. The second step is the determination of the time step Δt and the mesh element size le. Owing to the numerical analysis characteristics of the general-purpose software ANSYS, these two key parameters, the time step Δt and the mesh element size le, need to be carefully selected. This report finds the balance point between the accuracy of the results and the calculation time to determine these two key parameters, which significantly influence the simulation result.
Finally, this report shows the experimental results on a two-dimensional flat panel structure and a three-dimensional triangle-iron structure, respectively. From the results shown, the error between the simulated and actual values is less than 0.4%, which proves the feasibility of this approach.",2010,0, 4184,"Matrix-geometric solution of a heterogeneous two-Server M/(PH,M)/2 queueing system with server breakdowns","In this paper, we study a repairable queueing system with two different servers, where Server 1 is perfectly reliable and Server 2 is subject to breakdown. The service times of the two servers are assumed to follow a phase-type (PH) distribution and an exponential distribution, respectively. By establishing the quasi-birth-and-death (QBD) process of the system states, we first derive the equilibrium condition of the system, and then obtain the matrix-geometric solution for the steady-state probability vectors of the system. Finally, numerical results are presented.",2010,0, 4185,Probabilistic fault prediction of incipient fault,"In this work, a probabilistic fault prediction approach is presented for the prediction of incipient faults under uncertainty. The approach has two stages. In the first stage, normal data are analyzed by principal component analysis (PCA) to obtain control limits for the T² and SPE statistics. In the second stage, fault data are first processed by PCA so as to derive the T² and SPE statistics. Then, samples of these two statistics obeying a certain prediction distribution are obtained using a Bayesian AR model on the basis of the WinBUGS software. Finally, one-step-ahead fault prediction probabilities are estimated by a kernel density estimation method according to the statistics' corresponding control limits. The prediction performance of this approach is illustrated using data from the simulator of the Tennessee Eastman process.",2010,0, 4186,Model Based Testing Using Software Architecture,"Software testing is the ultimate obstacle to the final release of software products. Software testing is also a leading cost factor in the overall construction of software products. On the one hand, model-based testing methods are new testing techniques aimed at increasing the reliability of software and decreasing the cost by automatically generating a suite of test cases from a formal behavioral model of a system. On the other hand, the architectural specification of a system represents the gross structural and behavioral aspects of a system at a high level of abstraction. Formal architectural specifications of a system have also shown promise for detecting faults during software back-end development. In this work, we discuss a hybrid testing method to generate test cases. Our proposed method combines the benefits of model-based testing with the benefits of software architecture in a unique way. A simple Client/Server system has been used to illustrate the practicality of our testing technique.",2010,0, 4187,Provenance Collection in Reservoir Management Workflow Environments,"There has been a recent push towards applying information technology principles, such as workflows, to bring greater efficiency to reservoir management tasks. These workflows are data-intensive in nature, and the data are derived from heterogeneous data sources. This has placed an emphasis on the quality and reliability of data that is used in reservoir engineering applications. Data provenance is metadata that pertains to the history of the data and can be used to assess data quality.
In this paper, we present an approach for collecting provenance information from application logs in the domain of reservoir engineering. In doing so, we address challenges due to: 1) the lack of a workflow orchestration framework in reservoir engineering and 2) the inability of many reservoir engineering applications to collect provenance information. We present an approach that uses a workflow instance detection algorithm and the Open Provenance Model (OPM) for capturing provenance information from the logs.",2010,0, 4188,Improving Change Impact Analysis with a Tight Integrated Process and Tool,"Change impact analysis plays an important role in the maintenance and enhancement of software systems, especially for defect prevention. In our previous work we have developed approaches to detect logical dependencies among artifacts in repositories and calculated different metrics. But that is not enough, because in order to use change impact analysis, a detailed process with guidelines on the one hand and appropriate tools on the other hand are needed. To show the importance of such an approach, we have gathered problems and analyzed requirements in the field at a social insurance company. Based on these requirements we have developed a process and a tool which helps analysts in performing activities along the defined process. In this paper we present the tool for change impact analysis and the significance of the tight integration of the tool into the process.",2010,0, 4189,Towards Scalable Robustness Testing,"Several approaches have been developed to assess the robustness of a system. We propose a model-based approach to scalably testing the robustness of a software system using event sequence graphs (ESG) and decision tables (DT). Elementary modification operators are introduced to manipulate ESGs and DTs, resulting in faulty models. Test cases generated from these faulty models are applied to the system under consideration to check its robustness. Thus, the approach enables the quantification of robustness with respect to a universe of erroneous inputs.",2010,0, 4190,Self-Checked Metamorphic Testing of an Image Processing Program,"Metamorphic testing is an effective technique for testing systems that do not have test oracles, for which it is practically impossible to know the correct output of an arbitrary test input. In metamorphic testing, instead of checking the correctness of a test output, the satisfaction of a metamorphic relation among test outputs is checked. If a violation of the metamorphic relation is found, the system implementation must have some defects. However, a randomly or accidentally generated incorrect output may satisfy a metamorphic relation as well. Therefore, checking only metamorphic relations is not good enough to ensure the testing quality. In this paper, we propose a self-checked metamorphic testing approach, which integrates structural testing into metamorphic testing, to detect subtle defects in a system implementation. In our approach, metamorphic testing results are further verified by test coverage information, which is automatically produced during the metamorphic testing. The effectiveness of the approach has been investigated through testing an image processing program.",2010,0, 4191,Testing Effort Dependent Software FDP and FCP Models with Consideration of Imperfect Debugging,"Software reliability can be enhanced considerably during testing with faults being detected and corrected by testers.
The allocation of testing resources, such as manpower and CPU hours, during the testing phase can largely influence the fault detection speed and the time to correct a detected fault. The allocation of testing resources is usually described by a testing effort function, which has been incorporated into software reliability models in some recent papers. The fault correction process (FCP) is usually modeled as a delayed version of the fault detection process (FDP). In addition, debugging is usually not perfect and new faults can be introduced during testing. In this paper, flexible testing effort dependent paired models of FDP and FCP are derived with consideration of fault introduction. A real dataset is used to illustrate the application of the proposed models.",2010,0, 4192,Sensitivity of Two Coverage-Based Software Reliability Models to Variations in the Operational Profile,"Software in field use may be utilized by users with diverse profiles. The way software is used affects the reliability perceived by its users; that is, software reliability may not be the same for different operational profiles. Two software reliability growth models based on structural testing coverage were evaluated with respect to their sensitivity to variations in the operational profile. An experiment was performed on a real program (SPACE) with real defects, submitted to three distinct operational profiles. Distinction among the operational profiles was assessed by applying the Kolmogorov-Smirnov test. Testing coverage was measured according to the following criteria: all-nodes, all-arcs, all-uses, and all-potential-uses. The reliability measured for each operational profile was compared to the reliabilities estimated by the two models; the estimated reliabilities were obtained using the coverage for the four criteria. Results from the experiment show that the predictive ability of the two models is not affected by variations in the operational profile of the program.",2010,0, 4193,System of Measuring the Sub-Pixel Edge of CCD Image's,"In order to improve the precision, speed, integration and reliability of a linear CCD system used to detect the sub-pixel edge of a picture, a new digital system based on auto focusing was designed. The system captures the image of the tested workpiece through a CCD, transfers the image data into a computer, and obtains the coordinates of the tested edge by digital image processing and auto focusing. The signal data stored in the computer were processed by the data collection software. Thereby, the inspection of the precise size and location of defects was achieved. The result shows that the resolving power of this method can reach 20 μm and the error is less than 10%. The possibility that impurity particles can be detected is up to 100%.",2010,0, 4194,Evaluation of depth compression and view synthesis distortions in multiview-video-plus-depth coding systems,"Several quality evaluation studies have been performed for video-plus-depth coding systems. In these studies, however, the distortions in the synthesized views have been quantified in experimental setups where both the texture and depth videos are compressed. Nevertheless, there are several factors that affect the quality of the synthesized view. Incorporating more than one source of distortion in the study could be misleading; one source of distortion could mask (or be masked by) the effect of other sources of distortion.
In this paper, we conduct a quality evaluation study that aims to assess the distortions introduced by the view synthesis procedure and depth map compression in multiview-video-plus-depth coding systems. We report important findings that many of the existing studies have overlooked, yet are essential to the reliability of quality evaluation. In particular, we show that the view synthesis reference software yields high distortions that mask those due to depth map compression, when the distortion is measured by average luma peak signal-to-noise ratio. In addition, we show which quality metric to use in order to reliably quantify the effect of depth map compression on view synthesis quality. Experimental results that support these findings are provided for both synthetic and real multiview-video-plus-depth sequences.",2010,0, 4195,A software-based receiver sampling frequency calibration technique and its application in GPS signal quality monitoring,"This paper investigates the impact of sampling frequency error on signal processing in a software-correlator-based GPS receiver, as well as the periodic averaging technique in a pre-correlation GPS signal quality monitor. A refined signal model of receiver processing in the presence of clock error is established as the foundation of the performance analyses. A software-based method is developed to accurately calibrate both the digital IF and the sampling frequency simultaneously. The method requires no additional hardware other than the GPS receiver RF front end output samples. It enables inline calibration of the receiver measurements instead of complicated post-processing. The performance of the technique is evaluated using simulated signals as well as live GPS signals collected by several GPS data acquisition devices, including clear time domain waveforms/eye patterns, amplitude probability density histograms, Power Spectrum Density (PSD) envelopes, and correlation-related characteristics. The results show that we can calibrate the sampling frequency online with an accuracy resolution of 10^-9 of the true sampling frequency, and the pre-correlation SNR can be potentially improved by 39dB using periodic averaging.",2010,0, 4196,Message analysis method based on a stream database for information system management,"The authors conducted a detailed survey of a large-scale data center. They found that automated monitoring of messages produced by operating systems, middleware, and applications was adopted as a key method for detecting system failures and malfunctions, and that this was an effective method for improving the quality of system management operations. The survey revealed that message occurrences are not only monitored, they are also analyzed for particular patterns. Those patterns include multiple occurrences surpassing a threshold in a predefined time period, consecutive occurrences within a predefined time interval, or occurrences violating predefined sequences, and so on. A custom-made application is currently used for this message analysis. The authors found some problems in the implementation of the application. To overcome these problems and aid information system management, the authors proposed a message analysis method using a stream database.
An experiment proved the effectiveness of the method.",2010,0, 4197,A Quality Model in a Quality Evaluation Framework for MDWE methodologies,"Nowadays, diverse development methodologies exist in the field of Model-Driven Web Engineering (MDWE), each of which covers different levels of abstraction of Model-Driven Architecture (MDA): CIM, PIM, PSM and Code. Given the high number of methodologies available, it is necessary to evaluate the quality of existing methodologies and provide helpful information to developers. Furthermore, proposals are constantly appearing, and the need may arise not only to evaluate the quality but also to find out how it can be improved. In this context, QuEF (Quality Evaluation Framework) can be employed to assess the quality of MDWE methodologies. This article presents the work being carried out and describes tasks to define a Quality Model component for QuEF. This component would be responsible for providing the basis for specifying quality requirements with the purpose of evaluating quality.",2010,0, 4198,A framework based security-knowledge database for vulnerabilities detection of business logic,"This paper presents a framework for the detection of business logic vulnerabilities in the software design phase. First, the business logic in the design phase is modeled as a finite state machine, and relevant business processes are extracted from the model. The degree of similarity between attack patterns and the business processes is then calculated. Thus, vulnerabilities in the business logic can be found and a threat analysis report generated. Finally, focusing on the business logic of user registration in a web application, we model it as an FSA and then analyze the model. By analyzing the detection results, we conclude that the approach is correct and effective and can improve software security and reliability.",2010,0, 4199,Fractal web service composition framework,"By using semantic web service composition, individual web services can be combined to create complex, but easily reconfigurable systems. Such systems can play a vital role in today's changing economic conditions, as they allow businesses to quickly adapt to market changes. Taking into consideration that manual web service composition is both time-consuming and error-prone, the proposed framework uses a semi-automatic approach. The composition is done in a fractal manner, as existing web service chains can easily be incorporated in new web service chains.",2010,0, 4200,The quality of multiple VoIP calls in an encrypted wireless network,"We quantified the user-perceived quality for a VoIP application running in an encrypted wireless network and we experimentally determined the maximum number of parallel VoIP calls that can be achieved at the best quality of the speech signal. We studied the behaviour of the application for four encryption mechanisms and for each of them we measured the bandwidth waste due to encryption. We used the ITU-T PESQ score to objectively assess the quality of the speech signal. In this paper we present the test setup and the tools we used for our experiments, as well as our experimental results and conclusions.",2010,0, 4201,Algorithm-based fault tolerance for many-core architectures,"Modern many-core architectures with hundreds of cores provide a high computational potential. This makes them particularly interesting for scientific high-performance computing and simulation technology.
Like all nano-scaled semiconductor devices, many-core processors are prone to reliability-harming factors like variations and soft errors. One way to improve the reliability of such systems is software-based hardware fault tolerance. Here, the software is able to detect and correct errors introduced by the hardware. In this work, we propose a software-based approach to improve the reliability of matrix operations on many-core processors. These operations are key components in many scientific applications.",2010,0, 4202,A Practical Approach to Robust Design of a RFID Triple-Band PIFA Structure,"This paper presents a practical methodology for obtaining a robust optimal solution for a multi U-slot PIFA (Planar Inverted F-Antenna) structure with triple bands of 433 MHz, 912 MHz and 2.45 GHz. Using an evolutionary strategy, a global optimum is first sought in terms of the dimensions of the slots and shorting strip. Then, starting with the optimized values, the Taguchi quality method is applied in order to obtain a robust design of the antenna against changes in uncontrollable factors such as material properties and feed position. To prove the validity of the proposed method, the performance of the antenna is predicted by general-purpose electromagnetic software and compared to experimental results.",2010,0, 4203,Research on Failure Detector in Remote Disaster-tolerant System,"The failure detector is one of the key components of remote disaster-tolerant systems. In this paper, a flexible hierarchical architecture is designed in which the detection component is separated from decision-making. In doing so, the failure detector can be applied to remote disaster-tolerant systems and can meet the requirements of different applications. An adaptable failure detection algorithm is put forward that considers message delay, message loss and clock synchronization. This algorithm can adapt itself to changes in network conditions and applications' requirements. The algorithm has been implemented in a remote disaster-tolerant system, and the test results showed that the error rate is greatly decreased using the algorithm.",2010,0, 4204,Correlation between Indoor Air Distribution and Pollutants in Natural Ventilation,"The indoor air quality (IAQ) of a lecture room at a university located in Xi'an city centre was assessed. The primary aim was to obtain correlations between the air distribution and pollutants. It was found that air distribution had an important effect on the IAQ, and rational air distribution played an important part in pollutant dispersion and attenuation. The visualized air-flow field of the lecture room was obtained via the computational fluid dynamics (CFD) method and Fluent software; meanwhile, some parameters of indoor pollutants were obtained by field testing and the lecture room characteristics were investigated. The results showed that indoor pollutants cannot be removed by natural ventilation alone when there are vortices in the air distribution. Furthermore, reasonable suggestions for creating a healthy teaching environment and better IAQ are offered.",2010,0, 4205,A logic based approach to locate composite refactoring opportunities in object-oriented code,"In today's software engineering, more and more emphasis is put on the quality of object-oriented software design. It is commonly accepted that building a software system with maintainability and reusability issues in mind is far more important than just getting all the requirements fulfilled in one way or another.
Design patterns are a powerful means to achieve this goal. Tools have been built that automatically detect design patterns in object-oriented code and help in understanding the code. Other tools help in refactoring object-oriented code towards introducing design patterns, but human intelligence is needed to detect where these design patterns should be inserted. This paper proposes a logic approach to the automatic detection of places within object-oriented code where the Composite design pattern could have been used. Suspects identified by such a tool could very well serve as input data for other tools that automatically refactor the code so as to introduce the missing design pattern.",2010,0, 4206,Testing-oriented improvements of OCL specification patterns,"Detailed and unequivocal model specifications are a prerequisite for attaining the automated software development goal as promoted by the Model Driven Engineering paradigm. The use of Design by Contract assists in creating such model specifications. However, writing a large number of assertions from scratch can be tedious, time-consuming, and error-prone. Consequently, a number of constraint patterns have been identified in the literature, and corresponding OCL specifications have been proposed. Automating their use in tools should speed up the writing task and increase its correctness. Yet, no attention has been paid to the influence of such specifications on the testing process. We approach this area by proposing new OCL specification patterns for some of the existing constraint patterns. Our proposal should increase efficiency when testing/debugging models and applications. Relevant examples and tool support are used in order to explain and validate our approach.",2010,0, 4207,Visual robot guidance in conveyor tracking with belt variables,"The paper presents a method and related algorithm for visual robot guidance in tracking objects moving on conveyor belts; the instantaneous location of the moving object is evaluated by a vision system consisting of a stationary, down-looking monochrome video camera, a controller-integrated image processor and a vision extension of the structured V+ robot programming environment. The algorithm for visual tracking of the conveyor belt for """"on-the-fly"""" object grasping is partitioned into two stages: (i) visual planning of the instantaneous destination of the robot, and (ii) dynamic re-planning of the robot's destination while tracking the object moving on the conveyor belt. In addition, the control method uses the concept of the Conveyor Belt Window (CBW). The paper discusses in detail the motion control algorithm for visually tracking the conveyor belt on which objects are detected, recognised and located by the vision part of the system. Experiments are finally reported concerning the statistics of object-locating errors and motion-planning errors as a function of object size and belt speed. These tests have been carried out on a 4-d.o.f. SCARA robot with artificial vision and the AVI AdeptVision software extension of the V+ high-level, structured programming environment.",2010,0, 4208,Achieving Flow-Level Controllability in Network Intrusion Detection System,"Current network intrusion detection systems lack controllability, manifested as significant packet loss due to long-term resource occupation by a single flow. The reasons can be classified into two kinds.
The first kind comprises normal reasons: the processing of a mass of arriving packets of a large flow cannot be bounded to a determinable period of time and thus starves other flows. The second kind, in which the CPU is trapped in a dead-loop-like state while processing packets of a flow with particular content, comprises abnormal reasons; in fact, it is a kind of software crash. In this paper, we discuss the innate defects of traditional packet-driven NIDS and implement a flow-driven framework which can achieve fine-grained controllability. An Active Two-threshold scheme based on ideal Exit-Point (ATEP) is proposed in order to diminish the data-preserving overhead during flow switches and to detect crashes in time. A quick crash recovery mechanism is also given which can recover the trapped thread from 90% of crashes in 0.2 ms. The experimental results show that our flow-driven framework with the ATEP scheme can achieve higher throughput and a lower packet loss ratio than uncontrollable packet-driven systems, with less than 1% of extra CPU overhead. What's more, in the case of crash occurrence, the ATEP scheme is still able to maintain a rather steady throughput without sudden decreases.",2010,0, 4209,An Automatic Approach to Model Checking UML State Machines,"UML has become the dominant modeling language in the software engineering arena. In order to reduce costs induced by design issues, it is crucial to detect model-level errors in the initial phase of software development. In this paper, we focus on the formal verification of the dynamic behavior of UML diagrams. We present an approach to automatically verifying models composed of UML state machines. Our approach is to translate UML models to the input language of our home-grown model checker PAT in such a way as to be transparent for users. Compared to previous efforts, our approach supports a more complete subset of state machines, including fork, join, history and submachine features. It alleviates the state explosion problem by limiting the use of auxiliary variables. Additionally, this approach allows checking safety/liveness properties (with various fairness assumptions), trace refinement relationships and so on with the help of PAT.",2010,0, 4210,An Approach to Achieving the Reliability in TV Embedded System,"In this paper we propose an approach to improving the reliability of a TV set through a systematic model-based automated testing procedure. The proposed approach defines a probabilistic transition model of a TV set based on user behavior. The test scenarios are derived from the usage model and executed through an automated execution framework, in which the TV set under testing is treated as a black box. Based on the test results, the reliability analysis is performed.",2010,0, 4211,Are Longer Test Sequences Always Better? - A Reliability Theoretical Analysis,"One of the interesting questions currently discussed in software testing, both in practice and academia, is the role of test sequences in software testing, especially in fault detection. Previous work includes empirical research on rather small examples tested by relatively short test sequences. The belief is """"the longer the better"""", i.e., the longer test sequences are, the more faults are detected. This paper extends those approaches to a large commercial application using test sequences of increasing length, which are generated and selected by graph-model-based techniques.
Experiments applying many software reliability models of different categories deliver surprising results.",2010,0, 4212,Quantitative Evaluation of Related Web-Based Vulnerabilities,Current web application scanner reports contribute little to diagnosis and remediation when dealing with vulnerabilities that are related or vulnerability variants. We propose a quantitative framework that combines degree of confidence reports pre-computed from various scanners. The output is evaluated and mapped based on derived metrics to appropriate remediation for the detected vulnerabilities and vulnerability variants. The objective is to provide a trusted level of diagnosis and remediation that is appropriate. Examples based on commercial scanners and existing vulnerabilities and variants are used to demonstrate the framework's capability.,2010,0, 4213,Studying the Impact of Social Structures on Software Quality,"Correcting software defects accounts for a significant amount of resources such as time, money and personnel. To be able to focus testing efforts where needed the most, researchers have studied statistical models to predict in which parts of a software future defects are likely to occur. By studying the mathematical relations between predictor variables used in these models, researchers can form an increased understanding of the important connections between development activities and software quality. Predictor variables used in past top-performing models are largely based on file-oriented measures, such as source code and churn metrics. However, source code is the end product of numerous interlaced and collaborative activities carried out by developers. Traces of such activities can be found in the repositories used to manage development efforts. In this paper, we investigate statistical models, to study the impact of social structures between developers and end-users on software quality. These models use predictor variables based on social information mined from the issue tracking and version control repositories of a large open-source software project. The results of our case study are promising and indicate that statistical models based on social information have a similar degree of explanatory power as traditional models. Furthermore, our findings suggest that social information does not substitute, but rather augments traditional product and process-based metrics used in defect prediction models.",2010,0, 4214,A Technique for Just-in-Time Clone Detection in Large Scale Systems,"Existing clone tracking tools have limited support for sharing clone information between developers in a large scale system. Developers are not notified when new clones are introduced by other developers or when existing clones are modified. We propose a client-server architecture that centrally detects and maintains clone information for an entire software system stored in a version control system. Clients retrieve a list of clones relevant to the code they are working on from the server. Whenever an update is committed to the version control system, the server detects and incrementally updates clone information. We propose techniques to improve the speed of the incremental clone detection. In order to reduce the number of comparisons required for clone detection, we select representative clones from the existing clone list. We build a string-based technique to compare the newly committed code with the representative clones and to update the clone list. 
In a case study, we show that our approach significantly reduces the clone detection time, while supporting clone detection across the entire software system.",2010,0, 4215,Adaptive single phase fault identification and selection technique for maintaining continued operation of distributed generation,This paper presents the development of an adaptive rule-based fault identification and phase selection technique to be used in the implementation of single phase auto-reclosing (SPAR) in power distribution networks with distributed generators (DGs). The proposed method uses only the three line currents measured at the relay point. The waveform patterns of the phase angles and symmetrical components of the three line currents during the transient period of a fault condition are analyzed using IF-THEN conditioning rules in order to determine the type of single-phase-to-ground fault and initiate a single-pole auto-reclosing command, or a three-phase reclosing command for other types of faults. The proposed method is implemented and verified in the PSCAD/EMTDC power system software. The test results show that the proposed method can correctly detect the faulty phase within one cycle in power distribution networks with DGs under various network operating conditions.,2010,0, 4216,Research on Modeling and Simulation of Activated Sludge Process,"Due to the complexity of the wastewater treatment process, it is difficult to apply the existing mathematical models in practice. A new model for the wastewater treatment process is presented in this paper. This model is based on the Benchmark Simulation Model no.1 (BSM1) modeling method and simplifies Activated Sludge Model No. 1 (ASM1), which is connected dynamically to the secondary settler model. Meanwhile, the parameters of the model are adjusted using the experimental data. Finally, practical data were used to predict the COD values of the water quality. The results demonstrate that the proposed model is useful.",2010,0, 4217,Design of Real-Time Monitoring System of Bridgman Single Crystal Growth Parameters,"A computer system has been developed to detect Bridgman single crystal growth parameters. The main hardware is a personal computer, and the software is Kingview6.53. The detected parameters are the temperature gradient within the furnace chamber, the crucible rotation speed, the crucible pull-down rate, the solid-liquid interface temperature and so on. Based on the detected values, the causes of crystal growth defects can be analyzed. This provides a basis for improving the crystal growth method and enhancing crystal growth quality.",2010,0, 4218,Finite Element Modelling of Circumferential Magnetic Flux Leakage Inspection in Pipeline,"Axial magnetic flux leakage (MFL) inspection tools cannot reliably detect or size axially aligned cracks, such as SCC, longitudinal corrosion, long seam defects, and axially oriented mechanical damage. To address this problem, the circumferential MFL inspection tool is introduced. A finite element (FE) model is established using ANSYS software to simulate the magnetostatics. The results show that the amount of flux that is diverted out of the pipe depends on the geometry of the defect; the primary variables that affect the flux leakage are the ones that define the volume of the defect. The defect location can significantly affect flux leakage, as the magnetic field magnitude arising due to the presence of the defect is immersed in the high field close to the permanent magnets.
These results demonstrate the feasibility of detecting narrow axial defects and the practicality of developing a circumferential MFL tool.",2010,0, 4219,Eyecharts: Constructive benchmarking of gate sizing heuristics,"Discrete gate sizing is one of the most commonly used, flexible, and powerful techniques for digital circuit optimization. The underlying problem has been proven to be NP-hard. Several (suboptimal) gate sizing heuristics have been proposed over the past two decades, but research has suffered from the lack of any systematic way of assessing the quality of the proposed algorithms. We develop a method to generate benchmark circuits (called eyecharts) of arbitrary size, along with a method to compute their optimal solutions using dynamic programming. We evaluate the suboptimalities of some popular gate sizing algorithms. Eyecharts help diagnose the weaknesses of existing gate sizing algorithms, enable systematic and quantitative comparison of sizing algorithms, and catalyze further gate sizing research. Our results show that common sizing methods (including commercial tools) can be suboptimal by as much as 54% (Vt-assignment), 46% (gate sizing) and 49% (gate-length biasing) for realistic libraries and circuit topologies.",2010,0, 4220,Automated development tools for linux USB drivers,"USB devices are widely used in consumer electronics. Writing device drivers has always been a tedious and error-prone job. This paper presents assisting tools for developing USB drivers under the Linux OS. The tool kit includes (1) a generic-skeleton generator that can automatically generate a generic USB driver code skeleton according to a user-specified configuration, (2) a flattened-HID-driver generator that can merge stacked HID drivers into a monolithic driver and prune C code to reduce size and response time for embedded applications, and (3) an ECP (Extended C Preprocessor) compiler that provides type-checking capability for low-level I/O operations and makes the driver code more readable.",2010,0, 4221,Objective video quality assessment of mobile television receivers,"The automated evaluation of mobile television receivers can be facilitated by objective video quality assessment. Therefore, a test bench was set up, whose design and signal flow are presented first. We describe and compare different full-reference video quality metrics and a simple no-reference metric from the literature. The implemented metrics are evaluated and then used to assess the video quality of receivers for digital television broadcast under different RF scenarios. We present the results achieved with the different metrics for the purpose of receiver comparison.",2010,0, 4222,A safety related analog input module based on diagnosis and redundancy,"This paper introduces a safety-related analog input module to achieve data acquisition of 4-20mA current signals. It is an integral part of a safety-instrumented system (SIS), which is used to provide critical control and safety applications for automation users. In order to keep the analog input circuit performing correctly, a combination of hardware and software diagnostics is carried out periodically. These kinds of internal self-diagnosis allow the device to detect improper operation within itself. If a potentially dangerous process condition occurs, the AI has redundancy to maintain operation even when parts fail. The article presents the special hardware, diagnostic software and full fault injection testing of the complete design.
The test results show that the safety-related AI is capable of detecting and locating most potential faults and internal component failures.",2010,0, 4223,A study of medical image tampering detection,"Currently, methods of image tampering detection are divided into two categories: active detection and passive detection. In this paper, we review several detection methods and hope this will offer some help to the field. We focus on passive detection methods for medical images and show some results of our experiments, in which we extract statistical features (IQM and HOWS based) of source images and their doctored versions, respectively. Manipulations used to doctor the images include brightness adjustment, rotation, scaling, filtering, compression and so on, using both fixed and randomly selected manipulation parameters. Different classifiers are then chosen to discriminate the source images from the doctored ones. We compare the performance of the classifiers to show that the passive detection methods are effective for medical image tampering detection.",2010,0, 4224,Overview of power system operational reliability,"The traditional reliability evaluation assesses the long-term performance of a power system, but its constant failure rate cannot reflect the time-varying performance in an operational time frame. This paper studies the operational reliability of a power system under real-time operating conditions based on online operation information obtained from the Energy Management System (EMS). A framework for operational reliability evaluation is proposed systematically. The effects of components' inherent conditions, environmental conditions and operating electrical conditions on failure rates are considered in their operational reliability models. To meet the restrictive requirements of real-time evaluation, a special algorithm is presented. The indices for operational reliability are defined as well. The software, Operational Reliability Evaluation Tools (ORET), is developed to implement the above-described functions. The work reported in this paper can be used to assess the system risk in an operational timeframe and give warnings when the system reliability is low.",2010,0, 4225,Using Aurora Road Network Modeler for Active Traffic Management,"Active Traffic Management (ATM) is the ability to dynamically manage recurrent and nonrecurrent congestion based on prevailing traffic conditions. Focusing on trip reliability, it maximizes the effectiveness and efficiency of freeway corridors. ATM relies on fast and trustworthy traffic simulation software that can assess a large number of control strategies for a given road network, given various scenarios, in a matter of minutes. Effective traffic density estimation is crucial for the successful deployment of feedback algorithms for congestion control. Aurora Road Network Modeler (RNM) is an open-source macrosimulation tool set for operational planning and management of freeway corridors. Aurora RNM employs the Cell Transmission Model (CTM) for road networks, extended to support multiple vehicle classes. It allows dynamic filtering of measurement data coming from traffic sensors for the estimation of traffic density. In this capacity, it can be used for detection of faulty sensors.
The virtual sensor infrastructure of Aurora RNM serves as an interface to the real-world measurement devices, as well as a simulation of such measurement devices.",2010,0, 4226,Failure prediction method for Network Management System by using Bayesian network and shared database,"A Network Management System (NMS) is a service that employs a variety of tools, applications, and devices to assist network administrators in monitoring and maintaining a network. Keeping the network at a high quality of service is the main purpose of an NMS. This paper proposes a method to solve network problems by predicting failures based on network-data behavior. The prediction is represented by conditional probabilities generated by a Bayesian network. A Bayesian network is a probabilistic graphical model for representing the probabilistic relationships among a large number of variables and performing probabilistic inference with those variables. In order to describe how the prediction works, we discuss the prediction results obtained by simulating network congestion.",2010,0, 4227,The Application of Wireless Sensor Networks in Machinery Fault Diagnosis,"Wireless sensor networks are a thriving information collection and processing technology, widely used in the military field, industry, environmental monitoring, etc. In a wireless sensor network made up of tens of thousands of battery-powered sensor nodes, data fusion technology can be used to reduce communication traffic in order to save energy. For large mechanical equipment, traditional wired sensors are commonly used for fault detection and diagnosis. There is no wiring problem if wireless sensor networks are used, which makes it possible to detect potential problems in mechanical equipment without affecting the normal production of enterprises. In this paper, the application of wireless sensor networks in machinery fault diagnosis is studied, a data fusion model for machinery fault diagnosis in wireless sensor networks and a PCA neural data fusion algorithm are proposed, and the effectiveness of the method is demonstrated in an experiment.",2010,0, 4228,Formal Fault Tolerant Architecture,"This paper shows the need for development by refinement, from the most abstract specification to the implementation, in order to ensure 1) the traceability of the needs and requirements, 2) good management of the development and 3) a reliable and fault-tolerant design of systems. We propose a formal architecture of models and methods for critical requirements and fault tolerance. System complexity is increasing and the choices for implementation are numerous. Architecture verification therefore plays a prominent role in the system design cycle. Fault detection at this early stage decreases the time and cost of correction. We show how a formal method, the B method, may be used to write the abstract specification of a system and then to produce a correct-by-construction architecture through many steps of formal refinement. During these steps, a fault scenario is injected with a suitable introspective reaction by the system. All refinement steps, including the introspective correction, should be proven correct and to satisfy the initial specification of the system. At the lower levels, design is separated between hardware and software communities.
But even at these levels, many design traces could be captured to prove not only the consistency of each design unit but also the coherence between the different sub-parts: software, digital or other technologies.",2010,0, 4229,A Modified History Based Weighted Average Voting with Soft-Dynamic Threshold,"Voting is a widely used fault-masking technique for real-time systems. Several voting algorithms exist in the literature. In this paper, a survey of a few existing voting algorithms is presented, and a modified history-based weighted average voting algorithm with a soft-dynamic threshold value is proposed with two different weight assignment techniques, which combines all the advantages of the surveyed voting algorithms but overcomes their deficiencies. The proposed algorithm, with both types of weight assignment techniques, gives better performance than the existing history-based weighted average voting algorithms in the presence of intermittent errors. In the presence of permanent errors, when all the modules are fault-prone, the proposed algorithm with the first type of weight assignment technique gives higher availability than all the surveyed voting algorithms. If at least one module is fault-free, this algorithm gives almost 100% safety and also a higher range of availability than the other surveyed voting algorithms.",2010,0, 4230,Software component quality prediction using KNN and Fuzzy logic,"Prediction of product quality within software engineering and preventive and corrective actions within the various project phases have constantly improved over the past decades. Practitioners and software companies have been using various methods, different approaches and best practices in software development projects. Nevertheless, the issue of quality is pushing software companies to constantly invest in efforts to produce products that reach the customer on time and with good enough quality. However, quality is not free; it has a price that becomes due by the time its absence is noticed. In this paper, fuzzy logic and KNN classification approaches are presented to predict the Weibull distribution parameters (shape and slope) and the total number of faults in the system based on the individual contributions of the software components. Since the Weibull distribution is one of the most widely used probability distributions in reliability engineering, predicting its characteristics early in the software lifecycle might be a useful input for the planning and control of verification activities.",2010,0, 4231,Streamlining collection of training samples for object detection and classification in video,"This paper is concerned with object recognition and detection in computer vision. Many promising approaches in the field exploit the knowledge contained in a collection of manually annotated training samples. In the resulting paradigm, the recognition algorithm is automatically constructed by some machine learning technique. It has been shown that the quantity and quality of positive and negative training samples are critical for good performance of such approaches. However, collecting the samples requires tedious manual effort which is expensive in time and prone to error. In this paper we present the design and implementation of a software system which addresses these problems. The system supports an iterative approach whereby current state-of-the-art detection and recognition algorithms are used to streamline the collection of additional training samples.
The presented experiments have been performed within the framework of a research project aiming at automatic detection and recognition of traffic signs in video.",2010,0, 4232,Research on the 3D visual rapid design platform of belt conveyor based on AutoCAD,"Belt conveyors are widely used in areas such as coal, ports, electric power and chemicals to transport bulk solids. Traditional design methods have many defects, such as long design cycles and proneness to error, and they are unable to meet the needs of modern society. This paper presents a new method of designing belt conveyors: a 3D visual rapid design platform based on the AutoCAD software. The platform uses Visual Basic to develop the rapid design program and establishes three-dimensional models of the belt conveyor by extending the functions of AutoCAD with ObjectARX. It thus builds a three-dimensional design environment in which designers can adjust the structure of the belt conveyor based on the actual environment, manage its parts and obtain its engineering drawings. This platform increases the design speed and improves the design quality.",2010,0, 4233,Process research on cogging rolling of H beam,"H-beams are widely used in industrial and civil steel structures. However, many key technical problems must be solved in order to produce high-quality H-beams. This paper describes an investigation into the rolling technology using MARC software to model the reversing rolling process of H-beams and compares the results with PDA data. An FEM model involving three-dimensional, elasto-plastic, thermo-mechanical coupling has been established successfully to predict this multi-pass rolling process. The analysis produces outputs such as deformation rules and rolling force. The mechanism by which the web is built up during the rolling process is also discussed. The influence of the web reduction and of the ratio of web reduction to flange reduction is also discussed. A new rolling schedule is suggested in order to improve the quality of the final product and improve rolling efficiency.",2010,0, 4234,Accurate diagnosis of rolling bearing based on wavelet packet and genetic-support vector machine,This paper studies the combined use of wavelet packet analysis and an artificial genetic-support vector machine in the fault diagnosis of ball bearings. Frequency-domain energy eigenvectors are extracted using the wavelet packet analysis method. The fault state of the ball bearing is identified by using a radial basis function genetic-support vector machine. The test results show that this GSVM model is effective in detecting ball bearing faults.,2010,0, 4235,Development on preventive maintenance management system for expressway asphalt pavements,"Given that no expressway pavement preventive maintenance management system currently exists at home or abroad, and based on the technical theory obtained by the author and on the demands and process of expressway asphalt pavement preventive maintenance management, a preventive maintenance management system for expressway asphalt pavement (EPMMS (V1.0)) was developed.
Functions such as expressway maintenance quality evaluation, pavement performance prediction, optimal selection of preventive maintenance treatments, determination of optimal timing, and post-evaluation could be fully or partially realized. The theoretical foundation, development environment and overall framework are expounded in the paper, the development and realization process of the five sub-systems is introduced in detail, and the system testing and application are briefly introduced. The test results showed that this system could meet the requirements of the software product registration test code. Preliminary application showed that the system is correct and efficient, provides software support for expressway pavement preventive maintenance management, makes such management more standardized and convenient, and largely meets the demands of expressway maintenance management departments.",2010,0, 4236,Automatic detection technology of surface defects on plastic products based on machine vision,"It is very necessary to detect surface defects on plastic products during the production process and post-treatment. The research and application of automatic surface defect detection technology for plastic products is expected to greatly reduce manual labor and improve the level of production automation, and it has wide application prospects. The development of key machine vision technologies such as the illumination system, CCD camera, image enhancement, image segmentation, image recognition, and so on is explained in detail. Its application to the detection of surface defects on plastic products such as plastic electronic components, strips, PVC building materials, films, leather, bottles, and so on is also presented briefly. In particular, it focuses on the automatic detection of surface defects on injection-molded products, and an automatic detection system is proposed, composed of a conveyor belt device, image acquisition and processing software, and a PLC control device, among others.",2010,0, 4237,Design and implementation of a direct RF-to-digital UHF-TV multichannel transceiver,"This manuscript presents the design and implementation of a direct UHF digital transceiver that provides direct sampling of the UHF-TV input signal spectrum. Our SDR-based approach is based on a pair of high-speed ADC/DAC devices along with a channelizer, a dechannelizer and a channel management unit, which is capable of inserting, deleting and/or switching individual RF-TV channels. Simulation results and in-lab measurements show that the proposed system is able to receive, manipulate and retransmit a real UHF-TV spectrum at a negligible quality penalty.",2010,0, 4238,Urban land use change detection based on RS&GIS,"Urban land use change may influence natural phenomena and ecological processes. The decrease of cultivated land area in Chengdu has been one of the critical problems in recent years. The objective of this study is to detect land use changes between 1992 and 2008 using Landsat TM/ETM+ satellite images. The land use map of 1992 was compiled from 1:50000 digital maps using Arc/View 3.2 software. TM/ETM+ satellite data were used to generate land use maps. Image quality assessment and georeferencing were performed on the images. Various suitable spectral transformations such as ratioing, PCA, the Tasseled Cap transformation and data fusion were performed on the images in ENVI4.0 and ERDAS IMAGE 8.5 software.
Image classification was done using supervised classification (maximum likelihood and minimum distance classifiers), utilizing original and synthetic bands resulting from the diverse spectral transformations. The result of change detection shows that the cultivated land area decreased between 1992 and 2008 by 31.683%, from 21.127×10^4 hm2 to 14.433×10^4 hm2, and the built-up land area increased between 1992 and 2008 by 66.318%, from 4.427×10^4 hm2 to 7.363×10^4 hm2. Also, the area of irrigated farmland decreased to 6.694×10^4 hm2 (by 75.343%) and the dry land farming area increased to 2.728×10^4 hm2 (by 9.2%). The overlay map of land use change shows that the intensity of conversion of arable land to built-up land increased, and the speed of spatio-temporal change of urban land use is notable.",2010,0, 4239,An efficient polling scheme for wireless LANs,"With the popularity of WLAN applications, how to meet the requirements of various applications and perform more efficiently on the limited bandwidth of wireless networks becomes a key problem. This paper suggests a new WLAN MAC protocol scheme that makes possible the service of a two-class priority station polling system under a mixed policy of exhaustive and gated services, optimizes the services of the system under changing input load via adjustment of the number of gated-service accesses, and strengthens the flexibility and fairness of multimedia transmissions in wireless networks. The theoretical model is established with a discrete-time Markov chain and probability generating functions. The analyses demonstrate that this scheme enables an effective allocation of channel resources for different tasks and guarantees the quality of system services.",2010,0, 4240,A SVM-based method for abnormity detection of log curves,"Rapid and accurate abnormity detection of log curves is critical in quality control for the logging industry. Traditional methods based on manual detection have been proven to be ineffective and unreliable. A machine learning method based on Support Vector Machine (SVM) is proposed in this paper to address this problem. The SVM classifiers are established according to the suspicious sections selected from log curves and the detection results given by experts. A genetic algorithm (GA) is introduced for the optimization of parameters. With GA and SVM fused, the optimal classifier models are determined to detect abnormity sections. Experimental results from China's XiangJiang Oilfield show that an accuracy of 95% is achieved for suspicious straight sections and 96% for suspicious bouncing sections, which proves the feasibility of this method.",2010,0, 4241,Quality of learning analysis based on Bayesian Network,"The level of quality of learning is directly related to the competitiveness of students in future social life and to the overall quality and comprehensive national strength of Chinese citizens; the establishment and improvement of student learning quality analysis and guidance systems are a strategic starting point for the promotion of education. The Bayesian Network (BN) proposed by Pearl is a new mechanism for uncertain knowledge representation and manipulation based on probability theory and graph theory. A BN is a network structure with clear semantics. It exploits the structure of the domain to allow a compact representation of complex joint probability distributions.
Its sound probabilistic semantics, explicit encoding of relevance relationships, inference and learning algorithms that are fairly efficient and effective in practice, and convenient decision-making mechanism have led BNs to enter the Artificial Intelligence (AI) mainstream. The present thesis makes an experimental analysis of a test paper based on a Bayesian Network. The main toolkit used in this experiment is the BNT software suite for MATLAB. This software suite provides many basic function sets for Bayesian network learning; it supports exact and approximate inference for various types of nodes, and it also provides parameter learning and structure learning. From the experiment we conclude that five factors, including the absorption rate of teaching and work accuracy, have a great influence on the quality of learning.",2010,0, 4242,Research on fruit firmness test system based on virtual instrument technology,"A fruit firmness detection system was designed based on the LabVIEW development environment; it detects the displacement of the pendulum with eddy current displacement sensors according to the impact pendulum method, and the signals are processed by JKU-12 data acquisition cards. The whole system was debugged successfully under WinXP SP2 with the Chinese trial version of LabVIEW 8.2. It can record the details of the probe penetrating the fruit. The system proved stable and reliable, and it provides a good technical method for testing and analyzing fruit firmness.",2010,0, 4243,Probabilistic neural network and its application,"A nuclear marine apparatus is a huge, complicated system, most of whose equipment is nonlinear, time-varying, coupled and inexact. Neural networks are widely applied in nuclear fault diagnosis because they can approximate any kind of nonlinear mapping. At present, the BP neural network is the most widely used, but the number of layers and the number of neurons per layer cannot be determined easily, and such a network may fall into a local minimum during training. In this essay, the PNN proves effective in diagnosing faults in nuclear marine apparatus.",2010,0, 4244,Analysis of the effect of Java software faults on security vulnerabilities and their detection by commercial web vulnerability scanner tool,"Most software systems developed nowadays are highly complex and subject to strict time constraints, and are often deployed with critical software faults. In many cases, software faults are responsible for security vulnerabilities which are exploited by hackers. Automatic web vulnerability scanners can help to locate these vulnerabilities. Trustworthiness of the results that these tools provide is important; hence, the relevance of the results must be assessed. We analyze the effect on security vulnerabilities of Java software faults injected into the source code of Web applications. We assess how these faults affect the behavior of the vulnerability scanner tool, to validate the results of its application. Software fault injection techniques and attack tree models were used to support the experiments. The injected software faults influenced the application behavior and, consequently, the behavior of the scanner tool.
The high percentage of uncovered vulnerabilities, as well as of false positives, points out the limitations of the tool.",2010,0, 4245,An empirical study of the influence of software Trustworthy Attributes to Software Trustworthiness,"Software Trustworthiness is a hot topic in software engineering, and Software Trustworthy Attributes are the basis of Software Trustworthiness. Software defects are the basic reason influencing Software Trustworthiness. Therefore, in this thesis we attempt to utilize text classification technology to classify historical software defects according to Software Trustworthy Attributes, in order to analyze the influence of Software Trustworthy Attributes on Software Trustworthiness, identify the pivotal Software Trustworthy Attribute, and analyze the influence of Software Trustworthy Attributes on Software Trustworthiness in different versions of Gentoo Linux.",2010,0, 4246,A data mining based method: Detecting software defects in source code,"With the expansion of software size and complexity, how to detect defects becomes a challenging problem. This paper proposes a defect detection method which applies data mining techniques to source code to detect two types of defects in one process. The two types of defects are rule-violating defects and copy-paste related defects, which may include semantic defects. During the process, this method can also extract implicit programming rules without prior knowledge of the software and detect copy-paste segments with different granularities. The method is evaluated with the Linux kernel, which contains more than 4 million lines of C code. The result shows that the resulting system can quickly detect many programming rules and violations of the rules. With the novel pruning techniques, the effort of manually checking violations is greatly reduced, and a large number of false positives are effectively eliminated. As an illustrative example of its effectiveness, a case study shows that among the top 50 violations reported by the proposed model, 11 defects can be confirmed after examining the source code.",2010,0, 4247,Time series analysis for bug number prediction,"Monitoring and predicting the increasing or decreasing trend of the bug number in a software system is of great importance to both software project managers and software end-users. For software managers, accurate prediction of the bug number of a software system will assist them in making timely decisions, such as effort investment and resource allocation. For software end-users, knowing the possible bug number of their systems will enable them to take timely actions in coping with losses caused by possible system failures. To accomplish this goal, in this paper, we model the bug number data per month as time series and use time series analysis algorithms such as ARIMA and X12-enhanced ARIMA to predict the bug number, in comparison with polynomial regression as the baseline. X12 is the widely used seasonal adjustment algorithm proposed by the U.S. Census Bureau. The case study based on Debian bug data from March 1996 to August 2009 shows that X12-enhanced ARIMA achieves the best performance in bug number prediction. Moreover, both ARIMA and X12-enhanced ARIMA outperform the polynomial regression baseline.",2010,0, 4248,Modeling risk factors dependence using Copula method for assessing software schedule risk,"Based on the risk factor method, a model for building dependence among risk factors is discussed for assessing the schedule risk of software projects.
Three main risk factors which influence the software schedule are described. The Copula method is used to model the dependence among risk factors, and the Monte Carlo method is adopted to simulate the schedule risk model. The empirical results from a Communication Corporation show that the schedule risk of the new software project is overestimated without modeling the dependence among risk factors. Moreover, the model that considers risk factor dependence via the Copula can assess the schedule risk accurately.",2010,0, 4249,3D protein model assessment using geometric and biological features,"Automatic prediction of a protein's three-dimensional structure from its amino acid sequence has become one of the most important research fields in bioinformatics. With that, the importance of determining the quality of these protein models increases. Protein three-dimensional structure evaluation is a complex problem in computational structural biology. We attempt to solve this problem using SVM and information from both the sequence and the structure of the protein. The goal is to generate a machine that understands structures from the PDB and, when given a new model, predicts whether it belongs to the same class as the PDB structures or not (correct or incorrect protein model). Here we show one such machine; results appear promising for further analysis. To reduce computational overhead, a multiprocessor environment and a basic feature selection method are used.",2010,0, 4250,GUI test-case generation with macro-event contracts,"To perform comprehensive GUI testing, a large number of test cases are needed. This paper proposes a GUI test-case generation approach that is suitable for system testing. The key idea is to extend high-level GUI scenarios with contracts and use the contracts to infer the ordering dependencies of the scenarios. From the ordering dependencies, a state machine of the system is constructed and used to generate test cases automatically. A case study is conducted to investigate the quality of the test cases generated by the proposed approach. The results showed that, in comparison to creating test cases manually, the proposed approach can detect more faults with less human effort.",2010,0, 4251,A computer-vision-assisted system for Videodescription scripting,"We present an application of video indexing/summarization to produce Videodescription (VD) for the blind. Audio and computer vision technologies can automatically detect and recognize many elements that are pertinent to VD, which can speed up the VD production process. We have developed and integrated many of them into a first computer-assisted VD production software tool. The paper presents the main outcomes of this R&D activity, started 5 years ago in our laboratory. Up to now, usability performance on various video and TV series types has shown a reduction of up to 50% in the VD production time.",2010,0, 4252,Numerical simulation of metal interconnects of power semiconductor devices,"This paper presents a methodology and a software tool - R3D - for the extraction, simulation, analysis, and optimization of metal interconnects of power semiconductor devices. This tool allows an automated calculation of the large-area device Rdson value, analysis of current density and potential distributions, design of the sense device, and optimization of a layout to achieve a balanced and optimal design.
R3D helps to reduce the probability of a layout error, and drastically speeds up and improves the quality of layout design.",2010,0, 4253,Assessing and improving the effectiveness of logs for the analysis of software faults,"Event logs are the primary source of data to characterize the dependability behavior of a computing system during the operational phase. However, they are inadequate to provide evidence of software faults, which are nowadays among the main causes of system outages. This paper proposes an approach based on software fault injection to assess the effectiveness of logs in keeping track of software faults triggered in the field. Injection results are used to provide guidelines to improve the ability of logging mechanisms to report the effects of software faults. The benefits of the approach are shown by means of experimental results on three widely used software systems.",2010,0, 4254,An empirical investigation of fault types in space mission system software,"As space mission software becomes more complex, the ability to effectively deal with faults is increasingly important. The strategies that can be employed for fighting a software bug depend on its fault type. Bohrbugs are easily isolated and removed during software testing. Mandelbugs appear to behave chaotically. While it is more difficult to detect these faults during testing, it may not be necessary to correct them; a simple retry after a failure occurrence may work. Aging-related bugs, a sub-class of Mandelbugs, can cause an increasing failure rate. For these faults, proactive techniques may prevent future failures. In this paper, we analyze the faults discovered in the on-board software for 18 JPL/NASA space missions. We present the proportions of the various fault types and study how they have evolved over time. Moreover, we examine whether or not the fault type and attributes such as the failure effect are independent.",2010,0, 4255,Application of a fault injection based dependability assessment process to a commercial safety critical nuclear reactor protection system,"Existing nuclear power generation facilities are currently seeking to replace obsolete analog Instrumentation and Control (I&C) systems with contemporary digital and processor based systems. However, as new technology is introduced into existing and new plants, it becomes vital to assess the impact of that technology on plant safety. From a regulatory point of view, the introduction or consideration of new digital I&C systems into nuclear power plants raises concerns regarding the possibility that the fielding of these I&C systems may introduce unknown or unanticipated failure modes. In this paper, we present a fault injection based safety assessment methodology that was applied to a commercial safety grade digital Reactor Protection System. Approximately 10,000 fault injections were applied to the system. This paper presents an overview of the research effort, lessons learned, and the results of the endeavor.",2010,0, 4256,A study of the internal and external effects of concurrency bugs,"Concurrent programming is increasingly important for achieving performance gains in the multi-core era, but it is also a difficult and error-prone task. Concurrency bugs are particularly difficult to avoid and diagnose, and therefore, in order to improve methods for handling such bugs, we need a better understanding of their characteristics. In this paper we present a study of concurrency bugs in MySQL, a widely used database server.
While previous studies of real-world concurrency bugs exist, they have centered their attention on the causes of these bugs. In this paper we provide a complementary focus on their effects, which is important for understanding how to detect or tolerate such bugs at run-time. Our study uncovered several interesting facts, such as the existence of a significant number of latent concurrency bugs, which silently corrupt data structures and are exposed to the user potentially much later. We also highlight several implications of our findings for the design of reliable concurrent systems.",2010,0, 4257,Volunteer-instigated connectivity restoration algorithm for Wireless Sensor and Actor Networks,"Due to their applications, Wireless Sensor and Actor Networks (WSANs) have recently been getting significant attention from the research community. In these networks, maintaining inter-actor connectivity is of paramount concern in order to plan an optimal coordinated response to a detected event. Failure of an actor may partition the inter-actor network into disjoint segments, and may thus hinder inter-actor coordination. This paper presents VCR, a novel distributed algorithm that opts to repair severed connectivity while imposing minimal overhead on the nodes. In VCR the neighbors of the failed actor volunteer to restore connectivity by exploiting their partially utilized transmission range and by repositioning closer to the failed actor. Furthermore, a diffusion force is applied among the relocating actors based on transmission range in order to reduce the potential for interference and improve connectivity. VCR is validated through simulation and is shown to outperform contemporary schemes found in the literature.",2010,0, 4258,Diverse Partial Memory Replication,"An important approach for software dependability is the use of diversity to detect and/or tolerate errors. We develop and evaluate an approach for automated program diversity called Diverse Partial Memory Replication (DPMR), aimed at detecting memory safety errors. DPMR is an automatic compiler transformation that replicates some subset of an executable's data memory and applies one or more diversity transformations to the replica. DPMR can detect any kind of memory safety error in any part of a program's data memory. Moreover, DPMR is novel because it uses partial replication within a single address space, replicating (and comparing) only a subset of a program's memory. We also perform a detailed study of the diversity mechanisms and state comparison policies in DPMR (a first of its kind for such diversity approaches), which is valuable for exploiting the high flexibility of DPMR.",2010,0, 4259,Assignments acceptance strategy in a Modified PSO Algorithm to elevate local optima in solving class scheduling problems,Local optima in optimization problems describe a state where no small modification of the current best solution will produce a better solution. This situation makes the optimization algorithm unable to find a way to the global optimum, and finally the quality of the generated solution is not as expected. This paper proposes an assignment acceptance strategy in a Modified PSO Algorithm to elevate local optima in solving class scheduling problems. The assignments which reduce the value of the objective function will be totally accepted, and the assignments which increase or maintain the value of the objective function will be accepted based on an acceptance probability.
Five combinations of acceptance probabilities for both types of assignments were tested in order to see their effect in helping particles move out of local optima and also their effect on the final penalty of the solution. The performance of the proposed technique was measured based on the percentage penalty reduction (%PR). Five sets of data from the International Timetabling Competition were used in the experiment. The experimental results show that an acceptance probability of 1 for neutral assignments and 0 for negative assignments produced the highest percentage of penalty reduction. This combination of acceptance probabilities was able to move particles stuck at local optima, which is one of the unwanted situations in solving optimization problems.,2010,0, 4260,Proposal for a set of quality attributes relevant for Web 2.0 application success,"Quality and usability of Web applications are considered to be key aspects of their success. If these aspects are not adequately represented in a Web application, or if they are not appropriately combined, there will be little to prevent the users from browsing further in search of an application that will more effectively satisfy their needs. However, the main challenge is to identify key attributes that will retain users on a Web application longer or influence their decision to visit it again. There are many frameworks and methodologies that deal with this issue, but very few of them have an emphasis on assessing the quality and usability of Web 2.0 applications. This paper contains a critical review of previous research in the field of Web quality assessment. It provides the theoretical basis for the development of a set of attributes that should be considered when measuring the quality of Web 2.0 applications.",2010,0, 4261,Automatic Concurrency Management for distributed applications,"Building distributed applications is difficult mostly because of concurrency management. Existing approaches primarily include events and threads. Researchers and developers have been debating for decades to prove which is superior. Although the conclusion is far from obvious, this long debate clearly shows that neither of them is perfect. One of the problems is that they are both complex and error-prone. Both events and threads require the programmer to explicitly manage concurrency, and we believe this is precisely the source of the difficulties. In this paper, we propose a novel approach: automatic concurrency management by the runtime system. It dynamically analyzes the programs to discover potential concurrency opportunities, and it dynamically schedules the communication and the computation tasks, resulting in automatic concurrent execution. This approach is inspired by the instruction scheduling technologies used in modern microprocessors, which dynamically exploit instruction-level parallelism. However, hardware scheduling algorithms do not fit software in many aspects; thus we had to design a new scheme completely from scratch. Automatic concurrency management is a runtime technique with no modification to the language, compiler, or byte code, so it preserves backward compatibility. It is essentially a dynamic optimization for networking programs.",2010,0, 4262,Joint throughput and packet loss probability analysis of IEEE 802.11 networks,"Wireless networks have grown in popularity over the past number of years. Usually wireless networks operate according to IEEE 802.11, which specifies the protocols of the physical and MAC layers.
A number of different studies have been conducted on the performance of IEEE 802.11 wireless networks. However, questions of QoS control in such networks have not received sufficient attention from the research community. This paper considers modeling the QoS of IEEE 802.11 networks, defined in terms of throughput requirements and packet loss probability limitations. The influence of the sizes of packets transmitted through the network on the QoS is investigated. Extensive simulations confirm the results obtained from the mathematical model.",2010,0, 4263,Resilient workflows for high-performance simulation platforms,"Workflow systems are considered here to support large-scale multiphysics simulations. Because the use of large distributed and parallel multi-core infrastructures is prone to software and hardware failures, the paper addresses the need for error recovery procedures. A new mechanism based on asymmetric checkpointing is presented. A rule-based implementation for a distributed workflow platform is detailed.",2010,0, 4264,A systems approach to verification using hardware acceleration,"The increasing complexity of system-on-chip devices has made verification of these devices an extremely difficult task. Additionally, there is the pressure of time-to-market that is faced by all semiconductor companies. Some of the functional complexity of such devices has made it necessary to run system level sequences, which were typically run only in the post-silicon phase, in the pre-silicon stages. However, the effective speeds of simulators used during functional verification do not lend themselves well to running system level tests. In this paper we describe how a hardware accelerator was used to execute system level tests. We share some of the results seen and some of the design issues that were detected using such an approach. We have illustrated this approach by choosing three distinct areas: (i) secure boot, (ii) built-in self-test sequences, and (iii) scan testing. We also believe that going to a system level approach using hardware acceleration helps to find several difficult corner case issues that remain undetected using other verification approaches.",2010,0, 4265,SES-based framework for fault-tolerant systems,"Embedded real-time systems are often used in harsh environments, for example engine control systems in automotive vehicles. In such ECUs (Engine Control Units), faults can lead to serious accidents. In this paper we propose a safety embedded architecture based on coded processing. This framework only needs two channels to provide fault tolerance and allows the detection and identification of permanent and transient faults. Once a fault is detected by an observer unit, the SES guard makes it visible and initiates a suitable failure reaction.",2010,0, 4266,Fault-tolerant defect prediction in high-precision foundry,"High-precision foundry production is subjected to rigorous quality controls in order to ensure a proper result. Such exams, however, are extremely expensive and only achieve good results in an a posteriori fashion. In previous works, we presented a defect prediction system that achieved a 99% success rate. Still, this approach did not sufficiently take into account the geometry of the casting part models, resulting in higher raw material requirements to guarantee an appropriate outcome.
In this paper, we present a fault-tolerant software solution for casting defect prediction that is able to detect possible defects directly in the design phase by analysing the volume of three-dimensional models. To this end, we propose advanced algorithms to recreate the topology of each foundry part, analyze its volume, and simulate the casting procedure, all of them specifically designed for a robust implementation on the latest graphics hardware that ensures an interactive design process.",2010,0, 4267,A component based approach for modeling expert knowledge in engine testing automation systems,"Test automation systems used for developing combustion engines comprise hardware components and the software functionality they depend on. Such systems usually perform similar tasks; they comprise similar hardware and execute similar software. Regarding their details, however, literally no two systems are exactly the same. In order to support such variations, the automation system has to be customized accordingly. Without a tool that properly supports both customization and standardization of functionality, customization can be time-consuming and error-prone. In this paper we describe a model-driven approach that is based on components with hardware and software views and that allows defining standard functionality for physical hardware. We show how, in this way, most of the automation system's standard functionality can be generated automatically, while still allowing custom functionality to be added.",2010,0, 4268,Fault location for a series compensated transmission line based on wavelet transform and an adaptive neuro-fuzzy inference system,"Fault diagnosis is a major area of investigation for power system and intelligent system applications. This paper proposes an efficient and practical algorithm based on using wavelet MRA coefficients for fault detection and classification, as well as accurate fault location. A three-phase transmission line with series compensation is simulated using MATLAB software. The line currents at both ends are processed using an online wavelet transform algorithm to obtain the wavelet MRA for fault recognition. The directions and magnitudes of spikes in the wavelet coefficients are used for fault detection and classification. After identifying the faulted section, the summation of the sixth-level MRA coefficients of the currents is fed to an adaptive neuro-fuzzy inference system (ANFIS) to obtain an accurate fault location. The proposed scheme is able to detect all types of internal faults at different locations either before or after the series capacitor, at different inception angles, and at different fault resistances. It can also detect the faulty phase(s) and can differentiate between internal and external faults. The simulation results show that the proposed method has the characteristic of a simple and clear recognition process. We conclude that the algorithm is ready for series compensated transmission lines.",2010,0, 4269,"Substation's switchgear's reliability evaluation as a part of transmission, distribution and generation modeling software",The paper gives the main idea of a substation switchgear reliability evaluation module as part of software that was developed at Riga Technical University within the framework of the project Information technology to ensure the sustainability of generation and transmission grid.
The module allows easy evaluation of switchgear up-state probability and interruption time for a large number of different types of switchgears, by standardization of switchgear bay faults and elements. It is also possible to evaluate the energy not supplied, and the costs related to energy not supplied, due to switchgear faults.,2010,0, 4270,The limitations of software signature and basic block sizing in soft error fault coverage,"This paper presents a detailed analysis of the efficiency of software-only techniques to mitigate SEU and SET in microprocessors. A set of well-known rules is presented and implemented automatically to transform an unprotected program into a hardened one. SEU and SET are injected in all sensitive areas of a MIPS-based microprocessor architecture. The efficiency of each rule and of combinations of them is tested. Experimental results show the limitations of the control-flow techniques in detecting the majority of SEU and SET faults, even when different basic block sizes are evaluated. A further analysis of the undetected faults with control-flow effects is done and five causes are explained. The conclusions can guide designers in developing more efficient techniques to detect these types of faults.",2010,0, 4271,Evaluation of a new low cost software level fault tolerance technique to cope with soft errors,"Increasing soft error rates make the protection of combinational logic against transient faults in future technologies a major issue for the fault tolerance community. Since not every transient fault leads to an error at the application level, software level fault tolerance has been proposed by several authors as a better approach. In this paper, a new software level technique to detect and correct errors due to transient faults is proposed and compared to a classic one, and the costs of detection and correction for both approaches are compared and discussed.",2010,0, 4272,Model driven testing of embedded automotive systems with timed usage models,"Extended Automation Method 2.0 (EXAM) is employed at AUDI AG to perform the testing of automotive systems. The main drawback of EXAM is that each test case must be devised and created individually. This procedure is apparently awkward and error-prone. Moreover, the development of increasingly complex functionality poses new challenges to the testing routine in industry. We employed Timed Usage Models to extend the EXAM test method. The usage model serves as the basis for the whole testing process, including test planning and test case generation. We automatically derived platform-independent test cases for execution in EXAM. Test-bench specific code was automatically generated for the test cases in EXAM, where they were executed on hardware-in-the-loop simulators (HILs). Usage models were created for functionalities from power train, comfort, and energy management. The application of usage models allowed the assessment of the test effort and the systematic generation of test cases.",2010,0, 4273,A Hybrid Approach for Detection and Correction of Transient Faults in SoCs,"Critical applications based on Systems-on-Chip (SoCs) require suitable techniques that are able to ensure a sufficient level of reliability. Several techniques have been proposed to improve the detection and correction capabilities for faults affecting SoCs. This paper proposes a hybrid approach able to detect and correct the effects of transient faults in SoC data memories and caches.
The proposed solution combines some software modifications, which are easy to automate, with the introduction of a hardware module, which is independent of the specific application. The method is particularly suitable to fit into a typical SoC design flow and is shown to achieve a better trade-off between the achieved results and the required costs than corresponding purely hardware or software techniques. In fact, the proposed approach offers the same fault-detection and -correction capabilities as a purely software-based approach, while it introduces nearly the same low memory and performance overhead as a purely hardware-based one.",2010,0, 4274,Automatic installation of software-based fault tolerance algorithms in programs generated by GCC compiler,"The problem of designing radiation-tolerant devices working in application-critical systems is becoming very important, especially if human life depends on the reliability of control mechanisms. One possible solution to this problem is pure software protection methods. They constitute a different category of techniques to detect transient faults and correct the corresponding errors. Software fault tolerance schemes are cheaper to implement since they can be used with standard, commercial off-the-shelf (COTS) components. Additionally, they do not require any hardware modification. In this paper, the author proposes a new implementation mechanism for software-based fault protection algorithms that is applied automatically during application compilation.",2010,0, 4275,Usage of the safety-oriented real-time OASIS approach to build deterministic protection relays,"As with any safety-related system, medium voltage protection relays have to comply with a Safety Integrity Level (SIL), as defined by the IEC 61508 standard. The safety function of the software part of protection relays is first to detect any faults within the supervised power network, and then to request the tripping of the circuit breakers in order to isolate the faulty portion of the network. However, it is required that detection and isolation of faults occur within a given time, as specified by the IEC 60255 standard. Schneider Electric currently achieves the demonstration that a protection relay is performing its safety function within such temporal constraints at the price of a costly phase of tests. The OASIS approach is a complete tool-chain to build safety-critical deterministic real-time systems, which enables the demonstration of system timeliness. In this paper, we show how we apply the OASIS approach to build a deterministic protection relay system. We designed a software platform called OASISepam, based on an existing product from Schneider Electric, namely the Sepam 10. We show a preliminary evaluation of our implementation on an STR710 ARM-based board that runs the OASIS kernel. Notably, we show that the observed worst-case end-to-end detection time of OASISepam fulfils the specified constraint expressed in the design phase and translated into the OASIS programming model. Consequently, the temporal behaviour of protection relays is mastered, thus reducing application development costs and allowing the optimization of selectivity.",2010,0, 4276,A die-based defect-limited yield methodology for line control,"Defect monitoring and control in the semiconductor fab has been well documented over the years.
The methodologies typically described in the literature involve controls through full-wafer defect counts, or defect densities, with attempts to correlate defects to electrical fail modes in order to predict the yield impact. These wafer-based methodologies are not adequate for determining the impact of defects on yield. Most notably, severe complications arise when applying wafer-based methods on wafers with mixed distributions (a mix of random and clustered defects). This paper describes the proper statistical treatment of defect data to estimate the yield impact for mixed-distribution wafer maps. This die-based, defect-limited yield (DLY) methodology properly addresses random and clustered defects, and applies a die-based multi-stage sampling method to select defects for review. The estimated yield impact of defects on the die can then be determined. Additionally, a die normalization technique is described that permits application of this die-based methodology on multiple products with different die sizes.",2010,0, 4277,A QoS constrained dynamic priority scheduling algorithm for process engine,"In this paper, a task scheduling algorithm for a process engine is proposed to not only maximize overall customer satisfaction but also guarantee QoS requirements. The new algorithm dynamically assigns priority to a task based on a weighted utility value that considers the predicted business value and the remaining time to execute the process instance the task belongs to. The business value of each kind of process instance is modeled as a utility function of response time, which reflects the inversely proportional relationship between customer satisfaction and response time. Experiments show that the proposed algorithm increases the total utility value with better QoS compared to traditional algorithms.",2010,0, 4278,Model-based dependency analysis in service delivery process management,"Building a well-defined and optimized service process is the key to delivering good service quality and service satisfaction. An effective method of dependency analysis in the service delivery process is the core of constructing an optimized process. However, with increasingly complicated services, there is a large number of task elements with extremely complex dependencies in the service process. In this situation, setting up the correct relationships among tasks becomes time-consuming and error-prone. In this paper, we propose a model-based dependency analysis to automatically build up the dependency relationships among tasks in the service delivery process. We first address the problems in analyzing dependencies and then present our approach. Based on the dependency analysis, we also propose several advanced analysis features on the process to guide users in optimizing the process by reducing its cost. A tool adopting our approach has also been implemented and is introduced. Based on the tool, a case study of a test service delivery process is presented to show the results.",2010,0, 4279,A decision support system utilizing a semantic agent,"The adaptability, rapidity, and focus on high-quality solutions offered by agile methodology have led to a paradigm shift in the software development process in many enterprises. Agile methodology is iterative in nature, with each iteration, i.e. timebox, lasting 2-6 weeks. Iterations involve small teams comprising 9-19 developers working through the entire software development life cycle. Agile methodology works on two basic principles.
The first is regular adaptation to changing circumstances, and the second is a focus on technical excellence, good design, and high-quality code. The first principle accommodates the fact that tasks in an agile project cannot be predicted more than a week in advance. Thus the need arises for project teams to incorporate experts in the problem domain, such that they are better equipped to handle changes rapidly. However, this methodology has been criticised as it may not bring about the benefits intended by the second principle unless practised by skilled programmers who can create high-quality code. Hence a project manager should be equipped with a highly skilled team. We propose the utilization of a semantic agent, which will act on behalf of the project manager and suggest experts based on a set of parameters. Our semantic agent is based on a semantic matching algorithm. This algorithm utilizes an ontology-based similarity framework to make recommendations and suggest training paths to satisfy the requirements of the project manager. The agent uses this algorithm to recommend employees based on their expertise, past experience, and availability. Further, based on recommendations made by the agent, we classify employees as experts and non-experts and suggest knowledge transfer methods to upgrade their skills.",2010,0, 4280,A qualitative and quantitative assessment method for software process model,"Aimed at the problems of high cost, incompleteness, and ambiguity that exist in traditional assessment methods for the Software Process Model (SPM), this paper proposes a qualitative and quantitative assessment method. On the basis of assessment theory and domain experience of SPM, the unclear goals in the project start-up phase are qualitatively described in the form of a problem set, and an expert problem domain is constructed as the assessment criterion for the implementation capability of SPM in the proposed method. By integrating qualitative and quantitative analysis using the multi-index synthetic assessment algorithm of AHP (Analytic Hierarchy Process), the weight vector of the goals and the reciprocal comparison matrix of the problem domain are calculated, and then the assessment result is obtained as a dimensionless index. In order to reduce the cost of assessment, questionnaires instead of project tracking and audits are adopted to extract project goals in the method. Meanwhile, SPM is comprehensively assessed from four aspects, including personnel, method, product, and process, and the assessment result can intuitively reflect the capability of different SPMs in numerical form. Finally, an example of how to assess and choose among four classical SPMs in the initial stage of a practical project is given to show the validity of this assessment method.",2010,0, 4281,A bidirectional graph transformation approach to analysis of concurrent software models,"The application of model driven software development still faces strong challenges. One challenge we focus on here is the analysis of concurrent software systems for detecting potential defects such as race conditions or atomicity violations. We adopt a BiG (Bidirectional Graph Transformation) approach to the analysis of concurrent software models. The essential idea of our approach is that we choose labeled transition systems as the behavior model of the concurrent system, and then conduct model transformation and extract a labeled partial order view from the labeled transition systems for software analysis.
The potential of BiG in this work is that model transformation is effectively supported based on queries, and models before and after transformation can also be synchronized automatically. This research is expected to benefit model-driven software development in that the analysis of state models is lightweight and can be automated. It also provides engineers with an interesting example of the application of bidirectional transformation to software analysis, which will encourage the improvement of BiG and its practical applications.",2010,0, 4282,A Pattern-Driven Generation of Security Policies for Service-Oriented Architectures,"Service-oriented Architectures support the provision, discovery, and usage of services in different application contexts. The Web Service specifications provide a technical foundation to implement this paradigm. Moreover, mechanisms are provided to face the new security challenges raised by SOA. To enable the seamless usage of services, security requirements can be expressed as security policies (e.g. WS-Policy and WS-SecurityPolicy) that enable the negotiation of these requirements between clients and services. However, the codification of security policies is a difficult and error-prone task due to the complexity of the Web Service specifications. In this paper, we introduce our model-driven approach that facilitates the transformation of architecture models annotated with simple security intentions into security policies. This transformation is driven by security configuration patterns that provide expert knowledge on Web Service security. Therefore, we will introduce a formalised pattern structure and a domain-specific language to specify these patterns.",2010,0, 4283,Detecting Data Inconsistency Failure of Composite Web Services Through Parametric Stateful Aspect,"Runtime monitoring of Web service compositions with WS-BPEL has been widely acknowledged as a significant approach to understand and guarantee the quality of services. However, most existing monitoring technologies only track patterns related to the execution of an individual process. As a result, possible inconsistency failures caused by implicit interactions among concurrent process instances cannot be detected. To address this issue, this paper proposes an approach to specify the behavior properties related to shared resources for web service compositions and verify their consistency with the aid of a parametric stateful aspect extension to WS-BPEL. Parameters are introduced in the pattern specification, which allows monitoring not only events but also the values bound to the parameters at runtime to keep track of data flow among concurrent process instances. An efficient implementation is also provided to reduce the runtime overhead of monitoring and event observation. Our experiments show that the proposed approach is promising.",2010,0, 4284,Software project schedule variance prediction using Bayesian Network,"The major objective of software engineering is to deliver high-quality software on time and within budget. But with the development of software technology and the rapid extension of application areas, the size and complexity of software are increasing so quickly that cost and schedule are often out of control. However, few groups or researchers have proposed an effective method to help project managers make reasonable project plans, resource allocations, and improvement actions.
This paper proposes a Bayesian Network to solve the problem of predicting and controlling the software schedule in order to achieve proactive management. Firstly, we choose the factors influencing the software schedule and determine some significant cause-effect relationships between factors. Then we analyze the data using statistical analysis and process the data using discretization. Thirdly, we construct the Bayesian structure of the software schedule and learn the conditional probability table of the structure. The structure and conditional probability table constitute the model for software schedule variance. The model can be used not only to help project managers predict the probability of software schedule variance but also to guide software developers in taking reasonable improvement actions. At last, an application shows how to use the model, and the result proves the validity of the model. In addition, a sensitivity analysis is developed with the model to locate the most important factor in software schedule variance.",2010,0, 4285,The design of alarm and control system for electric fire prevention based on MSP430F149,"This paper introduces an electrical fire control system which is composed of a stand-alone electrical fire detector and electrical fire monitoring equipment. The detector, which uses fieldbus communication technology, employs an MCU to realize main-route leakage protection for a low-voltage 3-phase 4-wire system and, at the same time, to measure, display, and control voltage, current, power, electric energy, temperature, and other parameters; through the monitoring equipment, on which management software written in VC++ has been installed, on-site operation can be monitored and alarm information can be detected in time. With RS-485 or Ethernet communication, the system can work with various standard power monitoring software packages, cyclically monitor and control more than 250 detectors, and record 100 types of faults and data with more than 12 months of storage time.",2010,0, 4286,Neural network fault prediction and its application,"In this paper, the forecasting algorithm employs a wavelet function to replace the sigmoid function in the hidden layer of a Back-Propagation Neural Network, and a Wavelet Neural Network prediction model is established to predict the Anode Effect (the most typical fault) by forecasting the change rate of cell resistance. The authors have developed forecasting software on the Visual Basic 6.0 platform. The simulation results show that the proposed method not only greatly improves fault prediction precision and real-time performance, but also improves operational efficiency. That means we can increase the energy efficiency and safety of the aluminum production process.",2010,0, 4287,Exploitation of Multiple Hyperspace Dimensions to Realize Coexistence Optimized Wireless Automation Systems,"The need for multiple radio systems in overlapping regions of a factory floor introduces a coexistence problem. The current research challenge is to design and realize radio systems that are able to achieve a desired quality-of-service (QoS) in harsh, time-varying, coexisting industrial environments. Conventional coexistence solutions attempt to accommodate coexisting systems in a single dimension, mostly in the frequency dimension. The concept of multidimensional electromagnetic (EM) space utilization provides optimal opportunities to achieve coexistence optimized solutions.
It can radically augment the shareable capacity of the resource space and provide optimal coexistence capabilities for radio systems. A software defined radio (SDR)-based cognitive radio (CR) is realized which can exploit the frequency, time, and power dimensions of the EM space to improve coexistence in the 2.4 GHz industrial, scientific, and medical (ISM) band. Furthermore, a conventional hardware defined radio (HDR) and additional simulations are used to test and prove the feasibility of the triple EM space utilization. Joint results of these experiments are presented in this contribution. Additionally, we present a novel computationally efficient algorithm to detect cyclic properties of industrial wireless systems.",2010,0, 4288,"Smart phone based medicine in-take scheduler, reminder and monitor","Out-patient medication administration has been identified as the most error-prone procedure in modern healthcare. Most medication administration errors were made when patients acquired prescribed and over-the-counter medicines from several drug stores and used them at home without proper guidance. In this paper, we introduce Wedjat, a smart phone application that helps patients to avoid these mistakes. Wedjat can remind its users to take the correct medicines on time and keep an in-take record for later review by healthcare professionals. Wedjat has two distinguishing features: (1) it can alert the patients about potential drug-drug/drug-food interactions and plan an in-take schedule that avoids these adverse interactions; (2) it can revise an in-take schedule automatically when a dose is missed. In both cases, the software always produces the simplest schedule with the least number of in-takes. Wedjat works with the calendar application available on most smart phones to issue medicine and meal reminders. It also shows pictures of the medicine and provides succinct in-take instructions. As a telemonitoring device, Wedjat can maintain medicine in-take records on board, synchronize them with a database on a host machine, or upload them onto an electronic medical records (EMR) system. A prototype of Wedjat has been implemented on the Windows Mobile platform. This paper introduces the design concepts of Wedjat with emphasis on its medication scheduling and grouping algorithms.",2010,0, 4289,Assessing the quality of scientific publications as educational content on digital libraries,"In recent years, new projects and research lines in several scientific areas have emerged in order to develop and improve the process of teaching/learning within the network of universities in Cuba. From a general viewpoint, applying information and communication technologies to the management of the information generated by higher education processes stimulates the use of digital libraries. In this paper, we aim at applying the LOQEVAL (Learning Object Quality EVALuation) proposal, which is based on the intensive use of ontologies, for assessing the quality of scientific publications as educational contents. The paper explains the whole process of assessment using the use case of the digital library of the Agrarian University of Havana, which is currently under development.",2010,0, 4290,New Supervisory Control and Data Acquisition (SCADA) based fault isolation system for low voltage distribution systems,"This paper proposes a new supervisory control and data acquisition (SCADA) based fault isolation system for the low voltage (415/240 V) distribution system.
It presents a customized distribution automation system (DAS) for automatic operation and secure fault isolation tested in the Malaysian utility distribution system, the Tenaga Nasional Berhad (TNB) distribution system. It presents the first research work on customer-side automation for operating and controlling between the consumer and the substation in an automated manner. The paper focuses on the development of very secure automated fault isolation, tested against TNB distribution operating principles, in which the fault is detected, identified, isolated, and cleared in a few seconds by just clicking the mouse of a laptop or desktop connected to the system. A Supervisory Control and Data Acquisition (SCADA) technique has been developed and utilized to build a Human Machine Interface (HMI) that provides Graphical User Interface (GUI) functions for the engineers and technicians to monitor and control the system. Microprocessor-based Remote Monitoring Devices have been used, with customized software integrated with the hardware. Power Line Carrier (PLC) has been used as the communication medium between the consumer and the substation. As a result, a complete DAS and fault isolation system has been developed for remote automated operation, cost reduction, maintenance time saving, and less human intervention during fault conditions.",2010,0, 4291,Computational Intelligent Gait-Phase Detection System to Identify Pathological Gait,"An intelligent gait-phase detection algorithm based on kinematic and kinetic parameters is presented in this paper. The gait parameters do not vary distinctly for each gait phase; therefore, it is complex to differentiate gait phases with respect to a threshold value. To overcome this intricacy, the concept of fuzzy logic was applied to detect gait phases with respect to fuzzy membership values. A real-time data-acquisition system was developed consisting of four force-sensitive resistors and two inertial sensors to obtain foot-pressure patterns and the knee flexion/extension angle, respectively. The detected gait phases could be further analyzed to identify abnormality occurrences and, hence, are applicable to determining accurate timing for feedback. The large amount of data required for quality gait analysis necessitates the utilization of information technology to store, manage, and extract required information. Therefore, a software application was developed for real-time acquisition of sensor data, data processing, database management, and a user-friendly graphical user interface as a tool to simplify the task of clinicians. The experiments carried out to validate the proposed system are presented along with the analysis of results for normal and pathological walking patterns.",2010,0, 4292,Towards Self-Assisted Troubleshooting for the Deployment of Private Clouds,"Acquiring a private computing cloud is the first step that an enterprise would take to enable the cloud model and get its considerable benefits while keeping control within the enterprise. The enterprise-level applications that provide the infrastructure enabling cloud computing services are typically built by integrating inter-related complex software components. Critical challenges of these applications are the increasing level of inter-component dependencies and the customized growth, which make recurrent deployment of such applications, such as that required in private clouds, labor-intensive and error-prone.
In this paper we investigate the types of issues faced when deploying a cloud computing management infrastructure and propose a solution to self-assist the deployment. We show how, by leveraging virtual image technologies, we can detect faulty installations and their signatures early in the deployment process. We also propose a methodology to capture these signatures in a shared repository and update them for reuse in subsequent deployments, in the form of two-level signature patterns. We explore the perspectives of our solution and the criteria of analysis.",2010,0, 4293,Proving transaction and system-level properties of untimed SystemC TLM designs,"Electronic System Level (ESL) design manages the enormous complexity of today's systems by using abstract models. In this context, Transaction Level Modeling (TLM) is state-of-the-art for describing complex communication without all the details. As an ESL language, SystemC has become the de facto standard. Since the SystemC TLM models are used for early software development and as a reference for hardware implementation, their correct functional behavior is crucial. Admittedly, the best possible verification quality can be achieved with formal approaches. However, formal verification of TLM models is a hard task. Existing methods basically consider local properties or have extremely high run-time. In contrast, the approach proposed in this paper can verify true TLM properties, i.e. major TLM behavior, such as the effect of a transaction and that the transaction is only started after a certain event, can be proven. Our approach works as follows: after a fully automatic SystemC-to-C transformation, the TLM property is mapped to monitoring logic using C assertions and finite state machines. To detect a violation of the property, the approach uses a BMC-based formulation over the outermost loop of the SystemC scheduler. In addition, we improve this verification method significantly by employing induction on the C model, forming a complete and efficient approach. As shown by experiments, state-of-the-art proof techniques allow proving important non-trivial behavior of SystemC TLM designs.",2010,0, 4294,A Neural network based approach for modeling of severity of defects in function based software systems,"A lot of work has been done on predicting the fault proneness of software systems. However, it is the severity of the faults that is more important than the number of faults existing in the developed system, as the major faults matter most for a developer and need immediate attention. Neural networks have already been applied in software engineering applications to build reliability growth models and to predict gross change or reusability metrics. Neural networks are sophisticated non-linear modeling techniques that are able to model complex functions. Neural network techniques are used when the exact nature of the inputs and outputs is not known. A key feature is that they learn the relationship between input and output through training. In this paper, five Neural Network based techniques are explored and a comparative analysis is performed for modeling the severity of faults present in function based software systems. NASA's public domain defect dataset is used for the modeling. The comparison of the different algorithms is made on the basis of Mean Absolute Error, Root Mean Square Error, and Accuracy values.
It is concluded that, out of the five neural network based techniques, the Resilient Backpropagation algorithm based Neural Network is the best for classifying software components into different levels of fault severity. Hence, the proposed algorithm can be used to identify modules that have major faults and require immediate attention.",2010,0, 4295,A Density Based Clustering approach for early detection of fault prone modules,"The quality of a software component can be measured in terms of fault proneness data. Quality estimations are made using fault proneness data available from previously developed projects of a similar type and training data consisting of software measurements. To predict the fault proneness of modules, different techniques have been proposed, including statistical methods, machine learning techniques, neural network techniques, and clustering techniques. The aim of the proposed approach is to investigate whether metrics available in the early lifecycle (i.e. requirement metrics), metrics available in the late lifecycle (i.e. code metrics), and metrics available in the early lifecycle (i.e. requirement metrics) combined with metrics available in the late lifecycle (i.e. code metrics) can be used to identify fault prone modules using the Density Based Clustering technique. This approach has been tested with real defect datasets of the NASA software project named PC1. Predicting faults early in the software life cycle can be used to achieve high software quality. The results show that the fusion of requirement and code metrics is the best prediction model for detecting faults, as compared with the commonly used code-based model.",2010,0, 4296,A new low cost fault tolerant solution for mesh based NoCs,"In this paper a new fault tolerant routing algorithm with minimal hardware requirements and extremely high fault tolerance for 2D-mesh based NoCs is proposed. The LCFT (Low Cost Fault Tolerant) algorithm removes the main limitations (forbidden turns) of the well-known XY algorithm. Not only are many new routes added to the list of selectable paths while deadlock freedom is preserved, but a high level of fault tolerance is also achieved. All of this comes only at the cost of adding one more virtual channel (for a total of two). Results show that the LCFT algorithm works well even under severe fault conditions in comparison with previously published methods.",2010,0, 4297,A history data based traffic incident impact analyzing and predicting method,"Traffic incidents are a main factor that reduces the capacity and service quality of roads. Due to the absence of efficient incident impact analysis and prediction methods, the traffic congestion and secondary accidents brought about by traffic incidents can hardly be avoided. In this paper we propose a traffic incident impact analysis and prediction method based on historical data. By extracting regularity and volatility information from historical data, we find a solution to analyze both parts that comprise the traffic flow status: the road condition without the incident, and the impact of the incident. Thereby we can predict the traffic condition under incidents by estimating the two components separately and adding them together.
Experimental results show that our solution could simulate and predict the impact tendency of traffic incidents with high accuracy.",2010,0, 4298,SBST for on-line detection of hard faults in multiprocessor applications under energy constraints,"Software-Based Self-Test (SBST) has emerged as an effective method for on-line testing of processors integrated in non safety-critical systems. However, especially for multi-core processors, the notion of dependability encompasses not only high quality on-line tests with minimum performance overhead but also methods for preventing the generation of excessive power and heat that exacerbate silicon aging mechanisms and can cause long term reliability problems. In this paper, we initially extend the capabilities of a multiprocessor simulator in order to evaluate the overhead in the execution of the useful application load in terms of both performance and energy consumption. We utilize the derived power evaluation framework to assess the overhead of SBST implemented as a test thread in a multiprocessor environment. A range of typical processor configurations is considered. The application load consists of some representative SPEC benchmarks, and various scenarios for the execution of the test thread are studied (sporadic or continuous execution). Finally, we apply in a multiprocessor context an energy optimization methodology that was originally proposed to increase battery life for battery-powered devices. The methodology reduces significantly the energy and performance overhead without affecting the test coverage of the SBST routines.",2010,0, 4299,Identifying effective software engineering (SE) team personality types composition using rough set approach,"This paper presents an application of rough sets in identifying effective personality-type composition in software engineering (SE) teams. Identifying effective personality composition in teams is important for determining software project success. It was shown that a balance of the personality types Sensing (S), Intuitive (N), Thinking (T) and Feeling (F) assisted teams in achieving higher software quality. In addition, Extroverted (E) members also had an impact on team performance. Even though the size of empirical data was too small, the rough-set technique allows the generation of significant personality-type composition rules to assist decision makers in forming effective teams. Future works will include more empirical data in order to develop predicting model of teams' performance based on personality types.",2010,0, 4300,A proposed reusability attribute model for aspect oriented software product line components,"Reusability assessment is vital for software product line due to reusable nature of its core components. Reusability being a high level quality attribute is more relevant to the software product lines as the entire set of individual products depend on the software product line core assets. Recent research proposes the use of aspect oriented techniques for product line development to provide better separation of concerns, treatment of crosscutting concerns and variability management. There are quality models available which relate the software reusability to its attributes. These models are intended to assess the reusability of software or a software component. The assessment of aspect oriented software and a core asset differs from the traditional software or component. 
There is a need to develop a reusability model that relates the reusability attributes of aspect oriented software product line assets. This research work is an effort towards the development of a reusability attribute model for software product line development using aspect oriented techniques.",2010,0, 4301,Establishing a defect prediction model using a combination of product metrics as predictors via Six Sigma methodology,"Defect prediction is an important aspect of the Product Development Life Cycle. The rationale for knowing the predicted number of functional defects early in the lifecycle, rather than just finding as many defects as possible during the testing phase, is to determine when to stop testing and to ensure all in-phase defects have been found before a product is delivered to the intended end user. It also ensures that wider test coverage is put in place to discover the predicted defects. This research aims to achieve zero known post-release defects of the software delivered to the end user by MIMOS Berhad. To achieve the target, the research effort focuses on establishing a test defect prediction model using the Design for Six Sigma methodology in a controlled environment where all the factors contributing to the defects of the product are within MIMOS Berhad. It identifies the requirements for the prediction model and how the model can benefit them. It also outlines the possible predictors associated with defect discovery in the testing phase. An analysis of the repeatability and capability of test engineers in finding defects is demonstrated. This research also describes the process of identifying the characteristics of the data that need to be collected and how to obtain them. The relationship of customer needs with the technical requirements of the proposed model is then clearly analyzed and explained. Finally, the proposed test defect prediction model is demonstrated via multiple regression analysis. This is achieved by incorporating testing metrics and development-related metrics as the predictors. The achievement of the whole research effort is described at the end of this study, together with the challenges faced and recommendations for future research work.",2010,0, 4302,Formalization of UML class diagram using description logics,"The Unified Modelling Language (UML) is a standard object-oriented modelling notation that is widely accepted and used in the software development industry. In general, the UML notation is informally defined in terms of natural language description (English) and the Object Constraint Language (OCL), which makes it difficult to analyze formally and error-prone. In this paper, we elucidate the preliminary results of an approach to formally define UML class diagrams using a logic-based representation formalism. We present how to define the UML class diagram using Description Logics (DLs).",2010,0, 4303,Software-Implemented Hardware Error Detection: Costs and Gains,"Commercial off-the-shelf (COTS) hardware is becoming less and less reliable because of the continuously decreasing feature sizes of integrated circuits. But due to economic constraints, more and more critical systems will be based on basically unreliable COTS hardware. Usually in such systems redundant execution is used to detect erroneous executions. However, arithmetic codes promise much higher error detection rates. Yet, they are generally assumed to generate very large slowdowns. In this paper, we assess and compare the runtime overhead and error detection capabilities of redundancy and several arithmetic codes. 
Our results demonstrate a clear trade-off between runtime costs and gained safety. However, unexpectedly the runtime costs for arithmetic codes compared to redundancy increase only linearly, while the gained safety increases exponentially.",2010,0, 4304,FTDIS: A Fault Tolerant Dynamic Instruction Scheduling,"In this work, we target the robustness for controller scheduler of type Tomasulo for SEU faults model. The proposed fault-tolerant dynamic scheduling unit is named FTDIS, in which critical control data of scheduler is protected from driving to an unwanted stage using Triple Modular Redundancy and majority voting approaches. Moreover, the feedbacks in voters produce recovery capability for detected faults in the FTDIS, enabling both fault mask and recovery for system. As the results of analytical evaluations demonstrate, the implemented FTDIS unit has over 99% fault detection coverage in the condition of existing less than 4 faults in critical bits. Furthermore, based on experiments, the FTDIS has a 200% hardware overhead comparing to the primitive dynamic scheduling control unit and about 50% overhead in comparision to a full CPU core. The proposed unit also has no performance penalty during simulation. In addition, the experiments show that FTDIS consumes 98% more power than the primitive unit.",2010,0, 4305,From Formal Specification in Event-B to Probabilistic Reliability Assessment,"Formal methods, in particular the B Method and its extension Event-B, have proven their worth in the development of many complex software-intensive systems. However, while providing us with a powerful development platform, these frameworks poorly support quantitative assessment of dependability attributes. Yet, such an assessment would facilitate not only system certification but also system development by guiding it towards the design optimal from the dependability point of view. In this paper we demonstrate how to integrate reliability assessment performed by model checking into refinement process in Event-B. Such an integration allows us to combine logical reasoning about functional correctness with probabilistic reasoning about reliability. Hence we obtain a method that enables building the systems that are not only correct-by-construction but also have a predicted level of reliability.",2010,0, 4306,Assessing Dependability for Mobile and Ubiquitous Systems: Is there a Role for Software Architectures?,A traditional research direction in SA and dependability is to deduce system dependability properties from the Knowledge of the system Software Architecture. This will reflect the fact that traditional systems are built by using the closed world assumption. In mobile and ubiquitous systems this line of reasoning becomes too restrictive to apply due to the inherent dynamicity and heterogeneity of the systems under consideration. Indeed these systems need to relax the closed world assumption and to consider an open world where the system/component/user context is not fixed. In other words the assumption that the system SA is known and fixed at an early stage of the system development does not apply anymore. On the contrary the ubiquitous scenario promotes the view that systems can be dynamically composed out of available components whose dependability can at most be assessed in terms of components assumptions on the system context. 
Moreover dependability cannot be anymore designed as an absolute context free property of the system rather it may change as long as it allows the satisfaction of the user's requirements and needs. In this setting SA can only be dynamically induced by taking into consideration the respective assumptions of the system components and the current user needs. The talk will illustrate this challenge and will discuss a set of possible future research directions.,2010,0, 4307,A Grouping-Based Strategy to Improve the Effectiveness of Fault Localization Techniques,"Fault localization is one of the most expensive activities of program debugging, which is why the recent years have witnessed the development of many different fault localization techniques. This paper proposes a grouping-based strategy that can be applied to various techniques in order to boost their fault localization effectiveness. The applicability of the strategy is assessed over - Tarantula and a radial basis function neural network-based technique; across three different sets of programs (the Siemens suite, grep and gzip). Results are suggestive that the grouping-based strategy is capable of significantly improving the fault localization effectiveness and is not limited to any particular fault localization technique. The proposed strategy does not require any additional information than what was already collected as input to the fault localization technique, and does not require the technique to be modified in any way.",2010,0, 4308,Mining Performance Regression Testing Repositories for Automated Performance Analysis,"Performance regression testing detects performance regressions in a system under load. Such regressions refer to situations where software performance degrades compared to previous releases, although the new version behaves correctly. In current practice, performance analysts must manually analyze performance regression testing data to uncover performance regressions. This process is both time-consuming and error-prone due to the large volume of metrics collected, the absence of formal performance objectives and the subjectivity of individual performance analysts. In this paper, we present an automated approach to detect potential performance regressions in a performance regression test. Our approach compares new test results against correlations pre-computed performance metrics extracted from performance regression testing repositories. Case studies show that our approach scales well to large industrial systems, and detects performance problems that are often overlooked by performance analysts.",2010,0, 4309,Scenarios-Based Testing of Systems with Distributed Ports,"Current distributed systems are usually composed of several distributed components that communicate through specific ports. When testing these systems we separately observe sequences of inputs and outputs at each port rather than a global sequence and potentially cannot reconstruct the global sequence that occurred. In this paper we concentrate on the problem of formally testing systems with distributed components that, in general, have independent behaviors but that at certain points of time synchronization can occur. These situations appear very often in large real systems that regularly go through maintenance and/or update operations. 
If we represent the specification of the global system by using a state-based notation, we say that a scenario is any sequence of events that happens between two of these operations; we encode these special operations by marking some of the states of the specification. In order to assess the appropriateness of our new framework, we show that it represents a conservative extension of previous implementation relations defined in the context of the distributed test architecture: If we consider that all the states are marked then we simply obtain ioco (the classical relation for single-port systems) while if no state is marked then we obtain dioco (our previous relation for multi-port systems).",2010,0, 4310,Adaptive Random Testing by Exclusion through Test Profile,"One major objective of software testing is to reveal software failures such that program bugs can be removed. Random testing is a basic and simple software testing technique, but its failure-detection effectiveness is often controversial. Based on the common observation that program inputs causing software failures tend to cluster into contiguous regions, some researchers have proposed that an even spread of test cases should enhance the failure-detection effectiveness of random testing. Adaptive random testing refers to a family of algorithms to evenly spread random test cases based on various notions. Restricted random testing, an algorithm to implement adaptive random testing by the notion of exclusion, defines an exclusion region around each previously executed test case, and selects test cases only from outside all exclusion regions. Although having a high failure-detection effectiveness, restricted random testing has a very high computation overhead, and it rigidly discards all test cases inside any exclusion region, some of which may reveal software failures. In this paper, we propose a new method to implement adaptive random testing by exclusion, where test cases are simply selected based on a well-designed test profile. The new method has a low computation overhead and it does not omit any possible program inputs that can detect failures. Our experimental results show that the new method not only spreads test cases more evenly but also brings a higher failure-detection effectiveness than random testing.",2010,0, 4311,Fault Localization Based on Dynamic Slicing and Hitting-Set Computation,"Slicing is an effective method for focusing on relevant parts of a program in case of a detected misbehavior. Its application to fault localization alone and in combination with other methods has been reported. In this paper we combine dynamic slicing with model-based diagnosis, a method for fault localization, which originates from Artificial Intelligence. In particular, we show how diagnosis, i.e., root causes, can be extracted from the slices for erroneous variables detected when executing a program on a test suite. We use these diagnoses for computing fault probabilities of statements that give additional information to the user. Moreover, we present an empirical study based on our implementation JSDiagnosis and a set of Java programs of various size from 40 to more than 1,000 lines of code.",2010,0, 4312,Software Operational Profile Modeling and Reliability Prediction with an Open Environment,"With the continuous development of the internet, operational environments of software have undergone tremendous change. One of the significant changes in the software operational environment is more open. 
The operational profiles under open environment are much more complex and unpredictable. It is an important property for the operational profiles under open environment to change continually over time. In the traditional software operational profile modeling, there is an assumption that software operational profile should be defined in a certain probability space. However, for the open operational software systems, it is difficult to define such a certain probability space, because there is often not a limited boundary for the operations of such system. So, the question arises: how to model the software operational profile when software operational environment is open? In the paper, partial probability space is proposed to describe the varying probability space, and based on this concept an operational profile model under open operational environment is developed.",2010,0, 4313,A Methodology for Continuos Quality Assessment of Software Artefacts,"Although some methodologies for evaluating the quality of software artifacts exist, all of these are isolated proposals, which focus on specific artifacts and apply specific evaluation techniques. There is no generic and flexible methodology that allows quality evaluation of any kind of software artifact, regardless of type, much less a tool that supports this. To tackle that problem in this paper, we propose the CQA Environment, consisting of a methodology (CQA-Meth) and a tool that implements it (CQA-Tool). We began applying this environment in the evaluation of the quality of UML models (use cases, class and statechart diagrams). To do so, we have connected CQA-Tool to the different tools needed to assess the quality of models, which we also built ourselves. CQA-Tool, apart from implementing the methodology, provides the capacity for building a catalogue of evaluation techniques that integrates the evaluation techniques (e.g. metrics, checklists, modeling conventions, guidelines, etc.) which are available for each software artifact. CQA Environment is suitable for use by companies that offer software quality evaluation services, especially for clients who are software development organizations and who are outsourcing software construction. They will obtain an independent quality evaluation of the software products they acquire. Software development organizations that perform their own evaluation will be able to use it as well.",2010,0, 4314,Increasing System Availability with Local Recovery Based on Fault Localization,"Due to the fact that software systems cannot be tested exhaustively, software systems must cope with residual defects at run-time. Local recovery is an approach for recovering from errors, in which only the defective parts of the system are recovered while the other parts are kept operational. To be efficient, local recovery must be aware of which component is at fault. In this paper, we combine a fault localization technique (spectrum-based fault localization, SFL) with local recovery techniques to achieve fully autonomous fault detection, isolation, and recovery. A framework is used for decomposing the system into separate units that can be recovered in isolation, while SFL is used for monitoring the activities of these units and diagnose the faulty one whenever an error is detected. We have applied our approach to MPlayer, a large open-source software. 
We have observed that SFL can increase the system availability by 23.4% on average.",2010,0, 4315,Active Monitoring for Control Systems under Anticipatory Semantics,"As the increment of software complexity, traditional software analysis, verification and testing techniques can not fully guarantee the faultlessness of deployed systems. Therefore, runtime verification has been developed to continuously monitor the running system. Typically, runtime verification can detect property violations but cannot predict them, and consequently cannot prevent the failures from occurring. To remedy this weakness, active monitoring is proposed in this paper. Its purpose is not repairing the faults after failures have occurred, but predicting the possible faults in advance and triggering the necessary steering actions to prevent the software from violating the property. Anticipatory semantics of linear temporal logic is adopted in monitor construction here, and the information of system model is used for successful steering and prevention. The prediction and prevention will form a closed-loop feedback based on control theory. The approach can be regarded as an effective complement of traditional testing and verification techniques.",2010,0, 4316,An Integrated Support for Attributed Goal-Oriented Requirements Analysis Method and its Implementation,"This paper presents an integrated supporting tool for Attributed Goal-Oriented Requirements Analysis (AGORA), which is an extended version of goal-oriented analysis. Our tool assists seamlessly requirements analysts and stakeholders in their activities throughout AGORA steps including constructing goal graphs with group work, utilizing domain ontologies for goal graph construction, detecting various types of conflicts among goals, prioritizing goals, analyzing impacts when modifying a goal graph, and version control of goal graphs.",2010,0, 4317,A Novel Approach to Automatic Test Case Generation for Web Applications,"As the quantity and breadth of Web-based software systems continue to grow rapidly, it is becoming critical to assure the quality and reliability of a Web application. Web application testing is a challenging work owing to its dynamic behaviors and complex dependencies. Test case generation, in general, is costly and labor-intensive processes. How to automatically generate effective test case is important for Web applications testing. In this paper, we propose the two-phase approach to generate test cases automatically by analyzing structure of the Web application. We define the dependence relationships, data dependence and control dependence, in the Web application and detect the relationships from source code and improve the way of test case generation with analysis result. The experimental result show that our approach can reduce test case set in test case generation processes.",2010,0, 4318,Adaptive Interaction Fault Location Based on Combinatorial Testing,"Combinatorial testing aims to detect interaction faults, which are triggered by interaction among parameters in system, by covering some specific combinations of parametric values. Most works about combinatorial testing focus on detecting such interaction faults rather than locating them. Based on the model of interaction fault schema, in which the interaction fault is described as a minimum fault schema and several corresponding parent-schemas, we propose an iterative adaptive interaction fault location technique for combinatorial testing. 
In order to locate interaction faults that detected in combinatorial testing, such technique utilizes delta debugging strategy to filtrate suspicious schemas by generating and running additional test cases iteratively. The properties, which include both recall and precision, of adaptive interaction fault location techniques are also analyzed in this paper. Analytical results suggest that the high scores in both recall and precision are guaranteed. It means that such technique can provide an efficient guidance for the applications of combinatorial testing.",2010,0, 4319,Hardware/software co-design to secure crypto-chip from side channel analysis at design time,"Side channel analysis (SCA) is a powerful physical cryptanalysis. In common industrial practice, the SCA security evaluation is performed after the devices are manufactured. However, the post-manufactured analysis is time consuming, error prone and expensive. Motivated by the spirit of hardware/software co-design, a design-time SCA analysis is proposed to improve the efficiency. Firstly, a general SCA leakage model is presented, and then the flow of design-time security analysis is described. After that, a flexible simulation and analytical environment is built to assess the different SCA leakage. Finally, an illustrative experiment, which performs a design-time SCA analysis on the crypto-chip with AES-128 core, is given.",2010,0, 4320,Information flow metrics and complexity measurement,"Complexity metrics takes an important role to predict fault density and failure rate in the software deployment. Information flow represents the flow of data in collective procedures in the processes of a concrete system. The present static analysis of source code and the ability of metrics are incompetent to predict the actual amount of information flow complexity in the modules. In this paper, we propose metrics IF-C, F-(I+O), (C-L) and (P-C) focuses on the improved information flow complexity estimation method, which is used to evaluate the static measure of the source code and facilitates the limitation of the existing work. Using the IF-C method, the framework of information flow metrics can be significantly combined with external flow of the active procedure, which depends on various levels of procedure call in the code.",2010,0, 4321,The research on the algorithm of inner and outer contours' distinguishing and the problems of processing domains' identification,"At the present time, the choice of processing domains in CAM softwares used by enterprises is the contour boundary's hand sorting or the single machined surface's selection. These ways are low efficiency and error-prone, when they are used in the choice of various domains at one time or the processing domains' boundaries are complex. According to the need for planning tool path on the mold of tire-sidewall patterns, inner and outer contours and processing domains must be distinguished at first, so an algorithm of inner and outer contours' distinguishing is proposed which based on the algorithm of ray cross-point. Firstly, the algorithm takes one point from each contour; secondly, the mutual position relationship between contours is obtained by determining the inclusion relation of a point on one contour with other contours; at last, some solution strategies are proposed according to the targeted analysis on the composition of various domains in the practical processing.",2010,0, 4322,Notice of Retraction
Accurate and efficient reliability Markov model analysis of predictive hybrid m-out-of-n systems,"Notice of Retraction

After careful and considered review of the content of this paper by a duly constituted expert committee, this paper has been found to be in violation of IEEE's Publication Principles.

We hereby retract the content of this paper. Reasonable effort should be made to remove all past references to this paper.

The presenting author of this paper has the option to appeal this decision by contacting TPII@ieee.org.

In recent years we perceive many researches on new techniques for improving the performance of fault-tolerant control systems. Prediction of correct system output is one of these techniques which can calculate and predict the probable correct system output when the ordinary techniques are incapacitated to make decision. In this paper, a performance model of predictive m-out-of-n hybrid redundancy is introduced and analyzed. The results of equations and mathematical relations based on Markov model demonstrated that this approach can improve the system reliability in comparison with traditional m-out-of-n system.",2010,0, 4323,Notice of Retraction
Probability-based safety related requirements change impact analysis,"Notice of Retraction

After careful and considered review of the content of this paper by a duly constituted expert committee, this paper has been found to be in violation of IEEE's Publication Principles.

We hereby retract the content of this paper. Reasonable effort should be made to remove all past references to this paper.

The presenting author of this paper has the option to appeal this decision by contacting TPII@ieee.org.

Software requires change throughout its lifetime, which has been found to be a particular problem in terms of schedule, budget, and quality. The problems are extremely important for critical software, which needs to be validated to determine whether the changed requirements will affect safety. Therefore, change impact analysis is very important in safety-critical systems. This paper proposes a method to perform change impact analysis in a quantitative way with risk probability. First, a complete safety assessment model and a reliable model to trace the change are proposed based on probabilistic assessment. A case study on well-known systems, such as pressure systems, is analyzed. Then the application of our proposed traceability method is discussed, which proves the applicability of our approach to change impact analysis.",2010,0, 4324,Research on automatic detection for defect on bearing cylindrical surface,"At present, manual methods are still used to detect defects on micro bearing surfaces. The method is laborious and time consuming. Moreover, it has a low efficiency and a high miss-detection rate. For this reason, an on-line automatic detection system is developed using a linear CCD. The proposed system is composed of three subsystems: the detection environment setting subsystem, the automatic detection subsystem, and the data management subsystem. In order to make the above subsystems cooperate with each other, control software is developed with LabVIEW 8.5 as the platform. Experimental results indicate that the system realizes the predefined functions and caters to the requirements of stability, real-time performance, and accuracy. Thus it can be applied in actual production.",2010,0, 4325,Fractal study on fault system of Carboniferous in Junggar Basin based on GIS,"A fault system is significant evidence of tectonic movement during crustal tectonic evolution and may play a more important role in the oil-gas accumulation process than other tectonic types in a sedimentary basin. Carboniferous surface faults in the Junggar Basin are well developed and vary in size and distribution. There are about 200 faults in the Carboniferous, and 187 of them are thrust faults. Chaos-fractals theories have been widely investigated and great progress has been made in the past three decades. One of the important concepts, fractal dimension, has become a powerful tool for describing non-linear dynamical system characteristics. Clustered objects in nature are often fractal, and fault system distribution in space is inhomogeneous and always occurs in groups, so we can describe the spatial distribution of faults in terms of fractal dimension. The fractal dimension of a fault system is a comprehensive factor associated with fault number, size, combination modes and dynamic mechanism, so it can evaluate the complexity of the fault system quantitatively. The relationship between fault systems and oil-gas accumulation is a key and difficult problem in petroleum geology, and fractal dimension is a new tool for describing fault distribution and predicting potential areas of hydrocarbon resources. A Geographic Information System (GIS) is a kind of technological system for collecting, storing, managing, computing, analyzing, displaying and describing geospatial information, supported by computer software and hardware. In the last 15-20 years, GIS has been increasingly used to address a wide variety of geoscience problems. 
Weights-of-evidence models use the theory of conditional probability to quantify the spatial association between fractal dimension and oil-gas accumulation. The weights of evidence are combined with the prior probability of occurrence of oil-gas accumulation using Bayes' rule in a loglinear form, under an assumption of conditional independence of the dimension maps, to derive the posterior probability of occurrence of oil-gas accumulation. In this paper, we first vectorize the fault system in the Carboniferous of the Junggar Basin in GIS software and store it as a polyline layer in a GIS Geodatabase for management and analysis; we then calculate three types of fractal dimension, namely box dimension, information dimension and cumulative length dimension, using the spatial functions of GIS; finally, we use the weights-of-evidence model to calculate, in the GIS environment, the correlation coefficients between oil-gas accumulation and the three types of fractal dimension in order to quantify the importance of the fault system.",2010,0, 4326,Simulation of city development using cellular automata and agent based integrated model A case study of Qingdao city,"This paper first develops an integrated model of urban land use change using cellular automata (CA) and agent techniques. In this model, every cell not only has the current state information of the cell, but also has information related to land use change such as policy and natural condition information; agents with different roles are used to process the information contained in the cell and then decide the state of the cell at the next time step. For instance, the natural condition information analysis agent is in charge of processing natural condition information; the policy information analysis agent is in charge of processing policy information; the surrounding sensor agent is in charge of sensing the state of the cells in the neighborhood; and the logical decision agent makes the logical decision based on the analytical results processed by the agents mentioned above, and decides whether the cell changes its state in the next time step. This model can be extended according to the increasing understanding of the development mechanism of the city, which is known as a complex giant system. Thanks to the addition of multi-role agents, the model has a global vision of the focused area and thereby has complexity and flexibility in processing, which can effectively simulate urban development. Over the past 100 years, Qingdao has changed from a small fishing village into an important seaport, tourist, information technology and ocean science city in China. The city has experienced three main phases of development, and the driving forces behind each development are different. In this paper, the model is used to simulate the three development scenes in Qingdao's history. In the simulation, the driving forces in each phase are quantified and implemented, and the agents with different roles are assigned specific algorithms correspondingly. The simulated results are compared with maps or remote sensing data. Although the quality of the fine comparison of the simulated results and the real data is affected by the poor quality of the real data, because the real data of Qingdao City before 1949 is hard to obtain, the results show that the model has a good capability to simulate the city's development with the selected parameters of Qingdao city. 
The model has a good potential to analyze the driving forces related to city development and predict the city development under different scenes.",2010,0, 4327,A novel watermarking algorithm for protecting audio aggregation based on ICA,"This paper proposes an audio watermarking algorithm based on ICA, breaks through the traditional watermarking which only protects the copyright of a single audio, implements copyright protection for an audio aggregation. The implementation process of the algorithm is composed of following steps: firstly, the wavelet coefficients of each audio work in the aggregation are set as observation data, and independent components of the observation data are extracted using ICA separate matrix, and then two most important components are chosen to embed the common watermark; secondly, ICA is used to detect watermark. According to experimental results, after embedding watermark into original audio aggregation, there is little impact on audio aggregation's perceptual quality, and when detected audio aggregation suffered from various attacks, the algorithm can extract watermark from the attacked detected audio aggregation. The proposed algorithm has good imperceptibility, strong robustness, low computational complexity, and achieve blind detection.",2010,0, 4328,An improved method to simplify software metric models constructed with incomplete data samples,"Software metric models are useful in predicting the target software metric(s) for any future software project based on the project's predictor metric(s). Obviously, the construction of such a model makes use of a data sample of such metrics from analogous past projects. However, incomplete data often appear in such data samples. Worse still, the necessity to include a particular continuous predictor metric or a particular category for a certain categorical predictor metric is most likely based on an experience-related intuition that the continuous predictor metric or the category matters to the target metric. However, in the presence of incomplete data, this intuition is traditionally not verifiable retrospectively after the model is constructed, leading to redundant continuous predictor metric(s) and/or excessive categorization for categorical predictor metrics. As an improvement of the author's previous work to solve all these problems, this paper proposes a methodology incorporating the k-nearest neighbors (k-NN) multiple imputation method, kernel smoothing, Monte Carlo simulation, and stepwise regression. This paper documents this methodology and one experiment on it.",2010,0, 4329,The discovery of the fault location in NIGS,"A new method is discovered for calculating the fault distance of the overhead line of the Neutral Indirect Grounded System (NIGS) in power distribution networks, in which the single phase to ground fault point or distance is difficult to detect, because the zero sequence current is in lower value. It is found that the information of the fault distance is kept in the zero sequence voltage vector which may be measured at the tail terminal of the questioned line by digging the data. Then an algorithm to calculate the fault location on the overhead lines is proposed by considering that the zero sequence voltage vector at the tail terminal. The value of the zero sequence voltage is determined by the fault location, and the phase angle also contains the distance traveled by the load current to the fault point. 
The system parameter analysis is conducted for the NIGS by considering the actual line and the parameters at its two terminals.",2010,0, 4330,A Framework for QoS and Power Management in a Service Cloud Environment with Mobile Devices,"The service cloud integrates the concepts of cloud and service-oriented computing, providing users with tremendous opportunities for composing a large variety of services to achieve desired tasks. At the same time, rapid development in mobile devices has made them a typical instrument for accessing service clouds. However, the limited battery power can greatly impact the usage of mobile devices and their availability for accessing service clouds. Power management has long been an important issue for mobile devices. When considering accessing service clouds, power and QoS have to be considered together. For example, tasks may be delegated to the cloud to save energy on mobile devices as long as the QoS constraints are satisfied. Current research has not considered these issues in an integrated view. In this paper, we propose a framework for handling QoS and power management for mobile devices in the service cloud environment. In this framework, service QoS profiles capturing the services' QoS and power behaviors and user profiles capturing users' service usage patterns are defined. Based on this information, service QoS behaviors and power consumption patterns can be predicted to facilitate decisions regarding whether to run a service locally or remotely and how to configure the mobile device such that the power usage can be minimized without violating QoS requirements. Moreover, service migration technology is used to minimize the communication cost such that the latency can be minimized in case the user decides to invoke a remote service.",2010,0, 4331,On an Automatic Simulation Environment Customizing Services for Cloud Simulation Center,"Simulation plays an important role in both academic research and industrial development and manufacturing. Users' requirements for simulation are often complex and diverse. It is time consuming and error-prone for users to build different simulation environments manually to conduct their simulation tasks. An Automatic Simulation Environment Customizing Service for a Cloud Simulation Center is presented in this paper to address this problem. The service offers an automatic simulation environment that can customize and configure the service requirements of simulation users with high efficiency, flexibility and agility. The system is capable of processing diversified requests dynamically and responding to requests in real time. Operation of and experiments on the prototype system show higher availability, flexibility, and reusability of our method than the traditional simulation environment customizing method.",2010,0, 4332,GOS: A Global Optimal Selection Approach for QoS-Aware Web Services Composition,"Services composition technology provides a promising way to create a new service in a service-oriented architecture (SOA). However, some challenges are hindering the application of services composition. One of the greatest challenges for composite service developers is how to select a set of services to instantiate a composite service with quality of service (QoS) assurance across different autonomous regions (e.g. organizations or businesses). 
To solve QoS-aware Web service composition problem, this paper proposes a global optimization selection (GOS) based on prediction mechanism for QoS values of local services. The GOS includes two parts. First, local service selection algorithm can be used to predict the change of service quality information. Second, GOS aims at enhancing the run-time performance of global selection by reducing to aggregation operation of QoS. The simulation results show that the GOS has lower execution cost than existing approaches.",2010,0, 4333,Fault detection for high availability RAID system,"Designing storage systems to provide high availability in the face of failures needs the use of various data protection techniques, such as dual-controller RAID. The failure of controller may cause data inconsistencies of RAID storage system. Heartbeat is used to detect controllers whether survival. So, the heartbeat cycle's impact on the high availability of a dual-controller hot-standby system has become the key of current research. To address the problem of fixed setting heartbeat in building high availability system currently, an adaptive heartbeat fault detection model of dual controller, which can adjust heartbeat cycle based on the frequency of data read-write request, is designed to improve the high availability of dual-controller RAID storage system. Additionally, this heartbeat mechanism can be used for other applications in distributed settings such as detecting node failures, performance monitoring, and query optimization. Based on this model, the high availability stochastic Petri net model of fault detection was established and used to evaluate the effect of the availability. In addition, we define a AHA (Adaptive Heart Ability) parameter to scale the ability of system heartbeat cycle to adapt to the environment which is changing. The results show that, relatively speaking with fixed configuration, the design is valid and effective, and can enhance dual controller RAID system high availability.",2010,0, 4334,A reliability improvement predictive approach to software testing with Bayesian method,"The capability of improving software reliability is one of the main objectives of software testing. However, the previous testing methods did not pay much attention to how to improve software testing strategy based on software reliability improvement. The relationship between software testing and software reliability is very complex and this is mainly due to the complexity of software products and development processes. The software testing strategy with improving reliability on line needs to possess the ability to predict reliability. Model predictive control provides a good framework to improve predictive effect on line. However, one of the main issues in model predictive control is how to estimate the concern parameter. In this case, Bayesian method is used to estimate the concern parameter: reliability. This proposed reliability improvement predictive approach to software testing with Bayesian method can optimize test allocation scheme on line. The case study shows that it is not definitely true for a software testing method that can find more defects than others can get higher reliability. 
And the case study also shows that the proposed approach can achieve better results, in the sense of improving reliability, than random testing.",2010,0, 4335,Static slicing for PLC program with ladder transformation,"The PLC (programmable logic controller) is a type of general industrial control platform with high reliability, which has been widely used in many real-time control systems, such as transfer lines and continuous casting machines. With the increasing size and complexity of PLC programs, traditional manual testing cannot meet the needs of industrial fields due to its inefficiency and error-prone nature. Program slicing is a method of program analysis and understanding. Based on some slicing criterion, it removes the irrelevant statements from the source code to obtain a group of program segments of interest. In this way, the scope of the program under study is narrowed. In this paper, program slicing of PLC programs is studied. The slicing of PLC programs needs special treatment which is usually not necessary for other software written in high-level or assembly languages. For example, PLC programs run in a cyclic operating mode, and they usually allow each ladder to include more than one output port, making the number of outputs in one statement much larger than in other software. We first introduce a ladder transformation as a preparation for program slicing. Then algorithms for static slicing of PLC programs are proposed. A demo is given to show that this method can effectively reduce the scale of the program.",2010,0, 4336,An optimal release policy for software testing process,"In this paper, we discuss the dynamic release problem in software testing processes. If we stop testing too early, there may be too many defects in the software, resulting in too many failures during operation and leading to significant losses due to the failure penalty or user dissatisfaction. If we spend too much time in testing, there may be a high testing cost. Therefore, there is a tradeoff between software testing and releasing. The release time should be dynamically determined by the testing process. The more defects have been detected and removed, the less time will be used for further testing. A continuous-time Markov process is proposed to model the testing process. By formulating the problem with dynamic programming we obtain the Hamilton-Jacobi-Bellman equation of the optimal cost function, and derive the threshold structure of the optimal policy. Furthermore, the dynamic optimal release policy is compared with the static optimal release policy by numerical examples, showing that the dynamic policy may outperform the static policy considerably in some situations.",2010,0, 4337,Two-dimensional bar code mobile commerce - Implementation and performance analysis,"The two-dimensional bar code is called QR Code (Quick Response Code) in Japan. By holding a camera phone in front of this small black square, the phone software identifies it and quickly converts it into a website address; by connecting to the site, you can get the information you need. Of course, it is not necessarily a website; the small black square may also represent e-mail, image data or text data. This study examines this system for different two-dimensional bar code mobile phone applications. 
In the implementation and performance analysis, the experiments show that network quality of service is affected by various factors; therefore, when sending video over the Internet, the status of the network must be considered when deciding which GOP pattern to adopt, and for streams in the network the packet error rate will change the packet loss probability and affect the image quality.",2010,0, 4338,Condition-based reliability modeling for systems with partial and standby redundancy,"In this paper, we introduce a temporal approach for assessing the reliability of systems with components whose conditions are subject to change over time. Such changes could be due to poor working conditions, improper maintenance, or any unexpected malformations throughout the system's useful life. This approach is mainly based on adopting time-dependent reliability functions for assessing condition-based reliability. Our approach can be applied to common situations where the failure rates of the system components have dissimilar distributions.",2010,0, 4339,Using search-based metric selection and oversampling to predict fault prone modules,"Predictive models can be used in the detection of fault prone modules using source code metrics as inputs for the classifier. However, there exist numerous structural measures that capture different aspects of size, coupling and complexity. Identifying a metric subset that enhances the performance for the predictive objective would not only improve the model but also provide insights into the structural properties that lead to problematic modules. Another difficulty in building predictive models comes from unbalanced datasets, which are common in empirical software engineering as a majority of the modules are not likely to be faulty. Oversampling attempts to overcome this deficiency by generating new training instances from the faulty modules. We present the results of applying search-based metric selection and oversampling to three NASA datasets. For these datasets, oversampling results in the largest improvement. Metric subset selection was able to reduce up to 52% of the metrics without decreasing the predictive performance gained with oversampling.",2010,0, 4340,Scenario-Based Early Reliability Model for Distributed Software,"The ability to predict the reliability of a software system early can help to improve the quality of the system. A scenario-based early reliability model for distributed software systems is proposed in this paper. The distributed system is composed of subsystems and components. Using the scenarios of the component interactions, which are described by sequence diagrams, we construct a simplified communication diagram to obtain the interaction parameters in the scenarios. The error propagation between the components is taken into account in our model, which is often overlooked in previous research. An example is illustrated to show the effectiveness of the proposed model.",2010,0, 4341,Fault-Tolerance in Dataflow-Based Scientific Workflow Management,"This paper addresses the challenges of providing fault-tolerance in scientific workflow management. The specification and handling of faults in scientific workflows should be defined precisely in order to ensure consistent execution against the process-specific requirements. We identified a number of typical failure patterns that occur in real-life scientific workflow executions. 
Following the intuitive recovery strategies that correspond to the identified patterns, we developed the methodologies that integrate recovery fragments into fault-prone scientific workflow models. Compared to the existing fault-tolerance mechanisms, the propositions reduce the effort of workflow designers by defining recovery fragments automatically. Furthermore, the developed framework implements the necessary mechanisms to capture the faults from the different layers of a scientific workflow management architecture. Experience indicates that the framework can be employed effectively to model, capture and tolerate the typical failure patterns that we identified.",2010,0, 4342,A Tree Based Strategy for Test Data Generation and Cost Calculation for Uniform and Non-Uniform Parametric Values,"Software testing is a very important phase of software development to ensure that the developed system is reliable. Due to huge number of possible combinations involved in testing and the limitation in the time and resources, it is usually too expensive and sometimes impossible to test systems exhaustively. To reduce the number of test cases to an acceptable level, combinatorial software interaction testing has been suggested and used by many researchers in the software testing field. It is also reported in literature that pairwise (2-way) combinatorial interaction testing can detect most of the software faults. In this paper we propose a new strategy for test data generation, a Tree Based Test Case Generation and Cost Calculation strategy (TBGCC) that supports uniform and non-uniform values, for input parameters (i.e. parameters with same and different number of values). Our strategy is distinct from others work since we include only the test cases which covers the maximum number of pairs in the covering array at every iteration. Additionally, the whole set of test cases will be checked as one block at every iteration only until the covering array is covered. Other strategies check each test case (N-1) times, where N is the maximum number of the input parameters. A detail description of the tree generation strategy, the iterative cost calculation strategy and efficient empirical results are presented.",2010,0, 4343,Towards Estimating Physical Properties of Embedded Systems using Software Quality Metrics,"The complexity of embedded devices poses new challenges to embedded software development in addition to the traditional physical requirements. Therefore, the evaluation of the quality of embedded software and its impact on these traditional properties becomes increasingly relevant. Concepts such as reuse, abstraction, cohesion, coupling, and other software attributes have been used as quality metrics in the software engineering domain. However, they have not been used in the embedded software domain. In embedded systems development, another set of tools is used to estimate physical properties such as power consumption, memory footprint, and performance. These tools usually require costly synthesis-and-simulation design cycles. In current complex embedded devices, one must rely on tools that can help design space exploration at the highest possible level, identifying a solution that represents the best design strategy in terms of software quality, while simultaneously meeting physical requirements. We present an analysis of the cross-correlation between software quality metrics, which can be extracted before the final system is synthesized, and physical metrics for embedded software. 
Using a neural network, we investigate the use of these cross-correlations to predict the impact that a given modification on the software solution will have on embedded software physical metrics. This estimation can be used to guide design decisions towards improving physical properties of embedded systems, while maintaining an adequate trade-off regarding software quality.",2010,0, 4344,Service Level Agreements in a Rental-based System,"In this paper, we investigate how Service Level Agreeements (SLAs) can be incorporated as part of the system's scheduling and rental decisions to satisfy the different performance promises of high performance computing (HPC) applications. Such SLAs are contracts which specify a set of application-driven requirements such as the estimated total load, contract duration, total utility value and the estimated total number of generated jobs. We present several scheduling and rental based policies that make use of these SLA parameters and demonstrate the effectiveness of such policies to accurately predict and plan for resource levels in a rental-based system.",2010,0, 4345,An empirical approach for software fault prediction,"Measuring software quality in terms of fault proneness of data can help the tomorrow's programmers to predict the fault prone areas in the projects before development. Knowing the faulty areas early from previous developed projects can be used to allocate experienced professionals for development of fault prone modules. Experienced persons can emphasize the faulty areas and can get the solutions in minimum time and budget that in turn increases software quality and customer satisfaction. We have used Fuzzy C Means clustering technique for the prediction of faulty/ non-faulty modules in the project. The datasets used for training and testing modules available from NASA projects namely CM1, PC1 and JM1 include requirement and code metrics which are then combined to get a combination metric model. These three models are then compared with each other and the results show that combination metric model is found to be the best prediction model among three. Also, this approach is compared with others in the literature and is proved to be more accurate. This approach has been implemented in MATLAB 7.9.",2010,0, 4346,Software security testing based on typical SSD:A case study,"Due to the increasing complexity of Web applications, traditional function security testing ways, which only test and validate software security mechanisms, are becoming ineffective to detect latent software security defects (SSD). The number of reported web application vulnerabilities is increasing dramatically. However, the most of vulnerabilities result from some typical SSD. Based on SSD, this paper presents an effective software security testing (SST) model, which extends traditional security testing process to defects behavior analysis which incorporates advantages of traditional testing method and SSD-based security testing methodology. Primary applications show the effectiveness of our test model.",2010,0, 4347,Modeling and Evaluation of Control Flow Vulnerability in the Embedded System,"Faults in control flow-changing instructions are critical for correct execution because the faults could change the behavior of programs very differently from what they are expected to show. 
The conventional techniques to deal with control flow vulnerability typically add extra instructions to detect control flow-related faults, which increase both static and dynamic instructions, consequently, execution time and energy consumption. In contrast, we make our own control flow vulnerability model to evaluate the effects of different compiler optimizations. We find that different programs show very different degrees of control flow vulnerabilities and some compiler optimizations have high correlation to control flow vulnerability. The results observed in this work can be used to generate more resilient code against control flow-related faults.",2010,0, 4348,Using Content and Text Classification Methods to Characterize Team Performance,"Because of the critical role that communication plays in a team's ability to coordinate action, the measurement and analysis of online transcripts in order to predict team performance is becoming increasingly important in domains such as global software development. Current approaches rely on human experts to classify and compare groups according to some prescribed categories, resulting in a laborious and error-prone process. To address some of these issues, the authors compared and evaluated two methods for analyzing content generated by student groups engaged in a software development project. A content analysis and semi-automated text classification methods were applied to the communication data from a global software student project involving students from the US, Panama, and Turkey. Both methods were evaluated in terms of the ability to predict team performance. Application of the communication analysis' methods revealed that high performing teams develop consistent patterns of communicating which can be contrasted to lower performing teams.",2010,0, 4349,Effect of Replica Placement on the Reliability of Large-Scale Data Storage Systems,"Replication is a widely used method to protect large-scale data storage systems from data loss when storage nodes fail. It is well known that the placement of replicas of the different data blocks across the nodes affects the time to rebuild. Several systems described in the literature are designed based on the premise that minimizing the rebuild times maximizes the system reliability. Our results however indicate that the reliability is essentially unaffected by the replica placement scheme. We show that, for a replication factor of two, all possible placement schemes have mean times to data loss (MTTDLs) within a factor of two for practical values of the failure rate, storage capacity, and rebuild bandwidth of a storage node. The theoretical results are confirmed by means of event-driven simulation. For higher replication factors, an analytical derivation of MTTDL becomes intractable for a general placement scheme. We therefore use one of the alternate measures of reliability that have been proposed in the literature, namely, the probability of data loss during rebuild in the critical mode of the system. Whereas for a replication factor of two this measure can be directly translated into MTTDL, it is only speculative of the MTTDL behavior for higher replication factors. This measure of reliability is shown to lie within a factor of two for all possible placement schemes and any replication factor. 
We also show that for any replication factor, the clustered placement scheme has the lowest probability of data loss during rebuild in critical mode among all possible placement schemes, whereas the declustered placement scheme has the highest probability. Simulation results reveal however that these properties do not hold for the corresponding MTTDLs for a replication factor greater than two. This indicates that some alternate measures of reliability may not be appropriate for comparing the MTTDL of different placement schemes.",2010,0, 4350,Use of differential evolution in low NOx combustion optimization of a coal-fired boiler,"The present work focuses on low NOx emissions combustion modification of a 300MW dual-furnaces coal-fired utility boiler through a combination of support vector regression (SVR) and a novel and modern differential evolution optimization technique (DE). SVR, used as a more versatile type of regression tool, was employed to build a complex model between NOx emissions and operating conditions by using available experimental results in a case boiler. The trained SVR model performed well in predicting the NOx emissions with an average relative error of less than 1.14% compared with the experimental results in the case boiler. The optimal ten inputs (namely operating conditions to be optimized by operators of the boiler) of NOx emissions characteristics model were regulated by DE so that low NOx emissions were achieved, given that the boiler load is determined. Two cases were optimized in this work to check the possibility of reducing NOx emissions by DE under high and low boiler load. The time response of DE was typical of 20 sec, at the same time with the better quality of optimized results. Remarkable good results were obtained when DE was used to optimize NOx emissions of this boiler, supporting its applicability for the development of an advanced on-line and real-time low NOx emissions combustion optimization software package in modern power plants.",2010,0, 4351,Reliability-based structural integrity assessment of Liquefied Natural Gas tank with hydrogen blistering defects by MCS method,"Hydrogen blistering is one of the serious threats to safe operation of a Liquefied Natural Gas (LNG) tank, therefore safety analysis of hydrogen blistering defects is very important. In order to assess the reliability-based structural integrity of the LNG tank with defects of hydrogen blistering, the following steps were carried out. Firstly, Abaqus code, one of the Finite Element Method (FEM) software, was utilized to calculate 100 J-integral values of crack tip by defining directly. Secondly, the 100 J-integral values of crack tip were used as training data and testing data by Optimized Least Squares Support Vector Machine (OLS-SVM), Least Squares Support Vector Machine (LS-SVM) and Artificial Neural Networks (ANN) to get other 20000 J-integral values of crack tip. Finally, Monte-Carlo Simulation (MCS) was used to assess the reliability-based structural integrity analysis. The results showed that the hydrogen blistering defect with crack will propagate with about 14 percent chance in such a case. 
It also proved that MCS combined with FEM and SVM was an effective and prospective method for research and application of integrity assessment, which could overcome the data source problem.",2010,0, 4352,Predicting mechanical properties of hot-rolling steel by using RBF network method based on complex network theory,"Recently, producing high-precision and high-quality steel products becomes the major aim of the large-scale iron and steel enterprises. Because of the internal multiplex components of products and complex changes in the production process, it is too difficult to achieve precise control in hot rolling production process. In this paper, radial basis function neural network is used to complete performance prediction. It has the advantage of fast training and high accuracy, and overcomes shortcomings of BP neural network used previously, such as local minimum. When determining the center of radial basis function we make use of complex network visualization which can clearly figure out the relationship between input vectors and receive the center and width according to the relationship of the nodes. Experiments show that the method that is combining community discovery algorithm and RBF enjoy high stability, small training time which means to be suitable to analysis large-scale data. More importantly, it can reach high accuracy.",2010,0, 4353,A Quality Framework to check the applicability of engineering and statistical assumptions for automated gauges,"In high-volume part manufacturing, interactions between program data and program flow can depart significantly from the initial statistical assumptions used during software development. This is a particular challenge for industrial gauging systems used in automotive part production where the applicability of statistical models affects system correctness. This paper uses a Quality Framework to track high-level engineering and statistical assumptions during development. Statistical Process Control (SPC) metrics define an in-control region where the statistical assumptions apply, and an outlier region where they do not apply. The gauge is monitored on-line to verify that production corresponds to the area of the operation where the gauge algorithms are known to work. If outliers are detected in the on-line manufacturing process, then parts can be quarantined, improved gauging algorithms selected, and/or process improvement activities can be initiated.",2010,0, 4354,Cooperative Co-evolution for large scale optimization through more frequent random grouping,"In this paper we propose three techniques to improve the performance of one of the major algorithms for large scale continuous global function optimization. Multilevel Cooperative Co-evolution (MLCC) is based on a Cooperative Co-evolutionary framework and employs a technique called random grouping in order to group interacting variables in one subcomponent. It also uses another technique called adaptive weighting for co-adaptation of subcomponents. We prove that the probability of grouping interacting variables in one subcomponent using random grouping drops significantly as the number of interacting variables increases. This calls for more frequent random grouping of variables. We show how to increase the frequency of random grouping without increasing the number of fitness evaluations. 
We also show that adaptive weighting is ineffective and in most cases fails to improve the quality of found solution, and hence wastes considerable amount of CPU time by extra evaluations of objective function. Finally we propose a new technique for self-adaptation of the subcomponent sizes in CC. We demonstrate how a substantial improvement can be gained by applying these three techniques.",2010,0, 4355,The jMetal framework for multi-objective optimization: Design and architecture,"jMetal is a Java-based framework for multi-objective optimization using metaheuristics. It is a flexible, extensible, and easy-to-use software package that has been used in a wide range of applications. In this paper, we describe the design issues underlying jMetal, focusing mainly on its internal architecture, with the aim of offering a comprehensive view of its main features to interested researchers. Among the covered topics, we detail the basic components facilitating the implementation of multi-objective metaheuristics (solution representations, operators, problems, density estimators, archives), the included quality indicators to assess the performance of the algorithms, and jMetal's support to carry out full experimental studies.",2010,0, 4356,Towards a formal framework for developing concurrent programs: Modeling dynamic behavior,"It is now widely accepted that programming concurrent software is a complex, error-prone task. Therefore, there is a big interest in the specification, verification and development of concurrent programs using formal methods. In our work-in-progress project, we are attempting to make a constructive framework for developing concurrent programs formally. In this paper, we first demonstrate how one can apply an intermediate artifact of our work, a Z-based formalism, to specify the dynamic behavior of a concurrent system. More precisely, we show how one can use this formalism to explicitly specify the nondeterministic interleaving of processes in a concurrent system. Such a specification will constructively result in a functional program involving all allowable interleaved executions of concurrent processes. As the second contribution of the paper, we introduce a verification method to prove safety properties of concurrent systems specified in the proposed Z-based formalism.",2010,0, 4357,State of art and practice of COTS components search engines,"COTS-Based Software Development has emerged as an approach aiming to improve a number of drawbacks found in the software development industry. The main idea is the reuse of well-tested software products, known as Commercial-Off-The-Shelf (COTS) components, that will be assembled together in order to develop larger systems. The potential benefits of this approach are mainly its reduced costs and shorter development time, while ensuring the quality. One of the most critical activities in COTS-based development is the identification of the COTS candidates to be integrated into the system under development. Nowadays, the Web is the most used means to find COTS candidates. Thus, the use of search engines turns out to be crucial. This paper deals with existing search engines especially proposed to find COTS components satisfying some needs on the Web. 
It presents a state of the art and practice of search engines followed by a study assessing to what extent they are able to accomplish their objectives.",2010,0, 4358,Proving Model Transformations,"Within the MDA context, model transformations (MT) play an important role as they ensure consistency and significant time savings. Several MT frameworks have been deployed and successfully used in practice. Like for any software, the development of MT programs is error prone. However there is limited support for verification and validation in current MDA technologies. This paper presents an approach to prove model transformations. Model transformations are firstly formalized in B. Then the B provers will be used to analyze and prove the correctness of transformation rules w.r.t. metamodels and transformation invariants. We also analyze and prove the consistency of transformation rules w.r.t. each other.",2010,0, 4359,Probabilistic Model of System Survivability,"The paper completely formalizes the concept of system survivability on the basis of Knight's research. We present a computable probabilistic model of a survivable system which is divided into two layers, i.e. the function and service. The probabilistic refinement is introduced to reason about the survivable system, which is modeled by a probabilistic choice of accepted services with respect to the operating environment. Furthermore, we present an elegant survivability specification and the differences with Knight's related works are discussed. The command-and-control example is also revisited in our framework.",2010,0, 4360,The study of environmental pollution based on second-order partial differential equation model,"In order to forecast and simulate the quality of the atmospheric environment, the author constructed the basic three-dimensional model of environmental fluid mechanics, and consequently improved the transport model of emissions of air pollutants, based on the analysis of atmospheric pollution and water pollution processes. Through the use of Matlab software, the number of days needed for each pollutant to reach the mark has been calculated. The model is able to be applied to a variety of pollution process researches, and can predict the location of a certain concentration of pollutants at a given moment. Finally, a cost comparison has been generated, and strategies for environmental problems have been proposed.",2010,0, 4361,Performance analysis of distributed software systems: A model-driven approach,"The design of complex software systems is a challenging task because it involves a wide range of quality attributes such as security, performance, reliability, to name a few. Dealing with each of these attributes requires a specific set of skills, which quite often involves making various trade-offs. This paper proposes a novel Model-Driven Software Performance Engineering (MDSPE) process that can be used for performance analysis requirements of distributed software systems. An example assessment is given to illustrate how our MDSPE process can comply with well-known performance models to assess the performance measures.",2010,0, 4362,Power quality analysis of a Synchronous Static Series Compensator (SSSC) connected to a high voltage transmission network,"This paper presents the power quality analysis of a Synchronous Static Series Compensator (SSSC) connected to a high-voltage transmission network. The analysis employs a simplified harmonic equivalent model of the network that is described in the paper.
To facilitate the calculations, a software toolset was developed to measure the voltage harmonics content introduced by the SSSC at the Point of Common Coupling (PCC). These harmonics are then evaluated against the stringent power quality legislation set on the Spanish transmission network. Subsequently, the compliance of an SSSC based on two types of Voltage Sourced Converters (VSC): a 48-pulse and a PWM one, usual in this type of applications, is assessed. Finally, a parameter sensitivity analysis is presented, which looks at the influence of parameters such as the line length and short circuit impedances on the PCC voltage harmonics.",2010,0, 4363,Cause-effect modeling and simulation of power distribution fault events,"Modeling and simulation are important to study power distribution faults due to the limited actual data and high cost of experimentations. Although a number of software packages are available to simulate the electric signals, approaches for simulating fault events in different environments are not well developed yet. In this paper, we propose a framework for modeling and simulating fault events in power distribution systems based on environmental factors and cause-effect relations among them. The spatial and temporal aspects of significant environmental factors leading to various faults are modeled as raster maps and probability distributions, respectively. The cause-effect relations are expressed as fuzzy rules and a hierarchical fuzzy inference system is built to infer the probability of faults given the simulated environments. This work will be helpful in fault diagnosis for different local systems and provide a configurable data source to other researchers and engineers in similar areas as well. A sample fault simulator we have developed is used to illustrate the approaches.",2010,0, 4364,Research on the Copy Detection Algorithm for Source Code Based on Program Organizational Structure Tree Matching,Code plagiarism is an ubiquitous phenomenon in the teaching of Programming Language. A large number of source code can be automatically detected and uses the similarity value to determine whether the copy is present. It can greatly improve the efficiency of teachers and promote teaching quality. A algorithm is provided that firstly match program organizational structure tree and then process the methods of program to calculate the similarity value. It not only applies to process-oriented programming languages but also applies to object-oriented programming language.,2010,0, 4365,WSIM: Detecting Clone Pages Based on 3-Levels of Similarity Clues,"Code clones often result in code inconsistencies, which eventually increase cost and degrade quality. Web applications have higher rate of clones than normal software and it is more and more necessary to detect clones in web applications. In this paper, three levels of views in detecting clone pairs are suggested for a web application. The proposed technique utilizes relationships between web pages, passed parameters, and target entities as similarity clues. The results of the experiments also represent the trade-off between recall rate and accuracy. And then, two approaches, static and dynamic selection, are suggested for deciding candidates of clone pairs. As a result, the combined strategy of three levels of methods and two approaches of candidate selection is recommended. 
Finally, applicability of the proposed approach is shown from the experiments.",2010,0, 4366,A Study on Defect Density of Open Source Software,"Open source software (OSS) development is considered an effective approach to ensuring acceptable levels of software quality. One facet of quality improvement involves the detection of potential relationship between defect density and other open source software metrics. This paper presents an empirical study of the relationship between defect density and download number, software size and developer number as three popular repository metrics. This relationship is explored by examining forty-four randomly selected open source software projects retrieved from SourceForge.net. By applying simple and multiple linear regression analysis, the results reveal a statistically significant relationship between defect density and number of developers and software size jointly. However, despite theoretical expectations, no significant relationship was found between defect density and number of downloads in OSS projects.",2010,0, 4367,Detecting Altered Fingerprints,"The widespread deployment of Automated Fingerprint Identification Systems (AFIS) in law enforcement and border control applications has prompted some individuals with criminal background to evade identification by purposely altering their fingerprints. Available fingerprint quality assessment software cannot detect most of the altered fingerprints since the implicit image quality does not always degrade due to alteration. In this paper, we classify the alterations observed in an operational database into three categories and propose an algorithm to detect altered fingerprints. Experiments were conducted on both real-world altered fingerprints and synthetically generated altered fingerprints. At a false alarm rate of 7%, the proposed algorithm detected 92% of the altered fingerprints, while a well-known fingerprint quality software, NFIQ, only detected 20% of the altered fingerprints.",2010,0, 4368,Software defect prediction using static code metrics underestimates defect-proneness,"Many studies have been carried out to predict the presence of software code defects using static code metrics. Such studies typically report how a classifier performs with real world data, but usually no analysis of the predictions is carried out. An analysis of this kind may be worthwhile as it can illuminate the motivation behind the predictions and the severity of the misclassifications. This investigation involves a manual analysis of the predictions made by Support Vector Machine classifiers using data from the NASA Metrics Data Program repository. The findings show that the predictions are generally well motivated and that the classifiers were, on average, more confident in the predictions they made which were correct.",2010,0, 4369,Quality Models for Free/Libre Open Source Software Towards the Silver Bullet?,"Selecting the right software is of crucial importance for businesses. Free/Libre Open Source Software (FLOSS) quality models can ease this decision-making. This paper introduces a distinction between first and second generation quality models. The former are based on relatively few metrics, require deep insights into the assessed software, relying strongly on subjective human perception and manual labour. Second generation quality models strive to replace the human factor by relying on tools and a multitude of metrics. 
The key question this paper addresses is whether the emerging FLOSS quality models provide the silver bullet overcoming the shortcomings of first generation models. In order to answer this question, OpenBRR, a first generation quality model, and QualOSS, a second generation quality model, are used for a comparative assessment of Asterisk, a FLOSS implementation of a telephone private branch exchange. Results indicate significant progress, but apparently the silver bullet has not yet been found.",2010,0, 4370,"An Analysis of the 'Inconclusive' Change Report Category in OSS Assisted by a Program Slicing Metric","In this paper, we investigate the Barcode open-source system (OSS) using one of Weiser's original slice-based metrics (Tightness) as a basis. In previous work, low numerical values of this slice-based metric were found to indicate fault-free (as opposed to fault-prone) functions. In the same work, we deliberately excluded from our analysis a category comprising 221 of the 775 observations representing 'inconclusive' log reports extracted from the OSS change logs. These represented OSS change log descriptions where it was not entirely clear whether a fault had occurred or not in a function and, for that reason, could not reasonably be incorporated into our analysis. In this paper we present a methodology through which we can draw conclusions about that category of report.",2010,0, 4371,AI-Based Models for Software Effort Estimation,"Decision making under uncertainty is a critical problem in the field of software engineering. Predicting the software quality or the cost/effort requires high level expertise. AI based predictor models, on the other hand, are useful decision making tools that learn from past projects' data. In this study, we have built an effort estimation model for a multinational bank to predict the effort prior to projects' development lifecycle. We have collected process, product and resource metrics from past projects together with the effort values distributed among software life cycle phases, i.e. analysis & test, design & development. We have used a Clustering approach to form consistent project groups and Support Vector Regression (SVR) to predict the effort. Our results validate the benefits of using AI methods in real life problems. We attain Pred(25) values as high as 78% in predicting future projects.",2010,0, 4372,Optimization of forming process for transiting part of combustion chamber,"Deep drawing, one of the sheet metal forming processes, is a widely used technique for producing parts from sheet metal blanks in a variety of fields such as aerospace and automobile. Predicting the behaviors of deformation during a forming process is one of the main challenges in cold forming. Instead of the traditional method called trial and error process, numerical simulation based on the finite element analysis method could be used to achieve a better understanding of forming deformation during the process and to predict tools for several failure modes to reduce the number of costly experimental verification tests. In this paper, optimization of the forming process for the transiting part of a combustion chamber is mainly performed by ABAQUS software. The numerical simulation model was constructed by CAD software. This part structure is very complex and defects such as wrinkles and springback occur during forming. The present research work is aimed at avoiding the wrinkles.
Using numerical simulation technology, the influences of friction coefficient and blankholder force in wrinkle have been investigated and optimized values for these two important process parameters are suggested in order to eliminate wrinkles. Moreover two criterions for fracture (FLD criterion and reduction of thickness criterion) are briefly presented.",2010,0, 4373,Checkpointing vs. Migration for Post-Petascale Supercomputers,"An alternative to classical fault-tolerant approaches for large-scale clusters is failure avoidance, by which the occurrence of a fault is predicted and a preventive measure is taken. We develop analytical performance models for two types of preventive measures: preventive checkpointing and preventive migration. We also develop an analytical model of the performance of a standard periodic checkpoint fault-tolerant approach. We instantiate these models for platform scenarios representative of current and future technology trends. We find that preventive migration is the better approach in the short term by orders of magnitude. However, in the longer term, both approaches have comparable merit with a marginal advantage for preventive checkpointing. We also find that standard non-prediction-based fault tolerance achieves poor scaling when compared to prediction-based failure avoidance, thereby demonstrating the importance of failure prediction capabilities. Finally, our results show that achieving good utilization in truly large-scale machines (e.g., 220 nodes) for parallel workloads will require more than the failure avoidance techniques evaluated in this work.",2010,0, 4374,Application of statistical learning theory to predict corrosion rate of injecting water pipeline,"Support Vector Machines (SVM) represents a new and very promising approach to pattern recognition based on small dataset. The approach is systematic and properly motivated by Statistical Learning Theory (SLT). Training involves separating the classes with a surface that maximizes the margin between them. An interesting property of this approach is that it is an approximate implementation of Structural Risk Minimization (SRM) induction principle, therefore, SVM is more generalized performance and accurate as compared to artificial neural network which embodies the Embodies Risk Minimization (ERM) principle. In this paper, according to corrosion rate complicated reflection relation with influence factors, we studied the theory and method of Support Vector Machines based the statistical learning theory and proposed a pattern recognition method based Support Vector Machine to predict corrosion rate of injecting water pipeline. The outline of the method is as follows: First, we researched the injecting water quality corrosion influence factors in given experimental zones with Gray correlation method; then we used the LibSVM software based Support Vector Machine to study the relationship of those injecting water quality corrosion influence factors, and set up the mode to predict corrosion rate of injecting water pipeline. 
Application and analysis of the experimental results in Shengli oilfield proved that SVM could achieve greater accuracy than the BP neural network does, which also proved that application of SVM to predict the corrosion rate of injecting water pipelines, even for other themes in petroleum engineering, is reliable, adaptable, precise and easy to operate.",2010,0, 4375,Extend argumentation frameworks based on degree of attack,"To capture the quantity and quality properties of attacks in the argumentation process, a fuzzy argumentation framework is presented in this paper. Degree of attack is introduced into argumentation, which contributes to formalizing the internal structure of arguments. Then based on probability theory, total attack is proposed to calculate the effect of all attacks. In addition, the extensional semantics for fuzzy argumentation frameworks are explored in the same way as for extended argumentation frameworks. Finally, the application of FAF in investigation is stated.",2010,0, 4376,Intelligent agent-based system using dissolved gas analysis to detect incipient faults in power transformers,Condition monitoring and software-based diagnosis tools are central to the implementation of efficient maintenance management strategies for many engineering applications including power transformers.,2010,0, 4377,MATLAB Design and Research of Fault Diagnosis Based on ANN for the C3I System,Artificial neural networks (ANN) are an information-processing method that simulates the structure of biological neurons. The C3I system as a modern combat unit can control and command army actions and can communicate with others. This paper makes a research on the approach of the artificial neural network for fault diagnosis of the C3I system and constructs a fault diagnosis system for the C3I system with ANN. The system can analyze fault phenomena and detect C3I system faults. It will greatly improve the response to C3I system fault diagnosis and maintenance efficiency.,2010,0, 4378,Testing the Verticality of Slow-Axis of the Dual Optical Fiber Based on Image Processing,"The polarization dual fiber collimator is one of the important passive devices, which could be used to transform divergent light from the fiber into a parallel beam or focus a parallel beam into the fiber so as to improve the coupling efficiency of fiber devices. However, the quality of the double-pigtail fiber used for assembling collimators directly impacts the performance of the fiber collimator. Therefore, it is necessary to detect the quality of the double-pigtail fiber before collimators are packaged to ensure the quality of the fiber collimator. The paper points out that the verticality of the two slow axes of the double-pigtail fiber is the major factor affecting the quality of the fiber collimator. With ""Panda""-type double-pigtail fiber as the research object, a novel method to detect the verticality of the slow axis is proposed. First, with the red light of an LED as the background light source, clear images of the cross-section end surface of the double-pigtail fiber have been obtained by micro-imaging systems; then, after a series of image pre-processing operations, the coordinate information of the ""cat's eye"" light heart can be extracted with the image centroid algorithm; finally, the angle between the two slow axes of the double-pigtail fiber can be calculated quickly and accurately according to the pre-established mathematical model.
The detection system mainly consists of a CCD, a microscopic imaging device, an image board and image processing software developed with VC++. The experimental results prove that the system is both practical and effective and can satisfy the collimator assembly precision demands.",2010,0, 4379,Model Checking Security Vulnerabilities in Software Design,"Software faults in the design are frequent sources of security vulnerabilities. Model checking shows great promise in detecting and eradicating security vulnerabilities in programs. The wide use of the system modeling language UML with precise syntax and semantics enables software engineers to analyze the design in detail. We present a method of integrating the two techniques to detect design faults which may become security vulnerabilities in the software. Given a software design in UML and a security policy, our method extracts the security properties and formally expresses them in a temporal logic language. Combining with the security properties, we convert the UML models into PROMELA models, which are the input of the model checker SPIN. The method either statically proves that the model satisfies the security property, or provides an execution path that exhibits a violation of the property. A case study shows the feasibility of the method.",2010,0, 4380,New Conceptual Coupling and Cohesion Metrics for Object-Oriented Systems,"The paper presents two novel conceptual metrics for measuring coupling and cohesion in software systems. Our first metric, Conceptual Coupling between Object classes (CCBO), is based on the well-known CBO coupling metric, while the other metric, Conceptual Lack of Cohesion on Methods (CLCOM5), is based on the LCOM5 cohesion metric. One advantage of the proposed conceptual metrics is that they can be computed in a simpler (and in many cases, programming language independent) way as compared to some of the structural metrics. We empirically studied CCBO and CLCOM5 for predicting fault-proneness of classes in a large open source system and compared these metrics with a host of existing structural and conceptual metrics for the same task. As a result, we found that the proposed conceptual metrics, when used in conjunction, can predict bugs nearly as precisely as the 58 structural metrics available in the Columbus source code quality framework and can be effectively combined with these metrics to improve bug prediction.",2010,0, 4381,Fully-automatic annotation of scene videos: Establish eye tracking effectively in various industrial applications,"Modern mobile eye-tracking systems record participants' gaze behavior while they move freely within the environment and have haptic contact with the objects of interest. Whereas these mobile systems can be easily set up and operated, the analysis of the stored gaze data is still quite difficult and cumbersome: the recorded scene video with overlaid gaze cursor has to be manually annotated - a very time-consuming and error-prone process - preventing the use of eye-tracking techniques in various application fields. In order to overcome these problems, we developed a new software application (JVideoGazer) that uses translation, scale and rotation invariant object detection and tracking algorithms for a fully-automatic video analysis and annotation process. We evaluated our software by comparing its results to those of a manual annotation using scene videos of a typical day-by-day task.
Preliminary results show that our software guarantees reliable automatic video analysis even under challenging recording conditions, while it significantly speeds up the annotation process. To the best of our knowledge, the JVideoGazer is the first software for a fully-automatic analysis of gaze videos. With it, modern eye-tracking techniques can be effectively applied to real world situations in various application fields, like research, control, quality management, human-machine interactions, as well as information processing and other industrial applications.",2010,0, 4382,Benchmarking IP blacklists for financial botnet detection,"Every day, hundreds or even thousands of computers are infected with financial malware (i.e. Zeus) that forces them to become zombies or drones, capable of joining massive financial botnets that can be hired by well-organized cyber-criminals in order to steal online banking customers' credentials. Despite the fact that detection and mitigation mechanisms for spam and DDoS-related botnets have been widely researched and developed, it is true that the passive nature (i.e. low network traffic, fewer connections) of financial botnets greatly hinder their countermeasures. Therefore, cyber-criminals are still obtaining high economical profits at relatively low risk with financial botnets. In this paper we propose the use of publicly available IP blacklists to detect both drones and Command & Control nodes that are part of financial botnets. To prove this hypothesis we have developed a formal framework capable of evaluating the quality of a blacklist by comparing it versus a baseline and taking into account different metrics. The contributed framework has been tested with approximately 500 million IP addresses, retrieved during a one-month period from seven different well-known blacklist providers. Our experimental results showed that these IP blacklists are able to detect both drones and C&C related with the Zeus botnet and most important, that it is possible to assign different quality scores to each blacklist based on our metrics. Finally, we introduce the basics of a high-performance IP reputation system that uses the previously obtained blacklists' quality scores, in order to reply almost in real-time whether a certain IP is a member of a financial botnet or not. Our belief is that such a system can be easily integrated into e-banking anti-fraud systems.",2010,0, 4383,Using vulnerability information and attack graphs for intrusion detection,"Intrusion Detection Systems (IDS) have been used widely to detect malicious behavior in network communication and hosts. IDS management is an important capability for distributed IDS solutions, which makes it possible to integrate and handle different types of sensors or collect and synthesize alerts generated from multiple hosts located in the distributed environment. Sophisticated attacks are difficult to detect and make it necessary to integrate multiple data sources for detection and correlation. Attack graph (AG) is used as an effective method to model, analyze, and evaluate the security of complicated computer systems or networks. The attack graph workflow consists of three parts: information gathering, attack graph construction, and visualization. This paper proposes the integration of the AG workflow with an IDS management system to improve alert and correlation quality. The vulnerability and system information is used to prioritize and tag the incoming IDS alerts. 
The AG is used during the correlation process to filter and optimize correlation results. A prototype is implemented using automatic vulnerability extraction and AG creation based on unified data models.",2010,0, 4384,Cycle accurate simulator generator for NoGap,"Application Specific Instruction-set Processors (ASIPs) are needed to handle the future demand of flexible yet high performance computation in mobile devices. However designing an ASIP is complicated by the fact that not only the processor but, also tools such as assemblers, simulators, and compilers have to be designed. Novel Generator of Accelerators And Processors (NoGap), is a design automation tool for ASIP design that imposes very few limitations on the designer. Yet NoGap supports the designer by automating much of the tedious and error prone tasks associated with ASIP design. This paper will present the techniques used to generate a stand alone software simulator for a processor designed with NoGap. The focus will be on the core algorithms used. Two main problems had to be solved, simulation of a data path graph and simulation of leaf functional units. The concept of sequentialization is introduced and the algorithms used to perform both the leaf unit sequentialization and data path sequentialization is presented. A key component of the sequentialization process is the Micro Architecture Generation Essentials (Mage) dependency graph. The mage dependency graph and the algorithm used for its generation are also presented in this paper. A NoGap simulator was generated for a simple processor and the results were verified.",2010,0, 4385,Weibull distribution in modeling component faults,"Cost efficiency and the issue of quality are pushing software companies to constantly invest in efforts to produce enough quality applications that will arrive in time, with good enough quality to the customer. Quality is not for free, it has a price. Using the different methods of prediction, characteristic parameters will be obtained and will lead to the conclusions about quality even prior the beginning of the project. The Weibull distribution is by far the world's most popular statistical model for life data. On the other hand, exponential distribution and Rayleigh distribution are special cases of Weibull distribution. If we want to model and predict software component quality with mentioned distribution we should take some assumption regarding them. Prediction of component quality will take us to preventive and corrective action in the organization. Based on the results of prediction and modeling of software components faults prior the project start, during project execution and finally during maintenance stage of the component lifecycle some conclusion can be made. In this paper software component prediction using different mathematical models will be presented.",2010,0, 4386,Fault tolerant amplifier system using evolvable hardware,"This paper proposes the use evolvable hardware (EHW) for providing fault tolerance to an amplifier system in a signal-conditioning environment. The system has to maintain a given gain despite the presence of faults, without direct human intervention. The hardware setup includes a reconfigurable system on chip device and an external computer where a genetic algorithm is running. For detecting a gain fault, we propose a software-based built-in self-test strategy that establishes the actual values of gain achievable by the system. 
The performance evaluation of the fault tolerance strategy proposed is made by adopting two different types of fault-models. The fault simulation results show that the technique is robust and that the genetic algorithm finds the target gain with low error.",2010,0, 4387,The study of effectiveness of computer-assisted instruction versus traditional lecture in probability and statistics,"The purpose of this study is to enhance teaching quality and effect of probability and statistics by computer-assisted instruction (CAI). Using multimedia technology and statistical software, the paper researched mainly three aspects of probability and statistics teaching. Firstly, the paper summarized traditional lecture of probability and statistics and pointed its shortcoming in enhancing teaching quality and effect. Secondly, the paper expatiated CAI in probability and statistics, including some skills of doing multimedia courseware, application of statistical software and how to make use of network. At last, using statistical package for social science (SPSS) 13.0, the paper stressed the effectiveness CAI versus traditional lecture in probability and statistics through a living example.",2010,0, 4388,A Single-Network ANN-based Oracle to verify logical software modules,"Test Oracle is a mechanism to determine if an application executed correctly. In addition, it may be difficult to verify logical software modules due to the complexity of their structures. In this paper, an attempt has been made to study the applications of Artificial Neural Networks as Single-Network Oracles to verify logical modules. First, the logical module under test was modeled by the neural network using a training dataset generated based on the software specifications. Next, the proposed approach was applied to test a subject-registration application; meanwhile, the quality of the proposed oracle is measured by assessing its accuracy, precision, misclassification error and practicality in practice, using mutation testing by implementing two different versions of the case study: a Golden Version and a Mutated Version. The results indicate that neural networks may be reliable and applicative as oracles to verify logical modules.",2010,0, 4389,A neutral network for identifying the out-of-control signals of MEWMA control charts,"Multivariate quality control charts show some advantages to monitor several variables in comparison with the simultaneous use of univariate charts, nevertheless, there are some disadvantages. The main problem is how to interpret the out-of-control signal of a multivariate chart. The MEWMA quality control chart is a very powerful scheme to detect small shifts in the mean vector. There are no previous specific works about the interpretation of the out-of-control signal of this chart. In this paper neural networks are designed to interpret the out-of-control signal of the MEWMA chart, and the percentage of correct classifications is studied for different cases.",2010,0, 4390,An analytical model for performance evaluation of software architectural styles,"Software architecture is an abstract model that gives syntactic and semantic information about the components of a software system and the relationship among them. The success of the software depends on whether the system can satisfy the quality attributes. One of the most critical aspects of the quality attributes of a software system is its performance. 
Performance analysis can be useful for assessing whether a proposed architecture can meet the desired performance specifications and whether it can help in making key architectural decisions. An architecture style is a set of principles which an architect uses in designing software architecture. Since software architectural styles have frequently been used by architects, these styles have a specific effect on quality attributes. If this effect is measurable for each existing style, it will enable the architect to evaluate and make architectural decisions more easily and precisely. In this paper an effort has been made to introduce a model for investigating this attributes in architectural styles. So, our approach initially models the system as Discrete Time Markov Chain or DTMC, and then extracts the parameters to predict the response time of the system.",2010,0, 4391,Web based ETL component extended with loading and reporting facilitations a financial application tool,"The data warehousing environment includes components that are inherently technical in nature. These cleansing components function to extract, clean, model, transform, transfer and load data from multiple operational systems into a single coherent data model hosted within the data warehouse. The analytical environment is the domain of the business users who use application to query, report, analyze and act upon data in the data warehouse. The conventional process of developing custom code or scripts for this is always a costly, error prone and time-consuming. In this paper, we propose a web based frame work model for representing extraction of data from one or more data sources and use transformation business logic, load the data within the data warehouse. The entire above mentioned have been modeled using UML because the structural and dynamic properties of an information system at the conceptual level are more natural than other classical approaches. New feature of entire loading process of data movement between source and target system is also made visible to the users. In addition a reporting capability to log all successful transformations is provided.",2010,0, 4392,Exploratory failure analysis of open source software,"Reliability growth modeling in software system plays an important role in measuring and controlling software quality during software development. One main approach to reliability growth modeling is based on the statistical correlation of observed failure intensities versus estimated ones by the use of statistical models. Although there are a number of statistical models in the literature, this research concentrates on the following seven models: Weibull, Gamma, S-curve, Exponential, Lognormal, Cubic, and Schneidewind. The failure data collected are from five popular open source software (OSS) products. The objective is to determine which of the seven models best fits the failure data of the selected OSS products as well as predicting the future failure pattern based on partial failure history. The outcome reveals that the best model fitting the failure data is not necessarily the best predictor model.",2010,0, 4393,Auto-generation and redundancy reduction of test cases for reactive systems,"Testing is the fundamental technique to assess the correctness of software systems, but it is cost-labored to generate test cases. One solution to change the situation is to automatize some parts of the testing process, especially the generation of test cases using formal theory and technology. 
The research work in the direction shows the good perspective. This paper targets on the automatic generation of test cases based on IOSTS, which is widely used to model reactive systems with data. When selecting test cases based on a set of test purposes specified by IOSTS or temporal logic, in general, the redundancy phenomena are unavoidable in the derived test suite. Hence, some strategies are presented for eliminating the redundancies in order to reduce the cost of implementing testing. More importantly, the strategies are directly applied to test cases in form of IOSTS, such that it can reduce not only the size of test suite, but also the cost of deriving test cases.",2010,0, 4394,Using clone detection to identify bugs in concurrent software,"In this paper we propose an active testing approach that uses clone detection and rule evaluation as the foundation for detecting bug patterns in concurrent software. If we can identify a bug pattern as being present then we can localize our testing effort to the exploration of interleavings relevant to the potential bug. Furthermore, if the potential bug is indeed a real bug, then targeting specific thread interleavings instead of examining all possible executions can increase the probability of the bug being detected sooner.",2010,0, 4395,Pairwise test set calculation using k-partite graphs,Many software faults are triggered by unusual combinations of input values and can be detected using pairwise test sets that cover each pair of input values. The generation of pairwise test sets with a minimal size is an NP-complete problem which implies that many algorithms are either expensive or based on a random process. In this paper we present a deterministic algorithm that exploits our observation that the pairwise testing problem can be modeled as a k-partite graph problem. We calculate the test set using well investigated graph algorithms that take advantage of properties of k-partite graphs. We present evaluation results that prove the applicability of our algorithm and discuss possible improvement of our approach.,2010,0, 4396,Test generation via Dynamic Symbolic Execution for mutation testing,"Mutation testing has been used to assess and improve the quality of test inputs. Generating test inputs to achieve high mutant-killing ratios is important in mutation testing. However, existing test-generation techniques do not provide effective support for killing mutants in mutation testing. In this paper, we propose a general test-generation approach, called PexMutator, for mutation testing using Dynamic Symbolic Execution (DSE), a recent effective test-generation technique. Based on a set of transformation rules, PexMutator transforms a program under test to an instrumented meta-program that contains mutant-killing constraints. Then PexMutator uses DSE to generate test inputs for the meta-program. The mutant-killing constraints introduced via instrumentation guide DSE to generate test inputs to kill mutants automatically. We have implemented our approach as an extension for Pex, an automatic structural testing tool developed at Microsoft Research. Our preliminary experimental study shows that our approach is able to strongly kill more than 80% of all the mutants for the five studied subjects. 
In addition, PexMutator is able to outperform Pex, a state-of-the-art test-generation tool, in terms of strong mutant killing while achieving the same block coverage.",2010,0, 4397,Understanding where requirements are implemented,"Trace links between requirements and code reveal where requirements are implemented. Such trace links are essential for code understanding and change management. The lack thereof is often cited as a key reason for software engineering failure. Unfortunately, the creation and maintenance of requirements-to-code traces remains a largely manual and error prone task due to the informal nature of requirements. This paper demonstrates that reasoning about requirements-to-code traces can be done, in part, by considering the calling relationships within the source code (call graph). We observed that requirements-to-code traces form regions along calling dependencies. Better knowledge about these regions has several direct benefits. For example, erroneous traces become detectable if a method inside a region does not trace to a requirement. Or, a missing trace (incompleteness) can be identified. Knowledge of requirement regions can also be used to help guide developers in establishing requirements-to-code traces in a more efficient manner. This paper discusses requirement regions and sketches their benefits.",2010,0, 4398,An approach to improving software inspections performance,Software inspections allow finding and removing defects close to their point of injection and are considered a cheap and effective way to detect and remove defects. A lot of research work has focused on understanding the sources of variability and improving software inspections performance. In this paper we studied the impact of inspection review rate in process performance. The study was carried out in an industrial context effort of bridging the gap from CMMI level 3 to level 5. We supported a decision for process change and improvement based on statistical significant information. Study results led us to conclude that review rate is an important factor affecting code inspections performance and that the applicability of statistical methods was useful in modeling and predicting process performance.,2010,0, 4399,Reverse engineering object-oriented distributed systems,"A significant part of the modern software systems are designed and implemented as object-oriented distributed applications, addressing the needs of a globally-connected society. While they can be analyzed focusing only on their object-oriented nature, their understanding and quality assessment require very specific, technology-dependent analysis approaches. This doctoral dissertation describes a methodology for understanding object-oriented distributed systems using a process of reverse engineering driven by the assessment of their technological and domain-specific particularities. The approach provides both system-wide and class-level characterizations, capturing the architectural traits of the systems, and assessing the impact of the distribution-aware features throughout the application. 
The methodology describes a mostly-automated analysis process fully supported by a tools infrastructure, providing means for detailed understanding of the distribution-related traits and including basic support for the potentially consequent system restructuring.",2010,0, 4400,Fine-grained incremental learning and multi-feature tossing graphs to improve bug triaging,"Software bugs are inevitable and bug fixing is a difficult, expensive, and lengthy process. One of the primary reasons why bug fixing takes so long is the difficulty of accurately assigning a bug to the most competent developer for that bug kind or bug class. Assigning a bug to a potential developer, also known as bug triaging, is a labor-intensive, time-consuming and fault-prone process if done manually. Moreover, bugs frequently get reassigned to multiple developers before they are resolved, a process known as bug tossing. Researchers have proposed automated techniques to facilitate bug triaging and reduce bug tossing using machine learning-based prediction and tossing graphs. While these techniques achieve good prediction accuracy for triaging and reduce tossing paths, they are vulnerable to several issues: outdated training sets, inactive developers, and imprecise, single-attribute tossing graphs. In this paper we improve triaging accuracy and reduce tossing path lengths by employing several techniques such as refined classification using additional attributes and intra-fold updates during training, a precise ranking function for recommending potential tossees in tossing graphs, and multi-feature tossing graphs. We validate our approach on two large software projects, Mozilla and Eclipse, covering 856,259 bug reports and 21 cumulative years of development. We demonstrate that our techniques can achieve up to 83.62% prediction accuracy in bug triaging. Moreover, we reduce tossing path lengths to 1.5-2 tosses for most bugs, which represents a reduction of up to 86.31% compared to original tossing paths. Our improvements have the potential to significantly reduce the bug fixing effort, especially in the context of sizable projects with large numbers of testers and developers.",2010,0, 4401,Model-driven detection of Design Patterns,"Tracing source code elements of an existing Object Oriented software system to the components of a Design Pattern is a key step in program comprehension or re-engineering. It helps, mainly for legacy systems, to discover the main design decisions and trade-offs that are often not documented. In this paper an approach is presented to automatically detect Design Patterns in existing Object Oriented systems by tracing system's source code components to the roles they play in the Patterns. Design Patterns are modelled by high level structural Properties (e.g. inheritance, dependency, invocation, delegation, type nesting and membership relationships) that are checked, by source code parsing, against the system structure and components. The approach allows to detect also Pattern variants, defined by overriding the Pattern structural properties. The approach was applied to some open-source systems to validate it. Results on the detected patterns, discovered variants and on the overall quality of the approach are provided and discussed.",2010,0, 4402,Physical and conceptual identifier dispersion: Measures and relation to fault proneness,"Poorly-chosen identifiers have been reported in the literature as misleading and increasing the program comprehension effort. 
Identifiers are composed of terms, which can be dictionary words, acronyms, contractions, or simple strings. We conjecture that the use of identical terms in different contexts may increase the risk of faults. We investigate our conjecture using a measure combining term entropy and term context coverage to study whether certain terms increase the odds ratios of methods to be fault-prone. Entropy measures the physical dispersion of terms in a program: the higher the entropy, the more scattered across the program the terms. Context coverage measures the conceptual dispersion of terms: the higher their context coverage, the more unrelated the methods using them. We compute term entropy and context coverage of terms extracted from identifiers in Rhino 1.4R3 and ArgoUML 0.16. We show statistically that methods containing terms with high entropy and context coverage are more fault-prone than others.",2010,0, 4403,Assessment of product maintainability for two space domain simulators,"The software life-cycle of applications supporting space missions follows a rigorous process in order to ensure the application compliance with all the specified requirements. Ensuring the correct behavior of the application is critical since an error can lead, ultimately, to the loss of a complete space mission. However, it is not only important to ensure the correct behavior of the application but also to achieve good product quality since the applications need to be maintained for several years. Then, the question arises, is a rigorous process enough to guarantee good product maintainability? In this paper we assess the software product maintainability of two simulators used to support space missions. The assessment is done using both a standardized analysis, using the SIG quality model for maintainability, and a customized copyright license analysis. The assessment results revealed several quality problems leading to three lessons. First, rigorous process requirements by themselves do not ensure product quality. Second, quality models can be used not only to pinpoint code problems but also to reveal team issues. Finally, tailored analyses, complementing quality models, are necessary for in-depth investigation of quality.",2010,0, 4404,Influences of different excitation parameters upon PEC testing for deep-layered defect detection with rectangular sensor,"In pulsed eddy current testing, repetitive excitation signals with different parameters: duty-cycle, frequency and amplitude have different response representations. This work studies the influences of different excitation parameters on pulsed eddy current testing for deep-layered defects detection of stratified samples with rectangular sensor. The sensor had been proved to be superior in quantification and classification of defects in multi-layered structures compared with traditional circular ones. Experimental results show necessities to optimize the parameters of pulsed excitation signal, and advantages of obtaining better performances to enhance the POD of PEC testing.",2010,0, 4405,FTA-based assessment methodology for adaptability of system under electromagnetic environment,"In order to develop a methodology for assessing the adaptability of system under electromagnetic environment especially RF environment, a Fault-Tree-Analysis-based method is proposed considering the possible performance degradation, disfunction and fault. A component-level and a system-level assessment are performed. 
An approach to describing logical and hypotactic relations of nodes in Fault-Tree Analysis is designed and illustrated in detail. A software platform is developed to implement the system-level assessment.",2010,0, 4406,Enterprise services (business) collaboration using portal and SOA-based semantics,"With the spread of Internet technologies and the severe competition among businesses, many organizations are moving towards integrating their services online. Service Oriented Architecture (SOA) has shown potential features in facilitating and managing services integration and exposing them through a Portal application. The marriage of these two new standards will definitely lead to a full-fledged service integration where key features of both paradigms will smoothen the heterogeneous service integration process. This is achieved by means of the automatic service composition and orchestration feature of SOA and the automatic customization and profiling feature of the portal technology. This research endeavored to integrate Service Oriented Architecture (SOA) with portal technology. In this paper, we propose an architecture to help integrating (composing) services online and exposing them via one single point of access known as a portal. Composition of Web Services is semantically supported, and relies on user requirements and profile. To ensure smooth integration of services while ensuring good quality, for instance high availability, good response time, and processing time, a monitoring technique has been proposed to detect and report if any QoS violation of service composition occurs. The designed architecture has been applied to a use case scenario: an e-commerce portal (ECP), and the results of a system prototype have been reported to demonstrate some relevant features of the proposed approach.",2010,0, 4407,Power electronics health monitoring test platform for assessment of modern power drives and electric machines with regeneration capabilities,This work presents a power electronics health monitoring test platform for assessing modern power drives and electric machines with regeneration capabilities. This versatile platform combines data acquisition of critical system signals that are used for analysis as health indicators for the overall system and individual components such as power semiconductor devices. The test platform combines hardware and software in the loop to allow health monitoring and control techniques for fault tolerance.,2010,0, 4408,An Anomaly Detection Framework for Autonomic Management of Compute Cloud Systems,"In large-scale compute cloud systems, component failures become the norm instead of the exception. Failure occurrence as well as its impact on system performance and operation costs are becoming an increasingly important concern to system designers and administrators. When a system fails to function properly, health-related data are valuable for troubleshooting. However, it is challenging to effectively detect anomalies from the voluminous amount of noisy, high-dimensional data. The traditional manual approach is time-consuming, error-prone, and not scalable. In this paper, we present an autonomic mechanism for anomaly detection in compute cloud systems. A set of techniques is presented to automatically analyze collected data: data transformation to construct a uniform data format for data analysis, feature extraction to reduce data size, and unsupervised learning to detect the nodes acting differently from others.
We evaluate our prototype implementation on an institute-wide compute cloud environment. The results show that our mechanism can effectively detect faulty nodes with high accuracy and low computation overhead.",2010,0, 4409,Simulation of High-Performance Memory Allocators,"Current general-purpose memory allocators do not provide sufficient speed or flexibility for modern high-performance applications. To optimize metrics like performance, memory usage and energy consumption, software engineers often write custom allocators from scratch, which is a difficult and error-prone process. In this paper, we present a flexible and efficient simulator to study Dynamic Memory Managers (DMMs), a composition of one or more memory allocators. This novel approach allows programmers to simulate custom and general DMMs, which can be composed without incurring any additional runtime overhead or additional programming cost. We show that this infrastructure simplifies DMM construction, mainly because the target application does not need to be compiled every time a new DMM must be evaluated. Within a search procedure, the system designer can choose the """"best"""" allocator by simulation for a particular target application. In our evaluation, we show that our scheme will deliver better performance, less memory usage and less energy consumption than single memory allocators.",2010,0, 4410,A Scenario of Service-Oriented Principles Adaptation to the Telecom Providers Service Delivery Platform,"Telecom service providers face a challenge how to increase average revenue per user by new-generation services. In view of the fact that it is extremely difficult to predict the success of the certain kind of service(s), as a result the providers are in need for a dynamic architecture that has to be capable to deliver new services promptly, add resources for successful services as demand increases, or remove unsuccessful services effortlessly. Such architecture has to be a modular standards-based service platform that supports different protocols and interfaces as well as QoS-based transformations and gateways. The potential candidate for this delivery platform is Service Oriented Architecture (SOA). Thus, the aim of our work is to develop a SOA implementation methodology considering telecom service providers existing enterprise network architecture and potential future growth.",2010,0, 4411,Developing Fault Tolerant Distributed Systems by Refinement,"Distributed systems are usually large and complex systems composed of various components. System components are subject to various errors. These failures often require error recovery to be conducted at architectural-level. However, due to complexity of distributed systems, specifying fault tolerance mechanisms at architectural level is complex and error prone. In this paper, we propose a formal approach to specifying components and architectures of fault tolerant distributed and reactive systems. Our approach is based on refinement in the action system formalism - a framework for formal model-driven development of distributed systems. We demonstrate how to specify and refine fault tolerant components and complex distributed systems composed of them. The proposed approach provides designers with a systematic method for developing distributed fault tolerant systems.",2010,0, 4412,Software Fault Prediction Models for Web Applications,"Our daily life increasingly relies on Web applications. Web applications provide us with abundant services to support our everyday activities. 
As a result, quality assurance for Web applications is becoming important and has gained much attention from software engineering community. In recent years, in order to enhance software quality, many software fault prediction models have been constructed to predict which software modules are likely to be faulty during operations. Such models can be utilized to raise the effectiveness of software testing activities and reduce project risks. Although current fault prediction models can be applied to predict faulty modules of Web applications, one limitation of them is that they do not consider particular characteristics of Web applications. In this paper, we try to build fault prediction models aiming for Web applications after analyzing major characteristics which may impact on their quality. The experimental study shows that our approach achieves very promising results.",2010,0, 4413,The Level of Decomposition Impact on Component Fault Tolerance,"In fault tolerant software systems, the Level of Decomposition (LoD) where design diversity is applied has a major impact on software system reliability. By disregarding this impact, current fault tolerance techniques are prone to reliability decrease due to the inappropriate application level of design diversity. In this paper, we quantify the effect of the LoD on system reliability during software recomposition when the functionalities of the system are redistributed across its components. We discuss the LoD in fault tolerant software architectures according to three component failure transitions: component failure occurrence, component failure propagation, and component failure impact. We illustrate the component aspects that relate the LoD to each of these failure transitions. Finally, we quantify the effect of the LoD on system reliability according to a series of decomposition and/or merge operations that may occur during software recomposition.",2010,0, 4414,The Right Tool for the Right Job: Assessing Model Transformation Quality,"Model-Driven Engineering (MDE) is a software engineering discipline in which models play a central role. One of the key concepts of MDE is model transformations. Because of the crucial role of model transformations in MDE, they have to be treated in a similar way as traditional software artifacts. They have to be used by multiple developers, they have to be maintained according to changing requirements and they should preferably be reused. It is therefore necessary to define and assess their quality. In this paper, we give two definitions for two different views on the quality of model transformations. We will also give some examples of quality assessment techniques for model transformations. The paper concludes with an argument about which type of quality assessment technique is most suitable for either of the views on model transformation quality.",2010,0, 4415,Optimizing Software Quality Assurance,"A major concern for managers of software projects are the triple constraints of cost, schedule and quality due to the difficulties to quantify accurately the trade-offs between them. Project managers working for accredited companies with a high maturity will typically use software cost estimation models like COCOMO II and predict software quality by the estimated number of defects the product is likely to contain at release. However, most of these models are used separately and the interplay between cost/effort estimation, project scheduling and the resultant quality of the software product is not well understood. 
In this paper, we propose a regression-based model that allows project managers to estimate the trade-off between the quality, cost and development time of a software product, based on previously collected data.",2010,0, 4416,Using Coverage Information to Guide Test Case Selection in Adaptive Random Testing,"Random Testing (RT) is a fundamental software testing technique. Adaptive Random Testing (ART) improves the fault-detection capability of RT by employing the location information of previously executed test cases. Compared with RT, test cases generated in ART are more evenly spread across the input domain. ART has conventionally been applied to programs that have only numerical input types, because the distance between numerical inputs is readily measurable. The vast majority of computer programs, however, involve non-numerical inputs. To apply ART to these programs requires the development of effective new distance measures. Different from those measures that focus on the concrete values of program inputs, in this paper we propose a method to measure the distance using coverage information. The proposed method enables ART to be applied to all kinds of programs regardless of their input types. Empirical studies are further conducted for the branch coverage Manhattan distance measure using the replace and space programs. Experimental results show that, compared with RT, the proposed method significantly reduces the number of test cases required to detect the first failure. This method can be directly applied to prioritize regression test cases, and can also be incorporated into code-based and model-based test case generation tools.",2010,0, 4417,Natural Language Processing Based Detection of Duplicate Defect Patterns,"A Defect pattern repository collects different kinds of defect patterns, which are general descriptions of the characteristics of commonly occurring software code defects. Defect patterns can be widely used by programmers, static defect analysis tools, and even runtime verification. Following the idea of web 2.0, defect pattern repositories allow these users to submit defect patterns they found. However, submission of duplicate patterns would lead to a redundancy in the repository. This paper introduces an approach to suggest potential duplicates based on natural language processing. Our approach first computes field similarities based on Vector Space Model, and then employs Information Entropy to determine the field importance, and next combines the field similarities to form the final defect pattern similarity. Two strategies are introduced to make our approach adaptive to special situations. Finally, groups of duplicates are obtained by adopting Hierarchical Clustering. Evaluation indicates that our approach could detect most of the actual duplicates (72% in our experiment) in the repository.",2010,0, 4418,A Knowledge Discovery Case Study of Software Quality Prediction: ISBSG Database,"Software becomes more and more important in modern society. However, the quality of software is influenced by many un-trustworthy factors. This paper applies MCLP model on ISBSG database to predict the quality of software and reveal the relation between the quality and development attributes. The experimental result shows that the quality level of software can be well predicted by MCLP Model. 
Besides, several useful conclusions have been drawn from the experimental result.",2010,0, 4419,Similarity-Based Bayesian Learning from Semi-structured Log Files for Fault Diagnosis of Web Services,"With the rapid development of XML language which has good flexibility and interoperability, more and more log files of software running information are represented in XML format, especially for Web services. Fault diagnosis by analyzing semi-structured and XML like log files is becoming an important issue in this area. For most related learning methods, there is a basic assumption that training data should be in identical structure, which does not hold in many situations in practice. In order to learn from training data in different structures, we propose a similarity-based Bayesian learning approach for fault diagnosis in this paper. Our method is to first estimate similarity degrees of structural elements from different log files. Then the basic structure of combined Bayesian network (CBN) is constructed, and the similarity-based learning algorithm is used to compute probabilities in CBN. Finally, test log data can be classified into possible fault categories based on the generated CBN. Experimental results show our approach outperforms other learning approaches on those training datasets which have different structures.",2010,0, 4420,A Hybrid Approach for Model-Based Random Testing,"Random testing is a valuable supplement to systematic test methods because it discovers defects that are very hard to detect with systematic test strategies. We propose a novel approach for random test generation that combines the benefits of model-based testing, constraint satisfaction, and pure random testing. The proposed method has been incorporated into the IDATG (Integrating Design and Automated Test case Generation) tool-set and validated in a number of case studies. Their results indicate that using the new approach it is indeed possible to generate effective test cases in acceptable time.",2010,0, 4421,The SQALE Analysis Model: An Analysis Model Compliant with the Representation Condition for Assessing the Quality of Software Source Code,This paper presents the analysis model of the assessment method of software source code SQALE (Software Quality Assessment Based on Lifecycle Expectations). We explain what brought us to develop consolidation rules based in remediation indices. We describe how the analysis model can be implemented in practice.,2010,0, 4422,Software life cycle-based defects prediction and diagnosis technique research,"A model based on Bayesian network is put forward to predict and diagnose software defects before project. In the model causes and effects inference is used to predict defects, and the Bayesian formula is introduced to analyze the prediction result that helps to find the root reason of bring defects. The model considers every phase of software life cycle, such as requirement, design, development and testing, maintenance. The model computes predict result through variational weight of affect-factor. The computation results of specifically affect-factor by model compared with practical defect in same condition which to indict that the model is validity. 
The model can predict and find defects early and effectively, it can control the software quality and the cost of development.",2010,0, 4423,A framework to discover potential deviation between program and requirement through mining object graph,"Software is expected to be derived from requirements whose properties have been established perfectly. However, requirements are often inaccurate, incomplete or inconsistent as it is a very difficult task to define and analyze requirements. On the other hand, programs most likely deviates from requirements during implementation as the result of misunderstanding or/and neglecting requirements of software engineers. Deviations between programs and requirements are error prone, or cause software to act in unpredictable or unexpected ways. In this paper, we propose a novel framework that uses graph-based mining techniques to discover software execution patterns from object graph firstly, and then searches and matches within a pattern repository to determine whether the discovered software execution patterns are potential deviations from requirements corresponding to neglected requirements or not. After that, the new discovered software execution patterns are labeled and saved back into pattern repository. Hence, the framework is evolutionary and its ability will be more powerful. We give a case study to show how the framework works. The work indicates that the framework is effective and reasonably efficient for improving software quality.",2010,0, 4424,Computational Modeling of Electrical Contact Crimping and Mechanical Strength Analysis,"Several thousands electrical connections are necessary in airplanes. Electrical cables are joined to contacts using manual crimping devices in industrial plants. When mechanical defects appear, the replacement of the defective connections has to be made directly on the airplane. This operation is difficult and time-consuming due to reduced accessibility, making it very costly. The aim of this paper is to simulate the crimping operation and then evaluate the mechanical strength of the obtained connection in order to identify, understand and eliminate defects due to unsatisfactory crimping operations. Material behavior data is accessed through mechanical testing performed on cables and contact; accurate material data is mandatory to improve the accuracy of the simulation results. The Forge software is used to perform the numerical simulation to predict the stress and strain distribution as well as the crimping force during crimping operations. Resulting mechanical strength of the joint is also analysed using numerical simulation. Experimental and numerical results are then compared.",2010,0, 4425,A short-term prediction for QoS of Web Service based on RBF neural networks including an improved K-means algorithm,"The structure of RBF neural networks and an improved K-means algorithm will be introduced in the paper. Based on this, RBF neural networks is applied to predict the QoS of Web Service and the functions of the MATLAB toolbox are adopted to create a network model for QoS prediction. Finally the simulation experiments will prove that using RBF neural networks based on the improved K-means algorithm to predict the QoS of Web Service is effective and efficient.",2010,0, 4426,An improved tone mapping algorithm for High Dynamic Range images,"Real world scenes contain a large range of light intensities. To adapt to display device, High Dynamic Range (HDR) image should be converted into Low Dynamic Range (LDR) image. 
A common task of tone mapping algorithms is to reproduce high dynamic range images on low dynamic range display devices. In this paper, a new tone mapping algorithm is proposed for high dynamic range images. Based on the probabilistic model proposed for high dynamic range image tone reproduction, the proposed method uses a logarithmic normal distribution instead of a normal distribution. Therefore, the algorithm can preserve visibility and contrast impression of high dynamic range scenes in the common display devices. Experimental results show the superior performance of the approach in terms of visual quality.",2010,0, 4427,Comparison research of two typical UML-class-diagram metrics: Experimental software engineering,"Measuring UML class diagram complexity can help developers select one with lowest complexity from a variety of different designs with the same functionality; also provide guidance for developing high quality class diagrams. This paper compared the advantages and disadvantages of two typical class-diagram complexity metrics based on statistics and entropy-distance respectively from the view of newly experimental software engineering. 27 class diagrams related to the banking system were classified and predicted their understandability, analyzability and maintainability by means of algorithm C5.0 in the well-known software SPSS Clementine. Results showed that the UML class diagram complexity metric based on statistics has higher classification accuracy than that based on entropy-distance.",2010,0, 4428,Rate control scheme for H.264/AVC video encoding,"Rate control is a critical part of video compression systems, while the introduction of Rate-Distortion Optimization makes rate control in H.264/AVC more complex than previous standards. The frame coding complexity MAD is predicted in H.264/AVC rate control. In this paper, we present a novel estimation method for frame coding complexity in H.264/AVC. Meanwhile, a rate control scheme is proposed to improve the coding efficiency. Experimental results demonstrate that our rate control scheme gains better visual quality than that of the joint model in the H.264/AVC reference software.",2010,0, 4429,Development of a real-time machine vision system for detecting defeats of cord fabrics,"Automatic detection techniques based on machine vision can be used in the fabric industry for quality control, which constantly pursues intelligent methods to replace human inspections of product. This work introduces the principal components of a real-time machine vision system for defeat detection of cord fabrics, which is usually a challenging task in practice. The work aims at solving some difficulties usually incurring in such kind of tasks. The design and implementation of the algorithm, software and hardware are introduced. Based on the Gabor wavelet techniques, the system can automatically detect regular texture defects. Our experiments show the proposed algorithm is favorably suited for detecting several types of cord fabric defects. The system testing has been carried out in both on-line and off-line situations. The corresponding results show the system has good performance with high detection accuracy, quick response and strong robustness.",2010,0, 4430,Research on formal description of data flow software faults,"Software plays an important part in our society. The occurrence of software fault may lead to serious disaster. Data flow software fault is a kind of important software fault.
In this paper, the properties of data dependency relationship are studied, the formal definitions of some data flow software faults, such as using undefined variable, nonused variable since definition, and redefining nonused variable since definition are given, the corresponding detecting methods are proposed, and some sample data flow software faults are given to demonstrate the effectiveness of the proposed methods.",2010,0, 4431,Practical Aspects in Analyzing and Sharing the Results of Experimental Evaluation,"Dependability evaluation techniques such as the ones based on testing, or on the analysis of field data on computer faults, are a fundamental process in assessing complex and critical systems. Recently a new approach has been proposed consisting in collecting the row data produced in the experimental evaluation and store it in a multidimensional data structure. This paper reports the work in progress activities of the entire process of collecting, storing and analyzing the experimental data in order to perform a sound experimental evaluation. This is done through describing the various steps on a running example.",2010,0, 4432,Quantifying Resiliency of IaaS Cloud,"Cloud based services may experience changes - internal, external, large, small - at any time. Predicting and quantifying the effects on the quality-of-service during and after a change are important in the resiliency assessment of a cloud based service. In this paper, we quantify the resiliency of infrastructure-as-a-service (IaaS) cloud when subject to changes in demand and available capacity. Using a stochastic reward net based model for provisioning and servicing requests in a IaaS cloud, we quantify the resiliency of IaaS cloud w.r.t. two key performance measures - job rejection rate and provisioning response delay.",2010,0, 4433,CCDA: Correcting control-flow and data errors automatically,"This paper presents an efficient software technique to detect and correct control-flow errors through addition of redundant codes in a given program. The key innovation performed in the proposed technique is detection and correction of the control-flow errors using both control-flow graph and data-flow graph. Using this technique, most of control-flow errors in the program are detected first, and next corrected, automatically; so, both errors in the control-flow and program data which is caused by control-flow errors can be corrected. In order to evaluate the proposed technique, a post compiler is used, so that the technique can be applied to every 8086 binaries, transparently. Three benchmarks quick sort, matrix multiplication and linked list are used, and a total of 5000 transient faults are injected on several executable points in each program. The experimental results demonstrate that at least 93% of the control-flow errors can be detected and corrected by the proposed technique automatically without any data error generation. Moreover, the performance and memory overheads of the technique are noticeably less than traditional techniques.",2010,0, 4434,An approach for mining web service composition patterns from execution logs,"A service-oriented application is composed of multiple web services to fulfill complex functionality that cannot be provided by individual web service. The combination of services is not random. In many cases, a set of services are repetitively used together in various applications. We treat such a set of services as a service composition pattern. 
The quality of the patterns is desirable due to the extensive uses and testing in the large number of applications. Therefore, the service composition patterns record the best practices in designing and developing reliable service-oriented applications. The execution log tracks the execution of services in a service-oriented application. To document the service composition patterns, we propose an approach that automatically identifies service composition patterns from various applications using execution logs. We locate a set of associated services using Apriori algorithm and recover the control flows among the services by analyzing the order of service invocation events in the execution log. A case study shows that our approach can effectively detect service composition patterns.",2010,0, 4435,Self-adaptive management of Web processes,"Nowadays, we are assisting to a paradigmatic shift for the development of web applications due to the pervasive distribution of their components among a lot of servers, which are dynamically interconnected by web links. As a consequence, the application logic is often defined by exploiting workflow languages since they are more suitable to address the complexity of these new running environments. Moreover, in many business environments, the behaviour of a large-scale distributed web application is significantly influenced by context events, whose handling could require run-time adaptations of the application logic to properly react to the changing conditions of the execution context. This paper addresses the need for adaptation in large-scale web applications by proposing a programming paradigm based on autonomic workflows, i.e. workflows that are able to self-change their structure in order to allow for the continuation of the execution towards the termination, even if unexpected anomalies occur during the execution. The proposed approach exploits semantic languages for service description, autonomic managers driven by policies specified using a dedicated language, and a knowledge base containing information collected during processes execution. Autonomic actions are performed using Event Condition Action (ECA) rules for assessing system and process conditions, and a set of operations that allow for dynamic adaptation of the running processes. Furthermore, the correctness of workflow adaptation is checked before the modifications are performed, by using both syntactical and semantic constraints.",2010,0, 4436,Measuring web service interfaces,"The following short paper describes a tool supported method for measuring web service interfaces. The goal is to assess the complexity and quality of these interfaces as well as to determine their size for estimating evolution and testing effort. Besides the metrics for quantity, quality and complexity, rules are defined for ensuring maintainability. In the end a tool - WSDAudit - is described which the author has developed for the static analysis of web service definitions. The WSDL schemas are automatically audited and measured for quality assurance and cost estimation. Work is underway to verify them against the BPEL procedures from which they are invoked.",2010,0, 4437,A Rigorous Method for Inspection of Model-Based Formal Specifications,"Writing formal specifications can help developers understand users' requirements, and build a solid foundation for implementation. But like other activities in software development, it is error-prone, especially for large-scale systems. 
In practice, effective detection of specification errors still remains a challenge. In this paper, we put forward a rigorous, systematic method for the inspection of model-based formal specifications. The method makes good use of the well-defined consistency properties of a specification to provide precise rules and guidelines for inspection. The inspection process utilizes both well-defined expressions derived from the specification and human inspectors' judgments to find errors. We present a case study of the method by describing how it is applied to inspect an Automated Teller Machine (ATM) software specification to investigate the method's feasibility, and explore potential challenges in using it. We also describe a prototype software tool including its functions and distinct features to demonstrate the tool supportability of the method.",2010,0, 4438,Resonance verification of Tehran-Karaj electrical railway,"In this paper analyses results of harmonic and resonance behavior of Tehran-Karaj electric railway are presented. This special electric traction system supplied by 225 kV Autotransformers (ATs) in the west side and a simple mode (without any return feeder) in the east side is considered here where the east side will be equipped with ATs in future development. Also, some parameters which can change quantity and extremity of resonance are explored. For harmonic analysis, Total Harmonic Distortion (THD) factor and Total Demand Distortion (TDD) factor are assessed. To detect the resonance points of the traction power system, harmonic frequency scans are applied. Harmonic analysis of the model is simulated by PSCAD/EMTDC software.",2010,0,4439 4439,Resonance verification of Tehran-Karaj electrical railway,"In this paper analyses results of harmonic and resonance behavior of Tehran-Karaj electric railway are presented. This special electric traction system supplied by 2 25 kV Autotransformers (ATs) in the west side and a simple mode (without any return feeder) in the east side is considered here where the east side will be equipped with ATs in future development. Also, some parameters which can change quantity and extremity of resonance are explored. For harmonic analysis, Total Harmonic Distortion (THD) factor and Total Demand Distortion (TDD) factor are assessed. To detect the resonance points of the traction power system, harmonic frequency scans are applied. Harmonic analysis of the model is simulated by PSCAD/EMTDC software.",2010,0, 4440,Automatic detection of schwalbe's line in the anterior chamber angle of the eye using HD-OCT images,"Angle-closure glaucoma is a major cause of blindness in Asia and could be detected by measuring the anterior chamber angle (ACA) using gonioscopy, ultrasound biomicroscopy or anterior segment (AS) optical coherence tomography (OCT). The current software in the VisanteTM OCT system by Zeiss is based on manual labeling of the scleral spur, cornea and iris and is a tedious process for ophthalmologists. Furthermore, the scleral spur can not be identified in about 20% to 30% of OCT images and thus measurements of the ACA are not reliable. However, high definition (HD) OCT has identified a more consistent landmark: Schwalbe's line. This paper presents a novel algorithm which automatically detects Schwalbe's line in HD-OCT scans. The average deviation between the values detected using our algorithm and those labeled by the ophthalmologist is less than 0.5% and 0.35% in the horizontal and vertical image dimension, respectively. 
Furthermore, we propose a new measurement to quantify ACA which is defined as Schwalbe's line bounded area (SLBA).",2010,0, 4441,An expert system for hydrocephalus patient feedback,"Diagnosis of hydrocephalus symptoms and shunting system faults currently are based on clinical observation, monitoring of cranial growth, transfontanelle pressure, imaging techniques and, on occasion, studies of cerebrospinal fluid (CSF) dynamics. Up to date, the patient has to visit the hospital or meet consultant to diagnose the symptoms that occur due to rising of intracranial pressure or any shunt complications, which cause suffering for the patient and his family. This work presents the design and implementation of an expert system based on real-time patient feedback that aims to provide a suitable decision for hydrocephalus management and shunt diagnosis. Such decision would help in personalising the management as well as detecting and identifying of any shunt malfunctions without the need to contact or visit the hospital. In this paper, the development of patient feedback expert system is described. The outcome of such system would help satisfy the patient's needs regarding his/her shunt.",2010,0, 4442,Automatic code generation for solvers of cardiac cellular membrane dynamics in GPUs,"The modeling of the electrical activity of the heart is of great medical and scientific interest, as it provides a way to get a better understanding of the related biophysical phenomena, allows the development of new techniques for diagnoses and serves as a platform for drug tests. However, due to the multi-scale nature of the underlying processes, the simulations of the cardiac bioelectric activity are still a computational challenge. In addition to that, the implementation of these computer models is a time consuming and error prone process. In this work we present a tool for prototyping ordinary differential equations (ODEs) in the area of cardiac modeling that aim to provide the automatic generation of high performance solvers tailored to the new hardware architecture of the graphic processing units (GPUs). The performance of these automatic solvers was evaluated using four different cardiac myocyte models. The GPU version of the solvers were between 75 and 290 times faster than the CPU versions.",2010,0, 4443,Analysis of image quality parameter of conventional and dental radiographic digital images,"The image quality obtained by a radiographic equipment is very useful to characterize the physical properties of the image radiographic chain, in a quality control of the radiographic equipment. In the radiographic technique it is necessary that the evaluation of the image can guarantee the constancy of its quality to carry out a suitable diagnosis. In this work we have designed some radiographic phantoms for different radiographic digital devices, as dental, conventional, equipments with computed radiography (phosphor plate) and direct radiography (sensor) technology. Additionally, we have developed a software to analyse the image obtained by the radiographic equipment with digital processing techniques, as edge detector, morphological operators, statistical test for the detected combinations.. The design of these phantoms let the evaluation of a wide range of operating conditions of voltage, current and time of the digital equipments. 
Moreover, the analysis performed by the automatic software allows the image quality to be studied with objective parameters.",2010,0, 4444,iWander: An Android application for dementia patients,"Non-pharmacological management of dementia puts a burden on those who are taking care of a patient who suffers from this chronic condition. Caregivers frequently need to assist their patients with activities of daily living. However, they are also encouraged to promote functional independence. With the use of a discrete monitoring device, functional independence is increased among dementia patients while decreasing the stress put on caregivers. This paper describes a tool which improves the quality of treatment for dementia patients using mobile applications. Our application, iWander, runs on several Android based devices with GPS and communication capabilities. This allows for caregivers to cost effectively monitor their patients remotely. The data collected from the device is evaluated using Bayesian network techniques which estimate the probability of wandering behavior. Upon evaluation several courses of action can be taken based on the situation's severity, dynamic settings and probability. These actions include issuing audible prompts to the patient, offering directions to navigate them home, sending notifications to the caregiver containing the location of the patient, establishing a line of communication between the patient-caregiver and performing a party call between the caregiver-patient and patient's local 911. As patients use this monitoring system more, it will better learn and identify normal behavioral patterns which increases the accuracy of the Bayesian network for all patients. Normal behavior classifications are also used to alert the caregiver or help patients navigate home if they begin to wander while driving allowing for functional independence.",2010,0, 4445,SeyeS - support system for preventing the development of ocular disabilities in leprosy,"Leprosy is an infectious disease caused by Mycobacterium Leprae, and generally compromises neural fibers, leading to the development of disabilities. These limit daily activities or social life. In leprosy, the study of disability considered functional (physical) and activity limitations, and social participation. These are measured respectively by the EHF and SALSA scales, and by the PARTICIPATION SCALE. The objective of this work was to propose a support system, SeyeS, to identify the development and progression of ocular disabilities, applying Bayesian networks (BNs). It is expected that the proposed system be applied in monitoring the patient during treatment and after therapeutic cure of leprosy. SeyeS presented specificity 1 and sensitivity 0.6 in the identification of ocular disabilities development. With SeyeS it was discovered that the presence of trichiasis and lagophthalmos tends to increase the probability of developing more disabilities. Otherwise, characteristics such as cataracts tend to decrease the development of other disabilities, considering that medical interventions could reduce it. The main importance of this system is to indicate what should be monitored, and which elements need interventions to avoid increasing the patient's ocular disabilities.",2010,0, 4446,A Hierarchical Formal Framework for Adaptive N-variant Programs in Multi-core Systems,"We propose a formal framework for designing and developing adaptive N-variant programs. The framework supports multiple levels of fault detection, masking, and recovery through reconfiguration.
Our approach is two-fold: we introduce an Adaptive Functional Capability Model (AFCM) to define levels of functional capabilities for each service provided by the system. The AFCM specifies how, once a fault is detected, a system shall scale back its functional capabilities while still maintaining essential services. Next, we propose a Multilayered Assured Architecture Design (MAAD) to implement reconfiguration requirements specified by AFCMs. The layered design improves system resilience in two dimensions: (1) unlike traditional fault-tolerant architectures that treat functional requirements uniformly, each layer of the assured architecture implements a level of functional capability defined in AFCM. The architecture design uses lower-layer functionalities (which are simpler and more reliable) as reference to monitor high-layer functionalities. The layered design also facilitates an orderly system reconfiguration (resulting in graceful degradation) while maintaining essential system services. (2) Each layer of the assured architecture uses N-variant techniques to improve fault detection. The degree of redundancy introduced by Nvariant implementation determines the mix of faults that can be tolerated at each layer. Our hybrid fault model allows us to consider fault types ranging from benign faults to Byzantine faults. Last but not least, multi-layers combined with N-variant implementations are especially suitable for multi-core systems.",2010,0, 4447,Hybrid Probabilistic Relational Models for System Quality Analysis,"The formalism Probabilistic Relational Models (PRM) couples discrete Bayesian Networks with a modeling formalism similar to UML class diagrams and has been used for architecture analysis. PRMs are well-suited to perform architecture analysis with respect to system qualities since they support both modeling and analysis within the same formalism. A particular strength of PRMs is the ability to perform meaningful analysis of domains where there is a high level of uncertainty, as is often the case when performing system quality analysis. However, the use of discrete Bayesian networks in PRMs complicates the analysis of continuous phenomena. The main contribution of this paper is the Hybrid Probabilistic Relational Models (HPRM) formalism which extends PRMs to enable continuous analysis thus extending the applicability for architecture analysis and especially for trade-off analysis of system qualities. HPRMs use hybrid Bayesian networks which allow combinations of discrete and continuous variables. In addition to presenting the HPRM formalism, the paper contains an example which details the use of HPRMs for architecture trade-off analysis.",2010,0, 4448,A Multi-Agent Approach for Self-Diagnosis of a Hydrocephalus Shunting System,"The human brain is immersed in cerebrospinal fluid, which protects it from mechanical stresses and helps support its weight through buoyancy. A constant overproduction, blockage or reabsorption difficulty can result in a build-up of fluid in the skull (Hydrocephalus), which can lead to brain damage or even death. Existing treatments rely on passive implantable shunts that drain the excess fluid out of the skull cavity, thus keeping intracranial pressure in equilibrium. Shunt malfunction is one of the most common clinical problems in pediatric neurosurgery. Unfortunately, symptoms of various shunt complications can be very similar thus complicating the diagnosing process. 
It is proposed to complement the existing implanted valve with an intracranial pressure sensor, flowmeter and transceiver to be able to make a self-diagnosis. By using such method, all current shunt malfunctions should be detected early and the types of these malfunctions would be predicted. Currently, a mechatronic valve with control software is under investigation and it will be a future solution for most of current shunt problems. This paper describes the design of a multi-agent system for self-diagnosis of hydrocephalus shunting system. An intelligent concept for intelligent agents is proposed that would deal with any shunt malfunctions in an independent and efficient way, with different agents cooperating and communicating through message exchange, each agent specialised in specific tasks of the diagnosis process. Six types of agents have been proposed to detect any faults in hierarchical way. This paper proposed one of the most promising methods for the self-diagnosis and monitoring of hydrocephalus shunting system based on a novel multi-agent approach.",2010,0, 4449,Predicting Faults in High Assurance Software,"Reducing the number of latent software defects is a development goal that is particularly applicable to high assurance software systems. For such systems, the software measurement and defect data is highly skewed toward the not-fault-prone program modules, i.e., the number of fault-prone modules is relatively very small. The skewed data problem, also known as class imbalance, poses a unique challenge when training a software quality estimation model. However, practitioners and researchers often build defect prediction models without regard to the skewed data problem. In high assurance systems, the class imbalance problem must be addressed when building defect predictors. This study investigates the roughly balanced bagging (RBBag) algorithm for building software quality models with data sets that suffer from class imbalance. The algorithm combines bagging and data sampling into one technique. A case study of 15 software measurement data sets from different real-world high assurance systems is used in our investigation of the RBBag algorithm. Two commonly used classification algorithms in the software engineering domain, Naive Bayes and C4.5 decision tree, are combined with RBBag for building the software quality models. The results demonstrate that defect prediction models based on the RBBag algorithm significantly outperform models built without any bagging or data sampling. The RBBag algorithm provides the analyst with a tool for effectively addressing class imbalance when training defect predictors during high assurance software development.",2010,0, 4450,Proved Metamodels as Backbone for Software Adaptation,"In this paper we demonstrate the error-prone status of the UML 2.3 metamodel relating to state machines. We consequently provide a corrected version based on formal proofs written and processed with the help of the Coq system prover. The purpose of the proposed research is to support dynamical adaptation by means of models at runtime. Software components are internally endowed with complex state machines (models) realizing their behavior. Adaptation amounts to dynamically changing the state machines' structure (for instance, adding a new state). 
This occurs via SimUML, a state machine execution engine that is constructed on the top of a metamodel resulting from correctness proofs.",2010,0, 4451,A Framework for Qualitative and Quantitative Formal Model-Based Safety Analysis,"In model-based safety analysis both qualitative aspects i.e. what must go wrong for a system failure) and quantitative aspects (i.e. how probable is a system failure) are very important. For both aspects methods and tools are available. However, until now for each aspect new and independent models must be built for analysis. This paper proposes the SAML framework as a formal foundation for both qualitative and quantitative formal model-based safety analysis. The main advantage of SAML is the combination of qualitative and quantitative formal semantics which allows different analyses on the same model. This increases the confidence in the analysis results, simplifies modeling and is less error-prone. The SAML framework is tool-independent. As proof-of-concept, we present sound transformation of the formalism into two state of the art model-checking notations. Prototypical tool support for the sound transformation of SAML into PRISM and MRMC for probabilistic analysis as well as different variants of the SMV model checker for qualitative analysis is currently being developed.",2010,0, 4452,Request Path Driven Model for Performance Fault Diagnoses,"Locating and diagnosing performance faults in distributed systems is crucial but challenging. Distributed systems are increasingly complex, full of various correlation and dependency, and exhibit dramatic dynamics. All these made traditional approaches prone to high false alarms. In this paper, we propose a novel system modeling technique, which encodes component's dynamic dependencies and behavior characteristics into system's meta-model and takes it as a unifying framework to deploy component's sub-models. We propose an automatic analyze approach to distill, from request travel paths, request path signatures, the essential information of component's dynamic behaviors, and use it to induce metamodel with Bayesian network, and then use the model to make fault location and diagnoses. We take up fault-injection experiments with RUBiS, a TPCW alike benchmark, simulating eBay.com. The results indicate that our model approach provides effective problem diagnosis, i.e., Bayesian network technique is effective for fault detecting and pinpointing, in terms of request tracing context. Moreover, meta-model induced with request paths, provides an effective guidance for learning statistical correlations among metrics across the system, which effectively avoid 'false alarms' in fault pinpointing. As a case study, we construct a proactive recovery framework, which integrate our system modeling technique with software rejuvenation technique to guarantee system's quality of services.",2010,0, 4453,Earliest Start Time Estimation for Advance Reservation-Based Resource Brokering within Computational Grids,"The ability to conduct advance reservations within grid environments is crucial for applications that want to utilize distributed resources in a predictable and efficient way. Advance reservations are essential for supporting deadline driven applications and the co-allocation of distributed resources. Further, advance reservations can significantly enhance the capabilities of resource brokers. Many Local Resource Management Systems (LRMSs), e.g. Torque/Maui, PBSPro and LSF, provide support for advance reservations. 
However, these capabilities are usually not accessible via the grid middleware. This paper presents and evaluates a job broker architecture that addresses this deficit and provides support for advance reservation at the grid level as well as a reservation-aware grid resource broker. A core component of the presented job broker architecture is the Earliest Start Time Estimator (ESE), which predicts queuing delays at grid resources. The job broker service is part of the Migol system - a grid middleware that addresses the fault tolerance of applications by offering capabilities such as automatic monitoring and recovery.",2010,0, 4454,A Method of Detecting Vulnerability Defects Based on Static Analysis,"This paper proposes a method for detecting vulnerability defects caused by tainted data based on state machines. It first uses a state machine to define various defect patterns. If the states of the state machine are considered as the values propagated in dataflow analysis and the union operation of the state sets as the aggregation operation of dataflow analysis, the defect detection can be treated as a forward dataflow analysis problem. To reduce the false positives caused by intraprocedural analysis, the dynamic information of the program is represented approximately by the abstract values of variables, and then infeasible paths can be identified when some variable's abstract value is empty in the state condition. A function summary method is proposed to get the information needed for performing interprocedural defect detection. The proposed method has been implemented in a defect testing tool.",2010,0, 4455,Hypervisor-Based Virtual Hardware for Fault Tolerance in COTS Processors Targeting Space Applications,"Commercial off the shelf processors are becoming mandatory in space applications to satisfy the ever-growing demand for on-board computing power. As a result, architectures able to withstand the harshness of the space environment are needed to cope with the errors that may affect such processors, which are not specifically designed for being used in space. Besides design and implementation costs, validation of the obtained architecture is a very cost- and time-consuming operation. In this paper we propose an architecture to quickly develop dependable embedded systems using time redundancy. The main novelty of the approach lies in the usage of a hypervisor for seamlessly implementing time redundancy, consistency checking, temporal and spatial segregation of programs that are needed to guarantee a safe execution of the application software. The proposed architecture needs to be validated only once; then, provided that the same hypervisor is available for different hardware platforms, it can be deployed without the need for re-validation. We describe a prototypical implementation of the approach and we provide experimental data that assess the effectiveness of the approach.",2010,0, 4456,Characterizing Failures in Mobile OSes: A Case Study with Android and Symbian,"As smart phones grow in popularity, manufacturers are in a race to pack an increasingly rich set of features into these tiny devices. This brings additional complexity in the system software that has to fit within the constraints of the devices (chiefly memory, stable storage, and power consumption) and hence, new bugs are revealed. How this evolution of smartphones impacts their reliability is a question that has been largely unexplored until now. 
With the release of open source OSes for hand-held devices, such as Android (open sourced in October 2008) and Symbian (open sourced in February 2010), we are now in a position to explore the above question. In this paper, we analyze the reported cases of failures of Android and Symbian based on bug reports posted by third-party developers and end users and documentation of bug fixes from Android developers. First, based on 628 developer reports, our study looks into the manifestation of failures in different modules of Android and their characteristics, such as their persistence and dependence on the environment. Next, we analyze similar properties of Symbian bugs based on 153 failure reports. Our study indicates that Development Tools, Web Browsers, and Multimedia applications are the most error-prone in both these systems. We further analyze 233 bug fixes for Android and categorize the different types of code modifications required for the fixes. The analysis shows that 77% of errors required minor code changes, with the largest share of these coming from modifications to attribute values and conditions. Our final analysis focuses on the relation between customizability, code complexity, and reliability in Android and Symbian. We find that despite high cyclomatic complexity, the bug densities in Android and Symbian are surprisingly low. However, the support for customizability does impact the reliability of mobile OSes and there are cautionary tales for their further development.",2010,0, 4457,Comparing SQL Injection Detection Tools Using Attack Injection: An Experimental Study,"System administrators frequently rely on intrusion detection tools to protect their systems against SQL Injection, one of the most dangerous security threats in database-centric web applications. However, the real effectiveness of those tools is usually unknown, which may lead administrators to put an unjustifiable level of trust in the tools they use. In this paper we present an experimental evaluation of the effectiveness of five SQL Injection detection tools that operate at different system levels: Application, Database and Network. To test the tools in a realistic scenario, Vulnerability and Attack Injection is applied in a setup based on three web applications of different sizes and complexities. Results show that the assessed tools have a very low effectiveness and only perform well under specific circumstances, which highlights the limitations of current intrusion detection tools in detecting SQL Injection attacks. Based on experimental observations we underline the strengths and weaknesses of the tools assessed.",2010,0, 4458,Change Bursts as Defect Predictors,"In software development, every change induces a risk. What happens if code changes again and again in some period of time? In an empirical study on Windows Vista, we found that the features of such change bursts have the highest predictive power for defect-prone components. With precision and recall values well above 90%, change bursts significantly improve upon earlier predictors such as complexity metrics, code churn, or organizational structure. As they only rely on version history and a controlled change process, change bursts are straightforward to detect and deploy.",2010,0, 4459,The Impact of Coupling on the Fault-Proneness of Aspect-Oriented Programs: An Empirical Study,"Coupling in software applications is often used as an indicator of external quality attributes such as fault-proneness. 
In fact, the correlation of coupling metrics and faults in object-oriented programs has been widely studied. However, there is very limited knowledge about which coupling properties in aspect-oriented programming (AOP) are effective indicators of faults in modules. Existing coupling metrics do not take into account the specificities of AOP mechanisms. As a result, these metrics are unlikely to provide optimal predictions of pivotal quality attributes such as fault-proneness. This further restricts the assessments made in empirical studies of AOP. To address these issues, this paper presents an empirical study to evaluate the impact of coupling sourced from AOP-specific mechanisms. We utilise a novel set of coupling metrics to predict fault occurrences in aspect-oriented programs. We also compare these new metrics against previously proposed metrics for AOP. More specifically, we analyse faults from several releases of three AspectJ applications and perform statistical analyses to reveal the effectiveness of these metrics when predicting faults. Our study shows that a particular set of fine-grained directed coupling metrics have the potential to help create better fault prediction models for AO programs.",2010,0, 4460,Prioritizing Mutation Operators Based on Importance Sampling,"Mutation testing is a fault-based testing technique for measuring the adequacy of a test suite. Test suites are assigned scores based on their ability to expose synthetic faults (i.e., mutants) generated by a range of well-defined mathematical operators. The test suites can then be augmented to expose the mutants that remain undetected and are not semantically equivalent to the original code. However, the mutation score can be increased superfluously by mutants that are easy to expose. In addition, it is infeasible to examine all the mutants generated by a large set of mutation operators. Existing approaches have therefore focused on determining the sufficient set of mutation operators and the set of equivalent mutants. Instead, this paper proposes a novel Bayesian approach that prioritizes operators whose mutants are likely to remain unexposed by the existing test suites. Probabilistic sampling methods are adapted to iteratively examine a subset of the available mutants and direct focus towards the more informative operators. Experimental results show that the proposed approach identifies more than 90% of the important operators by examining ? 20% of the available mutants, and causes a 6% increase in the importance measure of the selected mutants.",2010,0, 4461,Improving the Precision of Dependence-Based Defect Mining by Supervised Learning of Rule and Violation Graphs,"Previous work has shown that application of graph mining techniques to system dependence graphs improves the precision of automatic defect discovery by revealing subgraphs corresponding to implicit programming rules and to rule violations. However, developers must still confirm, edit, or discard reported rules and violations, which is both costly and error-prone. In order to reduce developer effort and further improve precision, we investigate the use of supervised learning models for classifying and ranking rule and violation subgraphs. In particular, we present and evaluate logistic regression models for rules and violations, respectively, which are based on general dependence-graph features. 
Our empirical results indicate that (i) use of these models can significantly improve the precision and recall of defect discovery, (ii) our approach is superior to existing heuristic approaches to rule and violation ranking and to an existing static-warning classifier, and (iii) accurate models can be learned using only a few labeled examples.",2010,0, 4462,Assessing Asymmetric Fault-Tolerant Software,"The most popular forms of fault tolerance against design faults use 'asymmetric' architectures in which a 'primary' part performs the computation and a 'secondary' part is in charge of detecting errors and performing some kind of error processing and recovery. In contrast, the most studied forms of software fault tolerance are 'symmetric' ones, e.g. N-version programming. The latter are often controversial; the former are not. We discuss how to assess the dependability gains achieved by these methods. Substantial difficulties have been shown to exist for symmetric schemes, but we show that the same difficulties affect asymmetric schemes. Indeed, the latter present somewhat subtler problems. In both cases, to predict the dependability of the fault-tolerant system it is not enough to know the dependability of the individual components. We extend to asymmetric architectures the style of probabilistic modeling that has been useful for describing the dependability of 'symmetric' architectures, to highlight factors that complicate the assessment. In the light of these models, we finally discuss fault injection approaches to estimating coverage factors. We highlight the limits of what can be predicted and some useful research directions towards clarifying and extending the range of situations in which estimates of coverage of fault tolerance mechanisms can be trusted.",2010,0, 4463,A Multi-factor Software Reliability Model Based on Logistic Regression,"This paper proposes a multi-factor software reliability model based on logistic regression and its effective statistical parameter estimation method. The proposed parameter estimation algorithm is composed of the algorithm used in the logistic regression and the EM (expectation-maximization) algorithm for discrete-time software reliability models. The multi-factor model deals with the metrics observed in the testing phase (testing environmental factors), such as test coverage and the number of test workers, to predict the number of residual faults and other reliability measures. In general, the multi-factor model outperforms traditional software reliability growth models, such as discrete-time non-homogeneous models, in terms of data-fitting and prediction abilities. However, since it has a number of parameters, there is a problem in estimating the model parameters. Our modeling framework and its estimation method are much simpler than the existing methods, and are promising for expanding the applicability of the multi-factor software reliability model. In numerical experiments, we examine the data-fitting ability of the proposed model by comparing with the existing multi-factor models. The proposed method provides similar fitting ability to existing multi-factor models, although the computation effort of parameter estimation is low.",2010,0, 4464,"Flexible, Any-Time Fault Tree Analysis with Component Logic Models","This article presents a novel approach to facilitating fault tree analysis during the development of software-controlled systems. 
Based on a component-oriented system model, it combines second-order probabilistic analysis and automatically generated default failure models with a level-of-detail concept to ensure early and continuous analysability of system failure behaviour with optimal effort, even in the presence of incomplete information and dissimilar levels of detail in different parts of an evolving system model. The viability and validity of the method are demonstrated by means of an experiment.",2010,0, 4465,Towards a Bayesian Approach in Modeling the Disclosure of Unique Security Faults in Open Source Projects,"Software security has both an objective and a subjective component. A lot of the information available on this topic today is focused on security vulnerabilities and their disclosure. It is less frequent that security breaches and failure rates are reported, even in open source projects. Disclosure of security problems can take several forms. A disclosure can be accompanied by a release of the fix for the problem, or not. The latter category can be further divided into voluntary and involuntary security issues. In widely used software there is also considerable variability in the operational profile under which the software is used. This profile is further modified by attacks on the software that may be triggered by security disclosures. Therefore a comprehensive model of software security qualities of a product needs to incorporate both objective measures, such as security problem disclosure, repair, and failure rates, as well as less objective metrics such as implied variability in the operational profile, influence of attacks, and subjective impressions of exposure and severity of the problems, etc. We show how a classical Bayesian model can be adapted for use in the security context. The model is discussed and assessed using data from three open source software projects. Our results show that the model is suitable for use with a certain subset of disclosed security faults, but that additional work will be needed to identify appropriate shape and scaling functions that would accurately reflect end-user perceptions associated with security problems.",2010,0, 4466,A Consistency Check Algorithm for Component-Based Refinements of Fault Trees,"The number of embedded systems in our daily lives that are distributed, hidden, and ubiquitous continues to increase. Many of them are safety-critical. To provide additional or better functionalities, they are becoming more and more complex, which makes it difficult to guarantee safety. It is undisputed that safety must be considered before the start of development, continue until decommissioning, and is particularly important during the design of the system and software architecture. An architecture must be able to avoid, detect, or mitigate all dangerous failures to a sufficient degree. For this purpose, the architectural design must be guided and verified by safety analyses. However, state-of-the-art component-oriented or model-based architectural design approaches use different levels of abstraction to handle complexity. So, safety analyses must also be applied on different levels of abstraction, and it must be checked and guaranteed that they are consistent with each other, which is not supported by standard safety analyses. In this paper, we present a consistency check for CFTs that automatically detects commonalities and inconsistencies between fault trees of different levels of abstraction. 
This facilitates the application of safety analyses in top-down architectural designs and reduces effort.",2010,0, 4467,DoDOM: Leveraging DOM Invariants for Web 2.0 Application Robustness Testing,"Web 2.0 applications are increasing in popularity. However, they are also prone to errors because of their dynamic nature. This paper presents DoDOM, an automated system for testing the robustness of Web 2.0 applications based on their Document Object Models (DOMs). DoDOM repeatedly executes the application under a trace of recorded user actions and observes the client-side behavior of the application in terms of its DOM structure. Based on the observations, DoDOM extracts a set of invariants on the web application's DOM structure. We show that invariants exist for real applications and can be learned within a reasonable number of executions. We further use fault-injection experiments to demonstrate the uses of the invariants in detecting errors in web applications. The invariants are found to provide high coverage in detecting errors that impact the DOM, with a low rate of false positives.",2010,0, 4468,Using Search Methods for Selecting and Combining Software Sensors to Improve Fault Detection in Autonomic Systems,"Fault-detection approaches in autonomic systems typically rely on runtime software sensors to compute metrics for CPU utilization, memory usage, network throughput, and so on. One detection approach uses data collected by the runtime sensors to construct a convex-hull geometric object whose interior represents the normal execution of the monitored application. The approach detects faults by classifying the current application state as being either inside or outside of the convex hull. However, due to the computational complexity of creating a convex hull in multi-dimensional space, the convex-hull approach is limited to a few metrics. Therefore, not all sensors can be used to detect faults and so some must be dropped or combined with others. This paper compares the effectiveness of genetic-programming, genetic-algorithm, and random-search approaches in solving the problem of selecting sensors and combining them into metrics. These techniques are used to find 8 metrics that are derived from a set of 21 available sensors. The metrics are used to detect faults during the execution of a Java-based HTTP web server. The results of the search techniques are compared to two hand-crafted solutions specified by experts.",2010,0, 4469,A Quantitative Approach to Software Maintainability Prediction,"Software maintainability is an important aspect in the evaluation of the evolution of a software product. Due to the complexity of tracking maintenance behaviors, it is difficult to accurately predict the cost and risk of maintenance after delivery of software products. In an attempt to address this issue quantitatively, software maintainability is viewed as an inevitable evolution process driven by maintenance behaviors, given a health index at the time when a software product is delivered. A Hidden Markov Model (HMM) is used to simulate the maintenance behaviors, represented by their possible occurrence probabilities. Software metrics measure the quality of a software product, and the measurement results of a product being delivered are combined to form the health index of the product. The health index works as a weight on the process of maintenance behavior over time. 
When the occurrence probabilities of maintenance behaviors reach a certain number, which is reckoned as an indication of the deterioration status of a software product, the product can be regarded as obsolete. The longer the time, the better the maintainability would be.",2010,0, 4470,Search-based Prediction of Fault-slip-through in Large Software Projects,"A large percentage of the cost of rework can be avoided by finding more faults earlier in a software testing process. Therefore, determining which software testing phases to focus improvement work on has considerable industrial interest. This paper evaluates the use of five different techniques, namely particle swarm optimization-based artificial neural networks (PSO-ANN), artificial immune recognition systems (AIRS), gene expression programming (GEP), genetic programming (GP) and multiple regression (MR), for predicting the number of faults slipping through unit, function, integration and system testing phases. The objective is to quantify improvement potential in different testing phases by striving towards finding the right faults in the right phase. We have conducted an empirical study of two large projects from a telecommunication company developing mobile platforms and wireless semiconductors. The results are compared using simple residuals, goodness of fit and absolute relative error measures. They indicate that the four search-based techniques (PSO-ANN, AIRS, GEP, GP) perform better than multiple regression for predicting the fault-slip-through for each of the four testing phases. At the unit and function testing phases, AIRS and PSO-ANN performed better while GP performed better at integration and system testing phases. The study concludes that a variety of search-based techniques are applicable for predicting the improvement potential in different testing phases with GP showing more consistent performance across two of the four test phases.",2010,0, 4471,Differential Histogram Modification-Based Reversible Watermarking with Predicted Error Compensation,"Reversible watermarking inserts a watermark into digital media in such a way that visual transparency is preserved and the original media can be restored from the marked one without any loss of media quality. High capacity and high visual quality are major requirements for reversible watermarking. In this paper, we present a novel reversible watermarking scheme that embeds message bits by modifying the differential histogram of adjacent pixels. Also, the overflow and underflow problem is prevented with the proposed predicted error compensation scheme. Through experiments on various images, we prove that the presented scheme achieves 100% reversibility, high capacity, and high visual quality over other methods, while keeping the induced distortion low.",2010,0, 4472,Design and Implementation of Movie Camera Recording on Worker's Motion Tracing System by Terrestrial Magnetism Sensors,"A basis of quality control for industrial products is to confirm that material parts, assembly, processing, and so on satisfy regulations. Depending on the type of assembly process, there are cases in which it cannot be confirmed whether the result of a certain process satisfies the regulation. For example, the order in which screws are fastened to fix a part is determined to guarantee the accuracy of fixing the part. However, even if the screwing order is not correct, the part is still fixed. In this case, the accuracy is not guaranteed. 
If the accuracy happens to be high enough, inspection of samples of assembled products cannot find the violation of the screwing order. Therefore, we are now developing a system that monitors routine work using terrestrial magnetism sensors. In this system, a terrestrial magnetism sensor is attached to a worker or to a tool, and the system judges whether certain routine work is correctly done by using the output of the sensor. We have realized fairly precise detection of wrong work; however, we could not identify which kind of wrong work was detected by our system without seeing the worker's motion. So grasping which kind of wrong work happened depends on the worker's report on his/her work. Therefore, we added a video recording feature to our system. In this paper, we describe its design and implementation and a simple evaluation result.",2010,0, 4473,Classification of power quality disturbances using Wavelet and Artificial Neural Networks,"An automated classification system based on the Wavelet transform as a feature extraction tool in combination with an Artificial Neural Network as the classification algorithm is presented. Perturbed signals generated according to mathematical models have been used to obtain experimental results in two stages, first, with a data set with simple disturbances and, later, including complex disturbances, more usual in real electrical systems. In both cases noise is added to the signals from 40dB to 20dB. Two different neural networks have been used as classifier algorithms: a backpropagation network and a probabilistic network. A data set with several disturbances, simple and complex, has been generated by simulation software based on electrical models, to test the implemented system. Evaluation results verifying the accuracy of the proposed method are presented.",2010,0, 4474,Effort and Quality of Recovering Requirements-to-Code Traces: Two Exploratory Experiments,"Trace links between requirements and code are essential for many software development and maintenance activities. Despite significant advances in traceability research, creating links remains a human-intensive activity and surprisingly little is known about how humans perform basic tracing tasks. We investigate fundamental research questions regarding the effort and quality of recovering traces between requirements and code. Our paper presents two exploratory experiments conducted with 100 subjects who recovered trace links for two open source software systems in a controlled environment. In the first experiment, subjects recovered trace links between the two systems' requirements and classes of the implementation. In the second experiment, trace links were established between requirements and individual methods of the implementation. In order to assess the validity of the trace links cast by subjects, key developers of the two software systems participated in our research and provided benchmarks. Our study yields surprising observations: trace capture is surprisingly fast and can be done within minutes even for larger classes; the quality of the captured trace links, while good, does not improve with higher trace effort; and it is not harder though slightly more expensive to recover the trace links for larger, more complex classes.",2010,0, 4475,Automated Requirements Traceability: The Study of Human Analysts,"The requirements traceability matrix (RTM) supports many software engineering and software verification and validation (V&V) activities such as change impact analysis, reverse engineering, reuse, and regression testing. 
The generation of RTMs is tedious and error-prone, though; thus, RTMs are often not generated or maintained. Automated techniques have been developed to generate candidate RTMs with some success. When using RTMs to support the V&V of mission- or safety-critical systems, however, a human analyst must vet the candidate RTMs. The focus thus becomes the quality of the final RTM. This paper investigates how human analysts perform when vetting candidate RTMs. Specifically, a study was undertaken at two universities and had 26 participants analyze RTMs of varying accuracy for a Java code formatter program. The study found that humans tend to move their candidate RTM toward the line that represents recall = precision. Participants who examined RTMs with low recall and low precision drastically improved both.",2010,0, 4476,An Experimental Comparison Regarding the Completeness of Functional Requirements Specifications,"Providing high-quality software within budget is a goal pursued by most software companies. Incomplete requirements specifications can have an adverse effect on this goal and thus on a company's competitiveness. Several empirical studies have investigated the effects of requirements engineering methods on the completeness of a specification. In order to increase this body of knowledge, we suggest using an objective evaluation scheme for assessing the completeness of specification documents, as objectifying the term completeness facilitates the interpretation of evaluations and hence comparison among different studies. This paper reports experience from applying the scheme to a student experiment comparing a use case approach with a textual approach common in industry. The statistical analysis of the specification's completeness indicates that use case descriptions lead to more complete requirements specifications. We further experienced that the scheme is applicable to experiments and delivers meaningful results.",2010,0, 4477,Hiding traces of the blurring effect in digital forgeries,"The conflict between image tampering and digital forensics has persisted for more than a decade. With powerful photo editing software, everyone can easily forge images by Copy-Paste operations. To eliminate the visual edge introduced by tampering, they may employ edge and region smoothing after the contents are manipulated or altered. This process often introduces disharmony between authentic regions and tampering regions. The traces of digital tampering can be detected by estimating the sharpness of the image regions. To remove the fragility of the Copy-Paste operation, we propose a genetic-based algorithm to eliminate the blurring in the forgery image. After the processing, the sharpness of the blurred region is recovered and the image quality is preserved.",2010,0, 4478,Assessing the Quality of B Models,"This paper proposes to define and assess the notion of quality of B models aiming at providing an automated feedback on a model by performing systematic checks on its content. We define and classify classes of automatic verification steps that help the modeller in knowing whether his model is well-written or not. This technique is defined in the context of 'behavioral models' that describe the behavior of a system using the generalized substitutions mechanism. From these models, verification conditions are automatically computed and discharged using a dedicated tool. 
This technique has been adapted to the B notation, especially to B abstract machines, and implemented within a tool interfaced with a constraint solver that is able to find counter-examples to invalid verification conditions.",2010,0, 4479,Back-annotation of Simulation Traces with Change-Driven Model Transformations,"Model-driven analysis aims at detecting design flaws early in high-level design models by automatically deriving mathematical models. These analysis models are subsequently investigated by formal verification and validation (V&V) tools, which may retrieve traces violating a certain requirement. Back-annotation aims at mapping back the results of V&V tools to the design model in order to highlight the real source of the fault and to ease making the necessary amendments. Here we propose a technique for the back-annotation of simulation traces based on change-driven model transformations. Simulation traces of analysis models will be persisted as a change model with high-level change commands representing macro steps of a trace. This trace is back-annotated to the design model using change-driven transformation rules, which bridge the conceptual differences between macro steps in the analysis and design traces. Our concepts will be demonstrated on the back-annotation problem for analyzing BPEL processes using a Petri net simulator.",2010,0, 4480,A self-hosting configuration management system to mitigate the impact of Radiation-Induced Multi-Bit Upsets in SRAM-based FPGAs,"This paper presents an efficient circuit to mitigate the impact of Radiation-Induced Multi-Bit Upsets in Xilinx FPGAs from Virtex-II on. The proposed internal scrubber detects and corrects single bit upsets and double, triple and quadruple multi bit upsets by efficiently exploiting permuted and compressed Hamming check codes. When implemented using a Xilinx XC2V1000 Virtex-II device, it occupies just 1488 slices and dissipates less than 30 mW at a 50MHz running frequency, taking just 18 µs to complete the error checking over a single frame, and 18.76 µs to repair the corrupted frame.",2010,0, 4481,Rapid design optimisation of microwave structures through automated tuning space mapping,"Tuning space mapping (TSM) is one of the latest developments in space mapping technology. TSM algorithms offer a remarkably fast design optimisation with satisfactory results obtained after one or two iterations, which amounts to just a few electromagnetic simulations of the optimised microwave structure. The TSM algorithms (as exemplified by `Type-1` tuning) could be simply implemented manually. The approach may require interaction between various electromagnetic-based and circuit models, as well as handling different sets of design variables and control parameters. As a result, certain TSM algorithms (especially so-called `Type-0` tuning) may be tedious and thus error-prone to implement. Here, we present a fully automated tuning space mapping implementation that exploits the functionality of our user-friendly space mapping software, the SMF system. 
The operation and performance of our new implementation is illustrated through the design of a box-section Chebyshev bandpass filter and a capacitively coupled dual-behaviour resonator filter.",2010,0, 4482,Identification of major responding proteins of Abnormal Leaf and Flower in soybean with an integrative omics strategy,"Proteomics has been utilized as an effective approach to bridge the gap between phenotype and genome sequence; however, more effective strategies need to be explored to find target gene(s) from many differentially expressed proteins (DEPs). Here, we utilized an interdisciplinary approach employing a range of methodologies and software tools of genomics, proteomics and metabolomics to identify important responding proteins of the Abnormal Leaf and Flower (ALF) gene involved in soybean leaf and flower development, and then to get a global insight into the relevant regulating networks underpinning the alf phenotype. The main results were as follows: (1) A pair of soybean near-isogenic lines (NILs), i.e. NJS-10H-W and NJS-10H-M, differing at the ALF locus, was developed with highly consistent genetic background verified by 167 simple sequence repeat (SSR) molecular markers, and an optimized 2-DE procedure was established to separate the whole proteins of leaves of the NILs. Among more than 1000 visualized protein spots, 58 spots presented expression differences, of which 41 proteins were successfully identified by mass spectrometry. The DEPs were distributed across all twenty soybean chromosomes, indicating a complicated regulation network involved in the development of leaf and flower in soybean. (2) The ALF gene was located at the end of the short arm of linkage group C1 (Chromosome 4) by a gene mapping method using an F2 population. Three DEPs were also detected in the same region. (3) Ten proteins/genes of DEPs were located in the metabolism pathway by the Kyoto Encyclopedia of Genes and Genomes Application Programming Interface (KEGG API), and most of the defects occurred at intersections among carbohydrate, amino acid, energy and cofactors and vitamins metabolism. The Gene Ontology (GO) annotation results of DEPs demonstrated that a considerable part of the proteins are DNA-binding factors, metalloproteases and oxidoreduction enzymes. The GSA (Glutamate-1-Semialdehyde 2,1-Aminomutase) and PIN (Peptidyl-prolyl cis-trans isomerase) genes were selected as potential candidate genes for the ALF locus based on the abundant information from different omics analyses, and the possible regulating profile underpinning the phenome of the mutant was also inferred. In conclusion, some important responding proteins as upstream regulated factors within the ALF expression network were identified and mapped to the involved pathways for further analysis of the target gene. It showed that a combination of omics methods could accelerate the process of isolating new gene(s) and provide potential information for further study of gene and protein regulatory networks.",2010,0, 4483,Effects of Internal Hole-Defects Location on the Propagation Parameters of Acoustic Wave in the Maple Wood,"To detect the effects of internal hole-defects location on the propagation parameters of acoustic wave in the wood, forty maple wood samples used as the study objects are tested by using a PLG (Portable Lumber Grader) instrument in this paper. 
The propagation velocity and vibration frequency of acoustic wave in intact wood and defective wood were compared, and then the correlations between the propagation velocity or vibration frequency and the elastic modulus were discussed, respectively. The analysis results showed that: (1) there were significant positive correlations between the propagation velocity or vibration frequency of acoustic wave and the elastic modulus of the intact and defective maple wood samples; (2) the propagation velocity and vibration frequency of acoustic wave in defective wood samples were lower than those of intact wood samples; and (3) the changes of acoustic wave propagation parameters were different when the locations of internal hole-defects of the wood samples were different.",2010,0, 4484,A clustering algorithm for software fault prediction,"Software metrics are used for predicting whether modules of a software project are faulty or fault-free. Timely prediction of faults, especially accuracy or computation faults, improves software quality and hence its reliability. Various distance measures can be applied to the traditional K-means clustering algorithm to predict faulty or fault-free modules. In this paper, we propose K-Sorensen-means clustering, which uses the Sorensen distance for calculating cluster distance to predict faults in software projects. The proposed algorithm is then trained and tested using three datasets, namely JM1, PCI and CM1, collected from the NASA MDP. From these three datasets, requirement metrics, static code metrics and alliance metrics (combining both requirement metrics and static code metrics) have been built, and K-Sorensen-means has then been applied to all datasets to predict results. The alliance metric model is found to be the best prediction model among the three models. Results of K-Sorensen-means clustering are shown and the corresponding ROC curve has been drawn. Results of K-Sorensen-means are then compared with K-Canberra-means clustering, which uses another distance measure for evaluating cluster distance.",2010,0, 4485,Model-based diagnosis of induction motor failure modes,"Induction motor failure modes can be instantaneous or progressive. This paper proposes a model-based methodology employed to attempt to identify the root causes of the main fault modes of progressive failures. Utilising indicators from the monitoring of three-phase motor currents, this paper explains how leading-edge and traditional theories could be combined and optimised to provide an integrated software algorithm set that could accurately predict the root causes of the five main failure modes of standard cage type Induction Motors. Utilising mathematical modelling and simulation of each specific fault mode, comparisons can be made between the known symptoms of specific faults and Induction Motor signals obtained from devices operating at full load in the field. Comparison of variances between model states and the state of the operating machine could enable the specific root cause diagnosis to be made.",2010,0, 4486,A stochastic model for performance evaluation and bottleneck discovering on SOA-based systems,"The Service-Oriented Architecture (SOA) has become a unifying technical architecture that may be embodied through Web Service technologies. Predicting the variable behavior of SOA systems can provide a way to improve the quality of business transactions. This paper proposes a simulation modeling approach based on stochastic Petri nets to estimate the performance of SOA applications. 
Using the proposed model, it is possible to predict resource consumption and service-level degradation in scenarios with different compositions and workloads, even before developing the application. A case study was conducted in order to validate our approach, comparing its accuracy with the results from an analytical model existing in the literature.",2010,0, 4487,Software reliability analysis with optimal release problems based on hazard rate model for an embedded OSS,"An OSS (open source software) system is frequently applied for server use, instead of client use. In particular, embedded OSS systems have been gaining a lot of attention in the embedded system area, e.g., Android, BusyBox, TRON, etc. However, the poor handling of quality problems and customer support prohibits the progress of embedded OSS. Also, it is difficult for developers to assess the reliability and portability of embedded OSS on a single-board computer. We focus on software quality/reliability problems that can prohibit the progress of embedded OSS. In this paper, we propose a method of software reliability assessment based on a hazard rate model for the embedded OSS. In particular, we derive several assessment measures from the model. Also, we analyze actual software failure-occurrence time-interval data to show numerical examples of software reliability assessment for the embedded OSS. Moreover, we discuss the optimal software release problem for the porting phase based on the total expected software maintenance cost.",2010,0, 4488,Dynamic parameter control of interactive local search in UML software design,"User-centered Interactive Evolutionary Computation (IEC) has been applied to a wide variety of areas, including UML software design. The performance of evolutionary search is important as user interaction fatigue remains an on-going challenge in IEC. However, to obtain optimal search performance, it is usually necessary to tune evolutionary control parameters manually, although tuning control parameters can be time-consuming and error-prone. To address this issue in other fields of evolutionary computation, dynamic parameter control, including deterministic, adaptive and self-adaptive mechanisms, has been applied extensively to real-valued representations. This paper postulates that dynamic parameter control may be highly beneficial to IEC in general, and UML software design in particular, wherein a novel object-based solution representation is used. Three software design problems from differing design domains and of differing scale have been investigated with mutation probabilities modified by simulated annealing, the Rechenberg 1/5 success rule and self-adaptation within local search. Results indicate that self-adaptation appears to be the most robust and scalable mutation probability modification mechanism. The use of self-adaptation with an object-based representation is novel, and results indicate that dynamic parameter control offers great potential within IEC.",2010,0, 4489,An integrated approach to Design for Quality (DfQ) in the high value added printed circuit assembly (PCA) manufacturing: A pilot tool,"High value added electronics manufacturing is a challenging sector that requires compliance with demanding quality standards. Tools and methods to support Design for Quality (DfQ) are limited within the domain. In this paper a software toolkit to support DfQ is proposed based on the underlying concept of integrated modelling. 
Based on this principle, a simulation module to predict quality in terms of manufacturing defects and a Root Cause Analysis (RCA) module to support their elimination have been developed. The focus of this paper is on the latter RCA module. After a description of the integrated modelling concept and the software toolkit, a case study is presented to demonstrate the value of the software. The results illustrate possible roles of the quality support toolkit in solving real quality problems in printed circuit assembly (PCA) manufacturing. The results include measures of product quality improvements and savings in terms of time and cost.",2010,0, 4490,Reverse Engineering Utility Functions Using Genetic Programming to Detect Anomalous Behavior in Software,"Recent studies have shown the promise of using utility functions to detect anomalous behavior in software systems at runtime. However, it remains a challenge for software engineers to hand-craft a utility function that achieves both a high precision (i.e., few false alarms) and a high recall (i.e., few undetected faults). This paper describes a technique that uses genetic programming to automatically evolve a utility function for a specific system, set of resource usage metrics, and precision/recall preference. These metrics are computed using sensor values that monitor a variety of system resources (e.g., memory usage, processor usage, thread count). The technique allows users to specify the relative importance of precision and recall, and builds a utility function to meet those requirements. We evaluated the technique on the open source Jigsaw web server using ten resource usage metrics and five anomalous behaviors in the form of injected faults in the Jigsaw code and a security attack. To assess the effectiveness of the technique, the precision and recall of the evolved utility function was compared to that of a hand-crafted utility function that uses a simple thresholding scheme. The results show that the evolved function outperformed the hand-crafted function by 10 percent.",2010,0, 4491,Reverse Engineering Self-Modifying Code: Unpacker Extraction,"An important application of binary-level reverse engineering is in reconstructing the internal logic of computer malware. Most malware code is distributed in encrypted (or 'packed') form; at runtime, an unpacker routine transforms this to the original executable form of the code, which is then executed. Most of the existing work on analysis of such programs focuses on detecting unpacking and extracting the unpacked code. However, this does not shed any light on the functionality of different portions of the code so obtained, and in particular does not distinguish between code that performs unpacking and code that does not; identifying such functionality can be helpful for reverse engineering the code. This paper describes a technique for identifying and extracting the unpacker code in a self-modifying program. Our algorithm uses offline analysis of a dynamic instruction trace both to identify the point(s) where unpacking occurs and to identify and extract the corresponding unpacker code.",2010,0, 4492,"Design, Modeling, and Evaluation of a Scalable Multi-level Checkpointing System","High-performance computing (HPC) systems are growing more powerful by utilizing more hardware components. As the system mean-time-before-failure correspondingly drops, applications must checkpoint more frequently to make progress. 
However, as the system memory sizes grow faster than the bandwidth to the parallel file system, the cost of checkpointing begins to dominate application run times. Multi-level checkpointing potentially solves this problem through multiple types of checkpoints with different costs and different levels of resiliency in a single run. This solution employs lightweight checkpoints to handle the most common failure modes and relies on more expensive checkpoints for less common, but more severe failures. This theoretically promising approach has not been fully evaluated in a large-scale, production system context. We have designed the Scalable Checkpoint/Restart (SCR) library, a multi-level checkpoint system that writes checkpoints to RAM, Flash, or disk on the compute nodes in addition to the parallel file system. We present the performance and reliability properties of SCR as well as a probabilistic Markov model that predicts its performance on current and future systems. We show that multi-level checkpointing improves efficiency on existing large-scale systems and that this benefit increases as the system size grows. In particular, we developed low-cost checkpoint schemes that are 100x-1000x faster than the parallel file system and effective against 85% of our system failures. This leads to a gain in machine efficiency of up to 35%, and it reduces the load on the parallel file system by a factor of two on current and future systems.",2010,0, 4493,Linguistic Driven Refactoring of Source Code Identifiers,"Identifiers are an important source of information during program understanding and maintenance. Programmers often use identifiers to build their mental models of the software artifacts. We have performed a preliminary study to examine the relation between the terms in identifiers, their spread in entities, and fault proneness. We introduced term entropy and context-coverage to measure how scattered terms are across program entities and how unrelated are the methods and attributes containing these terms. Our results showed that methods and attributes containing terms with high entropy and context-coverage are more fault-prone. We plan to build on this study by extracting linguistic information from methods and classes. Using this information, we plan to establish traceability links from domain concepts to source code, and to propose linguistic-based refactoring.",2010,0, 4494,Predicting Re-opened Bugs: A Case Study on the Eclipse Project,"Bug fixing accounts for a large amount of the software maintenance resources. Generally, bugs are reported, fixed, verified and closed. However, in some cases bugs have to be re-opened. Re-opened bugs increase maintenance costs, degrade the overall user-perceived quality of the software and lead to unnecessary rework by busy practitioners. In this paper, we study and predict re-opened bugs through a case study on the Eclipse project. We structure our study along 4 dimensions: (1) the work habits dimension (e.g., the weekday on which the bug was initially closed), (2) the bug report dimension (e.g., the component in which the bug was found), (3) the bug fix dimension (e.g., the amount of time it took to perform the initial fix) and (4) the team dimension (e.g., the experience of the bug fixer). Our case study on the Eclipse Platform 3.0 project shows that the comment and description text, the time it took to fix the bug, and the component the bug was found in are the most important factors in determining whether a bug will be re-opened. 
Based on these dimensions we create decision trees that predict whether a bug will be re-opened after its closure. Using a combination of our dimensions, we can build explainable prediction models that can achieve 62.9% precision and 84.5% recall when predicting whether a bug will be re-opened.",2010,0, 4495,Accuracy of automatic speaker recognition for telephone speech signal quality,"This paper examines the accuracy of speaker identification on telephone-quality voice signals. The speaker recognizer was implemented using HTK. The influence of the considered telephone channels on the transmitted voice signal is seen through their basic characteristics, the types of the applied codecs and the effects caused by the condition of the transmission channel. These effects were observed through the transmission error probability, while for the VoIP telephone channels the appearance of echo was also analyzed. The appropriate codecs and the probability of various errors made during transmission were simulated by using the publicly available library of software tools ITU-T STL2005, while the echo phenomenon was simulated using the Delay / Echo-Simple effect of the Sony Sound Forge 9.0 suite.",2010,0, 4496,Bearing fault detection based on order bispectrum,"In order to process non-stationary vibration signals such as speed-up or speed-down vibration signals effectively, the order bispectrum analysis technique is presented. This new method combines the computed order tracking technique with bispectrum analysis. Firstly, the vibration signal is sampled at constant time increments, and software is then used to resample the data at constant angle increments. Therefore, the time-domain transient signal is converted into an angle-domain stationary one. In the end, the resampled signals are processed by bispectrum analysis technology. The experimental results show that order bispectrum analysis can effectively detect the bearing fault.",2010,0, 4497,Energy optimal on-line Self-Test of microprocessors in WSN nodes,"Wireless Sensor Network (WSN) applications often need to be deployed in harsh environments, where the possibility of faults due to environmental hazards is significantly increased, while silicon aging and wearout effects are also exacerbated. For such applications, periodic on-line testing of the WSN nodes is an important step towards correctness of operation. However, on-line testing of processors integrated in WSN nodes has to address the additional challenge of minimum energy consumption, because these devices operate on battery, which usually cannot be replaced and in the absence of catastrophic failures determines the lifetime of the system. In this paper, we initially derive analytically the optimal way for executing on-line periodic tests with an adjustable period, taking into account the degrading behavior of the system due to silicon aging effects as well as the limited energy budget of WSN applications. The test is applied in the form of Software-Based Self-Test (SBST) routines; thus, we proceed to the power-optimized development of SBST routines targeting the transition delay fault model, which is well suited for detecting timing violations due to silicon aging. Simulation results show that energy savings for the final SBST routine at processor level are up to 35.4% and the impact of the test on the battery life of the system is negligible.",2010,0, 4498,A simple hybrid image segmentation method for embedded visual display devices,"Image segmentation plays a major role in computer vision. 
It is a fundamental task for feature extraction and pattern matching applications. This paper proposes a simple hybrid image segmentation method, which is mainly based on mathematical morphological operations and filtering techniques. The main aim of the proposed hybrid segmentation method is to segment the foreground object in the given image and mark the segmented region with precision. The purpose of developing this method is to automatically identify prominent single-object-based photographs in real time. Also, the algorithm must work for worst cases (fog, mist, blur, noise, etc.). This requirement needs a precise segmentation approach, which must be computationally less costly and easy to implement, with better quality in segmenting the object as the region of interest. The images are at first subjected to Gaussian filtering to make the image smooth for segmentation. Later, the Sobel edge detection algorithm is applied to detect the edges properly, and then morphological operations, logically arranged in a novel way, are applied for morphological image cleaning purposes. In the final stage, the object of interest is segmented and marked, which proves the efficiency of the proposed hybrid image segmentation algorithm. Furthermore, the proposed hybrid image segmentation algorithm is implemented in MATLAB (version 7.0 on an Intel P-4 dual core) and evaluated on 350 jpeg images with satisfactory results. The image sizes which were tested are 384 * 288, 480 * 320, 640 * 480, 720 * 480, 800 * 600, 912 * 608, 912 * 684, 1024 * 768 and 1600 * 1200.",2010,0, 4499,Inter-frame error concealment using graph cut technique for video transmission,"Due to channel noise and congestion, video data packets can be lost during transmission in error-prone networks, which severely affects the quality of received video sequences. The conventional inter-frame error concealment (EC) methods estimate a motion vector (MV) for a corrupted block or reconstruct the corrupted pixel values using spatial and temporal weighted interpolation, which may result in boundary discontinuity and blurring artifacts of the reconstructed region. In this paper, we reconstruct a corrupted macroblock (MB) by predicting sub-partitions and synthesizing the corrupted MB to reduce boundary discontinuity and avoid blurring artifacts. First, we select the optimal MV for each neighboring boundary using minimum side match distortion from a candidate MV set, and then we calculate the optimal cut path between the overlapping regions to synthesize the corrupted MB. The simulation results show that our proposed method is able to achieve significantly higher PSNR as well as better visual quality than using the H.264/AVC reference software.",2010,0, 4500,Lightning overvoltages on an overhead transmission line during backflashover and shielding failure,"Analysis of induced voltages on transmission towers and conductors has been performed when a high voltage line is subjected to the propagation of a lightning transient. The PSCAD/EMTDC software program is used to carry out the modelling and simulation work. Lightning strikes on the tower top or conductors result in large overvoltages appearing between the tower and the conductors. Two cases considered for these effects are: (i) direct strike to a shield wire or tower top; (ii) shielding failure. The probability of a lightning strike terminating on a shield wire or tower top is higher than that of a phase conductor. Voltages produced during shielding failure on conductors are more significant than during back flashovers. 
The severity of the induced voltages from single stroke and multiple stroke lightning is illustrated using the simulation results. The results demonstrate high magnitude of induced voltages by the multiple stroke lightning compared to those by single strokes. Analytical studies were performed to verify the results obtained from the simulation. Analysis of the performance of the line using IEEE Flash version 1.81 computer programme was also carried out.",2010,0, 4501,Implementation of finite mutual impedances and its influence on earth potential rise estimation along transmission lines,"As the proximity of high fault current power lines to residential areas increases, the need for accurate prediction of earth potential rise (EPR) is of crucial importance for both safety and equipment protection. To date, the most accurate methods for predicting EPR are power system modelling software tools, such as EMTP, or recursive methods that use a span by span approach to model a transmission line. These techniques are generally used in conjunction with impedances and admittances that are derived from the assumption of infinite line length. In this paper a span by span model was created to predict the EPR along a dual circuit transmission line in EMTP-RV, where the mutual impedances were considered to be between finite length conductors. A series of current injection tests were also performed on the system under study in order to establish the accuracy of both the finite and infinite methods.",2010,0, 4502,"Disturbance detection, identification, and recovery by gait transition in legged robots","We present a framework for detecting, identifying, and recovering within stride from faults and other leg contact disturbances encountered by a walking hexapedal robot. Detection is achieved by means of a software contact-event sensor with no additional sensing hardware beyond the commercial actuators' standard shaft encoders. A simple finite state machine identifies disturbances as due either to an expected ground contact, a missing ground contact indicating leg fault, or an unexpected wall contact. Recovery proceeds as necessary by means of a recently developed topological gait transition coordinator. We demonstrate the efficacy of this system by presenting preliminary data arising from two reactive behaviors - wall avoidance and leg-break recovery. We believe that extensions of this framework will enable reactive behaviors allowing the robot to function with guarded autonomy under widely varying terrain and self-health conditions.",2010,0, 4503,Wooded hedgerows characterization in rural landscape using very high spatial resolution satellite images,"The objective of this study is to evaluate the very high spatial resolution satellite images capacity to detect and characterize the hedgerows network. The significant qualitative attributes to characterize hedgerows are composition, morphology and spatial arrangement of small elements. Remote sensing images Spot 5 and Kompsat, respectively 5m and 1m spatial resolution, were used. We applied an object-based image analysis method. The first step consists on a multi-scale segmentation, and the second step consists on a multi-criterion classification. Then, the characterization consists on a combination of shape values on the fields boundaries. Results shows that both Spot 5 and Kompsat image allows to detect automatically with a good precision the hedgerow network (84.5% and 97% respectively).
Only the Kompsat image is enable to detect the finest elements of hedgerows. The very high spatial resolution image, less than 1m, allows to characterize the hedgerow cover quality, the continuity and discontinuity. This study highlights an efficient, reliable and generic method. Moreover, this characterization of landscape structures elements allows to affine the knowledge of ecological elements.",2010,0, 4504,Submerged aquatic vegetation habitat product development: On-screen digitizing and spatial analysis of Core Sound,"A hydrophyte of high relevance, submerged aquatic vegetation (SAV) is of great importance to estuarine environments. SAV helps improve water quality, provides food and shelter for waterfowl, fish, and shellfish, as well as protects shorelines from erosion. In coastal bays most SAV was eliminated by disease in the 1930's. In the late 1960's and 1970's a dramatic decline of all SAV species was correlated with increasing nutrient and sediment inputs from development of surrounding watersheds (MDNP et. al 2004). Currently state programs work to protect and restore existing wetlands, however, increasing development and population pressure continue to degrade and destroy both tidal and non-tidal wetlands and hinder overall development of SAV growth. The focus of this research was to utilize spatial referencing software in the mapping of healthy submerged aquatic vegetation (SAV) habitats. In cooperation with the United States Fish and Wildlife Service (USFWS), and the National Oceanic and Atmospheric Administration (NOAA), students from Elizabeth City State University (ECSU) developed and applied Geographic Information Systems (GIS) skills to evaluate the distribution and abundance of SAV in North Carolina's estuarine environments. Utilizing ESRI ArcView, which includes ArcMap, ArcCatalog and ArcToolbox, and the applications of on-screen digitizing, an assessment of vegetation cover was made through the delineation of observable SAV beds in Core Sound, North Carolina. Aerial photography of the identified coastal water bodies was taken at 12,000 feet above mean terrain (AMT) scale 1:24,000. The georeferenced aerial photographs were assessed for obscurities and the SAV beds were digitized. Through the adoption of NOAA guidelines and criteria for benthic habitat mapping using aerial photography for image acquisition and analysis, students delineated SAV beds and developed a GIS spatial database relevant to desired results. This newly created database yielded products in the form of usable shapefiles of SAV polygons as well as attribute information with location information, area in hectares, and percent coverage of SAV.",2010,0, 4505,Evaluation of the VIIRS Land algorithms at Land PEATE,"The Land Product Evaluation and Algorithm Testing Element (Land PEATE), a component of the Science Data Segment of the National Polar-orbiting Operational Environmental Satellite System (NPOESS) Preparatory Project (NPP), is being developed at the NASA Goddard Space Flight Center (GSFC). The primary task of the Land PEATE is to assess the quality of the Visible Infrared Imaging Radiometer Suite (VIIRS) Land data products made by the Interface Data Processing System (IDPS) using the Operational (OPS) Code during the NPP era and to recommend improvements to the algorithms in the IDPS OPS code.
The Land PEATE uses a version of the MODIS Adaptive Processing System (MODAPS), NPPDAPS, that has been modified to produce products from the IDPS OPS code and software provided by the VIIRS Science Team, and uses the MODIS Land Data Operational Product Evaluation (LDOPE) team for evaluation of the data records generated by the NPPDAPS. Land PEATE evaluates the algorithms by comparing data products generated using different versions of the algorithm and also by comparing to heritage products generated from different instrument such as MODIS using various quality assessment tools developed at LDOPE. This paper describes the Land PEATE system and some of the approaches used by the Land PEATE for evaluating the VIIRS Land algorithms during the pre-launch period of the NPP mission and the proposed plan for long term monitoring of the quality of the VIIRS Land products post-launch.",2010,0, 4506,Application-Aware diagnosis of runtime hardware faults,"Extreme technology scaling in silicon devices drastically affects reliability, particularly because of runtime failures induced by transistor wearout. Current online testing mechanisms focus on testing all components in a microprocessor, including hardware that has not been exercised, and thus have high performance penalties. We propose a hybrid hardware/software online testing solution where components that are heavily utilized by the software application are tested more thoroughly and frequently. Thus, our online testing approach focuses on the processor units that affect application correctness the most, and it achieves high coverage while incurring minimal performance overhead. We also introduce a new metric, Application-Aware Fault Coverage, measuring a test's capability to detect faults that might have corrupted the state or the output of an application. Test coverage is further improved through the insertion of observation points that augment the coverage of the testing system. By evaluating our technique on a Sun OpenSPARC T1, we show that our solution maintains high Application-Aware Fault Coverage while reducing the performance overhead of online testing by more than a factor of 2 when compared to solutions oblivious to application's behavior. Specifically, we found that our solution can achieve 95% fault coverage while maintaining a minimal performance overhead (1.3%) and area impact (0.4%).",2010,0, 4507,SETS: Stochastic execution time scheduling for multicore systems by joint state space and Monte Carlo,"The advent of multicore platforms has renewed the interest in scheduling techniques for real-time systems. Historically, `scheduling decisions' are implemented considering fixed task execution times, as for the case of Worst Case Execution Time (WCET). The limitations of scheduling considering WCET manifest in terms of under-utilization of resources for large application classes. In the realm of multicore systems, the notion of WCET is hardly meaningful due to the large set of factors influencing it. Within soft real-time systems, a more realistic modeling approach would be to consider tasks featuring varying execution times (i.e. stochastic). This paper addresses the problem of stochastic task execution time scheduling that is agnostic to statistical properties of the execution time. Our proposed method is orthogonal to any number of linear acyclic task graphs and their underlying architecture. 
The joint estimation of execution time and the associated parameters, relying on the interdependence of parallel tasks, help build a `nonlinear Non-Gaussian state space' model. To obtain nearly Bayesian estimates, irrespective of the execution time characteristics, a recursive solution of the state space model is found by means of the Monte Carlo method. The recursive solution reduces the computational and memory overhead and adapts statistical properties of execution times at run time. Finally, the variable laxity EDF scheduler schedules the tasks considering the predicted execution times. We show that variable execution time scheduling improves the utilization of resources and ensures the quality of service. Our proposed new solution does not require any a priori knowledge of any kind and eliminates the fundamental constraints associated with the estimation of execution times. Results clearly show the advantage of the proposed method as it achieves 76% better task utilization, 68% more task scheduling and deadline miss reduction by 53% compared to current state-of-the-art methods.",2010,0, 4508,Definition and Validation of Metrics for ITSM Process Models,"Process metrics can be used to establish baselines, to predict the effort required to go from an as-is to a to-be scenario or to pinpoint problematic ITSM process models. Several metrics proposed in the literature for business process models can be used for ITSM process models as well. This paper formalizes some of those metrics and proposes some new ones, using the Metamodel-Driven Measurement (M2DM) approach that provides precision, objectiveness and automatic collection. According to that approach, metrics were specified with the Object Constraint Language (OCL), upon a lightweight BPMN metamodel that is briefly described. That metamodel was instantiated with a case study consisting of two ITSM processes with two scenarios (as-is and to-be) each. Values collected automatically by executing the OCL metrics definitions, upon the instantiated metamodel, are presented. Using a larger sample with several thousand meta-instances, we analyzed the collinearity of the formalized metrics and were able to identify a smaller set, which will be used to perform further research work on the complexity of ITSM processes.",2010,0, 4509,Requirements Certification for Offshoring Using LSPCM,"Requirements hand-over is a common practice in software development off shoring. Cultural and geographical distance between the outsourcer and supplier, and the differences in development practices hinder the communication and lead to the misinterpretation of the original set of requirements. In this article we advocate requirements quality certification using LSPCM as a prerequisite for requirements hand-over. LSPCM stands for LaQuSo Software Product Certification Model that can be applied by non-experienced IT assessors to verify software artifacts in order to contribute to the successfulness of the project. To support our claim we have analyzed requirements of three off shoring projects using LSPCM. Application of LSPCM revealed severe flaws in one of the projects. The responsible project leader confirmed later that the development significantly exceeded time and budget. In the other project no major flaws were detected by LSPCM and it was confirmed that the implementation was delivered within time and budget.
Application of LSPCM to the projects above also allowed us to refine the model for requirements hand-over in software development off shoring.",2010,0, 4510,Managing Risk in Decision to Outsource IT Projects,"Organizations all around the world are increasingly adopting the activity of outsourcing their IT function to service providers. However, the decision to outsource IT project is not an easy task. The risk will mostly affect the organizations as opposed to service provider. Therefore, it is important for the managers to manage this activity. This paper presents how organizations manage their decision to outsource IT projects. Mixed method was used to gather the information regarding current practices. The analysis revealed that some organizations did not have structured process to come up with the right decision to outsource. In the other hand, some of them assessed the risk associated with the decision to outsource. The analysis of the findings was then used as a basis of the proposed framework of Risk Management in Decision to Outsource IT Project. The proposed framework can act as a guideline to help organizations in making the decision to outsource as well as assessing and managing risk associated with the decision to outsource IT project.",2010,0, 4511,Model-driven development of ARINC 653 configuration tables,"Model-driven development (MDD) has become a key technique in systems and software engineering, including the aeronautic domain. It facilitates on systematic use of models from a very early phase of the design process and through various model transformation steps (semi-)automatically generates source code and documentation. However, on one hand, the use of model-driven approaches for the development of configuration data is not as widely used as for source code synthesis. On the other hand, we believe that, particular systems that make heavy use of configuration tables like the ARINC 653 standard can benefit from model-driven design by (i) automating error-prone configuration file editing and (ii) using model based validation for early error detection. In this paper, we will present the results of the European project DIANA that investigated the use of MDD in the context of Integrated Modular Avionics (IMA) and the ARINC 653 standard. In the scope of the project, a tool chain was implemented that generates ARINC 653 configuration tables from high-level architecture models. The tool chain was integrated with different target systems (VxWorks 653, SIMA) and evaluated during case studies with real-world and real-sized avionics applications.",2010,0, 4512,Conveying Conceptions of Quality through Instruction,"Building up an understanding of aspects of quality, and how to critically assess them, is a complex problem. This paper provides an overview of research on student conceptions of what constitutes quality in different programming domains. These conceptions are linked to tertiary education and computing education research results. Using this literature as a background we discuss how to develop and use instructional approaches that might assist students in developing a better understanding of software quality.",2010,0, 4513,Model-driven development of ARINC 653 configuration tables,"Model-driven development (MDD) has become a key technique in systems and software engineering, including the aeronautic domain. 
It facilitates on systematic use of models from a very early phase of the design process and through various model transformation steps (semi-)automatically generates source code and documentation. However, on one hand, the use of model-driven approaches for the development of configuration data is not as widely used as for source code synthesis. On the other hand, we believe that, particular systems that make heavy use of configuration tables like the ARINC 653 standard can benefit from model-driven design by (i) automating error-prone configuration file editing and (ii) using model based validation for early error detection. In this paper, we will present the results of the European project DIANA that investigated the use of MDD in the context of Integrated Modular Avionics (IMA) and the ARINC 653 standard. In the scope of the project, a tool chain was implemented that generates ARINC 653 configuration tables from high-level architecture models. The tool chain was integrated with different target systems (VxWorks 653, SIMA) and evaluated during case studies with real-world and real-sized avionics applications.",2010,0,4511 4514,Do Testers' Preferences Have an Impact on Effectiveness?,"Both verification and validation aim to improve the quality of software products during the development process. They use techniques like formal methods, symbolic execution, formal reviews, testing techniques, etc. Technique effectiveness depends not only on project size and complexity but also on the experience of the subject responsible for testing. We have looked at whether the opinions and preferences of subjects match the number of detected defects. Opinions and preferences can influence the decisions that testers have to make. In this paper, we present a piece of research that has explored this aspect by comparing the opinions of subjects (qualitative aspects) with the quantitative results. To do this, we use qualitative methods applied to a quantitative study of code evaluation technique effectiveness.",2010,0, 4515,A Tool for Automatic Defect Detection in Models Used in Model-Driven Engineering,"In the Model-Driven Engineering (MDE) field, the quality assurance of the involved models is fundamental for performing correct model transformations and generating final software applications. To evaluate the quality of models, defect detection is usually performed by means of reading techniques that are manually applied. Thus, new approaches to automate the defect detection in models are needed. To fulfill this need, this paper presents a tool that implements a novel approach for automatic defect detection, which is based on a model-based functional size measurement procedure. This tool detects defects related to the correctness and the consistency of the models. Thus, our contribution lays in the new approach presented and its automation for the detection of defects in MDE environments.",2010,0, 4516,Analyzing the Similarity among Software Projects to Improve Software Project Monitoring Processes,"Software project monitoring and control is crucial to detect deviation considering the project plan and to take appropriate actions, when needed. However, to determine which action should be taken is not an easy task, since project managers have to analyze the context of the deviation event and search for actions that were successfully taken on previous similar contexts, trying to repeat the effectiveness of these actions. 
To do so, usually managers use previous projects data or their own experience, and frequently there is no measure or similarity criteria formally established. Thus, in this paper we present the results of a survey that aimed to identify characteristics that can determine the similarity among software projects and also a measure to indicate the level of similarity among them. A recommendation system to support the execution of corrective actions based on previous projects is also described. We believe these results can support the improvement of software project monitoring process, providing important knowledge to project managers in order to improve monitoring and control activities on the projects.",2010,0, 4517,A Gap Analysis Methodology for the Team Software Process,"Over the years software quality is becoming more and more important in software engineering. Like in other engineering disciplines where quality is already a commodity, software engineering is moving into these stages. The Team Software Process (TSP) was created by the Software Engineering Institute (SEI) with the main objective of helping software engineers and teams to ensure high-quality software products and improve process management in the organization. This paper presents a methodology for assessing an organization against the TSP practices so that it is possible to assess the future gains and needs an organization will have during and after the implementation of TSP. The gap analysis methodology has two pillars in terms of data collection: interviews and documentation analysis. Questionnaires have been developed to guide the assessment team on the task of conducting interviews and further guidance has been developed in what and where to look for information in an organization. A model for the rating has also been developed based on the knowledge and experience of working in several organizations on software quality. A report template was also created for documenting the analysis conclusions. The methodology developed was successfully applied in one well known Portuguese organization with the support and validation of SEI, and several refinements were introduced based on the lessons learnt. It is based on the most know reference models and standards for software process assessment - Capability Maturity Model Integration (CMMI) and ISO/IEC 15504. The objective of this methodology is to be fast and inexpensive when compared with those models and standards or with the SEI TSP assessment pilot.",2010,0, 4518,Classification and Comparison of Agile Methods,"This manuscript describes a technique and its tool support to perform comparisons on agile methods, based on a set of relevant features and attributes. This set includes attributes related to four IEEE's Software Engineering Body of Knowledge (SWEBOK) Knowledge Areas (KAs) and to the agile principles defined in the Agile Manifesto. With this set of attributes, by analysing the practices proposed by each method, we are able to assess (1) the coverage degree for the considered KAs and (2) the agility degree. In this manuscript, the application of the technique is exemplified in comparing extreme Programming (XP) and Scrum.",2010,0, 4519,A Method for Continuous Code Quality Management Using Static Analysis,The quality of source code is a key factor for any software product and its continuous monitoring is an indispensable task for a software development project. 
We have developed a method for systematic assessing and improving the code quality of ongoing projects by using the results of various static code analysis tools. With different approaches for monitoring the quality (a trend-based one and a benchmarking-based one) and an according tool support we are able to manage the large amount of data that is generated by these static analyses. First experiences when applying the method with software projects in practice have shown the feasibility of our method.,2010,0, 4520,Towards Automated Quality Models for Software Development Communities: The QualOSS and FLOSSMetrics Case,"Quality models for software products and processes help both to developers and users to better understand their characteristics. In the specific case of libre (free, open source) software, the availability of a mature and reliable development community is an important factor to be considered, since in most cases both the evolvability and future fitness of the product depends on it. Up to now, most of the quality models for communities have been based on the manual examination by experts, which is time-consuming, generally inconsistent and often error-prone. In this paper, we propose a methodology, and some examples of how it works in practice, of how a quality model for development communities can be automated. The quality model used is a part of the QualOSS quality model, while the metrics are those collected by the FLOSS Metrics project.",2010,0, 4521,Reducing Subjectivity in Code Smells Detection: Experimenting with the Long Method,"Guidelines for refactoring are meant to improve software systems internal quality and are widely acknowledged as among software's best practices. However, such guidelines remain mostly qualitative in nature. As a result, judgments on how to conduct refactoring processes remain mostly subjective and therefore non-automatable, prone to errors and unrepeatable. The detection of the Long Method code smell is an example. To address this problem, this paper proposes a technique to detect Long Method objectively and automatically, using a Binary Logistic Regression model calibrated by expert's knowledge. The results of an experiment illustrating the use of this technique are reported.",2010,0, 4522,IDS: An Immune-Inspired Approach for the Detection of Software Design Smells,"We propose a parallel between object-oriented system designs and living creatures. We suggest that, like any living creature, system designs are subject to diseases, which are design smells (code smells and anti patterns). Design smells are conjectured in the literature to impact the quality and life of systems and, therefore, their detection has drawn the attention of both researchers and practitioners with various approaches. With our parallel, we propose a novel approach built on models of the immune system responses to pathogenic material. We show that our approach can detect more than one smell at a time. We build and test our approach on Gantt Project v1.10.2 and Xerces v2.7.0, for which manually-validated and publicly available smells exist. The results show a significant improvement in detection time, precision, and recall, in comparison to the state-of-the-art approaches.",2010,0, 4523,Study of LEO satellite constellation systems based on quantum communications networks,"Quantum cryptography, or more specifically quantum key distribution (QKD), is the first offspring of quantum information that has reached the stage of real-world application. 
Its security is based on the fact that a possible spy (Eve, the eavesdropper) cannot obtain information about the bits that Alice sends to Bob, without introducing perturbations. Therefore authorized partners can detect the spy by estimating the amount of error in their lists. The central objective of this paper is to implement and improve practical systems for quantum cryptography. The essential work carried in our research laboratory concerns the software development to implement of Quantum Key Distribution (QKD) Network based on LEO orbit number and reduce the telecommunication interruption risks and this will provide indeed a better communication quality, and investigations into the causes of losses in the system and attempts to minimize the quantum bit error rate (QBER).",2010,0, 4524,Rapid prototyping and compact testing of CPU emulators,"In this paper, we propose a novel rapid prototyping technique to produce a high quality CPU emulator at reduced development cost. Specification mining from published CPU manuals, automated code generation of both the emulator and its test vectors from the mined CPU specifications, and a hardware-oracle based test strategy all work together to close the gaps between specification analysis, development and testing. The hardware-oracle is a program which allows controlled execution of one or more instructions on the CPU, so that its outputs can be compared to that of the emulator. The hardware-oracle eliminates any guesswork about the true behavior of an actual CPU, and it helps in the identification of several discrepancies between the published specifications vs. the actual processor behavior, which would be very hard to detect otherwise.",2010,0, 4525,A Metrics-Based Approach to Technical Documentation Quality,"Technical documentation is now fully taking the step from stale printed booklets (or electronic versions of these) to interactive and online versions. This provides opportunities to reconsider how we define and assess the quality of technical documentation. This paper suggests an approach based on the Goal-Question-Metric paradigm: predefined quality goals are continuously assessed and visualized by the use of metrics. To test this approach, we perform two experiments. We adopt well known software analysis techniques, e.g., clone detection and test coverage analysis, and assess the quality of two real world documentations, that of a mobile phone and of (parts of) a warship. The experiments show that quality issues can be identified and that the approach is promising.",2010,0, 4526,Algorithm for QOS-aware web service composition based on flow path tree with probabilities,"This paper first presents a novel model for QoS-aware web services composition based on workflow patterns, and an executing path tree with probability. Then a discrete PSO algorithm is proposed to fit our model, which also requires some other preliminary algorithms, such as the generation of all reachable paths and executing path tree. The experiments show the performance of that algorithm and its advantages, compared with genetic algorithm. Finally, we suggest some drawbacks of it and future works.",2010,0, 4527,An Enhanced Prediction Method for All-Zero Block in H.264,"During the process of H.264 encoding, after the operation of quantization, it will generate a lot of all-zero blocks. In this paper we analyses the feature of the coefficients of the 4×4 residual block after transformation, on that basis a new algorithm for predicting all-zero block is proposed.
And then, we utilize the sum of absolute residual(SAD)of the block and obtain thresholds to predict the all-zero block. So we can avoid doing transformation, quantization, and other operations on them, thus reduce the computing time in encoding process. Finally we use the H.264 reference software on VC platform to simulate the method, and compare it with other methods which are involved in the paper. The simulation results show that our method have the prediction rate up to 95.1%, which is more effective than other methods. And at the same time, the loss of the image quality is less than 0.2dB, which can be ignored.",2010,0, 4528,Software Defect Prediction Using Dissimilarity Measures,"In order to improve the accuracy of software defect prediction, a novel method based on dissimilarity measures is proposed. Different from traditional predicting methods based on feature space, we solve the problem in dissimilarity space. First the new unit features in dissimilarity space are obtained by measuring the dissimilarity between the initial units and prototypes. Then proper classifier is chosen to complete prediction. By prototype selecting, we can reduce the dimension of units' features and the computational complexity of prediction. The empirical results in the NASA database KC2 and CM1 show that the prediction accuracies of KNN, Bayes, and SVM classifier in dissimilarity space are higher than that of feature space from 1.86% to 9.39%. Also the computational complexities reduce from 18% to 67%.",2010,0, 4529,The Development about a Prediction System for Coal and Gas Outbursts Based on GIS,"Based on the differences of coal and gas outburst sensitivity index, importance and others in various mining, the author puts forward a system integration strategy about coal and gas outburst prediction. Applying for fault tree analysis, the influence factors about regional outburst and working outburst was analyzed, combined with comprehensive evaluation method, an evaluation index system about coal and gas outburst was established. Based on C # language for the development of tools, GIS software is nested in this system, the coal and gas outburst prediction system was developed. Preliminary application shows that the system can exact predict coal and gas outburst in mines, it is of good application prospects.",2010,0, 4530,Avoiding extended failures in large interconnected power systems Theoretical and practical issues,"This paper is treating the problem of correct and complete analysis of a large interconnected power system by evaluating the extended failures occurrence probability. Both theoretical and practical issues are treated from the perspective of complete periodically computations that have to be performed in order to have a clear image about the security margins available in a power system for a given stage of time. Main aspects related with steady-state stability limits and transient stability limits are presented from a practical point of view. The key aspects of the well known approaches related with the steady-state stability limits (SSSL) computation is reviewed and an algorithm for SSSL identification, developed by the authors in order to be fitted for power systems planning and operation is also described. The benefits of using a practical methodology to assess the transient stability margins in a real power system are going to be also presented. In order to cover this, the main theoretical aspects related with transient stability are described from a practical point of view. 
The paper highlights a methodology to assess the risk of a blackout in a power system by means of static and dynamic simulations using a dedicated software tool. Proposed methodology covers both SSSL and transient stability limits (TSL). Original aspects of the paper consists in a new approach to compute the SSSL on a specific constrained area in a power system as well as in a practical method to assess TSL for a given maximum loading of the analyzed power system.",2010,0, 4531,From Use Case Model to Service Model: An Environment Ontology Based Approach,"One fundamental problem in services computing is how to bridge the gap between business requirements and various heterogeneous IT services. This involves eliciting business requirements and building a solution accordingly by reusing available services. While the business requirements are commonly elicited through use cases and scenarios, it is not straightforward to transform the use case model into a service model, and the existing manual approach is cumbersome and error-prone. In this paper, the environment ontology, which is used to model the problem space, is utilized to facilitate the model transformation process. The environment ontology provides a common understanding between business analysts and software engineers. The required software functionalities as well as the available services' capabilities are described using this ontology. By semi-automatically matching the required capability of each use case to the available capabilities provides by services, a use case is realized by that set of services. At the end of this paper, a fictitious case study was used to illustrate how this approach works.",2010,0, 4532,Analysis and Design of the System for Detecting Fuel Economy,"The technology of testing fuel economy is not only a comprehensive parameter for evaluating the level of cars' technology and maintenance, but also an important reference for diagnosing and analyzing the troubles of cars. Using the method of Ultrasonic Flow Detection to measure the fuel consumption, this essay is based on a comprehensive comparison of several traditional fuel economy detection systems and designs a system for detecting fuel economy, that system is based on ESB software. That system can work real-timely, and detect automatically.",2010,0, 4533,An enterprise business intelligence maturity model (EBIMM): Conceptual framework,"Business Intelligence (BI) market trend is hot for the past few years. According to the results from the Gartner Research 2009, Business Intelligence's market is ranked top of ranking in Business and Technology Priorities in 2009. CMM can be applied into various disciplines such as software field, engineering field or IT field. However, there is limited research of CMM that applied in Enterprise Business Intelligence (EBI) domain. This is because BI market is a quite new area. Based on the literature in BI and CMM, a multi-dimensional set of critical factors for characterizing the levels of EBI maturity has been proposed. Specifically, there are three key dimensions of capability influencing the EBI effort: data warehousing, information quality and knowledge process. From a practical standpoint, the proposed model provides useful basis to firms aspiring to elevate their business intelligence endeavour to higher levels of maturity. 
The research also serves as a foundation for initiating other relevant studies and for assessing EBI maturity.",2010,0, 4534,A feedback-based method for adaptive ROI protection in H.264/AVC,"The bit error and packet loss in unreliable channels such as Internet and wireless networks lead the quality of video data unacceptable, which force us to take the network adaptive transmission into account. In this paper, we firstly propose a novel Greedy Spread (GS) ROI extraction method to extract ROI effectively, and then the network state changing is quickly perceived without occupying additional network overhead. According to the feedback about current network state from the receiver, Gilbert model is used to predict network packet loss, which enables the source encoder to dynamically adjust the ROI protecting schemes. Experimental results shows that the GS method extracts the ROI at high accuracy, also the adaptive method is of low computational complexity and high protection capability, suitable for real-time applications, so make a notable improvement in visual quality.",2010,0, 4535,Exterior quality inspection of rice based on computer vision,"To develop an online inspection system of rice exterior quality (head rice rate, chalk rice, crackle rice) based on computer vision. The system was developed after analyzing the optic characteristics of rice kernel in the platform of VC + + 6.0 software. The five varieties of rice kernel Jinyou974, Gangyou182, Zhongyou205, Jiahe212, and Changnonggeng-2 were selected as experimental samples. The methods, such as gray transformation, automatic threshold segmentation, area labeling, were applied to extract single rice kernel image from collected mass rice kernel images. The chalk rice and crackle rice were inspected by the above methods. To inspect the head rice rate, the ten characteristic parameters, such as the area and perimeter of rice kernel, were selected as the inspection characteristic of head rice, and the method of principal component analysis was carried out to process substantive data. The optimal threshold of distinguishing head rice was made sure. The results showed that the accurate ratio of detecting crackle rice was 96.41%, the correct ratio of detecting chalk rice was 94.79%, and the accurate ratio of detecting head rice was 96.20%. The analysis indicated efficient discrimination from different rice exterior quality by computer vision.",2010,0, 4536,Hand-held multi-parameter water quality recorder,"Water resources monitoring has become an important research topic because an unreasonable use of water resources results in the deterioration of water environment and affects the development of human beings and nature. The scheme of the hand-held multi-parameter water quality recorder based on the sensor of the YSI6600 was designed. Through the multi-parameter sensor of the YSI6600, the recorder can detect and analyze the water quality of 17 parameters, such as turbidity, PH value etc. The core controller by using the low-power single-chip MSP430F149 makes the entire system with battery-powered. The use of multi-parameter sensor YSI6600 makes collection of data-processing speed and high precision, small error. The advantages of the recorder are the small size, the low cost, the low consumption, the large-screen LCD display, and easily carrying. 
It can communicate with the computer by RS232, position water quality data of a water area by circumscribed GPS, and provide a favorable data protection for the water sector and environmental sector by the water quality data. It has been played a significant role in environmental monitoring and water quality monitoring. Comparing with the traditional sampling from the river and instrument in the past, the recorder has greatly improved the efficiency of detection.",2010,0, 4537,Wavelet-based one-terminal fault location algorithm for aged cables without using cable parameters applying fault clearing voltage transients,"This paper presents a novel fault location algorithm, which in spite of using only voltage samples taken from one terminal, is capable to calculate precise fault location in aged power cables without any need to line parameters. Voltage transients generated after circuit breaker opening action are sampled and using wavelet and traveling wave theorem, first and second inceptions of voltage traveling wave signals are detected. Then wave speed is determined independent of cable parameters and finally precise location of fault is calculated. Because of using one terminal data, algorithm does not need to communication equipments and global positioning system (GPS). Accuracy of algorithm is not affected by aging, climate and temperature variations, which change the wave speed. In addition, fault resistance, fault inception angle and fault distance does not affect accuracy of algorithm. Extent simulations carried out with SimPowerSystem toolbox of MATLAB software, confirm capability and high accuracy of proposed algorithm to calculate fault location in the different faults and system conditions.",2010,0, 4538,Static Detection Unsafe Use of variables In Java,"Exception handling has been introduced into object oriented programming languages to help developing robust software. At the same time, it makes programming more difficult and it is not easy to write high quality exception handling codes. Careless exception handling code will introduce bugs and it usually forms certain kind of bug pattern. In this paper we propose a new bug pattern unsafe use of variables due to exception occurrences. It may cause the dependency safety property violation in a program. We also develop a static approach to automatically detect unsafe use of variables that may introduce potential bugs in Java program. This approach can be integrated into current bug finding tools to help developer improve the quality of Java program.",2010,0, 4539,Energy-Efficient Clustering Rumor Routing Protocol for Wireless Sensor Networks,"To develop an energy-efficient routing protocol becomes one of the most important and difficult key tasks for wireless sensor networks. Traditional Rumor Routing is effective in random path building but winding or even looped path is prone to be formed, leading to enormous energy wasting. Clustering Rumor Routing proposed in this paper has advantages in energy saving by making full use of three features-clustering structure, selective next-hop scheme and double variable energy thresholds for rotation of cluster-heads. Thus CRR can effectively avoid the problems that occur in RR and a more energy-efficient routing path from event agents to queries can be built. The performance between CRR and traditional RR are evaluated by simulations. 
The results indicate that compared with traditional RR, the CRR can save more energy consumption, provide better path quality, and improve the delivery rate as well.",2010,0, 4540,Fault Localization in Constraint Programs,"Constraint programs such as those written in high level modeling languages (e.g., OPL, ZINC, or COMET) must be thoroughly verified before being used in applications. Detecting and localizing faults is therefore of great importance to lower the cost of the development of these constraint programs. In a previous work, we introduced a testing framework called CPTEST enabling automated test case generation for detecting non-conformities. In this paper, we enhance this framework to introduce automatic fault localization in constraint programs. Our approach is based on constraint relaxation to identify the constraint that is responsible of a given fault. CPTEST is henceforth able to automatically localize faults in optimized OPL programs. We provide empirical evidence of the effectiveness of this approach on classical benchmark problems, namely Golomb rulers, n-queens, social golfer and car sequencing.",2010,0, 4541,A study on quality improvement of railway software,"The digital system performs more varying and highly complex functions efficiently compared to the existing analog system because software can be flexibly designed and implemented. The flexible design makes it difficult to predict the software failures. Even though the main characteristic of railway system is to ensure safety, nowadays software is widely used in the safety critical railway system just after evaluation of system function itself. The railway system is also safety critical system and the software is widely used in the railway system. For this reason, the safety criteria are suggested to secure the software safety for the field of railway system. The software used in the safety critical system has to be examined whether it is properly developed according to the safety criteria and certification process. This paper also suggests a development methodology for the railway company to easily apply the criteria to the railway system.",2010,0, 4542,Requirement based test case prioritization,"Test case prioritization involves scheduling test cases in an order that increases the effectiveness in achieving some performance goals. One of the most important performance goals is the rate of fault detection. Test cases should run in an order that increases the possibility of fault detection and also that detects faults at the earliest in its testing life cycle. In this paper, an algorithm is proposed for system level test case prioritization (TCP) from software requirement specification to improve user satisfaction with quality software and also to improve the rate of severe fault detection. The proposed model prioritizes the system test cases based on the three factors: customer priority, changes in requirement, implementation complexity. The proposed prioritization technique is validated with two different sets of industrial projects and the results show that the proposed prioritization technique improves the rate of severe fault detection.",2010,0, 4543,A Case Study of Software Security Test Based on Defects Threat Tree Modeling,"Due to the increasing complexity of software applications, traditional function security testing ways, which only test and validate software security mechanisms, are becoming ineffective to detect latent software security defects (SSD). 
However, the most of vulnerabilities result from some typical SSD. According to CERT/CC, ten defects known are responsible for 75% of security breaches in today software applications. On the base of threat tree modeling, we use it in the integrated software security test model. For introducing the usefulness of the method, we use the test model in M3TR software security test.",2010,0, 4544,Interpreting the out-of-control signals of the Generalized Variance |S| control chart,"Multivariate quality control charts have some advantages for monitoring more than one variable. Nevertheless, there are some disadvantages when multivariate schemes are employed. The main problem is how to interpret the out-of-control signal. For example, in the case of control charts designed to monitor the mean vector, the chart signals show that there is a shift in the vector, but no indication is given about the variables that have shifted. Generalized Variance |S| quality control chart is a very powerful way to detect small shifts in the mean vector. Unfortunately, there are no previous works about the interpretation of the out-of-control signal of this chart. In this paper neural networks are used to interpret the out-of-control signal of the Generalized Variance |S| Chart. The utilization of this neural network in the industry is very easy, thanks to the developed software.",2010,0, 4545,A study of applying the bounded Generalized Pareto distribution to the analysis of software fault distribution,"Software is currently a key part of many safety-critical applications. But the main problem facing the computer industry is how to develop a software with (ultra) high reliability on time, and assure the quality of software. In the past, some researchers reported that the Pareto distribution (PD) and the Weibull distribution (WD) models can be used for software reliability estimation and fault distribution modeling. In this paper we propose a modified PD model to predict and assess the software fault distribution. That is, we suggest using a special form of the Generalized Pareto distribution (GPD) model, named the bounded Generalized Pareto distribution (BGPD) model. We will show that the BGPD model eliminates several modeling issues that arise in the PD model, and perform detailed comparisons based on real software fault data. Experimental result shows that the proposed BGPD model presents very high fitness to the actual fault data. In the end, we conclude that the distribution of faults in a large software system can be well described by the Pareto principle.",2010,0, 4546,Multimedia system verification through a usage model and a black test box,"This paper presents an automated verification methodology aimed at detecting failures in multimedia systems based on a black box testing approach. Moreover, the verification is performed using a black test box as part of a test harness. The quality of a system is examined against functional failures using a model-based testing approach for generating test scenarios. System under test (specifically, the software of the system) is modeled to represent the most probable system usage. In this way, failures that occur most frequently during system exploitation are detected through the testing. Test case execution is fully automated and test oracle is based on image quality analysis. 
The proposed framework is primarily intended for detecting software-related failures, but will also detect the failures that result from system hardware defects.",2010,0, 4547,Improving of STS algorithm to detecting voltage unbalance in low voltage distribution networks,"Thyristor-based static transfer switches (STS's) are feeding sensitive loads with two independent sources by monitoring voltage quality. STS is used in distribution networks to provide connection to alternate sources of ac power for critical loads when the main source fails. In distribution system sensitive loads are endangered by either voltage sag or voltage unbalance. Simulation results show that, the STS-2 of the IEEE benchmark in voltage sag and transient conditions operates properly; but do not detect voltage un-balance. In this paper, operating of the STS-2 under unbalance distribution network - as a scenario to improve the power quality for a three-phase critical load in industrial areas - is discussed. This paper presents an improved algorithm to detecting voltage unbalance with the STS-2 based on the ratio of the negative and the positive sequences. Simulation is presented in the well known software ATP/EMTP. Also load and network parameters are based on the industrial network of the Naghadeh - Iran.",2010,0, 4548,Analysis of integrated storage and grid interfaced photovoltaic system via nine-switch three-level inverter,"This paper focuses on the modeling, controller design, simulation and analysis of a two-string photovoltaic (PV) and storage system connected to grid via a centralized nine-switch three-level inverter. A circuit-based PV array model is presented for use. A boost converter is employed to enable PV to operate at its maximum power point. The storage system consists of a capacitor bank connected at the DC bus via a full-bridge DC-DC converter. The centralized inverter is controlled by a decoupled current control technique and interfaced with grid via a transformer and double transmission lines. And three-level PWM is generated by applying two symmetrical triangular carriers. Three-phase voltage source and RL load are used for simulation. Eventually, the system stability is assessed with respect to fault conditions and solar irradiation change in simulation software PSCAD.",2010,0, 4549,Visual Indicator Component Software to Show Component Design Quality and Characteristic,"Good design is one of the prerequisites of high quality product. To measure the quality of software design, software metrics are used. Unfortunately in software development practice, there are a lot of software developers who are not concerned with the component's quality and characteristic. Software metrics does not interest them, because to understand the measurement of the metrics, a deep understanding about degree, dimension, and capacity of some attribute of the software product is needed. This event triggers them to build software's whose quality is below the standard. What is more dangerous is that these developers are not aware of quality and do not care with their work product. Of course these occurrences is concerning and a solution needed to be found. Through this paper the researcher is trying to formulate an indicator of component software that shows component design quality and characteristic visually. 
This indicator can help software developers to make design decision and refactoring decision, detect the design problem more quickly, able to decide which area to apply refactoring, and enable us to do early or final detection of design defects.",2010,0, 4550,Intelligent monitoring and fault tolerance in large-scale distributed systems,"Summary form only. Electronic devices are starting to become widely available for monitoring and controlling large-scale distributed systems. These devices may include sensing capabilities for online measurement, actuators for controlling certain variables, microprocessors for processing information and making realtime decisions based on designed algorithms, and telecommunication units for exchanging information with other electronic devices or possibly with human operators. A collection of such devices may be referred to as a networked intelligent agent system. Such systems have the capability to generate a huge volume of spatial-temporal data that can be used for monitoring and control applications of large-scale distributed systems. One of the most important research challenges in the years ahead is the development of information processing methodologies that can be used to extract meaning and knowledge out of the ever-increasing electronic information that will become available. Even more important is the capability to utilize the information that is being produced to design software and devices that operate seamlessly, autonomously and reliably in some intelligent manner. The ultimate objective is to design networked intelligent agent systems that can make appropriate real-time decisions in the management of large-scale distributed systems, while also providing useful high-level information to human operators. One of the most important classes of large-scale distributed systems deals with the reliable operation and intelligent management of critical infrastructures, such as electric power systems, telecommunication networks, water systems, and transportation systems. The design, control and fault monitoring of critical infrastructure systems is becoming increasingly more challenging as their size, complexity and interactions are steadily growing. Moreover, these critical infrastructures are susceptible to natural disasters, frequent failures, as well as malicious attacks. There is a need to develop a common system-theoretic fault diagnostic framework for critical infrastructure systems and to design architectures and algorithms for intelligent monitoring, control and security of such systems. The goal of this presentation is to motivate the need for health monitoring, fault diagnosis and security of critical infrastructure systems and to provide a fault diagnosis methodology for detecting, isolating and accommodating both abrupt and incipient faults in a class of complex nonlinear dynamic systems. A detection and approximation estimator based on computational intelligence techniques is used for online health monitoring. Various adaptive approximation techniques and learning algorithms will be presented and illustrated, and directions for future research will be discussed.",2010,0, 4551,Control of airship in case of unpredictable environment conditions,"The article demonstrates how to generate the trajectory, taking into account the concept of the tunnel of error that ensures route trace with an error no greater than assumed, even in difficult to predict environmental conditions.
The mathematical model of kinematics and dynamics using spatial vectors is presented in short. The theoretical assumptions are tested by simulation. The model used in the simulations takes into account the structure of the drives in the form of two engines placed symmetrically on the sides of the object.",2010,0, 4552,Reliable online water quality monitoring as basis for fault tolerant control,"Clean data are essential for any kind of alarm or control system. To achieve the required level of data quality in online water quality monitoring, a system for fault tolerant control was developed. A modular approach was used, in which a sensor and station management module is combined with a data validation and an event detection module. The station management module assures that all relevant data, including operational data, is available and the state of the monitoring devices is fully documented. The data validation module assures that unreliable data is detected, marked as such, and that the need for sensor maintenance is timely indicated. Finally, the event detection module marks unusual system states and triggers measures and notifications. All these modules were combined into a new software package to be used on water quality monitoring stations.",2010,0, 4553,Representing and Reasoning about Web Access Control Policies,"The advent of emerging technologies such as Web services, service-oriented architecture, and cloud computing has enabled us to perform business services more efficiently and effectively. However, we still suffer from unintended security leakages by unauthorized services while providing more convenient services to Internet users through such a cutting-edge technological growth. Furthermore, designing and managing Web access control policies are often error-prone due to the lack of logical and formal foundation. In this paper, we attempt to introduce a logic-based policy management approach for Web access control policies especially focusing on XACML (eXtensible Access Control Markup Language) policies, which have become the de facto standard for specifying and enforcing access control policies for various applications and services in current Web-based computing technologies. Our approach adopts Answer Set Programming (ASP) to formulate XACML that allows us to leverage the features of ASP solvers in performing various logical reasoning and analysis tasks such as policy verification, comparison and querying. In addition, we propose a policy analysis method that helps identify policy violations in XACML policies accommodating the notion of constraints in role-based access control (RBAC). We also discuss a proof-of-concept implementation of our method called XACMLl2ASP with the evaluation of several XACML policies from real-world software systems.",2010,0, 4554,Target Setting for Technical Requirements in Time-Stamped Software Quality Function,"Technical requirements are hard to determine in software development. They are often specified subjectively in practice. Poorly determined technical requirements often lead to poor customer satisfaction, cost overrun, and delay in schedule, and poor quality. Quality Function Deployment (QFD) is one of major engineering methods used to elicit customer's needs and transforms them into technical requirements in industry. It has been applied to develop numerous products, including software systems, to improve their quality. 
However, target setting for technical software requirements is a complicated and challenging task in product development in Software Quality Function Deployment (SQFD). Current methods for target setting for technical software requirements do not consider their technical trends for a given timeframe. As a result, by the time of completion of a project the target values of technical requirements may not be competitive any more. In this paper, we first discuss benchmarking, primitive linear regression and target setting based on impact analysis to set targets for technical requirements in Software Quality Function Deployment (SQFD). We then develop a method of target setting for technical requirements by incorporating timeframe and the technical trend. It can help us to assess impact of both under-achieved and over-achieved targets. By incorporating the technical trend and the time of delivery of the product into target setting process, we can set targets for technical requirements that provide a competitive edge for our product over the competitor's products and a high level of customer satisfaction.",2010,0, 4555,An Empirical Comparison of Fault-Prone Module Detection Approaches: Complexity Metrics and Text Feature Metrics,"In order to assure the quality of software product, early detection of fault-prone products is necessary. Fault-prone module detection is one of the major and traditional area of software engineering. However, comparative study using the fair environment rarely conducted so far because there is little data publicly available. This paper tries to conduct a comparative study of fault-prone module detection approaches.",2010,0, 4556,"Managing Consistency between Textual Requirements, Abstract Interactions and Essential Use Cases","Consistency checking needs to be done from the earliest phase of requirements capture as requirements captured by requirement engineers are often vague, error-prone and inconsistent with users' needs. To improve such consistency checking we have applied a traceability approach with visualization capability. We have embedded this into a light-weight automated tracing tool in order to allow users to capture their requirements and generate Essential Use Case models of these requirements automatically. Our tool supports inconsistency checking between textual requirements, abstract interactions that derive from the text and Essential Use Case models. A preliminary evaluation has been conducted with target end users and the tool usefulness and ease of use are evaluated. We describe our motivation for this research, our prototype tool and results of our evaluation.",2010,0, 4557,An Optimized Checkpointing Based Learning Algorithm for Single Event Upsets,"With the arrival of the CMOS technology, the sizes of the transistors are anything but increasing. Due to the current transistor sizes single event upsets, which were over looked for the previous generation are not so anymore. With memories and other peripherals well protected from single event upsets, processors are in a critical state. Hard errors too have a higher probability of occurrence. This work is aimed at detection of soft errors (SEUs) and making programs more resilient to them and to detect hard errors and eliminate them. SEUs are transient errors and hard errors are permanent in nature. 
The idea is to use the concept of CFGs, DFGs and data dependency graphs with Integer Linear Programming to improve the program and testing it on fault induced architectures.",2010,0, 4558,Design and Analysis of Cost-Cognizant Test Case Prioritization Using Genetic Algorithm with Test History,"During software development, regression testing is usually used to assure the quality of modified software. The techniques of test case prioritization schedule the test cases for regression testing in an order that attempts to increase the effectiveness in accordance with some performance goal. The most general goal is the rate of fault detection. It assumes all test case costs and fault severities are uniform. However, those factors usually vary. In order to produce a more satisfactory order, the cost-cognizant metric that incorporates varying test case costs and fault severities is proposed. In this paper, we propose a cost-cognizant test case prioritization technique based on the use of historical records and a genetic algorithm. We run a controlled experiment to evaluate the proposed technique's effectiveness. Experimental results indicate that our proposed technique frequently yields a higher Average Percentage of Faults Detected per Cost (APFDc). The results also show that our proposed technique is also useful in terms of APFDc when all test case costs and fault severities are uniform.",2010,0, 4559,A Study on the Applicability of Modified Genetic Algorithms for the Parameter Estimation of Software Reliability Modeling,"In order to assure software quality and assess software reliability, many software reliability growth models (SRGMs) have been proposed for estimation of reliability growth of products in the past three decades. In principle, two widely used methods for the parameter estimation of SRGMs are the maximum likelihood estimation (MLE) and the least squares estimation (LSE). However, the approach of these two estimations may impose some restrictions on SRGMs, such as the existence of derivatives from formulated models or the needs for complex calculation. Thus in this paper, we propose a modified genetic algorithm (MGA) to estimate the parameters of SRGMs. Experiments based on real software failure data are performed, and the results show that the proposed genetic algorithm is more effective and faster than traditional genetic algorithms.",2010,0, 4560,A Method for Detecting Defects in Source Codes Using Model Checking Techniques,This paper proposes a method of detecting troublesome defects in the Java source codes for enterprise systems using a model checking technique. A supporting tool also provides a function to automatically translate source code into a model which is simulated by UPPAAL model checker.,2010,0, 4561,Using Load Tests to Automatically Compare the Subsystems of a Large Enterprise System,"Enterprise systems are load tested for every added feature, software updates and periodic maintenance to ensure that the performance demands on system quality, availability and responsiveness are met. In current practice, performance analysts manually analyze load test data to identify the components that are responsible for performance deviations. This process is time consuming and error prone due to the large volume of performance counter data collected during monitoring, the limited operational knowledge of analyst about all the subsystem involved and their complex interactions and the unavailability of up-to-date documentation in the rapidly evolving enterprise. 
In this paper, we present an automated approach based on a robust statistical technique, Principal Component Analysis (PCA), to identify subsystems that show performance deviations in load tests. A case study on load test data of a large enterprise application shows that our approach does not require any instrumentation or domain knowledge to operate, scales well to large industrial systems, generates few false positives (89% average precision) and detects performance deviations among subsystems in limited time.",2010,0, 4562,Study on the Evaluation Model of Student Satisfaction Based on Factor Analysis,"The paper first determined the index system of student satisfaction by the systematic analysis approach, established the mathematical evaluation model of student satisfaction based on factor analysis, then assessed and modified the model through an empirical study. It turned out that perceived quality had the greatest impact on satisfaction rating. Student expectations and perceived value had significant effects, too. So the primary factor to improve student satisfaction is to increase the investment in teaching equipment and teaching materials, especially network and e-learning resources. Guidance of learning methods by teachers should also be stressed.",2010,0, 4563,A Benchmarking Framework for Domain Specific Software,"With the development of software engineering, finding the """"best-in-class"""" practice of domain specific software and locating its position is an urgent task. This paper proposes a systematic, practical and simplified benchmarking framework for domain specific software. The methodology is useful to assess characteristics, sub-characteristics and attributes that influence domain specific software product quality qualitatively and quantitatively. It is helpful for business managers and software designers to obtain competitiveness analysis results for software. Using the domain specific benchmarking framework, software designers and business managers can easily recognize the positive and negative gaps of their product and locate its position in the specific domain.",2010,0, 4564,Importance Sampling Based Safety-Critical Software Statistical Testing Acceleration,"It is necessary to assess the reliability of safety-critical software to a high degree of confidence before it is deployed in the field. However, safety-critical software often includes some rarely executed critical operations that are often inadequately tested in statistical-testing-based reliability estimation. This paper discusses software statistical testing acceleration based on the importance sampling technique. When both the critical operations and the entire software are adequately tested, the method can still obtain the unbiased software reliability from the test results with far fewer test cases. Thus, the statistical testing cost of safety-critical software can be reduced effectively. The simulated annealing algorithm for calculating optimum transition probabilities of the Markov chain usage model for software statistical testing acceleration is also presented.",2010,0, 4565,A Novel Evaluation Method for Defect Prediction in Software Systems,"In this paper, we propose a novel evaluation method for defect prediction in object-oriented software systems. For each metric to evaluate, we start by applying it to the dependency graph extracted from the target software system, and obtain a list of classes ordered by their predicted degree of defect under that metric.
By utilizing the actual defect data mined from the subversion database, we evaluate the quality of each metric by means of a weighted reciprocal ranking mechanism. Our method can tell not only the overall quality of each evaluated metric, but also the quality of the prediction result for each class, especially the costly ones. Evaluation results and analysis show the efficiency and rationality of our method.",2010,0, 4566,The Application of Ant Colony Algorithm in Web Service Selection,"As an intelligent algorithm with a positive feedback mechanism, the ant colony algorithm is useful in solving optimization problems. Web service selection is the foundation of Web service composition, which is one of the most important ways to satisfy users' personalized requirements. Firstly, we analyzed the problem of Web service selection based on the basic principle of the ant colony algorithm and transformed the problem of Web service selection driven by QoS into the problem of finding the shortest path. Secondly, we gave the steps for solving the problem of Web Service selection based on the ant colony algorithm and contrasted the results under different parameters. Lastly, we verified the validity of the ant colony algorithm in Web Service selection.",2010,0, 4567,A Recommendation Method of BPEL Activity Based on Association Rules Mining,"As the de facto standard, Business Process Execution Language (BPEL) is widely used in the composition and orchestration of web services; however, it is tiring and error-prone if we just rely on the developers to create every activity icon and assign it an atomic service. On the basis of the author's early work, this paper creatively applies association rules mining to the analysis of BPEL processes, and puts forward a recommendation method for BPEL activities based on the Labeled Activity Tree (LAT) and sub-activity sequence (subASeq). Finally, the experimental results on the BPEL processes of a multimedia conference system indicate this method's validity.",2010,0, 4568,Automated Verification of Goal Net Models,"Multi-agent systems are increasingly complex and the problem of their verification is acquiring increasing importance. Most of the existing approaches are based on model checking techniques which require system specification in a formal language. This is an error-prone task and can only be performed by experts who have considerable experience in logical reasoning. We propose in this paper an easy-to-use approach to the verification of multi-agent systems based on the Goal Net modeling methodology. Goal Net is a goal-oriented framework which aspires to simplify the process of designing and implementing multi-agent systems. In our approach, a multi-agent system is modeled as a goal net. The goal net and its properties will be automatically converted into an FSP (Finite State Processes) specification, which can be automatically verified by the model checker LTSA (Labeled Transition System Analyzer).",2010,0, 4569,Improved Intra-Prediction Mode Selection Algorithm for H.264,"H.264 introduces intra-prediction technology to improve the coding efficiency of I frames, but it also greatly increases the computational complexity of encoding. This paper proposes a fast intra-prediction mode selection algorithm. The algorithm predicts macroblock boundaries and uses fast intra-prediction mode selection to speed up the encoding rate.
The result of experiments show that the proposed algorithm improve the coding rate effectively based on the guarantee of the image quality.",2010,0, 4570,Research of Real-Time Algorithm for Chestnut's Size Based on Computer Vision,"A grading system was developed to classify chestnut automatically into various grades of quality in terms of size. The chestnut is scanned with a color charge-coupled-device camera and then the size is extracted by image processing. In the image processing there are two kinds of algorithm, one is the minimum enclosing rectangle (MER), and the other is the distance between centroid and border (DCB). The algorithm finds out the chestnut's major axis and minor axis which are used to predict the size of the chestnut. By regression analysis, it's found that the relative error of the MER is 0.7465%, and the relative error of the DBCB is 1.83%. The MER is better to predict the chestnut's size than the DCB.",2010,0, 4571,Seamless high speed simulation of VHDL components in the context of comprehensive computing systems using the virtual machine faumachine,"Testing the interaction between hard- and software is only possible once prototype implementations of the hardware exist. HDL simulations of hardware models can help to find defects in the hardware design. To predict the behavior of entire software stacks in the environment of a complete system, virtual machines can be used. Combining a virtual machine with HDL-simulation enables to project the interaction between hard- and software implementations, even if no prototype was created yet. Hence it allows for software development to begin at an earlier stage of the manufacturing process and helps to decrease the time to market. In this paper we present the virtual machine FAUmachine that offers high speed emulation. It can co-simulate VHDL components in a transparent manner while still offering good overall performance. As an example application, a PCI sound card was simulated using the presented environment.",2010,0, 4572,Effectiveness of the cumulative vs. normal mode of operation for combinatorial testing,"This paper discusses the state of the art of applying combinatorial interaction testing (CIT) in conjunction with mutation testing for hardware testing. In addition, the paper discusses the art of the practice of applying CIT in normal and cumulative mode in order to derive an optimal test suite that can be used for hardware testing in a production line. Our previous study based on applying CIT in cumulative mode; described the systematic application of the strategy for testing 4-bit Magnitude Comparator Integrated Circuits in a production line. Complementing our previous work, this paper compares the effectiveness of cumulative mode versus normal mode of operation. Our result demonstrates that the use of CIT in cumulative mode is more practical than normal mode of operation as far as detecting faults introduced by mutation.",2010,0, 4573,Preventing insider malware threats using program analysis techniques,"Current malware detection tools focus largely on malicious code that is injected into target programs by outsiders by exploiting inadvertent vulnerabilities such as failing to guard against a buffer overflow or failure to properly validate a user input in those programs. Hardly any attention is paid to threats arising from software developers, who, with their intimate knowledge of the inner workings of those programs, can easily sneak logic bombs, Trojan horses, and backdoors in those programs. 
Traditional software validation techniques such as testing based on user requirements are unlikely to detect such malware, because normal use cases will not trigger them and thus will fail to expose them. The state-of-the-art in preventing such malware involves manual inspection of the target program, which is a highly tedious, time consuming, and error prone process. We propose a dynamic, test driven approach that automatically steers program analysts towards examining and discovering such insider malware threats. It uses program analysis techniques to identify program parts whose execution automatically guarantees execution of a large number of previously unexplored parts of the program. It effectively leads analysts into creating test cases which may trigger, in a protected test environment, any malware code hidden in that application as early as possible, so it can be removed from the application before it is deployed in the field. We also present a tool that helps translate this approach into practice.",2010,0, 4574,An efficient negotiation based algorithm for resources advanced reservation using hill climbing in grid computing system,"Ensuring quality of service and reducing the blocking probability are important concerns in grid computing environments. In this paper, we present a deadline-aware algorithm that ensures end-to-end QoS and improves the efficiency of grid resources. Investigating requests as a group at the start of a time slot and trying to accept the highest number of them can increase the probability of acceptance of requests and also increase the efficiency of grid resources. The deadline-aware algorithm achieves this goal. Simulations show that the deadline-aware algorithm improves the efficiency of advance resource reservation in both cases: it increases the probability of acceptance of requests and optimizes resources in short time slots and also at high request arrival rates.",2010,0, 4575,Beautiful picture of an ugly place. Exploring photo collections using opinion and sentiment analysis of user comments,"User generated content in the form of customer reviews, feedback and comments plays an important role in all types of Internet services and activities like news, shopping, forums and blogs. Therefore, the analysis of user opinions is potentially beneficial for the understanding of user attitudes or the improvement of various Internet services. In this paper, we propose a practical unsupervised approach to improve user experience when exploring photo collections by using opinions and sentiments expressed in user comments on the uploaded photos. While most existing techniques concentrate on binary (negative or positive) opinion orientation, we use a real-valued scale for modeling opinion and sentiment strengths. We extract two types of sentiments: opinions that relate to the photo quality and general sentiments targeted towards objects depicted on the photo. Our approach combines linguistic features for part of speech tagging, traditional statistical methods for modeling word importance in the photo comment corpora (on a real-valued scale), and a predefined sentiment lexicon for detecting negative and positive opinion orientation. In addition, a semi-automatic photo feature detection method is applied and a set of syntactic patterns is introduced to resolve opinion references.
We implemented a prototype system that incorporates the proposed approach and evaluates it on several regions in the World using real data extracted from Flickr.",2010,0, 4576,A robust and fault-tolerant distributed intrusion detection system,"Since it is impossible to predict and identify all the vulnerabilities of a network, and penetration into a system by malicious intruders cannot always be prevented, intrusion detection systems (IDSs) are essential entities for ensuring the security of a networked system. To be effective in carrying out their functions, the IDSs need to be accurate, adaptive, and extensible. Given these stringent requirements and the high level of vulnerabilities of the current days' networks, the design of an IDS has become a very challenging task. Although, an extensive research has been done on intrusion detection in a distributed environment, distributed IDSs suffer from a number of drawbacks e.g., high rates of false positives, low detection efficiency etc. In this paper, the design of a distributed IDS is proposed that consists of a group of autonomous and cooperating agents. In addition to its ability to detect attacks, the system is capable of identifying and isolating compromised nodes in the network thereby introducing fault-tolerance in its operations. The experiments conducted on the system have shown that it has high detection efficiency and low false positives compared to some of the currently existing systems.",2010,0, 4577,An Upper Bound on the Probability of Instability of a DVB-T/H Repeater with a Digital Echo Canceller,"The architecture of a digital On-Channel Repeater (OCR) for DVB-T/H signals is described in this paper. The presence of a coupling channel between the transmitting and the receiving antennas gives origin to one or more echoes, having detrimental effects on the quality of the repeated signal and critically affecting the overall system stability. A low-complexity echo canceller unit is then proposed, performing a coupling channel estimation based on the local transmission of low-power training signals. In particular, in this paper we focus on the stability issues which arise due to the non perfect echo cancellation. An upper bound on the probability of instability of the system is analytically found, providing useful guidelines for conservative OCR design, and some performance figures concerning different propagation scenarios are provided.",2010,0, 4578,A Distributed Multi-Target Software Vulnerability Discovery and Analysis Infrastructure for Smart Phones,"Smart phones of today have increasingly sophisticated software. As the feature set grows further, the probability of system security related defects is likely to increase as well. Today, the security of mobile platforms and applications comes under great scrutiny as they are getting widely adopted. It is therefore crucial that code for mobile devices gets well tested and security bugs eliminated where possible. A popular and effective testing technique to identify severe security bugs in source code is fuzz testing. However, it is extremely time consuming to generate randomized input and test them on each version of the mobile phone and its software. This paper presents, MAFIA - Multi-target Automated Fuzzing Infrastructure and Arsenal, a composite, distributed client-server fuzz testing infrastructure for software applications and libraries in virtually any smartphone platform. 
The set of tools in MAFIA is file-format agnostic and can be used across various applications & libraries. With MAFIA, we conducted a large number of tests against image-handling libraries and logged more than 13,000 mutated inputs that successfully crash several Symbian OS retail phone models. The system is scalable and can be easily extended to be used on new devices and operating systems.",2010,0, 4579,Perverse UML for generating web applications using YAMDAT,"In the current environment of accelerating technological change, software development continues to be difficult, unpredictable, expensive, and error-prone. Model Driven Architecture (MDA), sometimes known as Executable UML, offers a possible solution. MDA provides design notations with precisely defined semantics. Using these notations, developers can create a design model that is detailed and complete enough that the model can be verified and tested via simulation (""""execution""""). Design faults, omissions, and inconsistencies can be detected without writing any code. Furthermore, implementation code can be generated directly from the model. In fact, implementations in different languages or for different platforms can be generated from the same model. The YAMDAT project (Yet Another Model Driven Architecture Tool) will provide a convenient suite of tools for design, collaboration, model verification, and code generation. Parts of the system (notably, the code generation for state machines) are already sufficiently well-developed to support development of complex C++ and Java systems. This paper describes an experiment in repurposing a standard UML component, the class diagram, to support the design and automatic code generation for a domain not usually considered for MDA: the page interactions of a web application.",2010,0, 4580,Regression test cases prioritization using Failure Pursuit Sampling,"The necessity of lowering the cost of executing system tests is a point of consensus in the software development community. The present study presents an optimization of the regression testing activity by adapting a test case prioritization technique called Failure Pursuit Sampling - previously used and validated for the prioritization of tests in general - improving its efficiency for the exclusive execution of regression tests. For this purpose, the clustering and sampling phases of the original technique were modified, so that it becomes capable of receiving information from tests made on the previous version of a program, and can use this information to drive the efficiency of the newly developed technique for tests made on the current version. The adapted technique was implemented and executed using the Schedule program of the Siemens suite. Using Average Percentage of Faults Detected charts, the modified Failure Pursuit Sampling technique presented a high level of efficiency improvement.",2010,0, 4581,Event driven multi-context trust model,"Agent reasoning in large scale multi-agent systems requires techniques which often work with uncertainty and probability. In our research, we use trust and reputation principles to support agent reasoning and decision making. Information about agents' past behaviour and their qualities is transformed into multi-context trust. It allows a single agent to be viewed from different points of view, because agents are judged in different aspects - contexts.
In this paper we describe event driven multi-context trust model as extension of Hierarchical Model of Trust in Contexts (HMTC), when different types of events causes trust updates. This extension of HMTC also provides some solutions for avoiding conflicts which may appear in previous HMTC.",2010,0, 4582,Fault Evaluator: A tool for experimental investigation of effectiveness in software testing,"The specifications for many software systems, including safety-critical control systems, are often described using complex logical expressions. It is important to find effective methods to test implementations of such expressions. Analyzing the effectiveness of the testing of logical expressions manually is a tedious and error prone endeavor, thus requiring special software tools for this purpose. This paper presents Fault Evaluator, which is a new tool for experimental investigation of testing logical expressions in software. The goal of this tool is to evaluate logical expressions with various test sets that have been created according to a specific testing method and to estimate the effectiveness of the testing method for detecting specific faulty variations of the original expressions. The main functions of the tool are the generation of complete sets of faults in logical expressions for several specific types of faults; gaining expected (Oracle) values of logical expressions; testing faulty expressions and detecting whether a test set reveals a specific fault; and evaluating the effectiveness of a testing approach.",2010,0, 4583,A comprehensive evaluation methodology for domain specific software benchmarking,"With the wide using of information system, software users are demanding higher quality software and the software developers are pursue the best-in-class practice in a specific domain. But till now it is difficult to get an evaluation result due to lack of benchmark method. This paper proposes a systematic, practical and simplified evaluation methodology for domain specific software benchmarking. In this methodology, an evaluation model with software test characteristics (elementary set) and domain characteristics (extended set) is described; in order to weaken the uncertainty of subjective and objective, the rough set theory is applied to gain the attributes weights, and the gray analysis method is introduced to remedy the incomplete and inadequate information. The method is useful to assess characteristics, sub-characteristics and attribute that influence the domain specific software product quality qualitative and quantitative. The evaluation and benchmarking results prove that the model is practical and effective.",2010,0, 4584,Detecting faults in technical indicator computations for financial market analysis,"Many financial trading and charting software packages provide users with technical indicators to analyze and predict price movements in financial markets. Any computation fault in technical indicator may lead to wrong trading decisions and cause substantial financial losses. Testing is a major software engineering activity to detect computation faults in software. However, there are two problems in testing technical indicators in these software packages. Firstly, the indicator values are updated with real-time market data that cannot be generated arbitrarily. Secondly, technical indicators are computed based on a large amount of market data. 
Thus, it is extremely difficult, if not impossible, to derive the expected indicator values to check the correctness of the computed indicator values. In this paper, we address the above problems by proposing a new testing technique to detect faults in computation of technical indicators. We show that the proposed technique is effective in detecting computation faults in faulty technical indicators on the MetaTrader 4 Client Terminal.",2010,0, 4585,"Surface water quality forecasting based on ANN and GIS for the Chanzhi Reservoir, China","Artificial neural network (ANN) based on Arc Engine (AE) of GIS is used to predict surface water quality in the Chanzhi Reservoir, Qingdao, China. The results can reflect the water quality change trends with less than 10% average relative error. Using MS SQL Server database technology combined with Geodatabase, the system achieves the fundamental geographic information and hydrological data management, and water quality prediction. It uses GIS component technology to build an efficient and stable platform which can apply to general surface water quality prediction. This software has good information extraction and query functions to help decision-makers to manage water resource better.",2010,0, 4586,An improved analytic hierarchy process model on Software Quality Evaluation,"With new global demands for better quality of software products, effective and efficient SQE (Software Quality Evaluation) becomes necessary and indispensible. Existing methods of SQE are as it is, however, subjective, non-quantitative and uncompleted to some extent. In this paper, we introduce a novel method, an improved AHPM (Analytic Hierarchy Process Model) incorporated traditional AHPM and some quality characteristics from ISO/IEC 9126, which attempts to propose a better method of software quality evaluation that is relatively objective, quantitative and completed. On this basis, current software can be well assessed as well as future products can be well predicted.",2010,0, 4587,Dependency-aware fault diagnosis with metric-correlation models in enterprise software systems,"The normal operation of enterprise software systems can be modeled by stable correlations between various system metrics; errors are detected when some of these correlations fail to hold. The typical approach to diagnosis (i.e., pinpoint the faulty component) based on the correlation models is to use the Jaccard coefficient or some variant thereof, without reference to system structure, dependency data, or prior fault data. In this paper we demonstrate the intrinsic limitations of this approach, and propose a solution that mitigates these limitations. We assume knowledge of dependencies between components in the system, and take this information into account when analyzing the correlation models. We also propose the use of the Tanimoto coefficient instead of the Jaccard coefficient to assign anomaly scores to components. We evaluate our new algorithm with a Trade6-based test-bed. We show that we can find the faulty components within top-3 components with the highest anomaly score in four out of nine cases, while the prior method can only find one.",2010,0, 4588,Congestion control research of underwater acoustic networks,"Congestion control is the key technology to ensure quality of network service. In underwater acoustic communication networks an important research area is how to control network congestion. 
Once detect the network congestion, we should take measures to solve congestion and make network return to normal data transmission. In this paper, we use congestion controller to solve network congestion problems. The simulation is done by software platform of OPNET and the results show that the mechanism we studied is feasible. It can resolve the network congestion effectively, achieve higher network throughput and less data packet delay.",2010,0, 4589,A Quasi-best Random Testing,"Random testing, having been employed in both hardware and software for a long time, is well known for its simplicity and straightforwardness, in which each test is selected randomly regardless of the tests previously generated. However, traditionally, it seems to be inefficient for its random selection of test patterns. Therefore, a new concept of quasi-best distance random testing is proposed in the paper to make it more effective in testing. The new idea is based on the fact that the distance between two adjacent selected test vectors in a test sequence would greatly influence the efficiency of fault testing. Procedures of constructing such a testing sequence are presented and discussed in detail. The new approach has shown its remarkable advantage of fitting in most circuits. Experimental results and mathematical analysis of efficiency are also given to assess the performances of the proposed approach.",2010,0, 4590,A New Approach to Generating High Quality Test Cases,"High quality test cases can effectively detect software errors and ensure software quality. However, except the regular expression-based test generation method, test cases generated from other model-based test generation methods have not contain the whole information of the model, resulting in test inadequacy. And test cases derived from regular expression have the prohibited lengths that cause the sustainable increase of test cost. To obtain high quality test cases, we suggest a new method for test generation by way of regular expression decomposition. Unlike the previous model decomposition techniques, our method lays emphasis on information completeness after regular expression is decomposed. Based on two empirical assumptions, we propose two processes of regular expression decomposition and three decomposition rules. Then we perform a case study to demonstrate our approach. The results show that our approach generates high quality test cases as well as avoids the problem of test complexity.",2010,0, 4591,A Taxonomy for the Analysis of Scientific Workflow Faults,"Scientific workflows generally involve the distribution of tasks to distributed resources, which may exist in different administrative domains. The use of distributed resources in this way may lead to faults, and detecting them, identifying them and subsequently correcting them remains an important research challenge. We introduce a fault taxonomy for scientific workflows that may help in conducting a systematic analysis of faults, so that the potential faults that may arise at execution time can be corrected (recovered from). 
The presented taxonomy is motivated by previous work [4], but has a particular focus on workflow environments (compared to previous work which focused on Grid-based resource management) and demonstrated through its use in Weka4WS.",2010,0, 4592,SQ^(2)E: An Approach to Requirements Validation with Scenario Question,"Adequate requirements validation could prevent errors from propagating into later development phase, and eventually improve the quality of software systems. However, often validating textual requirements is difficult and error prone. We develop a feedback-based requirements validation methodology that provides an interactive and systematic way to validate a requirements model. Our approach is based on the notion of querying a model, which is built from a requirements specification, with scenario questions, in order to determine whether the model's behavior satisfies the given requirements. To investigate feasibility of our approach, we implemented a Scenario Question Query Engine (SQ2E), which uses scenario questions to query a model, and performed a preliminary case study using a real-world application. The results show that the approach we proposed was effective in detecting both expected and unexpected behaviors in a model. We believe that our approach could improve the quality of requirements and ultimately the quality of software systems.",2010,0, 4593,Quality Attributes Assessment for Feature-Based Product Configuration in Software Product Line,"Product configuration based on a feature model in software product lines is the process of selecting the desired features based on customers' requirements. In most cases, application engineers focus on the functionalities of the target product during product configuration process whereas the quality attributes are handled until the final product is produced. However, it is costly to fix the problem if the quality attributes have not been considered in the product configuration stage. The key issue of assessing a quality attribute of a product configuration is to measure the impact on a quality attribute made by the set of functional variable features selected in a configuration. Current existing approaches have several limitations, such as no quantitative measurements provided or requiring existing valid products and heavy human effort for the assessment. To overcome theses limitations, we propose an Analytic Hierarchical Process (AHP) based approach to estimate the relative importance of each functional variable feature on a quality attribute. Based on the relative importance value of each functional variable feature on a quality attribute, the level of quality attributes of a product configuration in software product lines can be assessed. An illustrative example based on the Computer Aided Dispatch (CAD) software product line is presented to demonstrate how the proposed approach works.",2010,0, 4594,An Interaction-Pattern-Based Approach to Prevent Performance Degradation of Fault Detection in Service Robot Software,"In component-based robot software, it is crucial to monitor software faults and deal with them on time before they lead to critical failures. The main causes of software failures include limited resources, component-interoperation mismatches, and internal errors of components. Message-sniffing is one of the popular methods to monitor black-box components and handle these types of faults during runtime. 
However, this method normally causes some performance problems of the target software system because the fault monitoring and detection process consumes a significant amount of resources of the target system. There are three types of overheads that cause the performance degradation problems: frequent monitoring, transmission of a large amount of monitoring-data, and the processing time for fault analysis. In this paper, we propose an interaction-pattern-based approach to reduce the performance degradation caused by fault monitoring and detection in component-based service robot software. The core idea of this approach is to minimize the number of messages to monitor and analyze in detecting faults. Message exchanges are formalized as interaction patterns which are commonly observed in robot software. In addition, important messages that need to be monitored are identified in each of the interaction patterns. An automatic interaction pattern-identification method is also developed. To prove the effectiveness of our approach, we have conducted a performance simulation. We are also currently applying our approach to silver-care robot systems.",2010,0, 4595,Testing Inter-layer and Inter-task Interactions in RTES Applications,"Real-time embedded systems (RTESs) are becoming increasingly ubiquitous, controlling a wide variety of popular and safety-critical devices. Effective testing techniques could improve the dependability of these systems. In this paper we present an approach for testing RTESs, intended specifically to help RTES application developers detect faults related to functional correctness. Our approach consists of two techniques that focus on exercising the interactions between system layers and between the multiple user tasks that enact application behaviors. We present results of an empirical study that shows that our techniques are effective at detecting faults.",2010,0, 4596,An Automatic Testing Approach for Compiler Based on Metamorphic Testing Technique,"Compilers play an important role in software development, and it is quite necessary to perform abundant testing to ensure the correctness of compilers. A critical task in compiler testing is to validate the semantic-soundness property which requires consistence between semantics of source programs and behavior of target executables. For validating this property, one main challenging issue is generation of a test oracle. Most existing approaches fall into two main categories when dealing with this issue: reference-based approaches and assertion-based approaches. All these approaches have their weakness when new programming languages are involved or test automation is required. To overcome the weakness in the existing approaches, we propose a new automatic approach for testing compiler. Our approach is based on the technique of metamorphic testing, which validates software systems via so-called metamorphic relations. We select the equivalence-preservation relation as the metamorphic relation and propose an automatic metamorphic testing framework for compiler. We also propose three different techniques for automatically generating equivalent source programs as test inputs. Based on our approach, we developed a tool called Mettoc. Our mutation experiments show that Mettoc is effective to reveal compilers' errors in terms of the semantic-soundness property. Moreover, the empirical results also reveal that simple approaches for constructing test inputs are not weaker than complicated ones in terms of fault-detection capability. 
We also applied Mettoc in testing a number of open source compilers, and two real errors in GCC-4.4.3 and UCC-1.6 respectively have been detected by Mettoc.",2010,0, 4597,Combinatorial Testing with Shielding Parameters,"Combinatorial testing is an important approach to detecting interaction errors for a system with several parameters. Existing research in this area assumes that all parameters of the system under test are always effective. However, in many realistic applications, there may exist some parameters that can disable other parameters in certain conditions. These parameters are called shielding parameters. Shielding parameters make test cases generated by the existing test model, which uses the Mixed Covering Array (MCA), fail in exposing some potential errors that should be detected. In this paper, the Mixed Covering Array with Shielding parameters (MCAS) is proposed to describe such problems. Then test cases can be generated by constructing MCAS's in three different approaches. According to the experimental results, our test model can generate satisfactory test cases for combinatorial testing with shielding parameters.",2010,0, 4598,Evaluating Mutation Testing Alternatives: A Collateral Experiment,"Mutation testing while being a successful fault revealing technique for unit testing, it is a rather expensive one for practical use. To bridge these two aspects there is a need to establish approximation techniques able to reduce its expenses while maintaining its effectiveness. In this paper several second order mutation testing strategies are introduced, assessed and compared along with weak mutation against strong. The experimental results suggest that they both constitute viable alternatives for mutation as they establish considerable effort reductions without greatly affecting the test effectiveness. The experimental assessment of weak mutation suggests that it reduces significantly the number of the produced equivalent mutants on the one hand and that the test criterion it provides is not as weak as is thought to be on the other. Finally, an approximation of the number of first order mutants needed to be killed in order to also kill the original mutant set is presented. The findings indicate that only a small portion of a set of mutants needs to be targeted in order to be killed while the rest can be killed collaterally.",2010,0, 4599,Using Faults-Slip-Through Metric as a Predictor of Fault-Proneness,"Background: The majority of software faults are present in small number of modules, therefore accurate prediction of fault-prone modules helps improve software quality by focusing testing efforts on a subset of modules. Aims: This paper evaluates the use of the faults-slip-through (FST) metric as a potential predictor of fault-prone modules. Rather than predicting the fault-prone modules for the complete test phase, the prediction is done at the specific test levels of integration and system test. 
Method: We applied eight classification techniques, to the task of identifying fault prone modules, representing a variety of approaches, including a standard statistical technique for classification (logistic regression), tree-structured classifiers (C4.5 and random forests), a Bayesian technique (Naive Bayes), machine-learning techniques (support vector machines and back-propagation artificial neural networks) and search-based techniques (genetic programming and artificial immune recognition systems) on FST data collected from two large industrial projects from the telecommunication domain. Results: Using area under the receiver operating characteristic (ROC) curve and the location of (PF, PD) pairs in the ROC space, the faults slip-through metric showed impressive results with the majority of the techniques for predicting fault-prone modules at both integration and system test levels. There were, however, no statistically significant differences between the performance of different techniques based on AUC, even though certain techniques were more consistent in the classification performance at the two test levels. Conclusions: We can conclude that the faults-slip-through metric is a potentially strong predictor of fault-proneness at integration and system test levels. The faults-slip-through measurements interact in ways that is conveniently accounted for by majority of the data mining techniques.",2010,0, 4600,Clustering Performance on Evolving Data Streams: Assessing Algorithms and Evaluation Measures within MOA,"In today's applications, evolving data streams are ubiquitous. Stream clustering algorithms were introduced to gain useful knowledge from these streams in real-time. The quality of the obtained clusterings, i.e. how good they reflect the data, can be assessed by evaluation measures. A multitude of stream clustering algorithms and evaluation measures for clusterings were introduced in the literature, however, until now there is no general tool for a direct comparison of the different algorithms or the evaluation measures. In our demo, we present a novel experimental framework for both tasks. It offers the means for extensive evaluation and visualization and is an extension of the Massive Online Analysis (MOA) software environment released under the GNU GPL License.",2010,0, 4601,Online Dynamic Control of Secondary Cooling for the Continuous Casting Process,"Aiming to reduce the occurrence of the surface and internal defects in the products, a dynamic secondary cooling control system was developed. The purpose of the system was to keep the surface temperature of the strand constant regardless of changes in casting speed. To accurately predict and control temperature in real time during the continuous casting process, A fast, accurate transient solidification and heat transfer model that serves as a software sensor was developed, which provide feedback to a control system. The control methodology and software suitable for online control of continuous casting cooling process were also designed. In order to maintain the target temperature profile throughout the steel, the new software system continuously read the operating conditions and adjusts the spray-water flow rates in the secondary cooling zone of the caster. 
The control system is demonstrated by simulation and the results show that the control system is capable of running in real time on a billet caster.",2010,0, 4602,Design of Control System for Intelligent Coater of Sealant,"The paper introduces the control system design of a new type of sealant coater, details the function and principle of the system, and describes its software architecture and protection system. The sealant coater is one of the most important parts of the gearbox assembly line. Besides the functions of automatic gluing and automatic fault detection with alarms, it can also communicate with the assembly supervision system over the Internet, so the system can be managed and controlled over the network. By using a Trio motion coordinator, servo motors, etc., it provides a reliable quality guarantee for gearbox gluing. The design of the protection system for various possible malfunctions can avoid damaging the coating mouth, prevent destroying the motor, and increase system stability.",2010,0, 4603,Design of a Image Acquisition System with High Dynamic Range,"This paper presents a multi-sensor image acquisition platform which combines hardware and software and can adjust the dynamic range of the sensor according to changes in the light source. The hardware mainly consists of the image sensor MT9V032 and an FPGA, and uses DMA over the PCIE bus to obtain high-speed data transmission. Based on the High Dynamic Range characteristic of the MT9V032, we propose an algorithm for determining image quality to track changes in light and consequently adjust the dynamic range and photoelectric conversion rate of the image sensor. It can enhance the contrast of the area of interest and improve the image quality.",2010,0, 4604,LavA: An Open Platform for Rapid Prototyping of MPSoCs,"Configurable hardware is becoming increasingly powerful and less expensive. This allows embedded system developers to exploit hardware parallelism in order to improve real-time properties and energy efficiency. However, hardware design, even if performed using high-level hardware description languages, is error-prone and time consuming, especially when designing complex heterogeneous multiprocessor systems. To reduce the time to market for such systems, it is necessary to support the designer with a flexible workflow and methods for efficient reuse of existing components. In software engineering, this is enabled by using model-driven design flows and tools for configuration. In this paper, we describe LavA, a system which adapts these concepts to hardware design. By providing a streamlined toolchain and workflow to rapidly prototype complex, heterogeneous multiprocessor systems-on-chip based on a model-driven approach, developers can reduce turnaround times in design as well as design space exploration.",2010,0, 4605,VirtCFT: A Transparent VM-Level Fault-Tolerant System for Virtual Clusters,"A virtual cluster consists of a multitude of virtual machines and software components that are doomed to fail eventually. In many environments, such failures can result in unanticipated, potentially devastating failure behavior and in service unavailability. The ability of failover is essential to the virtual cluster's availability, reliability, and manageability.
Most of the existing methods have several common disadvantages: they require modifications to the target processes or their OSes, which is usually error-prone and sometimes impractical, and they only target checkpoints of processes, not entire OS images, which limits the areas in which they can be applied. In this paper we present VirtCFT, an innovative and practical fault tolerance system for virtual clusters. VirtCFT is a system-level, coordinated distributed checkpointing fault-tolerant system. It coordinates the distributed VMs to periodically reach a globally consistent state and takes a checkpoint of the whole virtual cluster, including the CPU, memory and disk states of each VM as well as the network communications. When faults occur, VirtCFT will automatically recover the entire virtual cluster to the correct state within a few seconds and keep it running. Superior to all the existing fault tolerance mechanisms, VirtCFT provides a simpler and totally transparent fault-tolerant platform that allows existing, unmodified software and operating systems (version unaware) to be protected from the failure of the physical machine on which they run. We have implemented this system based on the Xen virtualization platform. Our experiments with real-world benchmarks demonstrate the effectiveness and correctness of VirtCFT.",2010,0, 4606,Cross-Layer Design to Merge Structured P2P Networks over MANET,"A peer-to-peer (P2P) network is an alternative to the client/server system for sharing resources, e.g. files. A P2P network is a robust, distributed and fault-tolerant architecture. There are two basic types of P2P networks, structured and unstructured. Each of them has its own applications and advantages. Due to recent advances in wireless and mobile technology, P2P networks can be deployed over mobile ad hoc networks (MANETs). We consider scenarios of a P2P network over MANET where not all nodes are members of the P2P network. Due to the limited radio range and the mobility of nodes in a MANET, network partitioning and merging can occur in the physical network. This can also lead to P2P network partitioning and merging at the overlay layer. When two physical networks merge by coming into communication range of each other, their P2P networks are not connected at the overlay layer, because a P2P network operates at the application layer as an overlay network. That is, the P2P networks are connected in the physical network but disconnected at the overlay layer. To detect this situation and merge these P2P networks at the overlay layer, we extend ODACP, an address auto-configuration protocol. We then propose an approach to efficiently merge P2P networks such that routing traffic is minimized. Considering the limited radio range and mobility of nodes, the simulation results show that CAN over MANET performs better than Chord over MANET in terms of routing traffic and false-negative ratio.",2010,0, 4607,Development on surface defect holes inspection based on image recognition,"A novel method for inspecting surface defect holes is proposed based on both 2D and 3D image processing and recognition. In this method, the first step is to detect the holes in the binary image converted from the 3D image, which is scanned by a 3D laser scanner, and the second step is to confirm the defect holes by dimension calculation using the data of the scanned 3D image.
The software is developed in MATLAB.",2010,0, 4608,A very fast unblocking scheme for distance protection to detect symmetrical faults during power swings,"The power swing blocking function in distance relays is necessary to distinguish between a power swing and a fault. However, the distance relay should be quickly and reliably unblocked if any fault occurs during a power swing. Although unblocking the relay under asymmetrical fault conditions is straightforward, based on detecting the zero- or negative-sequence component of the current, symmetrical fault detection during a power swing presents a challenge since there is no unbalance. This paper presents a very fast method to detect symmetrical faults occurring during a power swing. Based on a 50 Hz component appearing in the three-phase active power after symmetrical fault inception and using the Fast Fourier Transform (FFT), the proposed detection method can reliably and quickly detect symmetrical faults occurring during a power swing within one power cycle, i.e. 0.02 second. This detection method is easy to set and immune to the fault inception time and fault location. Power swing and fault conditions are simulated using the software PSCAD/EMTDC. The FFT is performed using the On-Line Frequency Scanner block included in the software.",2010,0, 4609,Power quality problem classification based on Wavelet Transform and a Rule-Based method,"This paper describes a Wavelet Transform and Rule-Based method for the detection and classification of various power quality disturbance events. In this model, the wavelet Multi-Resolution Analysis (MRA) technique is used to decompose the signal into its various detail and approximation signals, and unique features from the 1st, 4th, 7th and 8th level details are obtained as criteria for classifying the type of disturbance that occurred. These features, together with the duration of the disturbance obtained from the 1st level of detail, form the criteria for a Rule-Based software algorithm for effectively detecting different kinds of power quality disturbances. It is shown in this paper that the choice of sampling frequency is important since it affects the average energy profile of the details and may eventually cause errors in the detection of power quality disturbances. The model is tested using the MATLAB toolbox. The simulation produces satisfactory results in identifying the disturbances and proves that it is possible to use this model for power disturbance classification. Since the method can reduce the number of parameters needed in classification, less memory space and computing time are required for its implementation. Thus it is a suitable model for real-time implementation in a dsPIC-based embedded system.",2010,0, 4610,A proposed Genetic Algorithm to optimize service restoration in electrical networks with respect to the probability of transformers failure,"Power system reliability, stability and efficiency are the most important issues in ensuring the continuous supply of customers. However, as time passes, the system ages, the probability of failures increases, and faults inevitably occur. When a fault occurs, the first reaction is isolation of the faulty area; then, with the aid of software and/or skilled personnel, quick restoration is essential. To minimize the out-of-service area and the restoration time, many methods have been suggested, depending on the objectives and constraints of the restoration strategy.
In many studies a Genetic Algorithm is employed as a powerful tool to solve this multi-objective, multi-constraint optimization problem. Minimizing the out-of-service area, reducing the number of switching operations and minimizing the electrical power loss in the restored system are the main objectives of a restoration plan. In this paper, as transformers are the most expensive and most critical equipment in the electrical network, the increase in failure probability is introduced by the authors as a new constraint in the genetic algorithm. The expected results of this new algorithm should lead to a new restoration plan within permissible ranges of transformer loading with respect to their age, previously experienced faults and condition monitoring.",2010,0, 4611,How dynamic is the Grid? Towards a quality metric for Grid information systems,"Grid information systems play a core role in today's production Grid Infrastructures. They provide a coherent view of the Grid services in the infrastructure while addressing the performance, robustness and scalability issues that occur in dynamic, large-scale, distributed systems. Quality metrics for Grid information systems are required in order to compare different implementations and to evaluate suggested improvements. This paper proposes the adoption of a quality metric, first used in the domain of Web search, to measure the quality of Grid information systems with respect to their information content. The application of this metric requires an understanding of the dynamic nature of Grid information. An empirical study based on information from the EGEE Grid infrastructure is carried out to estimate the frequency of change for different types of Grid information. Using this data, the proposed metric is assessed with regards to its applicability to measuring the quality of Grid information systems.",2010,0, 4612,Analysis and modeling of time-correlated failures in large-scale distributed systems,"The analysis and modeling of the failures bound to occur in today's large-scale production systems is invaluable in providing the understanding needed to make these systems fault-tolerant yet efficient. Many previous studies have modeled failures without taking into account the time-varying behavior of failures, under the assumption that failures are identically, but independently distributed. However, the presence of time correlations between failures (such as peak periods with increased failure rate) refutes this assumption and can have a significant impact on the effectiveness of fault-tolerance mechanisms. For example, the performance of a proactive fault-tolerance mechanism is more effective if the failures are periodic or predictable; similarly, the performance of checkpointing, redundancy, and scheduling solutions depends on the frequency of failures. In this study we analyze and model the time-varying behavior of failures in large-scale distributed systems. Our study is based on nineteen failure traces obtained from (mostly) production large-scale distributed systems, including grids, P2P systems, DNS servers, web servers, and desktop grids. We first investigate the time correlation of failures, and find that many of the studied traces exhibit strong daily patterns and high autocorrelation. Then, we derive a model that focuses on the peak failure periods occurring in real large-scale distributed systems.
Our model characterizes the duration of peaks, the peak inter-arrival time, the inter-arrival time of failures during the peaks, and the duration of failures during peaks; we determine for each the best-fitting probability distribution from a set of several candidate distributions, and present the parameters of the (best) fit. Last, we validate our model against the nineteen real failure traces, and find that the failures it characterizes are responsible on average for over 50% and up to 95% of the downtime of these systems.",2010,0, 4613,Adaptively detecting changes in Autonomic Grid Computing,"Detecting changes is a common issue in many application fields due to the non-stationary distribution of the application data, e.g., sensor network signals, web logs and grid-running logs. Toward Autonomic Grid Computing, adaptively detecting changes in a grid system can help to flag anomalies, clean noise, and report new patterns. In this paper, we propose an approach of self-adaptive change detection based on the Page-Hinkley statistical test. It handles the non-stationary distribution without assuming a data distribution or requiring the empirical setting of parameters. We validate the approach on EGEE streaming jobs, and report its better performance in achieving higher accuracy compared to other change detection methods. Meanwhile, this change detection process can help to discover device faults that were not reported in the system logs.",2010,0, 4614,Validation of CK Metrics for Object Oriented Design Measurement,"Since object-oriented systems are becoming more pervasive, it is necessary that software engineers have quantitative measurements for assessing the quality of designs at both the architectural and component level. These measures allow the designer to assess the software early in the process, making changes that will reduce complexity and improve the continuing capability of the product. Object-oriented design metrics are an essential part of software engineering. This paper presents a case study of applying design measures to assess software quality. Six Java-based open source software systems are analyzed using the CK metrics suite to determine the quality of the systems and possible design faults that adversely affect different quality parameters such as reusability, understandability, testability and maintainability. This paper also presents general guidelines for the interpretation of reusability, understandability, testability and maintainability in the context of the selected projects.",2010,0, 4615,Finite Element Simulation for the Surface Flaw in Tube Open-die Cold Extrusion Process,"The tube open-die cold extrusion process was numerically simulated using the finite element software Deform-3D. The influence of the processing parameters, namely the half die angle and the friction coefficient, on surface crack defects of the tube in the open-die cold extrusion process was analyzed. The simulation results show that the closer to the center of the tube blank, the larger the maximum speed difference between the inner and outer surfaces. Thus surface crack defects are prone to occur at the center of the tube blank. Surface crack defects appear when the half die angle and the friction coefficient increase. Therefore, necessary measures are taken to effectively avoid the appearance of surface crack defects.
Appropriately decreasing the half die angle and ensuring favorable surface treatment and lubrication are effective measures.",2010,0, 4616,The Equipment Development of Detecting the Beer Bottles' Thickness in Real Time,"In recent years, quality requirements for beer bottles have become ever stricter, so annual inspection cannot satisfy the requirements of modern industry; in this paper, we design equipment that can detect the thickness of beer bottles in real time. The measurement system takes the single-chip computer W77E58 as its core and is based on the theory of ultrasonic measurement; the thickness of the inspected beer bottle can be observed directly and the bottle judged as qualified or not through the visualization software. Analysis of the experimental data shows that the ultrasonic testing is relatively stable and can meet the requirements of modern industry.",2010,0, 4617,A Short-Term Prediction for QoS of Web Service Based on Wavelet Neural Networks,"Prediction plays a positive role when people need to choose the best Web Service from numerous services, so a study on predicting the QoS of Web Services is presented in this paper. Concretely, the structure of wavelet neural networks (WNN) and a related algorithm are introduced. Based on this, the WNN is applied to predict the QoS of Web Services, and the functions of the MATLAB toolbox are adopted to create a network model for QoS prediction. Finally, the simulation experiments show that using a WNN to predict the QoS of Web Services is more effective than common back-propagation (BP) neural networks.",2010,0, 4618,A Bayesian Based Method for Agile Software Development Release Planning and Project Health Monitoring,"Agile software development (ASD) techniques are iteration-based, powerful methodologies to deliver high-quality software. To ensure on-time, high-quality software, the impact of factors affecting the development cycle should be evaluated constantly. Quick and precise factor evaluation results in better risk assessment, on-time delivery and optimal use of resources. Such an assessment is easy to carry out for a small number of factors. However, as the number of factors increases, it becomes extremely difficult to assess them in short time periods. We have designed and developed a project health measurement model to evaluate the factors affecting the software development of a project. We used Bayesian networks (BNs) as an approach that provides such an estimation. We present a quantitative model for project health evaluation that helps decision makers make the right decisions early to amend any discrepancy that may hinder on-time and high-quality software delivery.",2010,0, 4619,A Decentralized Approach for Monitoring Timing Constraints of Event Flows,"This paper presents a run-time monitoring framework to detect end-to-end timing constraint violations of event flows in distributed real-time systems. The framework analyzes every event on possible event flow paths and automatically inserts timing fault checks for run-time detection. When the framework detects a timing violation, it provides users with the event flow's run-time path and the time consumption of each participating software module. In addition, it invokes a timing fault handler according to the timing fault specification, which allows our approach to aid the monitoring and management of the deployed systems.
The experimental results show that the framework correctly detects timing constraint violations with insignificant overhead and provides related diagnostic information.",2010,0, 4620,Analysis of in-loop denoising in lossy transform coding,"When compressing noisy image sequences, the compression efficiency is limited by the amount of noise within these image sequences, as the noise part cannot be predicted. In this paper, we investigate the influence of noise within the reference frame on lossy video coding of noisy image sequences. We estimate how much noise is left within a lossy coded reference frame. To this end, we analyze the transform and quantization step inside a hybrid video coder, specifically H.264/AVC. The noise power after transform, quantization, and inverse transform is calculated analytically. We use knowledge of the noise power within the reference frame in order to improve the inter-frame prediction. For noise filtering of the reference frame, we implemented a simple denoising algorithm inside the H.264/AVC reference software JM15.1. We show that the bitrate can be decreased by up to 8.1% compared to the H.264/AVC standard for high resolution noisy image sequences.",2010,0, 4621,Towards Identifying the Best Variables for Failure Prediction Using Injection of Realistic Software Faults,"Predicting failures at runtime is one of the most promising techniques to increase the availability of computer systems. However, failure prediction algorithms are still far from providing satisfactory results. In particular, the identification of the variables that show symptoms of incoming failures is a difficult problem. In this paper we propose an approach for identifying the most adequate variables for failure prediction. Realistic software faults are injected to accelerate the occurrence of system failures and thus generate a large amount of failure-related data that is used to select, among hundreds of system variables, a small set that exhibits a clear correlation with failures. The proposed approach was experimentally evaluated using two configurations based on Windows XP. Results show that the proposed approach is quite effective and easy to use and that the injection of software faults is a powerful tool for improving the state of the art on failure prediction.",2010,0, 4622,A Software Accelerated Life Testing Model,"A software system developed for a specific user under contract undergoes a period of testing by the user before acceptance. This is known as user acceptance testing and is useful for debugging the software in the user's operational circumstances. In this paper we first present a simple non-homogeneous Poisson process (NHPP)-based software reliability model to assess the quantitative software reliability under the user acceptance test, where the idea of an accelerated life testing model is introduced to represent the user's operational phase and to investigate the impact of the user's acceptance test. This idea is applied to the reliability assessment of web applications in a different testing environment, where two stress tests with normal and higher workload conditions are executed in parallel.
Through numerical examples with real software fault data observed in actual user acceptance and stress tests, we show the applicability of the software accelerated life testing model to two different software testing schemes.",2010,0, 4623,Two Efficient Software Techniques to Detect and Correct Control-Flow Errors,"This paper proposes two efficient software techniques, Control-flow and Data Errors Correction using Data-flow Graph Consideration (CDCC) and Miniaturized Check-Pointing (MCP), to detect and correct control-flow errors. These techniques have been implemented based on the addition of redundant code to a given program. The novelty of the methods for online detection and correction of control-flow errors is the use of the data-flow graph alongside the control-flow graph. These techniques first detect most of the control-flow errors in the program and then correct them automatically. Therefore, both control-flow errors and program data errors caused by control-flow errors can be corrected efficiently. In order to evaluate the proposed techniques, a post-compiler is used so that the techniques can be applied to any 8086 binary transparently. Three benchmarks, quick sort, matrix multiplication and linked list, are used, and a total of 5000 transient faults are injected at several executable points in each program. The experimental results demonstrate that at least 93% and 89% of the control-flow errors can be detected and corrected without any data error generation by CDCC and MCP, respectively. Moreover, the strength of these techniques is a significant reduction in the performance and memory overheads compared to traditional methods, along with remarkable correction abilities.",2010,0, 4624,Automatic Static Fault Tree Analysis from System Models,"The manual development of system reliability models such as fault trees can be costly and error-prone in practice. In this paper, we focus on the problems of some traditional dynamic fault trees and present our static solutions to represent dynamic relations such as functional and sequential dependencies. The implementation of a tool for the automatic synthesis of our static fault trees from SysML system models is introduced.",2010,0, 4625,PBTrust: A Priority-Based Trust Model for Service Selection in General Service-Oriented Environments,"How to choose the best service provider (agent), which a service consumer can trust in terms of the quality and success rate of the service in an open and dynamic environment, is a challenging problem in many service-oriented applications such as Internet-based grid systems, e-trading systems, as well as service-oriented computing systems. This paper presents a Priority-Based Trust (PBTrust) model for service selection in general service-oriented environments. PBTrust is robust and novel from several perspectives.
(1) The reputation of a service provider is derived from referees who are third parties that had interactions with the provider, in a rich context format including the attributes of the service, the priority distribution on attributes and a rating value for each attribute from a third party. (2) The concept of 'Similarity' is introduced to measure the difference in the distributions of priorities on attributes between the requested service and a refereed service in order to precisely predict the performance of a potential provider on the requested service. (3) The concept of the general historical performance of a service provider on a service is also introduced to improve the success rate on the requested service. The experimental results show that PBTrust performs better than the CR model in a service-oriented environment.",2010,0, 4626,SProt - from local to global protein structure similarity,"Similarity search in protein databases is one of the most essential issues in proteomics. With the growing number of experimentally solved protein structures, the focus has shifted from sequence to structure. The area of structure similarity poses a big challenge since no standard definition of optimal similarity exists in the field. In this paper, we propose a protein structure similarity method called SProt. SProt concentrates on high-quality modeling of local similarity in the process of feature extraction. SProt's features are based on a spherical spatial neighborhood where similarity can be well defined. On top of the partial local similarities, a global measure assessing the similarity of a pair of protein structures is built. SProt outperforms other methods in classification accuracy, while it is at least comparable to the best existing solutions in terms of precision-recall or quality of alignment.",2010,0, 4627,Joint H.264/SVC-MIMO rate control for wireless video transmission,"In this research, we propose a novel joint H.264/SVC-MIMO rate control (RC) algorithm for video compression and transmission over Multiple Input Multiple Output (MIMO) wireless systems. We first present a system architecture for H.264/SVC compression and transmission over MIMO systems. Then, we use a packet-level two-state Markov model to estimate MIMO channel states and predict the number of retransmitted bits in the presence of automatic repeat request (ARQ). Finally, an efficient joint rate controller is proposed to regulate the output bit rate of each layer. Our extensive simulation results demonstrate that our algorithm can respond to sudden bandwidth fluctuations of MIMO channels and outperforms the JVT-W043 rate control algorithm, adopted in the H.264/SVC reference software, by providing a more accurate output bit rate, reducing buffer overflow, suppressing quality fluctuations, and improving the overall coding quality.",2010,0, 4628,Reliability models and open source software: An empirical study,"Open source communities have successfully developed a great deal of software. Due to its free availability and highly secure operating system environment, it is promoted by most countries all over the world. India has also started contributing in this field. The Government of India is also promoting the usage of open source software due to its economic feasibility and security. Due to its huge demand in the software field, a major concern is its reliability, which is defined as the probability of failure-free software operation for a specified period of time in a given environment.
In this paper, on the basis of a literature survey of open source and reliability, facts regarding open source software as well as different reliability concepts are elaborated. Important models for estimating reliability are studied and the main factors are explained. This paper will be helpful for research scholars doing their research in software reliability.",2010,0, 4629,Sequence-based techniques for black-box test case prioritization for composite service testing,"A web service may evolve autonomously, making peer web services in the same service composition uncertain as to whether the evolved behaviors are still compatible with the original collaborative agreement. Although peer services may wish to conduct regression testing to verify the original collaboration, the source code of the former service can be inaccessible to them. Traditional code-based regression testing strategies are inapplicable to web services. In this paper, we formulate new test case prioritization strategies using sequences in XML messages to reorder regression test cases for composite web services, in contrast to the tag-based techniques given in, and reveal how the test cases use the interface specifications of the composite services. The techniques were evaluated experimentally and the results show that the new techniques have a high probability of outperforming random ordering and the techniques given in.",2010,0, 4630,Towards adaptive web services QoS prediction,"Quality of Service (QoS) has been widely used to support dynamic Web Service (WS) selection and composition. Due to the volatile nature of QoS parameters, QoS prediction has been put forward to understand the trend of QoS data volatility and estimate QoS values in dynamic environments. In order to provide adaptive and effective QoS prediction, we propose a WS QoS prediction approach, named WS-QoSP, based on the technique of forecast combination. Different from the existing QoS prediction approaches that choose the most feasible forecasting model and predict relying on this best model, WS-QoSP selects multiple potential forecasting models and combines the results of the selected models to optimize the overall forecast accuracy. Results of real-data experiments demonstrate the diversified forecast accuracy gains obtained by using WS-QoSP.",2010,0, 4631,Detection of spoofed GPS signals at code and carrier tracking level,"Due to the large number of different new applications based on GNSS systems, the issue of interference monitoring is becoming an increasing concern in the satellite navigation community. Threats to GNSS can be classified as unintentional interference, jamming and spoofing. Among them, spoofing is more deceitful because the target receiver might not be able to detect the attack and consequently generate misleading position solutions. Different kinds of spoofing attacks can be implemented depending on their complexity. The paper analyzes what is known as an intermediate spoofing attack, by means of a spoofer device developed at the Navigation Signal Analysis and Simulation (NavSAS) laboratory. The work focuses on spoofing detection, performed by implementing proper signal quality monitoring techniques at the code and carrier tracking level.",2010,0, 4632,A Hybrid Collaborative Filtering Algorithm Based on User-Item,"Collaborative filtering is one of the most important technologies in e-commerce recommendation systems. Traditional similarity measure methods work poorly when the user rating data are extremely sparse.
Aiming at this issue, a hybrid collaborative filtering algorithm is proposed. This method uses a novel similarity measure to predict the target item rating and fuses the advantages of the user-based and item-based algorithms with a control factor. The experimental results show that this improved algorithm obviously enhances the recommendation accuracy and provides better recommendation quality.",2010,0, 4633,Chest X-ray analysis for an active distributed E-health system with computer-aided diagnosis,"The quality of life of a country's citizens depends greatly on its healthcare system. People have the right to know the status of their health. Healthcare providers need to know the medical histories of patients to offer better treatment. Therefore, the demand for improved access to healthcare information has increased. This paper presents the design of an active distributed E-health system, which is scalable and to which more advanced software can easily be added. Some work on chest X-ray analysis is presented to demonstrate the capabilities of the system as CAD tools for some chest diseases like congestive heart failure, lung collapse, etc. The experimental results obtained with an algorithm to detect early nodules for lung cancer and TP are very encouraging. Data mining and other artificial intelligence techniques may be used to make the system a more active and powerful expert system.",2010,0, 4634,Wide area measurement based out-of-step detection technique,"Electrical power systems function as a huge interconnected network dispersed over a large area. A balance exists between generated and consumed power; any disturbance to this balance caused by changes in load as well as faults and their clearance often results in electromechanical oscillations. As a result there is a variation in power flow between two areas. This phenomenon is referred to as a power swing. This paper uses PMU data to measure the currents and voltages of the three phases of two buses connected to a 400 kV line. The measured data is then used for differentiating between a swing and a fault condition, and if a swing is detected, to predict whether the swing is a stable or an unstable one. The performance of the method has been tested on a simulated system using PSCAD and MATLAB software.",2010,0, 4635,The prediction of software aging trend based on user intention,"Owing to the limitations of traditional time-based and measurement-based software aging trend prediction methods in dealing with sudden large-scale concurrent requests, this paper proposes a new software aging trend prediction method based on user intention. This method predicts the trend of software aging according to the number of user requests for each component during system operation and the software aging damage incurred each time a component is requested. The experiment indicates that, compared with the measurement-based method, this method has higher accuracy in dealing with sudden large-scale concurrent requests.",2010,0, 4636,Diagnosing the root-causes of failures from cluster log files,"System event logs are often the primary source of information for diagnosing (and predicting) the causes of failures for cluster systems. Due to interactions among the system hardware and software components, the system event logs for large cluster systems are comprised of streams of interleaved events, and only a small fraction of the events over a small time span are relevant to the diagnosis of a given failure.
Furthermore, the process of troubleshooting the causes of failures is largely manual and ad-hoc. In this paper, we present a systematic methodology for reconstructing event order and establishing correlations among events which indicate the root-causes of a given failure from very large syslogs. We developed a diagnostics tool, FDiag, that extracts the log entries as structured message templates and uses statistical correlation analysis to establish probable cause-and-effect relationships for the fault being analyzed. We applied FDiag to analyze failures due to breakdowns in interactions between the Lustre file system and its clients on the Ranger supercomputer at the Texas Advanced Computing Center (TACC). The results are positive. FDiag is able to identify the dates and the time periods that contain the significant events which eventually led to the occurrence of compute node soft lockups.",2010,0, 4637,Automating Coverage Metrics for Dynamic Web Applications,"Building comprehensive test suites for web applications poses new challenges in software testing. Coverage criteria used for traditional systems to assess the quality of test cases are simply not sufficient for complex dynamic applications. As a result, faults in web applications can often be traced to insufficient testing coverage of the complex interactions between the components. This paper presents a new set of coverage criteria for web applications, based on page access, use of server variables, and interactions with the database. Following an instrumentation transformation to insert dynamic tracking of these aspects, a static analysis is used to automatically create a coverage database by extracting and executing only the instrumentation statements of the program. The database is then updated dynamically during execution by the instrumentation calls themselves. We demonstrate the usefulness of our coverage criteria and the precision of our approach on the analysis of the popular internet bulletin board system PhpBB 2.0.",2010,0, 4638,Effort-Aware Defect Prediction Models,"Defect Prediction Models aim at identifying error-prone modules of a software system to guide quality assurance activities such as tests or code reviews. Such models have been actively researched for more than a decade, with more than 100 published research papers. However, most of the models proposed so far have assumed that the cost of applying quality assurance activities is the same for each module. In a recent paper, we have shown that this fact can be exploited by a trivial classifier ordering files just by their size: such a classifier performs surprisingly well, at least when effort is ignored during the evaluation. When effort is considered, many classifiers perform not significantly better than a random selection of modules. In this paper, we compare two different strategies to include treatment effort into the prediction process, and evaluate the predictive power of such models. Both models perform significantly better when the evaluation measure takes the effort into account.",2010,0, 4639,SQM 2010: Fourth International Workshop on System Quality and Maintainability,"Software is playing a crucial role in modern societies. Not only do people rely on it for their daily operations or business, but for their lives as well. For this reason, correct and consistent behaviour of software systems is a fundamental part of end user expectations. Additionally, businesses require cost-effective production, maintenance, and operation of their systems.
Thus, the demand for good quality software is increasing, making quality a differentiator for the success or failure of a software product. In fact, high quality software is becoming not just a competitive advantage but a necessary factor for companies to be successful. The main question that arises now is how quality is measured. What, where and when we assess and assure quality are still open issues. Many views have been expressed about software quality attributes, including maintainability, evolvability, portability, robustness, reliability, usability, and efficiency. These have been formulated in standards such as ISO/IEC-9126 and CMMI. However, the debate about quality and maintainability between software producers, vendors and users is ongoing, while organizations need the ability to evaluate the software systems that they use or develop from multiple angles.",2010,0, 4640,InCode: Continuous Quality Assessment and Improvement,"While significant progress has been made over the last ten years in the research field of quality assessment, developers still can't take full advantage of the benefits of these new tools and techniques. We believe that there are at least two main causes for this lack of adoption: (i) the lack of integration in mainstream IDEs and (ii) the lack of support for a continuous (daily) usage of QA tools. In this context we created INCODE as an Eclipse plug-in that transforms quality assessment and code inspections from a standalone activity into a continuous, agile process, fully integrated in the development life-cycle. But INCODE not only assesses continuously the quality of Java systems, it also assists developers in taking restructuring decisions, and even supports them in triggering refactorings.",2010,0, 4641,Predicting grade of prostate cancer using image analysis software,"The prognosis of prostate cancer is determined by using the Gleason grading system. This grading is done based upon the tissue pattern obtained from the tumor, after staining the biopsy with Haematoxylin and Eosin (H&E). Presently, experienced pathologists manually grade prostate cancers subjectively. The grading therefore depends upon the experience of the pathologists, the quality of the staining and various other factors. To overcome this, an image analysis system is developed using MATLAB that can examine the biopsy image and grade it objectively. The size distribution of the sample image is utilized to recognize the pattern. The prediction is based on the pattern of lumen, nuclei and the glandular organization in the representative areas of the biopsy image taken from a microscope. The results obtained show remarkable accuracy and are close to the manual grading scores. This Computer-Adaptive-Diagnosis (CAD) system may be used as a powerful adjunct for effectively diagnosing and grading prostate cancers.",2010,0, 4642,An End-to-End Framework for Business Compliance in Process-Driven SOAs,"It is important for companies to ensure that their businesses conform to relevant policies, laws, and regulations, as the consequences of infringement can be serious. Unfortunately, the divergence and frequent changes of different compliance sources make it hard to systematically and quickly accommodate new compliance requirements due to the lack of an adequate methodology for system and compliance engineering.
In addition, the differences in perception and expertise of the multiple stakeholders involved in system and compliance engineering further complicate the analysis, implementation, and assessment of compliance. For these reasons, in many cases, business compliance today is reached on a per-case basis by using ad hoc, hand-crafted solutions for specific rules with which companies must comply. This leads in the long run to problems regarding the complexity, understandability, and maintainability of compliance concerns in a SOA. To address the aforementioned challenges, we present in this invited paper a comprehensive SOA business compliance software framework that enables a business to express, implement, monitor, and govern compliance concerns.",2010,0, 4643,The Q-ImPrESS Method -- An Overview,"The difficulty in evolving service-oriented architectures with extra-functional requirements seriously hinders the spread of this paradigm in critical application domains. The Q-ImPrESS method offsets this disadvantage by introducing a quality impact prediction, which allows software engineers to predict the consequences of alternative design decisions on the quality of software services and select the optimal architecture without having to resort to costly prototyping. The method takes a wider perspective on software quality by explicitly considering multiple quality attributes (performance, reliability and maintainability), and the typical trade-offs between these attributes. The benefit of using this approach is that it enables the creation of service-oriented systems with predictable end-to-end quality.",2010,0, 4644,Classification of Software Defect Detected by Black-Box Testing: An Empirical Study,"Software defects detected by black-box testing (called black-box defects) are very numerous due to the wide use of black-box testing, but we could not find a defect classification specifically applicable to them among existing defect classifications. In this paper, we present a new defect classification scheme named ODC-BD (Orthogonal Defect Classification for Black-box Defect), and we list the detailed values of every attribute in ODC-BD, especially the 300 detailed black-box defect types. We aim to help black-box defect analyzers and black-box testers improve their analysis and testing efficiency. The classification study is based on 1860 black-box defects collected from 39 industry projects and 2 open source projects. Furthermore, two empirical studies are included to validate the use of our ODC-BD. The results show that our ODC-BD can improve the efficiency of black-box testing and black-box defect analysis.",2010,0, 4645,SXMTool A Tool for Stream X-Machine Testing,"One of the great benefits of using a Stream X-machine (SXM) to specify a system is its associated testing method. Under certain test conditions, this method produces a test suite that can determine the correctness of the implementation under test (IUT). However, the size of the test suite is generally very large, and manual test suite generation is very complex and error-prone. With the increasing application of SXMs in the testing area, developing an automatic support tool is urgent. The paper introduces an algorithm for obtaining the key values and sets, and develops the tool SXMtool, which supports the editing of SXM models and the automatic generation of SXM test suites.
An example of using SXMtool is then given to demonstrate its functionality.",2010,0, 4646,Application of Fuzzy FMECA in Gas Network Safety Assessment,"In order to provide a basis for the safe management of the natural gas pipeline system, the hazard sources in this system are analyzed first, a Fault Tree of frequently occurring serious accidents is then established by assessing the system with the fuzzy FMECA method, and finally appropriate preventive measures may be taken in accordance with the Fault Tree and a FMECA table.",2010,0, 4647,Development and application of data analysis software for transformers PD UWB RF location,"The Ultra-Wideband (UWB) RF partial discharge (PD) location technique is a new method for locating PD sources in power transformers, based on multi-sensor array detection and the Huygens-Fresnel principle applied to the PD electromagnetic radiation signal. In this paper, a classification algorithm for multiple PD sources is proposed, based on a dynamic search in the TDOA (time difference of arrival) sample space; it is able to distinguish multiple PD sources and complete the TDOA classification of every PD source automatically when multiple PD sources are detected simultaneously. Based on the multi-PD-source classification algorithm and the location algorithm derived from the Huygens-Fresnel principle, a data analysis software package for the PD UWB RF location system has been developed in LabVIEW 8.5. It manages multiple test projects using an Access database, establishes the TDOA sample space for every test program, and finally completes the classification of multiple PD sources and the location calculation for every PD source. Meanwhile, multiple functions have been integrated into the software, such as signal processing, report generation, human interface design and so on. Finally, an illustrative example has proved the software's validity.",2010,0, 4648,Estimating design quality of digital systems via machine learning,"Although the design quality of digital systems can be assessed from many aspects, the distribution and density of bugs are two decisive factors. This paper presents the application of machine learning techniques to model the relationship between specified metrics of a high-level design and its associated bug information. By employing the project repository (i.e., high-level design and bug repository), the resultant models can be used to estimate the quality of associated designs, which is very beneficial for the design, verification and even maintenance processes of digital systems. A real industrial microprocessor is employed to validate our approach. We hope that our work can shed some light on the application of software techniques to help improve the reliability of various digital designs.",2010,0, 4649,A Cognitive QoS Method Based on Parameter Sensitivity,"Different applications in the network have different sensitivities to certain QoS parameters. The existing mechanisms cannot modify the packet loss policy exactly according to the needs of the QoS parameters. Therefore, the network will not maximize its overall efficiency. This paper proposes a novel cognitive approach for QoS. It classifies the applications by various combinations of the QoS parameters, and it uses a cognitive layer to modify the packet loss probability. Simulation results show that this approach can reduce the amount of useless packets sent when congestion occurs in the network.
The overall efficiency of useful data transmission can be improved as well. The method can be used by the Internet of Things to improve its message processing and sensor data fusion and to reduce energy consumption.",2010,0, 4650,Using pattern detection techniques and refactoring to improve the performance of ASMOV,"One of the most important challenges in the Semantic Web is ontology matching. Ontology matching is a technology that enables semantic interoperability between structurally and semantically heterogeneous resources on the Web. Despite serious research efforts on ontology matching, matchers still suffer from severe problems with respect to the quality of matching results. Furthermore, most of them take a long time to find the correspondences. The aim of this paper is to improve ontology matching results by adding a preprocessing phase for analyzing the input ontologies. This phase is added in order to solve problems caused by ontology diversity. We select one of the best matchers of the Ontology Alignment Evaluation Initiative (OAEI), Automated Semantic Matching of Ontologies with Verification (ASMOV). In the preprocessing phase, new ontology patterns are detected and refactoring operations are then used to reach assimilated ontologies. Afterward, we applied ASMOV to test our approach on both the original ontologies and their refactored counterparts. Experimental results show that these refactored ontologies are more efficient than the original unrepaired ones with respect to the standard evaluation measures, i.e. Precision, Recall, and F-Measure.",2010,0, 4651,Measuring testability of aspect oriented programs,"Testability design is an effective way to realize fault detection and isolation. It becomes crucial in the case of Aspect-Oriented designs, where control flows are generally not hierarchical but diffuse and distributed over the whole architecture. In this paper, we concentrate on detecting, pinpointing and suppressing potential testability weaknesses of a UML Aspect-class diagram. The attribute significant for design testability is called class interaction: it appears when potentially concurrent client/supplier relationships between classes exist in the system. These interactions point out parts of the design that need to be improved, driving structural modifications or constraint specifications, to reduce the final testing effort. This paper provides an extensive review on the testability of aspect-oriented software and puts forth some relevant information about class-level testability.",2010,0, 4652,The research and development of comprehensive evaluation and management system for harmonic and negative-sequence,"Power quality interference sources inject a large amount of harmonic and negative-sequence current into the power grid in the process of using electricity. Harmonic and negative-sequence currents not only affect the power grid's safe and economical operation but also cause interference to other normal users, which poses a large threat to the power grid and its users. As an effective way to prevent and control power quality interference sources, it is important to evaluate the harmonic and negative-sequence current at the installation point of the interference source and to guide the user in taking measures to control power quality in the design of the electricity-consumption scheme. This paper describes the research and development of a comprehensive evaluation and management system for harmonic and negative-sequence current.
This software has the following functions: harmonic computation, comprehensive assessment of user harmonics and negative sequence, filter design and checking, SVC measurement, and harmonic source database management. The software models the system components, loads and typical harmonic sources under the CIGRE standard with a graphical interface. It models the power supply network by means of a one-line diagram, completes the harmonic calculation by Monte-Carlo or Laguerre polynomial methods, and simulates the distribution of power system harmonics by statistical moments with harmonic power flow techniques. The evaluation of the user's harmonics and negative sequence is based mainly on the GB standards, combined with the access point short-circuit capacity, power capacity and protocol capacity, to calculate the limit values of harmonic and negative-sequence current assigned to the user. According to the typical user's harmonic emission level or measured values, we can calculate the harmonic current generated by the user and carry out the assessment by comparing them. One way to design the filter and carry out the SVC evaluation is to calculate by a custom method. Another way is to feed the existing data into the calculation and checking; the bus voltage waveforms before and after inserting the filter or SVC can then be obtained by graphical virtual operation, and rich and intuitive result reports and graphics can be generated at the same time. This software is easy to operate, and its calculation results are accurate. It is an effective tool to assess and manage power quality.",2010,0, 4653,A hierarchical fault detection and recovery in a computational grid using watchdog timers,"Grid computing basically means applying the resources of individual computers in a network to focus on a single problem or task at the same time. However, the disadvantage of this feature is that the computers actually performing the calculations might not always be trustworthy and may fail periodically. Hence, the larger the number of nodes in the grid, the greater the probability that a node fails. Therefore, in order to execute workflows in a fault-tolerant manner, we adopt fault tolerance and recovery strategies. This paper proposes a method in which an instantaneous snapshot of the local state of the processes within each node is recorded. An efficient algorithm is introduced for the detection of node failures using watchdog timers. For recovery, we make use of a divide-and-conquer algorithm that avoids redoing already completed jobs, enabling faster recovery.",2010,0, 4654,Inspection system for detecting defects in a transistor using Artificial neural network (ANN),"A machine vision system based on an ANN for the identification of defects occurring in transistor fabrication is presented in this paper. The developed intelligent system can identify commonly occurring errors in transistor fabrication. The developed machine vision and ANN module is compared with commercial MATLAB software and the results were found to be satisfactory. This work is broadly divided into four stages, namely the intelligent inspection system, the machine vision module, the ANN module and the inspection expert system. In the first stage, a system with a camera is developed to capture the various segments of the transistor. The second stage is the image processing stage, in which the captured bitmap image of the transistor is filtered and resized to a size acceptable to the developed ANN using Set Partitioning In Hierarchical Trees (SPIHT).
These modified data are given as input to the ANN in the third stage. A generalized ANN with the back-propagation algorithm is used to inspect the transistor. The ANN is trained and the weight values are updated in such a way that the identification error is as small as possible. The output of the ANN is the inspection report. The developed system is explained with a real-time industrial application. Thus, the developed algorithms solve most of the problems in identifying defects in a transistor.",2010,0, 4655,Influence of voltage stability in power quality,"Simulations and analysis of voltage stability and its influence on power quality are presented in this article. Voltage stability concerns the capacity of the power system to maintain an appropriate voltage profile, both in normal operation and in the event of a severe disruption. The methods of assessing stability used here are based on algebraic equations obtained from the power flow model (static methods). From the continuation power flow, PV curves are used to carry out the stability analysis. Such analysis allows evaluating how close the system is to voltage collapse. The software packages produced by CEPEL, ANAREDE and PlotCEPEL, were used for simulation and analysis of the results.",2010,0, 4656,Design and development of a software for fault diagnosis in radial distribution networks,"This paper presents an online fault diagnosis software for primary distribution feeders. The software is written in the DELPHI and C++ languages and its interaction with the operator takes place in a very friendly environment. The input data are the per-phase currents of the feeder, monitored only in the substation. An artificial immune system was developed using the negative selection algorithm to detect and classify the faults. The fault location is identified by a genetic algorithm which is triggered by the negative selection algorithm. The main application of the software is to assist the operation during a fault and to supervise the protection system. A 103-bus non-transposed real feeder is used to evaluate the proposed software. The results show that the software is effective for diagnosing all types of faults involving short-circuits and has great potential for online applications.",2010,0, 4657,Logical method for detecting faults by fault detection table,"An algebro-logic vector method for diagnosing faults of systems and their components, based on the use of a fault detection table and a transactional graph, is proposed. The method allows decreasing the verification time of the software model.",2010,0, 4658,Color detection for vision machine defect inspection on electronic devices,"This paper presents a recent innovation introduced by Ismeca in our novel vision platform, NativeNET, for the detection of surface defects in electronic device packages due to discoloration, which could not be detected before. Up to now, mainly due to cost and processing-time constraints, most inspection vision systems have worked with monochrome images. Moreover, there is a need from the semiconductor packaging industry for new smart inspection that can detect more defects.",2010,0, 4659,Yield model for estimation of yield impact of semiconductor manufacturing equipment,"A yield model was developed allowing the calculation of yield using defect density data of manufacturing equipment.
The approach allows studying impact of semiconductor manufacturing equipment on yield, to calculate and monitor yield during semiconductor manufacturing and predicting yield based on real time input of semiconductor manufacturing equipment regarding failures, defect density etc. The yield model bases on generic flows of manufacturing processes. The model assigns each functional layer a yield loss during the sequence of manufacturing steps. The yield model was implemented in a software code. The software was used to study yield impact of specific equipment for different technologies and products.",2010,0, 4660,The alice data quality monitoring system,"ALICE (A Large Ion Collider Experiment) is a heavy-ion detector designed to study the physics of strongly interacting matter and the quark-gluon plasma at the CERN Large Hadron Collider (LHC). Due to the complexity of ALICE in terms of number of detectors and performance requirements, data quality monitoring (DQM) plays an essential role in providing an online feedback on the data being recorded. It intends to provide operators with precise and complete information to quickly identify and overcome problems, and, as a consequence, to ensure acquisition of high quality data. DQM typically involves the online gathering of data samples, their analysis by user-defined algorithms and the visualization of the monitoring results. In this paper, we illustrate the final design of the DQM software framework of ALICE, AMORE (Automatic Monitoring Environment), and its latest features and developments. We describe how this system is used to monitor the event data coming from the ALICE detectors allowing operators and experts to access a view of monitoring elements and to detect potential problems. Important features include the integration with the offline analysis and reconstruction framework and the interface with the electronic logbook that makes the monitoring results available everywhere through a web browser. Furthermore, we show the advantage of using multi-core processors through a parallel images/results production and the flexibility of the graphic user interface that gives to the user the possibility to apply filters and customize the visualization. We finally review the wide range of usage people make of this framework, from the basic monitoring of a single sub-detector to the most complex ones within the High Level Trigger farm or using the Prompt Reconstruction. We also describe the various ways of accessing the monitoring results. We conclude with our experience, after the LHC restart, when monitoring the data quality in a real-world and challenging environment.",2010,0, 4661,Quality Assurance and Data Quality Monitoring for the ALICE Silicon Drift Detectors,"In this paper, the Quality Assurance (QA) and Data Quality Monitoring (DQM) of the ALICE Silicon Drift Detectors will be discussed. The Quality Assurance functionality has been implemented as part of AliRoot, the ALICE software framework, so as to use the same code in offline and online mode. The QA system manages the three sub-detectors of the Inner Tracking System (ITS) in a modular way, in order to run the Quality Assurance of all or just one of them, creating the QA distributions for the selected detector(s). The ITS-QA framework can also be used for the online monitoring of the sub-detectors thanks to its interface to the AMORE (Automatic MonitoRing Environment) Data Quality Monitoring (DQM) framework, in turn interfaced to the Data Acquisition System.
The online mode is steered by a specific subdetector agent that makes use of functionality provided by AMORE to connect and receive events from the Data Acquisition System, invokes AliRoot code for their reconstruction and analysis and for filling the QA distributions and then handles these distributions to AMORE that publishes them to its Database. A dedicated GUI allows the operators to retrieve and display the subdetector QA distributions from the AMORE Database. The SDD QA and DQM are fully operational since the beginning of the ALICE data taking and are important tools to assess the data quality both in real time and in the offline analysis.",2010,0, 4662,A complete data recording and reporting system for the EU commercial fishing fleets,"Fisheries authorities and scientists agree that the management of fisheries is dependent on good quality data, and that such data is of critical importance in the light of declining fish stocks, worldwide. Historically, however, reliable data has been largely unavailable, due to a culture of protecting catch data amongst fishers, fishing companies and even formal state-run offices, and also because data collection was paper-based, which is unreliable, prone to error, and lacking in suitable controls for ensuring data quality. Electronic, real-time reporting of vessel activity is expected to overcome many of these deficiencies and the EU has published regulations requiring fishing vessels to electronically record and report fishing activities. Olfish Dynamic Data Logger is a software solution that has been developed by the South African company, OLRAC, in order to meet EU regulations and to provide the EU fishing fleet with an EU-compliant electronic logbook. The development project sought to meet EU requirements as well as to improve the likelihood of improved data quality through a number of innovative features that offer significant value to the fishing community. This paper discusses the project and the solution developed, as well as the lessons learnt in the development process.",2010,0, 4663,Step Up Transformer online monitoring experience in Tucurui Power Plant,"The Step Up Transformers at the Tucurui Hydroelectric Power Plant are very important for the National Interconnected System (SIN). Due to that, and due to the severe work conditions, Eletrobras Eletronorte has always kept a rigorous preventive maintenance program for these equipments. However, transformer failure history in the first powerhouse (older ones) led to the implantation of the online monitoring system, in order to detect the defects when they start, and mitigate the risks even more. System installation started in 2006, with sensors and software. Four transformers which were already operating began to be monitored and implantation for three more was in progress, taking advantage of the modularity and expandability features of the decentralized architecture used. The Architecture and the solutions applied in system implantation, as well as the results obtained, will be described in this paper. Some of the goals successfully attained were easier insurance negotiation for some equipment and more safety for the personnel, the equipment and the facility.",2010,0, 4664,Supporting evidence-based Software Engineering with collaborative information retrieval,"The number of scientific publications is constantly increasing, and the results published on Empirical Software Engineering are growing even faster. 
Some software engineering publishers have began to collaborate with research groups to make available repositories of software engineering empirical data. However, these initiatives are limited due to issues related to the available search tools. As a result, many researchers in the area have adopted a semi-automated approach for performing searches for systematic reviews as a mean to extract empirical evidence from published material. This makes this activity labor intensive and error prone. In this paper, we argue that the use of techniques from information retrieval, as well as text mining, can support systematic reviews and improve the creation of repositories of SE empirical evidence.",2010,0, 4665,On trust guided collaboration among cloud service providers,"Cloud computing has emerged as a popular paradigm that offers computing resources (e.g. CPU, storage, bandwidth, software) as scalable and on-demand services over the Internet. As more players enter this emerging market, a heterogeneous cloud computing market is expected to evolve, where individual players will have different volumes of resources, and will provide specialized services, and with different levels of quality of services. It is expected that service providers will thus, besides competing, also collaborate to complement their resources in order to improve resource utilization and combine individual services to offer more complex value chains and end-to-end solutions required by the customers. It is challenging to select suitable partners in a decentralized setting due to various factors such as lack of global coordination or information, as well as diversity and scale. Trust is known to play an important role in promoting cooperation in many decentralized settings including the society at large, as well as on the Internet, e.g., in e-commerce, etc. In this paper, we explore how trust can promote collaboration among service providers. The novelty of our approach is a framework to combine disparate trust information - from direct interactions and from (indirect) references among service providers, as well as from customer feedbacks, depending on availability of these different kinds of information. Doing so provides decision making guidance to service providers to initialize collaborations by selecting trustworthy partners. Simulation results demonstrate the promise of our approach by showing that compared to random selection, our proposal can help effectively select trustworthy collaborators to achieve better quality of services.",2010,0, 4666,PINCETTE Validating changes and upgrades in networked software,"Summary form only given. PINCETTE is a STREP project under the European Community's 7th Framework Programme [FP7/2007-2013]. The project focuses on detecting failures resulting from software changes, thus improving the reliability of networked software systems. The goal of the project is to produce technology for efficient and scalable verification of complex evolving networked software systems, based on integration of static and dynamic analysis and verification algorithms, and the accompanying methodology. The resulting technology will also provide quality metrics to measure the thoroughness of verification. 
The PINCETTE consortium is composed of the following partners: IBM Israel, University of Oxford, Universita della Svizzera Italiana (USI), Universita degli Studi di Milano-Bicocca (UniMiB), Technical Research Center of Finland (VTT), ABB, and Israeli Aerospace Industries (IAI).",2010,0, 4667,Analyzing personality types to predict team performance,"This paper presents an approach in analyzing personality types, temperament and team diversity to determine software engineering (SE) teams performance. The benefits of understanding personality types and its relationships amongst team members are crucial for project success. Rough set analysis was used to analyze Myers-Briggs Type Indicator (MBTI) personality types, Keirsey temperament, team diversity, and team performance. The result shows positive relationships between these attributes.",2010,0, 4668,Test effort optimization by prediction and ranking of fault-prone software modules,"Identification of fault-prone or not fault-prone modules is very essential to improve the reliability and quality of a software system. Once modules are categorized as fault-prone or not fault-prone, test effort are allocated accordingly. Testing effort and efficiency are primary concern and can be optimized by prediction and ranking of fault-prone modules. This paper discusses a new model for prediction and ranking of fault-prone software modules for test effort optimization. Model utilizes the classification capability of data mining techniques and knowledge stored in software metrics to classify the software module as fault-prone or not fault-prone. A decision tree is constructed using ID3 algorithm for the existing project data. Rules are derived form the decision tree and integrated with fuzzy inference system to classify the modules as either fault-prone or not fault-prone for the target data. The model is also able to rank the fault-prone module on the basis of its degree of fault-proneness. The model accuracy are validated and compared with some other models by using the NASA projects data set of PROMOSE repository.",2010,0, 4669,Effect of Class-IV power supply failure frequency on Core Damage Frequency (CDF),"In India, grid disturbance is a major cause of plant transients in nuclear power plants, and thus having an impact on plant safety. With better regulation and load management, the frequency of grid disturbance has come down substantially with time. Nevertheless the plant transients initiated by Class-IV power supply failure are experienced regularly. Further the incapability of emergency diesel generators' to restore the power supply following Class-IV failure raises several safety issues. Similarly, the maintenance down time of diesel generators either planned or unplanned contributes to the system unavailability. Realizing the significance of Class-IV power supply for nuclear power plants, due weightage is given to the event sequence progression initiated due to Class-IV failure by incorporating a dedicated event tree while estimating Core Damage Frequency (CDF) in Probabilistic Safety Assessment (PSA) Level-I. At MAPS, in the recently carried out PSA Level-I analysis, Class-IV failure was observed to be the 6th most significant contributor to the CDF. On a closure scrutiny it was observed that the Class-IV failure frequencies were high till nineties due to poor grid conditions which came down subsequently through the concerted efforts of grid authorities. 
In this paper, we have presented a parametric study to assess the sensitivity of CDF to Class-IV failure by using the PSA software package RISKSPECTRUM with the observed Class-IV related failure data for the entire operating year of MAPS station. It has been observed that with time the contribution of Class-IV power supply failure to CDF has got reduced. The Class-IV power supply failure frequency has hardly any effect on CDF, inferring that any external cause like this has no significant impact on CDF.",2010,0, 4670,Improving operational reliability of Indus accelerators by implementation of EPICS based Control System for Microtron injector,"The Experimental Physics and Industrial Control System (EPICS) is a comprehensive set of software tools for creating control applications. The home-grown VME infrastructure at RRCAT, along with LabVIEW was used for the control of common injector of Indus rings i.e. Microtron. Increasing demands and continuous evolution of the system entailed upgrade of the control system. An EPICS based control system is recently commissioned and deployed for Microtron that renders enhanced SCADA functionalities. The SoftIOC running on Linux, talks to a VME station, an oscilloscope, a digital teslameter, a temperature scanner and an RF synthesizer on RS232 and TCP/IP. The OPI runs EDM on Linux. This paper discusses the operational improvements achieved in the control system by upgrading to EPICS. The reliability of the system is further enhanced by modules like Fault Diagnostics and Cathode Emission Auto-correction. The fault diagnostics module predicts anomalies in the system behavior and eases fault troubleshooting. The cathode emission auto-correction is a closed loop control for electron emission from the cathode. This paper also presents a system optimization perspective on hardware and software aspects chosen for the new system, and the design & implementation constraints on Windows and Linux.",2010,0, 4671,Reliability comparison of computer based core temperature monitoring system with two and three thermocouples per sub-assembly for Fast Breeder Reactors,"Prototype Fast Breeder Reactor (PFBR) is a mixed oxide fuelled, sodium cooled, 500 MWe, pool type fast breeder reactor under construction at Kalpakkam, India. The reactor core consists of fuel pins assembled in a number of hexagonal shaped, vertically stacked SubAssemblies (SA). Sodium flows from the bottom of the SAs, takes heat from the fission reaction, comes out through the top. Reactor protection systems are provided to trip the reactor in case of design basis events which may cause the safety parameters (like clad, fuel and coolant temperature) to cross their limits. Computer based Core temperature monitoring system (CTMS) is one of the protection systems. CTMS for PFBR has two thermocouples (TC) at the outlet of each SA(other than central SA) to measure coolant outlet temperature, three TC at central SA outlet and six thermocouples to measure coolant inlet temperature. Each thermocouple at SA outlet is electronically triplicated and fed to three computer systems for further processing and generate reactor trip signal whenever necessary. Since the system has two sensors per SA and three processing units the redundancy provided is not independent. A study is done to analyze the reliability implications of providing three thermocouples at the outlet of each SA and thereby feed independent thermocouple signals to three computer systems. 
Failure data derived from fast reactor experiences and from reliability prediction methods provided by handbooks are used. Fault trees are built for the existing CTMS system with two TC per SA and for the proposed system with three TC per SA. Failure probability upon demand and spurious trip rates are estimated as reliability indicators. Since the computer systems have software intelligence to sense invalid field inputs, not all sensor failures would directly affect the system probability to fail upon a demand. For instance, the coolant outlet temperature cannot be lower than the coolant inlet temperature. This intelligence is taken into account by assuming different fault coverage percentage and comparing the results. A 100% fault coverage means the software algorithm could detect all of the possible thermocouple faults. It was found that the system probability to fail upon demand is reduced in the new independent system but the spurious trip rate is slightly worse. The diagnostic capability is marginally affected due to complete independence. The paper highlights how an intelligent computer based safety system poses difficulties in modeling and the checks and balances between an interlinked and independent redundancy.",2010,0, 4672,Regulatory review of computer based systems: Indian perspectives,"The use of state of art digital instrumentation and control (I&C) in safety and safety related systems in nuclear power plants has become prevalent due to the performance in terms of accuracy, computational capabilities and data archiving capability for future diagnosis. Added advantages in computer based systems are fault tolerance, self-testing, signal validation capability and process system diagnostics. But, uncertainty exists about the quality, reliability and performance of such software based nuclear instrumentation which poses new challenges for the industry and regulators in using them for safety and safety related systems. To obtain adequate confidence in licensing them for use in NPPs, CBS were deployed gradually from monitoring system to control system (i.e, non-safety, safety related & lastly safety systems). Based upon the experience over a decade, AERB safety guide AERB/SGID-25 was prepared to prescribe the criteria and requirements to assess the qualitative reliability of such software based nuclear instrumentation. This paper describes the regulatory review and audit process as required by the above guide. Further, Software Configuration Management (SCM) is an important item during life cycle of CBS, whether it is design phase or operating phase. Configuration control becomes necessary due to operation feedback, introduction of additional features and due to obsolescence. Therefore configuration control during operating phase for CBS becomes all the more important. This paper elaborates on the regulatory approach adopted by AERB for regulatory review and control of design modifications in operating phase of NPPs. This paper also covers a case study of AERB audit on verification & validation activities for software based safety and safety related systems used in an Indian plant.",2010,0, 4673,Software reliability estimation through black box and white box testing at prototype level,"Software reliability refers to the probability of failure-free operation of a system. It is related to many aspects of software, including the testing process. Directly estimating software reliability by quantifying its related factors can be difficult.
Testing is an effective sampling method to measure software reliability. Guided by the operational profile, software testing (usually black-box testing) can be used to obtain failure data, and an estimation model can be further used to analyze the data to estimate the present reliability and predict future reliability. White box testing is based on inter-component interactions which deal with probabilistic software behavior. It uses an internal perspective of the system to design test cases based on internal structure at requirements and design phases. This paper has been applied for evolution of effective reliability quantification analysis at prototype level of a financial application case study with both failure data test data of software Development Life cycle (SDLC) phases captured from defect consolidation table in the form orthogonal defect classification as well functional requirements at requirement and design phases captured through software architectural modeling paradigms.",2010,0, 4674,"Clone detection: Why, what and how?","Excessive code duplication is a bane of modern software development. Several experimental studies show that on average 15 percent of a software system can contain source code clones - repeatedly reused fragments of similar code. While code duplication may increase the speed of initial software development, it undoubtedly leads to problems during software maintenance and support. That is why many developers agree that software clones should be detected and dealt with at every stage of software development life cycle. This paper is a brief survey of current state-of-the-art in clone detection. First, we highlight main sources of code cloning such as copy-and-paste programming, mental code patterns and performance optimizations. We discuss reasons behind the use of these techniques from the developer's point of view and possible alternatives to them. Second, we outline major negative effects that clones have on software development. The most serious drawback duplicated code have on software maintenance is increasing the cost of modifications - any modification that changes cloned code must be propagated to every clone instance in the program. Software clones may also create new software bugs when a programmer makes some mistakes during code copying and modification. Increase of source code size due to duplication leads to additional difficulty of code comprehension. Third, we review existing clone detection techniques. Classification based on used source code representation model is given in this work. We also describe and analyze some concrete examples of clone detection techniques highlighting main distinctive features and problems that are present in practical clone detection. Finally, we point out some open problems in the area of clone detection. Currently questions like """"What is a code clone?"""", """"Can we predict the impact clones have on software quality"""" and """"How can we increase both clone detection precision and recall at the same time? """" stay open to further research. We list the most important questions in modern clone detection and explain why they continue to remain unanswered despite all the progress in clone detection research.",2010,0, 4675,Defect detection for multithreaded programs with semaphore-based synchronization,"The solution to the problem of automatic defects detection in multithreaded programs is covered in this paper. General approaches for defect detection are considered.
Static analysis is chosen because of its full automation and soundness properties. Overview of papers about static analysis usage for defect detection in parallel programs is presented. The approach for expansion of static analysis algorithms to multithreaded programs is suggested. This approach is based on Thread Analysis Algorithm. Thread analysis algorithm provides analysis of threads creation and thread-executed functions. This algorithm uses static analysis algorithm results in particular to identify semaphore objects. Thread analysis algorithm and static analysis algorithms are processing jointly. Thread analysis algorithm interprets thread control functions calls (create, join, etc.) and synchronization functions calls (wait, post, etc.). The algorithm determines program blocks which may execute in parallel and interaction pairs of synchronization functions calls. This information is taking into consideration to analyze threads cooperation and detect synchronization errors. To analyze threads cooperation this algorithm uses join of shared objects values in φ-functions. Basic rules of thread analysis algorithm are considered in the paper. Application of these rules to multithreaded program example is presented. The suggested approach allows us to detect all single-threaded program defect types and some synchronization errors such as Race condition or Deadlock. This approach gives sound results. It obtains analysis of programs with any number of semaphores and threads. It is possible to analyze dynamically created threads. The approach can be extended to other classes of parallel programs and other types of synchronization objects.",2010,0, 4676,Header-driven generation of sanity API tests for shared libraries,"There are thousands of various software libraries being developed in the modern world - completely new libraries emerge as well as new versions of existing ones regularly appear. Unfortunately, developers of many libraries focus on developing functionality of the library itself but neglect ensuring high quality and backward compatibility of application programming interfaces (APIs) provided by their libraries. The best practice to address these aspects is having an automated regression test suite that can be regularly (e.g., nightly) run against the current development version of the library. Such a test suite would ensure early detection of any regressions in the quality or compatibility of the library. But developing a good test suite can cost significant amount of efforts, which becomes an inhibiting factor for library developers when deciding QA policy. That is why many libraries do not have a test suite at all. This paper discusses an approach for low cost automatic generation of basic tests for shared libraries based on the information automatically extracted from the library header files and additional information about semantics of some library data types. Such tests can call APIs of target libraries with some correct parameters and can detect typical problems like crashes out-of-the-box. Using this method significantly lowers the barrier for developing an initial version of library tests, which can be then gradually improved with a more powerful test development framework as resources appear.
The method is based on analyzing API signatures and type definitions obtained from the library header files and creating parameter initialization sequences through comparison of target function parameter types with other functions' return values or out-parameters (usually, it is necessary to call some function to get a correct parameter value for another function and the initialization sequence of the necessary function calls can be quite long). The - - paper also describes the structure of a tool that implements the proposed method for automatic generation of basic tests for Linux shared libraries (for C and C++ languages). Results of practical usage of the tool are also presented.",2010,0, 4677,Implementation by capture with executable UML,"Despite all the progress and raise of abstraction involved in the software development, the main building block of each software system is still the traditional, mostly manually written, """"line of code"""". This paper is about the solution to low-level model behavior """"coding"""", applied by my team in the context of development of the executable UML tool """"Enterprise Analyst"""". Writing computer programs requires knowledge and skill. Programmers must learn and memorize a large knowledge base of rules and must be able to """"foresee """" the execution of code, while writing it. In addition, manual codification is highly error prone process. Any improvement in the realization of this essential software activity will naturally bring improvement in resulting software system as a whole. We have designed and implemented a variation of the technique known as """"programming by example"""", with the basic idea in """"teaching """" the machine how to perform sequence of tasks instead of """"programming"""" it.",2010,0, 4678,Practical review of software requirements,"Quality of the requirements is more important than quality of any other work document of the software lifecycle. On the other hand, typical requirements quality assurance methods, such as peer review are always costly and often detect only formal and cosmetic defects. According to Luxoft experience, review is more effective when it is combined with practical validation of the requirements. The reviewers should not go through a checklist with abstract non-ambiguity, verifiability, or feasibility,.. criteria but should generate draft implementations of the requirements instead, to see if they can be really put into design, test cases, and user documentation. The approach improves quality and non-volatility of the requirements, decreases rework rate on the subsequent phases, and yet does not affect project budget.",2010,0, 4679,Source code modification technology based on parameterized code patterns,"Source code modification is one of the most frequent operations which developers perform in software life cycle. Such operation can be performed in order to add new functionality, fix bugs or bad code style, optimize performance, increase readability, etc. During the modification of existing source code developer needs to find parts of code, which meet to some conditions, and change it according to some rules. Usually developers perform such operations repeatedly by hand using primitive search/replace mechanisms and """"copy and paste programming"""", and that is why manual modification of large-scale software systems is a very error-prone and time-consuming process. 
Automating source code modifications is one of the possible ways of coping with this problem because it can considerably decrease both the amount of errors and the time needed in the modification process. Automated source code modification technique based on parameterized source code patterns is considered in this article. Intuitive modification description that does not require any knowledge of complex transformation description languages is the main advantage of our technique. We achieve this feature by using a special source code pattern description language which is closely tied with the programming language we're modifying. This allows developers to express the modification at hand as simple """"before """"/""""after"""" source code patterns very similar to source code. Regexp-like elements are added to the language to increase its expressional power. The source code modification is carried out using difference abstract syntax trees. We build a set of transformation operations based on """"before""""/""""after"""" patterns (using algorithm for change detection in hierarchically structured information) and apply them to those parts of source code that match with the search """"before """" pattern. After abstract syntax tree transformation is completed we pretty-print them back to source code. A prototype of source code modification system based on this technique has been implemented for the Java programming language. Experimental results show that this technique in some cases can increase the speed of source code modifications by several orders of magnitude, at the same time completely avoiding """"copy-and paste """" errors. In future we are planning to integrate prototype with existing development environments such as Eclipse and NetBeans.",2010,0, 4680,Diagnostic systems and resource utilization of the ATLAS high level trigger,"With the first year of successful data taking of proton-proton collisions at LHC, the full chain of the ATLAS Trigger and Data Acquisition System could be tested under real conditions. The trigger monitoring and data quality infrastructure was essential to this success. We describe the software tools used to monitor the trigger system performance, assess the overall quality of the trigger selection and analyze the resource utilization during collision runs. Monitoring the performance and operation of these systems required smooth and parallel running of many complex software tools depending on each other. These are the basis for rate measurements, data quality determination of selected objects and supervision of the system during the data taking. Based on the data taking experience with first collisions, we describe the ATLAS trigger monitoring and operations performance.",2010,0, 4681,Detector response function of the NanoPETTM/CT system,"Even with the huge advance of computers and computing power, full three dimensional reconstructions of PET and CT scans belong to the future as far as commercially available scanners and software are concerned. Recent investigations have shown that with the aid of Graphical Processing Units (GPUs) extremely high computational speed might be achieved, which lends itself to the implementation of iterative 3D reconstruction techniques. Moreover, these techniques make it possible to make use of off-line and on-the-fly Monte Carlo calculations. A consortium of several Hungarian institutions has been working on the development and optimization of a Monte Carlo supported 3D iterative reconstruction program.
The in-body scatter processes are modeled by real-time Monte Carlo, however, the detector response is calculated off-line. Therefore, the effective implementation of this MC code requires the calculation of a detector response function in advance. The paper describes our analysis of the response function characteristics of the NanoPETTM (Mediso) detector system. Using MCNPX we constructed a data base consisting of 300 simulations (different incoming photon angles and energies). We studied the sensitivity of the system to several parameters. It was found that the spatial dependence is stronger than the energy dependence. At right angle (at 511 keV) the side-neighbors have an order of magnitude less probability compared to the central pixel, while for photons reaching the central pixel with 350 keV or 511 keV there is only 40% difference in the probabilities. We studied different techniques to include the response function into the MC code mentioned in order to find the optimal strategy. With the approach described in the paper a significant improvement in image quality can be obtained.",2010,0, 4682,Design considerations for a high voltage compact power transformer,"This paper presents a new topology for a high voltage 50kV, high frequency (HF) 20kHz, multi-cored transformer. The transformer is suitable for use in pulsed power application systems. The main requirements are: high voltage capability, small size and weight. The HV, HF transformer is a main critical block of a high frequency power converter system. The transformer must have high electrical efficiency and in the proposed approach has to be optimized by the number of the cores. The transformer concept has been investigated analytically and through software simulations and experiments. This paper introduces the transformer topology and discusses the design procedure. Experimental measurements to predict core losses are also presented.",2010,0, 4683,Maturity model for process of academic management,"The segment of education in Brazil, especially in higher education, has undergone major changes. The search for professionalization, for cost reduction and process standardization has led many institutions to associate through partnerships or acquisitions. On the other hand, maturity models have been used very successfully in several areas of knowledge especially in software development, as an approach for quality models. This paper presents a methodology for assessing the maturity of academic process management in private institutions of higher education that, initially, aims the Brazilian market, but its idea can be applied to a global maturity model of academic process management.",2010,0, 4684,"Measuring complexity, effectiveness and efficiency in software course projects","This paper discusses results achieved in measuring complexity, effectiveness and efficiency, in a series of related software course projects, spanning a period of seven years. We focus on how the complexity of those projects was measured, and how the success of the students in effectively and efficiently taming that complexity was assessed. This required defining, collecting, validating and analyzing several indicators of size, effort and quality; their rationales, advantages and limitations are discussed. The resulting findings helped to improve the process itself.",2010,0, 4685,A discriminative model approach for accurate duplicate bug report retrieval,"Bug repositories are usually maintained in software projects. 
Testers or users submit bug reports to identify various issues with systems. Sometimes two or more bug reports correspond to the same defect. To address the problem with duplicate bug reports, a person called a triager needs to manually label these bug reports as duplicates, and link them to their """"master"""" reports for subsequent maintenance work. However, in practice there are considerable duplicate bug reports sent daily; requesting triagers to manually label these bugs could be highly time consuming. To address this issue, recently, several techniques have be proposed using various similarity based metrics to detect candidate duplicate bug reports for manual verification. Automating triaging has been proved challenging as two reports of the same bug could be written in various ways. There is still much room for improvement in terms of accuracy of duplicate detection process. In this paper, we leverage recent advances on using discriminative models for information retrieval to detect duplicate bug reports more accurately. We have validated our approach on three large software bug repositories from Firefox, Eclipse, and OpenOffice. We show that our technique could result in 17-31%, 22-26%, and 35-43% relative improvement over state-of-the-art techniques in OpenOffice, Firefox, and Eclipse datasets respectively using commonly available natural language information only.",2010,0, 4686,Has the bug really been fixed?,"Software has bugs, and fixing those bugs pervades the software engineering process. It is folklore that bug fixes are often buggy themselves, resulting in bad fixes, either failing to fix a bug or creating new bugs. To confirm this folklore, we explored bug databases of the Ant, AspectJ, and Rhino projects, and found that bad fixes comprise as much as 9% of all bugs. Thus, detecting and correcting bad fixes is important for improving the quality and reliability of software. However, no prior work has systematically considered this bad fix problem, which this paper introduces and formalizes. In particular, the paper formalizes two criteria to determine whether a fix resolves a bug: coverage and disruption. The coverage of a fix measures the extent to which the fix correctly handles all inputs that may trigger a bug, while disruption measures the deviations from the program's intended behavior after the application of a fix. This paper also introduces a novel notion of distance-bounded weakest precondition as the basis for the developed practical techniques to compute the coverage and disruption of a fix. To validate our approach, we implemented Fixation, a prototype that automatically detects bad fixes for Java programs. When it detects a bad fix, Fixation returns an input that still triggers the bug or reports a newly introduced bug. Programmers can then use that bug-triggering input to refine or reformulate their fix. We manually extracted fixes drawn from real-world projects and evaluated Fixation against them: Fixation successfully detected the extracted bad fixes.",2010,0, 4687,Mining API mapping for language migration,"To address business requirements and to survive in competing markets, companies or open source organizations often have to release different versions of their projects in different languages. Manually migrating projects from one language to another (such as from Java to C#) is a tedious and error-prone task. To reduce manual effort or human errors, tools can be developed for automatic migration of projects from one language to another. 
However, these tools require the knowledge of how Application Programming Interfaces (APIs) of one language are mapped to APIs of the other language, referred to as API mapping relations. In this paper, we propose a novel approach, called MAM (Mining API Mapping), that mines API mapping relations from one language to another using API client code. MAM accepts a set of projects each with two versions in two languages and mines API mapping relations between those two languages based on how APIs are used by the two versions. These mined API mapping relations assist in migration of projects from one language to another. We implemented a tool and conducted two evaluations to show the effectiveness of MAM. The results show that our tool mines 25,805 unique mapping relations of APIs between Java and C# with more than 80% accuracy. The results also show that mined API mapping relations help reduce 54.4% compilation errors and 43.0% defects during migration of projects with an existing migration tool, called Java2CSharp. The reduction in compilation errors and defects is due to our new mined mapping relations that are not available with the existing migration tool.",2010,0, 4688,Detecting atomic-set serializability violations in multithreaded programs through active randomized testing,"Concurrency bugs are notoriously difficult to detect because there can be vast combinations of interleavings among concurrent threads, yet only a small fraction can reveal them. Atomic-set serializability characterizes a wide range of concurrency bugs, including data races and atomicity violations. In this paper, we propose a two-phase testing technique that can effectively detect atomic-set serializability violations. In Phase I, our technique infers potential violations that do not appear in a concrete execution and prunes those interleavings that are violation-free. In Phase II, our technique actively controls a thread scheduler to enumerate these potential scenarios identified in Phase I to look for real violations. We have implemented our technique as a prototype system AssetFuzzer and applied it to a number of subject programs for evaluating concurrency defect analysis techniques. The experimental results show that AssetFuzzer can identify more concurrency bugs than two recent testing tools RaceFuzzer and AtomFuzzer.",2010,0, 4689,Falcon: fault localization in concurrent programs,"Concurrency fault are difficult to find because they usually occur under specific thread interleavings. Fault-detection tools in this area find data-access patterns among thread interleavings, but they report benign patterns as well as actual faulty patterns. Traditional fault-localization techniques have been successful in identifying faults in sequential, deterministic programs, but they cannot detect faulty data-access patterns among threads. This paper presents a new dynamic fault-localization technique that can pinpoint faulty data-access patterns in multi-threaded concurrent programs. The technique monitors memory-access sequences among threads, detects data-access patterns associated with a program's pass/fail results, and reports dataaccess patterns with suspiciousness scores. The paper also presents the description of a prototype implementation of the technique in Java, and the results of an empirical study we performed with the prototype on several Java benchmarks. 
The empirical study shows that the technique can effectively and efficiently localize the faults for our subjects.",2010,0, 4690,Predicting build outcome with developer interaction in Jazz,Investigating the human aspect of software development is becoming prominent in current research. Studies found that the misalignment between the social and technical dimensions of software work leads to losses in developer productivity and defects. We use the technical and social dependencies among pairs of developers to predict the success of a software build. Using the IBM JazzTM data we found information about developers and their social and technical relation can build a powerful predictor for the success of a software build. Investigating human aspects of software development is becoming prominent in current research. High misalignment between the social and technical dimensions of software work lowers productivity and quality.,2010,0, 4691,Integrating legacy systems with MDE,"Integrating several legacy software systems together is commonly performed with multiple applications of the Adapter Design Pattern in OO languages such as Java. The integration is based on specifying bi-directional translations between pairs of APIs from different systems. Yet, manual development of wrappers to implement these translations is tedious, expensive and error-prone. In this paper, we explore how models, aspects and generative techniques can be used in conjunction to alleviate the implementation of multiple wrappers. Briefly the steps are, (1) the automatic reverse engineering of relevant concepts in APIs to high-level models; (2) the manual definition of mapping relationships between concepts in different models of APIs using an ad-hoc DSL; (3) the automatic generation of wrappers from these mapping specifications using AOP. This approach is weighted against manual development of wrappers using an industrial case study. Criteria are the relative code length and the increase of automation.",2010,0, 4692,Can clone detection support quality assessments of requirements specifications?,"Due to their pivotal role in software engineering, considerable effort is spent on the quality assurance of software requirements specifications. As they are mainly described in natural language, relatively few means of automated quality assessment exist. However, we found that clone detection, a technique widely applied to source code, is promising to assess one important quality aspect in an automated way, namely redundancy that stems from copy & paste operations. This paper describes a large-scale case study that applied clone detection to 28 requirements specifications with a total of 8,667 pages. We report on the amount of redundancy found in real-world specifications, discuss its nature as well as its consequences and evaluate in how far existing code clone detection approaches can be applied to assess the quality of requirements specifications in practice.",2010,0, 4693,Flexible architecture conformance assessment with ConQAT,"The architecture of software systems is known to decay if no counter-measures are taken. In order to prevent this architectural erosion, the conformance of the actual system architecture to its intended architecture needs to be assessed and controlled; ideally in a continuous manner. To support this, we present the architecture conformance assessment capabilities of our quality analysis framework ConQAT. 
In contrast to other tools, ConQAT is not limited to the assessment of use-dependencies between software components. Its generic architectural model allows the assessment of various types of dependencies found between different kinds of artifacts. It thereby provides the necessary tool-support for flexible architecture conformance assessment in diverse contexts.",2010,0, 4694,JDF: detecting duplicate bug reports in Jazz,"Both developers and users submit bug reports to a bug repository. These reports can help reveal defects and improve software quality. As the number of bug reports in a bug repository increases, the number of the potential duplicate bug reports increases. Detecting duplicate bug reports helps reduce development efforts in fixing defects. However, it is challenging to manually detect all potential duplicates because of the large number of existing bug reports. This paper presents JDF (representing Jazz Duplicate Finder), a tool that helps users to find potential duplicates of bug reports on Jazz, which is a team collaboration platform for software development and process management. JDF finds potential duplicates for a given bug report using natural language and execution information.",2010,0, 4695,An incremental methodology for quantitative software architecture evaluation with probabilistic models,Probabilistic models are crucial in the quantification of non-functional attributes in safety-and mission-critical software systems. These models are often re-evaluated in assessing the design decisions. Evaluation of such models is computationally expensive and exhibits exponential complexity with the problem size. This research aims at constructing an incremental quality evaluation framework and delta evaluation scheme to address this issue. The proposed technique will provide a computational advantage for the probabilistic quality evaluations enabling their use in automated design space exploration by architecture optimization algorithms. The expected research outcomes are to be validated with a range of realistic architectures and case studies from automotive industry.,2010,0, 4696,Exploratory study of a UML metric for fault prediction,"This paper describes the use of a UML metric, an approximation of the CK-RFC metric, for predicting faulty classes before their implementation. We built a code-based prediction model of faulty classes using Logistic Regression. Then, we tested it in different projects, using on the one hand their UML metrics, and on the other hand their code metrics. To decrease the difference of values between UML and code measures, we normalized them using Linear Scaling to Unit Variance. Our results indicate that the proposed UML RFC metric can predict faulty code as well as its corresponding code metric does. Moreover, the normalization procedure used was of great utility, not just for enabling our UML metric to predict faulty code, using a code-based prediction model, but also for improving the prediction results across different packages and projects, using the same model.",2010,0, 4697,Capturing the long-term impact of changes,"Developers change source code to add new functionality, fix bugs, or refactor their code. Many of these changes have immediate impact on quality or stability. However, some impact of changes may become evident only in the long term. 
The goal of this thesis is to explore the long-term impact of changes by detecting dependencies between code changes and by measuring their influence on software quality, software maintainability, and development effort. Being able to identify the changes with the greatest long-term impact will strengthen our understanding of a project's history and thus shape future code changes and decisions.",2010,0, 4698,Failure preventing recommendations,"Software becomes more and more integral to our lives thus software failures affect more people than ever. Failures are not only responsible for billions of dollars lost to industry but can cause lethal accidents. Although there has been much research into predicting such failures, those predictions usually concentrate either on the technical or the social level of software development. With the ever growing size of software teams we think that coordination among developers is becoming increasingly more important. Therefore, we propose to leverage the combination of both social and technical dimensions to create recommendation upon which developers can act to prevent software failures.",2010,0, 4699,The Demand Side: Assessing Trade-offs and Making Choices,"This chapter contains sections titled: The User Survey Methodology, Investing in Cost Assessment for Open Source and Proprietary Software, Identifying the Cost Trade-offs for Open Source and Proprietary Software, Quality, Mixing by Consumers: The Cohabitation of Open Source and Proprietary Software, Appendix 5.1: An Economic Model of the Decision to Invest in TCO Analysis, Appendix 5.2: Who Does TCO Analysis: Econometric Estimates, Appendix 5.3: Cost Structure of Open Source and Proprietary Software: Econometric Estimates",2010,0, 4700,Communication and Agreement Abstractions for Fault-Tolerant Asynchronous Distributed Systems,"Understanding distributed computing is not an easy task. This is due to the many facets of uncertainty one has to cope with and master in order to produce correct distributed software. Considering the uncertainty created by asynchrony and process crash failures in the context of message-passing systems, the book focuses on the main abstractions that one has to understand and master in order to be able to produce software with guaranteed properties. These fundamental abstractions are communication abstractions that allow the processes to communicate consistently (namely the register abstraction and the reliable broadcast abstraction), and the consensus agreement abstractions that allows them to cooperate despite failures. As they give a precise meaning to the words """"communicate"""" and """"agree"""" despite asynchrony and failures, these abstractions allow distributed programs to be designed with properties that can be stated and proved. Impossibility results are associated with these abstractions. Hence, in order to circumvent these impossibilities, the book relies on the failure detector approach, and, consequently, that approach to fault-tolerance is central to the book.
Table of Contents: List of Figures / The Atomic Register Abstraction / Implementing an Atomic Register in a Crash-Prone Asynchronous System / The Uniform Reliable Broadcast Abstraction / Uniform Reliable Broadcast Abstraction Despite Unreliable Channels / The Consensus Abstraction / Consensus Algorithms for Asynchronous Systems Enriched with Various Failure Detectors / Constructing Failure Detectors",2010,0, 4701,Cost-sensitive boosting neural networks for software defect prediction,"Software defect predictors which classify the software modules into defect-prone and not-defect-prone classes are effective tools to maintain the high quality of software products. The early prediction of defect-proneness of the modules can allow software developers to allocate the limited resources on those defect-prone modules such that high quality software can be produced on time and within budget. In the process of software defect prediction, the misclassification of defect-prone modules generally incurs much higher cost than the misclassification of not-defect-prone ones. Most of the previously developed predication models do not consider this cost issue. In this paper, three cost-sensitive boosting algorithms are studied to boost neural networks for software defect prediction. The first algorithm based on threshold-moving tries to move the classification threshold towards the not-fault-prone modules such that more fault-prone modules can be classified correctly. The other two weight-updating based algorithms incorporate the misclassification costs into the weight-update rule of boosting procedure such that the algorithms boost more weights on the samples associated with misclassified defect-prone modules. The performances of the three algorithms are evaluated by using four datasets from NASA projects in terms of a singular measure, the Normalized Expected Cost of Misclassification (NECM). The experimental results suggest that threshold-moving is the best choice to build cost-sensitive software defect prediction models with boosted neural networks among the three algorithms studied, especially for the datasets from projects developed by object-oriented language.",2010,1, 4702,"Defect prediction from static code features: current results, limitations, new approaches","Building quality software is expensive and software quality assurance (QA) budgets are limited. Data miners can learn defect predictors from static code features which can be used to control QA resources; e.g. to focus on the parts of the code predicted to be more defective. Recent results show that better data mining technology is not leading to better defect predictors. We hypothesize that we have reached the limits of the standard learning goal of maximizing area under the curve (AUC) of the probability of false alarms and probability of detection ""AUC(pd, pf)""; i.e. the area under the curve of a probability of false alarm versus probability of detection. Accordingly, we explore changing the standard goal. Learners that maximize ""AUC(effort, pd)"" find the smallest set of modules that contain the most errors. WHICH is a meta-learner framework that can be quickly customized to different goals. When customized to AUC(effort, pd), WHICH out-performs all the data mining methods studied here. More importantly, measured in terms of this new goal, certain widely used learners perform much worse than simple manual methods. Hence, we advise against the indiscriminate use of learners. Learners must be chosen and customized to the goal at hand. 
With the right architecture (e.g. WHICH), tuning a learner to specific local business goals can be a simple task.",2010,1, 4703,A systematic and comprehensive investigation of methods to build and evaluate fault prediction models,"This paper describes a study performed in an industrial setting that attempts to build predictive models to identify parts of a Java system with a high fault probability. The system under consideration is constantly evolving as several releases a year are shipped to customers. Developers usually have limited resources for their testing and would like to devote extra resources to faulty system parts. The main research focus of this paper is to systematically assess three aspects on how to build and evaluate fault-proneness models in the context of this large Java legacy system development project: (1) compare many data mining and machine learning techniques to build fault-proneness models, (2) assess the impact of using different metric sets such as source code structural measures and change/fault history (process measures), and (3) compare several alternative ways of assessing the performance of the models, in terms of (i) confusion matrix criteria such as accuracy and precision/recall, (ii) ranking ability, using the receiver operating characteristic area (ROC), and (iii) our proposed cost-effectiveness measure (CE). The results of the study indicate that the choice of fault-proneness modeling technique has limited impact on the resulting classification accuracy or cost-effectiveness. There is however large differences between the individual metric sets in terms of cost-effectiveness, and although the process measures are among the most expensive ones to collect, including them as candidate measures significantly improves the prediction models compared with models that only include structural measures and/or their deltas between releases - both in terms of ROC area and in terms of CE. Further, we observe that what is considered the best model is highly dependent on the criteria that are used to evaluate and compare the models. And the regular confusion matrix criteria, although popular, are not clearly related to the problem at hand, namely the cost-effectiveness of using fault-proneness prediction models to focus verification efforts to deliver software with less faults at less cost.",2010,1, 4704,A Comparative Study of Ensemble Feature Selection Techniques for Software Defect Prediction,"Feature selection has become the essential step in many data mining applications. Using a single feature subset selection method may generate local optima. Ensembles of feature selection methods attempt to combine multiple feature selection methods instead of using a single one. We present a comprehensive empirical study examining 17 different ensembles of feature ranking techniques (rankers) including six commonly-used feature ranking techniques, the signal-to-noise filter technique, and 11 threshold-based feature ranking techniques. This study utilized 16 real-world software measurement data sets of different sizes and built 13,600 classification models. Experimental results indicate that ensembles of very few rankers are very effective and even better than ensembles of many or all rankers.",2010,1, 4705,Automated Derivation of Application-Aware Error Detectors Using Static Analysis: The Trusted Illiac Approach,"This paper presents a technique to derive and implement error detectors to protect an application from data errors. 
The error detectors are derived automatically using compiler-based static analysis from the backward program slice of critical variables in the program. Critical variables are defined as those that are highly sensitive to errors, and deriving error detectors for these variables provides high coverage for errors in any data value used in the program. The error detectors take the form of checking expressions and are optimized for each control-flow path followed at runtime. The derived detectors are implemented using a combination of hardware and software and continuously monitor the application at runtime. If an error is detected at runtime, the application is stopped so as to prevent error propagation and enable a clean recovery. Experiments show that the derived detectors achieve low-overhead error detection while providing high coverage for errors that matter to the application.",2011,0, 4706,"Assessing, Comparing, and Combining State Machine-Based Testing and Structural Testing: A Series of Experiments","A large number of research works have addressed the importance of models in software engineering. However, the adoption of model-based techniques in software organizations is limited since these models are perceived to be expensive and not necessarily cost-effective. Focusing on model-based testing, this paper reports on a series of controlled experiments. It investigates the impact of state machine testing on fault detection in class clusters and its cost when compared with structural testing. Based on previous work showing this is a good compromise in terms of cost and effectiveness, this paper focuses on a specific state-based technique: the round-trip paths coverage criterion. Round-trip paths testing is compared to structural testing, and it is investigated whether they are complementary. Results show that even when a state machine models the behavior of the cluster under test as accurately as possible, no significant difference between the fault detection effectiveness of the two test strategies is observed, while the two test strategies are significantly more effective when combined by augmenting state machine testing with structural testing. A qualitative analysis also investigates the reasons why test techniques do not detect certain faults and how the cost of state machine testing can be brought down.",2011,0, 4707,Self-Supervising BPEL Processes,"Service compositions suffer changes in their partner services. Even if the composition does not change, its behavior may evolve over time and become incorrect. Such changes cannot be fully foreseen through prerelease validation, but impose a shift in the quality assessment activities. Provided functionality and quality of service must be continuously probed while the application executes, and the application itself must be able to take corrective actions to preserve its dependability and robustness. We propose the idea of self-supervising BPEL processes, that is, special-purpose compositions that assess their behavior and react through user-defined rules. Supervision consists of monitoring and recovery. The former checks the system's execution to see whether everything is proceeding as planned, while the latter attempts to fix any anomalies. The paper introduces two languages for defining monitoring and recovery and explains how to use them to enrich BPEL processes with self-supervision capabilities. 
Supervision is treated as a cross-cutting concern that is only blended at runtime, allowing different stakeholders to adopt different strategies with no impact on the actual business logic. The paper also presents a supervision-aware runtime framework for executing the enriched processes, and briefly discusses the results of in-lab experiments and of a first evaluation with industrial partners.",2011,0, 4708,GUI Interaction Testing: Incorporating Event Context,"Graphical user interfaces (GUIs), due to their event-driven nature, present an enormous and potentially unbounded way for users to interact with software. During testing, it is important to adequately cover this interaction space. In this paper, we develop a new family of coverage criteria for GUI testing grounded in combinatorial interaction testing. The key motivation of using combinatorial techniques is that they enable us to incorporate context into the criteria in terms of event combinations, sequence length, and by including all possible positions for each event. Our new criteria range in both efficiency (measured by the size of the test suite) and effectiveness (the ability of the test suites to detect faults). In a case study on eight applications, we automatically generate test cases and systematically explore the impact of context, as captured by our new criteria. Our study shows that by increasing the event combinations tested and by controlling the relative positions of events defined by the new criteria, we can detect a large number of faults that were undetectable by earlier techniques.",2011,0, 4709,Recovery Device for Real-Time Dual-Redundant Computer Systems,"This paper proposes the design of specialized hardware, called Recovery Device, for a dual-redundant computer system that operates in real-time. Recovery Device executes all fault-tolerant services including fault detection, fault type determination, fault localization, recovery of system after temporary (transient) fault, and reconfiguration of system after permanent fault. The paper also proposes the algorithms for determination of fault type (whether the fault is temporary or permanent) and localization of faulty computer without using self-testing techniques and diagnosis routines. Determination of fault type allows us to eliminate only the computer with a permanent fault. In other words, the determination of fault type prevents the elimination of nonfaulty computer because of short temporary fault. On the other hand, localization of faulty computer without using self-testing techniques and diagnosis routines shortens the recovery point time period and reduces the probability that a fault will occur during the execution of fault-tolerant procedure. This is very important for real-time fault-tolerant systems. These contributions bring both an increase in system performance and an increase in the degree of system reliability.",2011,0, 4710,hiCUDA: High-Level GPGPU Programming,"Graphics Processing Units (GPUs) have become a competitive accelerator for applications outside the graphics domain, mainly driven by the improvements in GPU programmability. Although the Compute Unified Device Architecture (CUDA) is a simple C-like interface for programming NVIDIA GPUs, porting applications to CUDA remains a challenge to average programmers. 
In particular, CUDA places on the programmer the burden of packaging GPU code in separate functions, of explicitly managing data transfer between the host and GPU memories, and of manually optimizing the utilization of the GPU memory. Practical experience shows that the programmer needs to make significant code changes, often tedious and error-prone, before getting an optimized program. We have designed hiCUDA, a high-level directive-based language for CUDA programming. It allows programmers to perform these tedious tasks in a simpler manner and directly to the sequential code, thus speeding up the porting process. In this paper, we describe the hiCUDA directives as well as the design and implementation of a prototype compiler that translates a hiCUDA program to a CUDA program. Our compiler is able to support real-world applications that span multiple procedures and use dynamically allocated arrays. Experiments using nine CUDA benchmarks show that the simplicity hiCUDA provides comes at no expense to performance.",2011,0, 4711,Certifying the Floating-Point Implementation of an Elementary Function Using Gappa,"High confidence in floating-point programs requires proving numerical properties of final and intermediate values. One may need to guarantee that a value stays within some range, or that the error relative to some ideal value is well bounded. This certification may require a time-consuming proof for each line of code, and it is usually broken by the smallest change to the code, e.g., for maintenance or optimization purpose. Certifying floating-point programs by hand is, therefore, very tedious and error-prone. The Gappa proof assistant is designed to make this task both easier and more secure, due to the following novel features: It automates the evaluation and propagation of rounding errors using interval arithmetic. Its input format is very close to the actual code to validate. It can be used incrementally to prove complex mathematical properties pertaining to the code. It generates a formal proof of the results, which can be checked independently by a lower level proof assistant like Coq. Yet it does not require any specific knowledge about automatic theorem proving, and thus, is accessible to a wide community. This paper demonstrates the practical use of this tool for a widely used class of floating-point programs: implementations of elementary functions in a mathematical library.",2011,0, 4712,Toward a Formalism for Conservative Claims about the Dependability of Software-Based Systems,"In recent work, we have argued for a formal treatment of confidence about the claims made in dependability cases for software-based systems. The key idea underlying this work is """"the inevitability of uncertainty"""": It is rarely possible to assert that a claim about safety or reliability is true with certainty. Much of this uncertainty is epistemic in nature, so it seems inevitable that expert judgment will continue to play an important role in dependability cases. Here, we consider a simple case where an expert makes a claim about the probability of failure on demand (pfd) of a subsystem of a wider system and is able to express his confidence about that claim probabilistically. An important, but difficult, problem then is how such subsystem (claim, confidence) pairs can be propagated through a dependability case for a wider system, of which the subsystems are components. 
An informal way forward is to justify, at high confidence, a strong claim, and then, conservatively, only claim something much weaker: """"I'm 99 percent confident that the pfd is less than 10^-5, so it's reasonable to be 100 percent confident that it is less than 10^-3."""" These conservative pfds of subsystems can then be propagated simply through the dependability case of the wider system. In this paper, we provide formal support for such reasoning.",2011,0, 4713,Dynamic Programming and Graph Algorithms in Computer Vision,"Optimization is a powerful paradigm for expressing and solving problems in a wide range of areas, and has been successfully applied to many vision problems. Discrete optimization techniques are especially interesting since, by carefully exploiting problem structure, they often provide nontrivial guarantees concerning solution quality. In this paper, we review dynamic programming and graph algorithms, and discuss representative examples of how these discrete optimization techniques have been applied to some classical vision problems. We focus on the low-level vision problem of stereo, the mid-level problem of interactive object segmentation, and the high-level problem of model-based recognition.",2011,0, 4714,Implementation Details and Safety Analysis of a Microcontroller-based SIL-4 Software Voter,"This paper presents a microcontroller-based software voting process that complies with Safety Integrity Level-4 (SIL-4) requirements. The selected system architecture consists of a 2 out of 2 schema, in which one channel acts as Master and the other as Slave. Each redundant channel uses a microcontroller as central element. The present analysis demonstrates that this system fulfills SIL-4 requirements. Once the system architecture is detailed, the system overall functionality and the data flow are presented. Then, the microcontroller's internal architecture is explained, and the software voting process flow-diagram is discussed. Afterward, the resources of the microcontroller architecture that are used for the execution of each task involved in the software voting process (hardware-software interaction) are determined. Finally, a fault analysis is elaborated to demonstrate that the cases in which the safety requirements are compromised have a very small occurrence probability, i.e., the hazard rate of proposed voting is below 1E-9.",2011,0, 4715,Cause-Effect Modeling and Spatial-Temporal Simulation of Power Distribution Fault Events,"Modeling and simulation are important tools in the study of power distribution faults due to the limited amount of actual data and the high cost of experimentation. Although a number of software packages are available to simulate the electrical signals, approaches for simulating fault events in different environments have not been well developed. In this paper, we propose a framework for modeling and simulating fault events in power distribution systems based on environmental factors and the cause-effect relationships among them. The spatial and temporal aspects of significant environmental factors leading to various faults are modeled as raster maps and probability functions, respectively. The cause-effect relationships are expressed as fuzzy rules and a hierarchical fuzzy inference system is built to infer the probability of faults in the simulated environments. A test case simulating a part of a typical city's power distribution systems demonstrates the effectiveness of the framework in generating realistic distribution faults. 
This work is helpful in fault diagnosis for different local systems and provides a configurable data source to other researchers and engineers in similar areas as well.",2011,0, 4716,"Evaluating Complexity, Code Churn, and Developer Activity Metrics as Indicators of Software Vulnerabilities","Security inspection and testing require experts in security who think like an attacker. Security experts need to know code locations on which to focus their testing and inspection efforts. Since vulnerabilities are rare occurrences, locating vulnerable code locations can be a challenging task. We investigated whether software metrics obtained from source code and development history are discriminative and predictive of vulnerable code locations. If so, security experts can use this prediction to prioritize security inspection and testing efforts. The metrics we investigated fall into three categories: complexity, code churn, and developer activity metrics. We performed two empirical case studies on large, widely used open-source projects: the Mozilla Firefox web browser and the Red Hat Enterprise Linux kernel. The results indicate that 24 of the 28 metrics collected are discriminative of vulnerabilities for both projects. The models using all three types of metrics together predicted over 80 percent of the known vulnerable files with less than 25 percent false positives for both projects. Compared to a random selection of files for inspection and testing, these models would have reduced the number of files and the number of lines of code to inspect or test by over 71 and 28 percent, respectively, for both projects.",2011,0, 4717,Simple Fault Diagnosis Based on Operating Characteristic of Brushless Direct-Current Motor Drives,"In this paper, a simple fault diagnosis scheme for brushless direct-current motor drives is proposed to maintain control performance under an open-circuit fault. The proposed scheme consists of a simple algorithm using the measured phase current information and detects open-circuit faults based on the operating characteristic of motors. It requires no additional sensors or electrical devices to detect open-circuit faults and can be embedded into the existing drive software as a subroutine without excessive computation effort. The feasibility of the proposed fault diagnosis algorithm is proven by simulation and experimental results.",2011,0, 4718,Recognition of Fault Transients Using a Probabilistic Neural-Network Classifier,"This paper investigates the applicability of decision tree, hidden Markov model, and probabilistic neural-network (PNN) classification techniques to distinguish the transients originating from the faults from those originating from normal switching events. Current waveforms due to different types of events, such as faults, load switching, and capacitor bank switching were generated using a high-voltage transmission system simulated in PSCAD/EMTDC simulation software. Simulated transients were used to train and test the classifiers offline. The wavelet energies calculated using three-phase currents were used as input features for the classifiers. The results of the study showed the potential for developing a highly reliable transient classification system using the PNN technique. An online classification model for PNN was fully implemented in PSCAD/EMTDC. This model was extensively tested under different scenarios. The effects of the fault impedance, signal noise, current-transformer saturation, and arcing faults were investigated. 
Finally, the operation of the classifier was verified using actual recorded waveforms obtained from a high-voltage transmission system.",2011,0, 4719,A Numerical Method for the Evaluation of the Distribution of Cumulative Reward till Exit of a Subset of Transient States of a Markov Reward Model,"Markov reward models have interesting modeling applications, particularly those addressing fault-tolerant hardware/software systems. In this paper, we consider a Markov reward model with a reward structure including only reward rates associated with states, in which both positive and negative reward rates are present and null reward rates are allowed, and develop a numerical method to compute the distribution function of the cumulative reward till exit of a subset of transient states of the model. The method combines a model transformation step with the solution of the transformed model using a randomization construction with two randomization rates. The method introduces a truncation error, but that error is strictly bounded from above by a user-specified error control parameter. Further, the method is numerically stable and takes advantage of the sparsity of the infinitesimal generator of the transformed model. Using a Markov reward model of a fault-tolerant hardware/software system, we illustrate the application of the method and analyze its computational cost. Also, we compare the computational cost of the method with that of the (only) previously available method for the problem. Our numerical experiments seem to indicate that the new method can be efficient and that for medium size and large models can be substantially faster than the previously available method.",2011,0, 4720,Robust Execution of Service Workflows Using Redundancy and Advance Reservations,"In this paper, we develop a novel algorithm that allows service consumers to execute business processes (or workflows) of interdependent services in a dependable manner within tight time-constraints. In particular, we consider large interorganizational service-oriented systems, where services are offered by external organizations that demand financial remuneration and where their use has to be negotiated in advance using explicit service-level agreements (as is common in Grids and cloud computing). Here, different providers often offer the same type of service at varying levels of quality and price. Furthermore, some providers may be less trustworthy than others, possibly failing to meet their agreements. To control this unreliability and ensure end-to-end dependability while maximizing the profit obtained from completing a business process, our algorithm automatically selects the most suitable providers. Moreover, unlike existing work, it reasons about the dependability properties of a workflow, and it controls these by using service redundancy for critical tasks and by planning for contingencies. Finally, our algorithm reserves services for only parts of its workflow at any time, in order to retain flexibility when failures occur. We show empirically that our algorithm consistently outperforms existing approaches, achieving up to a 35-fold increase in profit and successfully completing most workflows, even when the majority of providers fail.",2011,0, 4721,A General Software Defect-Proneness Prediction Framework,"BACKGROUND - Predicting defect-prone software components is an economically important activity and so has received a good deal of attention. 
However, making sense of the many, and sometimes seemingly inconsistent, results is difficult. OBJECTIVE - We propose and evaluate a general framework for software defect prediction that supports 1) unbiased and 2) comprehensive comparison between competing prediction systems. METHOD - The framework is comprised of 1) scheme evaluation and 2) defect prediction components. The scheme evaluation analyzes the prediction performance of competing learning schemes for given historical data sets. The defect predictor builds models according to the evaluated learning scheme and predicts software defects with new data according to the constructed model. In order to demonstrate the performance of the proposed framework, we use both simulation and publicly available software defect data sets. RESULTS - The results show that we should choose different learning schemes for different data sets (i.e., no scheme dominates), that small details in conducting how evaluations are conducted can completely reverse findings, and last, that our proposed framework is more effective and less prone to bias than previous approaches. CONCLUSIONS - Failure to properly or fully evaluate a learning scheme can be misleading; however, these problems may be overcome by our proposed framework.",2011,1, 4722,Dynamic Analysis for Diagnosing Integration Faults,"Many software components are provided with incomplete specifications and little access to the source code. Reusing such gray-box components can result in integration faults that can be difficult to diagnose and locate. In this paper, we present Behavior Capture and Test (BCT), a technique that uses dynamic analysis to automatically identify the causes of failures and locate the related faults. BCT augments dynamic analysis techniques with model-based monitoring. In this way, BCT identifies a structured set of interactions and data values that are likely related to failures (failure causes), and indicates the components and the operations that are likely responsible for failures (fault locations). BCT advances scientific knowledge in several ways. It combines classic dynamic analysis with incremental finite state generation techniques to produce dynamic models that capture complementary aspects of component interactions. It uses an effective technique to filter false positives to reduce the effort of the analysis of the produced data. It defines a strategy to extract information about likely causes of failures by automatically ranking and relating the detected anomalies so that developers can focus their attention on the faults. The effectiveness of BCT depends on the quality of the dynamic models extracted from the program. BCT is particularly effective when the test cases sample the execution space well. In this paper, we present a set of case studies that illustrate the adequacy of BCT to analyze both regression testing failures and rare field failures. The results show that BCT automatically filters out most of the false alarms and provides useful information to understand the causes of failures in 69 percent of the case studies.",2011,0, 4723,Evaluation of the Impact of Superconducting Fault Current Limiters on Power System Network Protections Using a RTS-PHIL Methodology,"Planning the integration of a Superconducting Fault Current Limiter (SFCL) in an electric power network mainly consists in predicting the current limiting characteristics in any fault condition, in order to set the protection relays accordingly. 
Due to the very non linear behavior of the SFCL, modifications to the settings of existing protection relays are expected. To explore the potential changes, we used a Real-Time Simulation (RTS) methodology with Power-Hardware-In-the-Loop (PHIL) capabilities (i.e. circuit simulator coupled with power amplifiers for driving external physical power devices). The RTS-PHIL is a powerful approach that makes it possible to incorporate the actual transient reaction of the hardware under study without the need for developing a complicated numerical model, while the power system circuit, generally simpler in nature, can be purely simulated. In this project, the response of a commercial protection relay in the presence of a SFCL was investigated. Both the relay and a small scale shielded-core inductive limiter were coupled to the real time simulator (HYPERSIM) through single-phase linear power amplifiers and a variety of faults were applied. So far, this setup has allowed us to evaluate the impact of inserting a SFCL on overcurrent relays (OCR), in a simple radial distribution network. The results show that coordination has indeed to be slightly revised.",2011,0, 4724,Protector: A Probabilistic Failure Detector for Cost-Effective Peer-to-Peer Storage,"Maintaining a given level of data redundancy is a fundamental requirement of peer-to-peer (P2P) storage systems-to ensure desired data availability, additional replicas must be created when peers fail. Since the majority of failures in P2P networks are transient (i.e., peers return with data intact), an intelligent system can reduce significant replication costs by not replicating data following transient failures. Reliably distinguishing permanent and transient failures, however, is a challenging task, because peers are unresponsive to probes in both cases. In this paper, we propose Protector, an algorithm that enables efficient replication policies by estimating the number of remaining replicas for each object, including those temporarily unavailable due to transient failures. Protector dramatically improves detection accuracy by exploiting two opportunities. First, it leverages failure patterns to predict the likelihood that a peer (and the data it hosts) has permanently failed given its current downtime. Second, it detects replication level across groups of replicas (or fragments), thereby balancing false positives for some peers against false negatives for others. Extensive simulations based on both synthetic and real traces show that Protector closely approximates the performance of a perfect oracle failure detector, and significantly outperforms time-out-based detectors using a wide range of parameters. Finally, we design, implement and deploy an efficient P2P storage system called AmazingStore by combining Protector with structured P2P overlays. Our experience proves that Protector enables efficient long-term data maintenance in P2P storage systems.",2011,0, 4725,Preventing Temporal Violations in Scientific Workflows: Where and How,"Due to the dynamic nature of the underlying high-performance infrastructures for scientific workflows such as grid and cloud computing, failures of timely completion of important scientific activities, namely, temporal violations, often take place. Unlike conventional exception handling on functional failures, nonfunctional QoS failures such as temporal violations cannot be passively recovered. 
They need to be proactively prevented through dynamically monitoring and adjusting the temporal consistency states of scientific workflows at runtime. However, current research on workflow temporal verification mainly focuses on runtime monitoring, while the adjusting strategy for temporal consistency states, namely, temporal adjustment, has so far not been thoroughly investigated. For this issue, two fundamental problems of temporal adjustment, namely, where and how, are systematically analyzed and addressed in this paper. Specifically, a novel minimum probability time redundancy-based necessary and sufficient adjustment point selection strategy is proposed to address the problem of where and an innovative genetic-algorithm-based effective and efficient local rescheduling strategy is proposed to tackle the problem of how. The results of large-scale simulation experiments with generic workflows and specific real-world applications demonstrate that our temporal adjustment strategy can remarkably prevent the violations of both local and global temporal constraints in scientific workflows.",2011,0, 4726,A Refactoring Approach to Parallelism,"In the multicore era, a major programming task will be to make programs more parallel. This is tedious because it requires changing many lines of code; it's also error-prone and nontrivial because programmers need to ensure noninterference of parallel operations. Fortunately, interactive refactoring tools can help reduce the analysis and transformation burden. The author describes how refactoring tools can improve programmer productivity, program performance, and program portability. The article also describes a toolset that supports several refactorings for making programs thread-safe, threading sequential programs for throughput, and improving scalability of parallel programs.",2011,0, 4727,Lower Upper Bound Estimation Method for Construction of Neural Network-Based Prediction Intervals,"Prediction intervals (PIs) have been proposed in the literature to provide more information by quantifying the level of uncertainty associated to the point forecasts. Traditional methods for construction of neural network (NN) based PIs suffer from restrictive assumptions about data distribution and massive computational loads. In this paper, we propose a new, fast, yet reliable method for the construction of PIs for NN predictions. The proposed lower upper bound estimation (LUBE) method constructs an NN with two outputs for estimating the prediction interval bounds. NN training is achieved through the minimization of a proposed PI-based objective function, which covers both interval width and coverage probability. The method does not require any information about the upper and lower bounds of PIs for training the NN. The simulated annealing method is applied for minimization of the cost function and adjustment of NN parameters. The demonstrated results for 10 benchmark regression case studies clearly show the LUBE method to be capable of generating high-quality PIs in a short time. Also, the quantitative comparison with three traditional techniques for prediction interval construction reveals that the LUBE method is simpler, faster, and more reliable.",2011,0, 4728,Assessment of the Impact of SFCL on Voltage Sags in Power Distribution System,"This paper assesses and analyses the effects of superconducting fault current limiter (SFCL) installed in power distribution system on voltage sags. 
First of all, resistor-type SFCL is modeled using PSCAD/EMTDC to represent the quench and recovery characteristics based on the experimental results. Next, typical power distribution system of Korea is modeled. When the SFCL is installed in various locations from the starting point to end point of feeders, improvement of voltage sag is evaluated using the Information Technology Industry Council (ITIC) curve of customer's loads when a fault occurs. Finally, future studies needing to apply SFCL to power distribution system are presented.",2011,0, 4729,Contract Specification for Hardware Interoperability Testing and Fault Analysis,"Hardware failures occur especially due to external influences, component aging, or faulty interoperability. By testing, faulty components can be localized, allowing for fault isolation or repair. The contract testing strategy from software specifies component interoperability conditions, and systematically creates correspondent tests ensuring the operability of the system. We adapt contract testing to hardware, providing component specification, and monitoring thereof. Contract specification has to be specialized with requirements on the physical environment and component input signals. Contract testing is then executed through the monitoring of the contract parameters. Furthermore, to reason about the external cause of errors, signal faults are categorized. As a case study, we present a communication system. For this system, a contract is defined, and circuits implemented to perform contract testing and fault categorization. Communication faults, related to hardware errors and to sporadic environment disturbance, are injected in the developed system. These faults are completely detected, but can be only partially categorized by the monitoring approach.",2011,0, 4730,Managing Security: The Security Content Automation Protocol,"Managing information systems security is an expensive and challenging task. Many different and complex software components - including firmware, operating systems, and applications - must be configured securely, patched when needed, and continuously monitored for security. Most organizations have an extensive set of security requirements. For commercial firms, such requirements are established through complex interactions of business goals, government regulations, and insurance requirements; for government organizations, security requirements are mandated. Meeting these requirements has been time consuming and error prone, because organizations have lacked standardized, automated ways of performing the tasks and reporting on results. To overcome these deficiencies and reduce security administration costs, the National Institute of Standards and Technology developed the security content automation protocol using community supported security resources. SCAP (pronounced """"ess-cap"""") is a suite of specifications that standardizes the format and nomenclature by which security software products communicate information about software identification, software flaws, and security configurations.",2011,0, 4731,Which Crashes Should I Fix First?: Predicting Top Crashes at an Early Stage to Prioritize Debugging Efforts,"Many popular software systems automatically report failures back to the vendors, allowing developers to focus on the most pressing problems. However, it takes a certain period of time to assess which failures occur most frequently. 
In an empirical investigation of the Firefox and Thunderbird crash report databases, we found that only 10 to 20 crashes account for the large majority of crash reports; predicting these top crashes thus could dramatically increase software quality. By training a machine learner on the features of top crashes of past releases, we can effectively predict the top crashes well before a new release. This allows for quick resolution of the most important crashes, leading to improved user experience and better allocation of maintenance efforts.",2011,0, 4732,A Game Platform for Treatment of Amblyopia,"We have developed a prototype device for take-home use that can be used in the treatment of amblyopia. The therapeutic scenario we envision involves patients first visiting a clinic, where their vision parameters are assessed and suitable parameters are determined for therapy. Patients then proceed with the actual therapeutic treatment on their own, using our device, which consists of an Apple iPod Touch running a specially modified game application. Our rationale for choosing to develop the prototype around a game stems from multiple requirements that such an application satisfies. First, system operation must be sufficiently straight-forward that ease-of-use is not an obstacle. Second, the application itself should be compelling and motivate use more so than a traditional therapeutic task if it is to be used regularly outside of the clinic. This is particularly relevant for children, as compliance is a major issue for current treatments of childhood amblyopia. However, despite the traditional opinion that treatment of amblyopia is only effective in children, our initial results add to the growing body of evidence that improvements in visual function can be achieved in adults with amblyopia.",2011,0, 4733,Efficient Fault Detection and Diagnosis in Complex Software Systems with Information-Theoretic Monitoring,"Management metrics of complex software systems exhibit stable correlations which can enable fault detection and diagnosis. Current approaches use specific analytic forms, typically linear, for modeling correlations. In practice, more complex nonlinear relationships exist between metrics. Moreover, most intermetric correlations form clusters rather than simple pairwise correlations. These clusters provide additional information and offer the possibility for optimization. In this paper, we address these issues by using Normalized Mutual Information (NMI) as a similarity measure to identify clusters of correlated metrics, without assuming any specific form for the metric relationships. We show how to apply the Wilcoxon Rank-Sum test on the entropy measures to detect errors in the system. We also present three diagnosis algorithms to locate faulty components: RatioScore, based on the Jaccard coefficient, SigScore, which incorporates knowledge of component dependencies, and BayesianScore, which uses Bayesian inference to assign a fault probability to each component. 
We evaluate our approach in the context of a complex enterprise application, and show that 1) stable, nonlinear correlations exist and can be captured with our approach; 2) we can detect a large fraction of faults with a low false positive rate (we detect up to 18 of the 22 faults we injected); and 3) we improve the diagnosis with our new diagnosis algorithms.",2011,0, 4734,Hardware/Software Codesign Architecture for Online Testing in Chip Multiprocessors,"As the semiconductor industry continues its relentless push for nano-CMOS technologies, long-term device reliability and occurrence of hard errors have emerged as a major concern. Long-term device reliability includes parametric degradation that results in loss of performance as well as hard failures that result in loss of functionality. It has been reported in the ITRS roadmap that effectiveness of traditional burn-in test in product life acceleration is eroding. Thus, to assure sufficient product reliability, fault detection and system reconfiguration must be performed in the field at runtime. Although regular memory structures are protected against hard errors using error-correcting codes, many structures within cores are left unprotected. Several proposed online testing techniques either rely on concurrent testing or periodically check for correctness. These techniques are attractive, but limited due to significant design effort and hardware cost. Furthermore, lack of observability and controllability of microarchitectural states result in long latency, long test sequences, and large storage of golden patterns. In this paper, we propose a low-cost scheme for detecting and debugging hard errors with a fine granularity within cores and keeping the faulty cores functional, with potentially reduced capability and performance. The solution includes both hardware and runtime software based on codesigned virtual machine concept. It has the ability to detect, debug, and isolate hard errors in small noncache array structures, execution units, and combinational logic within cores. Hardware signature registers are used to capture the footprint of execution at the output of functional modules within the cores. A runtime layer of software (microvisor) initiates functional tests concurrently on multiple cores to capture the signature footprints across cores to detect, debug, and isolate hard errors. Results show that using targeted set of functional test sequences, faults can be debugged to a fine-granular level within cores. The hardware cost of the scheme is less than three percent, while the software tasks are performed at a high-level, resulting in a relatively low design effort and cost.",2011,0, 4735,Experimental Validation of Channel State Prediction Considering Delays in Practical Cognitive Radio,"As part of the effort toward building a cognitive radio (CR) network testbed, we have demonstrated real-time spectrum sensing. Spectrum sensing is the cornerstone of CR. However, current hardware platforms for CR introduce time delays that undermine the accuracy of spectrum sensing. The time delay named response delay incurred by hardware and software can be measured at two antennas colocated at a secondary user (SU), the receiving antenna, and the transmitting antenna. In this paper, minimum response delays are experimentally quantified and reported based on two hardware platforms, i.e., the universal software radio peripheral 2 (USRP2) and the small-form-factor software-defined-radio development platform (SFF SDR DP). 
The response delay has a negative impact on the accuracy of spectrum sensing. A modified hidden Markov model (HMM)-based single-secondary-user (single-SU) prediction is proposed and examined. When multiple SUs exist and their channel qualities are diverse, cooperative prediction can benefit the SUs as a whole. A prediction scheme with two stages is proposed, where the first stage includes individual predictions conducted by all the involved SUs, and the second stage further performs cooperative prediction using individual single-SU prediction results obtained at the first stage. In addition, a soft-combining decision rule for cooperative prediction is proposed. To have convincing performance evaluation results, real-world Wi-Fi signals are used to test the proposed approaches, where the Wi-Fi signals are simultaneously recorded at four different locations. Experimental results show that the proposed single-SU prediction outperforms the 1-nearest neighbor (1-NN) prediction, which uses current detected state as an estimate of future states. Moreover, even with just a few SUs, cooperative prediction leads to overall performance improvement.",2011,0, 4736,Using autonomous components to improve runtime qualities of software,"In the development of software systems, quality properties should be considered along with the development process so that the qualities of software systems can be inferred and predicted at the specification and design stages and be evaluated and verified at the deployment and execution stages. However, distributed autonomous software entities are developed and maintained independently by third parties and their executions and qualities are beyond the control of system developers. In this study, the notion of an autonomous component is used to model an independent autonomous software entity. An autonomous component encapsulates data types, associated operations and quality properties into a uniform syntactical unit, which provides a way to reason about the functional and non-functional properties of software systems and meanwhile offers a means of evaluating and assuring the qualities of software systems at runtime. This study also describes the implementation and running support of autonomous components and studies a case application system to demonstrate how autonomous components can be used to improve the qualities of the application system.",2011,0, 4737,Systems engineering and safety - a framework,"This study provides a definition of safety and assesses currently available systems engineering approaches for managing safety in systems development. While most work in relation to safety is of a `safety critical` nature, the authors concentrate on wider issues associated with safety. The outcomes of the assessment lead to a proposal for a framework providing the opportunity to develop a model incorporating the safety requirements of a system. The concept of a framework facilitates an approach of combining a number of disparate methods and at the same time utilising only the beneficial features of each. Such a safety framework when combined with an approach which addresses the management of safety will enhance system effectiveness thus ensuring the non-functional requirements of stakeholders are met.",2011,0, 4738,Functional testing of feature model analysis tools: a test suite,"A feature model is a compact representation of all the products of a software product line. 
Automated analysis of feature models is rapidly gaining importance: new operations of analysis have been proposed, new tools have been developed to support those operations and different logical paradigms and algorithms have been proposed to perform them. Implementing operations is a complex task that easily leads to errors in analysis solutions. In this context, the lack of specific testing mechanisms is becoming a major obstacle hindering the development of tools and affecting their quality and reliability. In this article, the authors present FaMa test suite, a set of implementation-independent test cases to validate the functionality of feature model analysis tools. This is an efficient and handy mechanism to assist in the development of tools, detecting faults and improving their quality. In order to show the effectiveness of their proposal, the authors evaluated the suite using mutation testing as well as real faults and tools. Their results are promising and directly applicable in the testing of analysis solutions. The authors intend this work to be a first step towards the development of a widely accepted test suite to support functional testing in the community of automated analysis of feature models.",2011,0, 4739,Nomenclature unification of software product measures,"A large number of software quality prediction models are based on software product measures (SPdMs). There are different interpretations and representations of these measures which generate inconsistencies in their naming conventions. These inconsistencies affect the efforts to develop a generic approach to predict software quality. This study identifies two types of such inconsistencies and categorises them into Type I and Type II. Type I inconsistency emerges when different labels are suggested for the same software product measure. Type II inconsistency appears when same label is used for different measures. This study suggests a unification and categorisation framework to remove Type I and Type II inconsistencies. The proposed framework categorises SPdMs with respect to three dimensions: usage frequency, software development paradigm and software lifecycle phase. The framework is applied to 140 SPdMs and a searchable unified measures database (UMD) is developed. Overall, 48.5% of the measures are found inconsistent. Out of the total measures studied 34.28% measures are frequently used. It has been found that 30.71% measures are used in object oriented paradigm and 31.43% measures are used in conventional paradigm. There is an overlap of 37.86% measures between the two paradigms. The UMD reveals that the percentages of measures used in design and implementation phases are 52.86 and 35%, respectively.",2011,0, 4740,Measuring the Effectiveness of the Defect-Fixing Process in Open Source Software Projects,"The defect-fixing process is a key process in which an open source software (OSS) project team responds to customer needs in terms of detecting and resolving software defects, hence the dimension of defect-fixing effectiveness corresponds nicely to adopters' concerns regarding OSS products. Although researchers have been studying the defect fixing process in OSS projects for almost a decade, the literature still lacks rigorous ways to measure the effectiveness of this process. Thus, this paper aims to create a valid and reliable instrument to measure the defect-fixing effectiveness construct in an open source environment through the scale development methodology proposed by Churchill [4]. 
This paper examines the validity and reliability of an initial list of indicators through two rounds of data collection and analysis. Finally four indicators are suggested to measure defect-fixing effectiveness. The implication for practitioners is explained through a hypothetical example followed by implications for the research community.",2011,0, 4741,Quality Market: Design and Field Study of Prediction Market for Software Quality Control,"Given the increasing competition in the software industry and the critical consequences of software errors, it has become important for companies to achieve high levels of software quality. Generating early forecasts of potential quality problems can have significant benefits to quality improvement. In our research, we utilized a novel approach, called prediction markets, for generating early forecasts of confidence in software quality for an ongoing project in a firm. Analogous to financial market, in a quality market, a security was defined that represented the quality requirement to be predicted. Participants traded on the security to provide their predictions. The market equilibrium price represented the probability of occurrence of the quality being measured. The results suggest that forecasts generated using the prediction markets are closer to the actual project outcomes than polls. We suggest that a suitably designed prediction market may have a useful role in software development domain.",2011,0, 4742,A Systemic Approach for Assessing Software Supply-Chain Risk,"In today's business environment, multiple organizations must routinely work together in software supply chains when acquiring, developing, operating, and maintaining software products. The programmatic and product complexity inherent in software supply chains increases the risk that defects, vulnerabilities, and malicious code will be inserted into a delivered software product. As a result, effective risk management is essential for establishing and maintaining software supply-chain assurance over time. The Software Engineering Institute (SEI) is developing a systemic approach for assessing and managing software supply-chain risks. This paper highlights the basic approach being implemented by SEI researchers and provides a summary of the status of this work.",2011,0, 4743,Text Mining Support for Software Requirements: Traceability Assurance,"Requirements assurance aims to increase confidence in the quality of requirements through independent audit and review. One important and effort intensive activity is assurance of the traceability matrix (TM). In this, determining the correctness and completeness of the many-to-many relationships between functional and non-functional requirements (NFRs) is a particularly tedious and error prone activity for assurance personnel to perform manually. We introduce a practical to use method that applies well-established text-mining and statistical methods to reduce this effort and increase TM assurance. The method is novel in that it utilizes both requirements similarity (likelihood that requirements trace to each other) and dissimilarity (or anti-trace, likelihood that requirements do not trace to each other) to generate investigation sets that significantly reduce the complexity of the traceability assurance task and help personnel focus on likely problem areas. The method automatically adjusts to the quality of the requirements specification and TM. 
Requirements assurance experiences from the SQA group at NASA's Jet Propulsion Laboratory provide motivation for the need and practicality of the method. Results of using the method are verifiably promising based on an extensive evaluation of the NFR data set from the publicly accessible PROMISE repository.",2011,0, 4744,A Rule-Based Natural Language Technique for Requirements Discovery and Classification in Open-Source Software Development Projects,"Open source projects do have requirements; they are, however, mostly informal, text descriptions found in requests, forums, and other correspondence. Understanding of such requirements can provide insight into the nature of open source projects. Unfortunately, manual analysis of natural language (NL) requirements is time-consuming, and for large projects, error-prone. Automated analysis of NL requirements, even partial, will be of great benefit. Towards that end, we describe the design and validation of an automated NL requirements classifier for open source projects. Initial results suggest that it can reduce the effort required to analyze requirements of open source projects.",2011,0, 4745,Facilitating Performance Predictions Using Software Components,"Component-based software engineering (CBSE) poses challenges for predicting and evaluating software performance but also offers several advantages. Software performance engineering can benefit from CBSE ideas and concepts. The MediaStore, a fictional system, demonstrates how to achieve compositional reasoning about software performance.",2011,0, 4746,Detecting SEEs in Microprocessors Through a Non-Intrusive Hybrid Technique,"This paper presents a hybrid technique based on software signatures and a hardware module with watchdog and decoder characteristics to detect SEU and SET faults in microprocessors. These types of faults have a major influence in the microprocessor's control-flow, affecting the basic blocks and the transitions between them. In order to protect the transitions between basic blocks a light hardware module is implemented in order to spoof the data exchanged between the microprocessor and its memory. Since the hardware alone is not capable of detecting errors inside the basic blocks, it is enhanced to support the new technique and then provide full control-flow protection. A fault injection campaign is performed using a MIPS microprocessor. Simulation results show high detection rates with a small amount of performance degradation and area overhead.",2011,0, 4747,QoS and Admission Probability Study for a SIP-Based Central Managed IP Telephony System,"This work presents a study of a SIP-based IP telephony system in terms of Quality of Service (QoS) and admission probability. The system is designed for an enterprise with offices in different countries. A software IP PBX maintains the dial plan, and SIP proxies are used in order to implement Call Admission Control (CAC) and to allow the sharing of the gateways' lines. The system is implemented in a testbed where QoS parameters like One Way Delay (OWD), packet loss, jitter and ITU's R-factor are measured. 
Simulation is also used in order to build a scenario with a large number of offices in which establishment delays and admission probability are measured.",2011,0, 4748,Defects Detection of Cold-roll Steel Surface Based on MATLAB,"An approach for detecting the surface edge information of cold-roll steel sheets has been investigated, relying on the digital image processing toolbox of the mathematical software MATLAB, and the edge detection experiment on the surface grayscale image has been conducted on the computer. The method of detecting defects such as black flecks and scratches has been realized in the research, the different effects of the operators under noise disturbance have been compared, and the edge detection performance of the LOG operator varies with the parameter Sigma. Conclusions have been made that the LOG operator maintains satisfactory performance under noise disturbance: the smaller Sigma is, the weaker the smoothing ability is, which preserves more details, whereas a larger Sigma gives better smoothing at the cost of more details; defect detection of the cold-roll steel surface based on MATLAB shows quite satisfactory performance.",2011,0, 4749,Fault Diagnosis of Rolling Element Bearing Based on Vibration Frequency Analysis,"The element bearing is one of the most widely used universal parts in machines. Its running condition and failure rate influence the performance, service life and efficiency of the equipment directly. So it is significant to research bearing condition monitoring and fault diagnosis techniques. The paper describes the use of vibration measurements by a periodic monitoring system for monitoring the condition of the rolling element bearings of a centrifugal machine. Vibration data is collected using accelerometers, which are placed at the 12 o'clock position at both the drive end and the driven end of the centrifugal machine. Vibration signals are collected using a 16 channel DAT recorder, and are post-processed by vibration signal analysis software on a personal computer. Simple diagnosis by vibration is based on frequency analysis. Each element of a rolling bearing has its own fault characteristic frequency. This paper introduces a frequency analysis method using low and high frequency bands in conjunction with the time domain waveform. The fault position of the drive end bearing in the centrifugal machine is detected successfully using this method.",2011,0, 4750,Online PD Monitoring and Analysis for Step-up Transformers of Hydropower Plant,"As is known, to evaluate and diagnose the insulation faults of large-size power transformers, partial discharges (PDs) can be detected via the ultra high frequency (UHF) technique. In this paper, a UHF PD online monitoring system was developed for large-size step-up transformers. The principle of the UHF PD monitoring method was introduced, and the hardware structure and software key techniques in the system were described. In order to achieve integrated PD monitoring and ensure the accuracy of PD analysis, the operating environments and operating conditions of the transformers have been acquired synchronously. Meanwhile, the correlative analysis of PDs with respect to operating conditions can be performed and the characteristics of PD activities with respect to relevant states can be studied. The association analysis of PDs is performed as follows: (a) periodical PDs caused by power frequency voltage under stable operating conditions, (b) stochastic PDs caused by transient over-voltages under unstable operating conditions.
At present, the system has been applied to several large-size step-up transformers and has achieved a large amount of on-site monitoring results, and the reliability of the PD analysis results is verified by the analysis of the on-site data.",2011,0, 4751,Online UHF PD Monitoring for Step-up Transformers of Hydropower Plant,"As is known, to evaluate and diagnose the insulation faults of large-size power transformers, partial discharges (PDs) can be detected via the ultra high frequency (UHF) technique. In this paper, a UHF PD online monitoring system was developed for large-size step-up transformers. The principle of the UHF PD monitoring method was introduced, and the hardware structure and software key techniques in the system were described. In order to achieve integrated PD monitoring and ensure the accuracy of PD analysis, the operating environments and operating conditions of the transformers have been acquired synchronously. Meanwhile, the correlative analysis of PDs with respect to operating conditions can be performed and the characteristics of PD activities with respect to relevant states can be studied. The association analysis of PDs is performed as follows: (a) periodical PDs caused by power frequency voltage under stable operating conditions, (b) stochastic PDs caused by transient over-voltages under unstable operating conditions. At present, the system has been applied to several large-size step-up transformers and has achieved a large amount of on-site monitoring results, and the reliability of the PD analysis results is verified by the analysis of the on-site data.",2011,0, 4752,Research on the Virtual Maintenance Training and Testing System of a Command and Control Equipment,"The maintenance teaching, training and testing of a certain type of control and command system fails to achieve satisfactory results due to limitations in quantity and complexity. This paper discusses the development of a virtual maintenance training and testing system. Based on fault cases and B/S mode, the system realizes online virtual maintenance training and testing, greatly improving the training efficiency and effectiveness.",2011,0, 4753,Smart Scarecrow,"Thailand is an agricultural country located in Southeast Asia. We can produce various kinds of food of not only good quality but also huge quantity. One problem for both quality and quantity control of our food products is harmful pests such as birds, ants, weevils, aphids, grasshoppers, etc. Therefore, this project intends to develop a computer system that can chase birds from a farm. The smart scarecrow is developed by using an image processing technique. The overall work is software development. The system is designed to detect pest birds from a real-time video frame; after it detects the birds, it generates a loud sound to chase them. The system consists of four major components: 1) image acquisition, 2) image preprocessing, 3) bird recognition and 4) generating sound. The experiment has been conducted in order to assess the following qualities: 1) usability, to prove that the system can detect and scare pest birds and 2) efficiency, to show that the system can work with a high accuracy.",2011,0, 4754,Selecting Oligonucleotide Probes for Whole-Genome Tiling Arrays with a Cross-Hybridization Potential,"For designing oligonucleotide tiling arrays, popular current methods still rely on simple criteria like Hamming distance or longest common factors, neglecting base stacking effects which strongly contribute to binding energies.
Consequently, probes are often prone to cross-hybridization which reduces the signal-to-noise ratio and complicates downstream analysis. We propose the first computationally efficient method using hybridization energy to identify specific oligonucleotide probes. Our Cross-Hybridization Potential (CHP) is computed with a Nearest Neighbor Alignment, which efficiently estimates a lower bound for the Gibbs free energy of the duplex formed by two DNA sequences of bounded length. It is derived from our simplified reformulation of t-gap insertion-deletion-like metrics. The computations are accelerated by a filter using weighted ungapped q-grams to arrive at seeds. The computation of the CHP is implemented in our software OSProbes, available under the GPL, which computes sets of viable probe candidates. The user can choose a trade-off between running time and quality of probes selected. We obtain very favorable results in comparison with prior approaches with respect to specificity and sensitivity for cross-hybridization and genome coverage with high-specificity probes. The combination of OSProbes and our Tileomatic method, which computes optimal tiling paths from candidate sets, yields globally optimal tiling arrays, balancing probe distance, hybridization conditions, and uniqueness of hybridization.",2011,0, 4755,Does Socio-Technical Congruence Have an Effect on Software Build Success? A Study of Coordination in a Software Project,"Socio-technical congruence is an approach that measures coordination by examining the alignment between the technical dependencies and the social coordination in the project. We conduct a case study of coordination in the IBM Rational Team Concert project, which consists of 151 developers over seven geographically distributed sites, and expect that high congruence leads to a high probability of successful builds. We examine this relationship by applying two congruence measurements: an unweighted congruence measure from previous literature, and a weighted measure that overcomes limitations of the existing measure. We discover that there is a relationship between socio-technical congruence and build success probability, but only for certain build types, and observe that in some situations, higher congruence actually leads to lower build success rates. We also observe that a large proportion of zero-congruence builds are successful, and that socio-technical gaps in successful builds are larger than gaps in failed builds. Analysis of the social and technical aspects in IBM Rational Team Concert allows us to discuss the effects of congruence on build success. Our findings provide implications with respect to the limits of applicability of socio-technical congruence and suggest further improvements of socio-technical congruence to study coordination.",2011,0, 4756,Sub-graph Mining: Identifying Micro-architectures in Evolving Object-Oriented Software,"Developers introduce novel and undocumented micro-architectures when performing evolution tasks on object-oriented applications. We are interested in understanding whether those organizations of classes and relations can bear, much like cataloged design and anti-patterns, potential harm or benefit to an object-oriented application. We present SGFinder, a sub-graph mining approach and tool based on an efficient enumeration technique to identify recurring micro-architectures in object-oriented class diagrams. 
Once SGFinder has detected instances of micro-architectures, we exploit these instances to identify their desirable properties, such as stability, or unwanted properties, such as change or fault proneness. We perform a feasibility study of our approach by applying SGFinder on the reverse-engineered class diagrams of several releases of two Java applications: ArgoUML and Rhino. We characterize and highlight some of the most interesting micro-architectures, e.g., the most fault prone and the most stable, and conclude that SGFinder opens the way to further interesting studies.",2011,0, 4757,Revealing Mistakes in Concern Mapping Tasks: An Experimental Evaluation,"Concern mapping is the activity of assigning a stakeholder's concern to its corresponding elements in the source code. This activity is essential to guide software maintainers in several tasks, such as understanding and restructuring the implementation of existing concerns. Even though different techniques are emerging to facilitate the concern mapping process, they are still manual and error-prone according to recent studies. Existing work does not provide any guidance to developers to review and correct concern mappings. In this context, this paper presents the characterization and classification of eight concern mapping mistakes commonly made by developers. These mistakes were found to be associated with various properties of concerns and modules in the source code. The mistake categories were derived from actual mappings of 10 concerns in 12 versions of industry systems. In order to further evaluate to what extent these mistakes also occur in wider contexts, we ran two experiments where 26 subjects mapped 10 concerns in two systems. Our experimental results confirmed the mapping mistakes that often occur when developers need to interact with the source code.",2011,0, 4758,Assistance System for OCL Constraints Adaptation during Metamodel Evolution,"Metamodels, like other artifacts, evolve over time. In most cases, this evolution is performed manually by stepwise adaptation, and metamodels are typically described using the MOF language. Often OCL constraints are added to metamodels in order to ensure consistency of their instances (models). However, during metamodel evolution these constraints are omitted or manually rewritten, which is time-consuming and error-prone. We propose a tool to help the designer make a decision on the constraints attached to a metamodel during its evolution. Thus, the tool highlights the constraints that should disappear after evolution and makes suggestions for those which need adaptation to remain consistent. For the latter case, we formally describe how the OCL constraints have to be transformed to preserve their syntactical correctness. Our adaptation rules are defined using QVT, which is the OMG standard language for specifying model-to-model transformations.",2011,0, 4759,Using Multivariate Split Analysis for an Improved Maintenance of Automotive Diagnosis Functions,"The number of automotive software functions is continuously growing. With their interactions and dependencies increasing, the diagnosis task of differentiating between symptoms indicating a fault, the fault cause itself and uncorrelated data gets enormously difficult and complex. For instance, up to 40% of automotive software functions are attributable to diagnostic functions, resulting in approximately three million lines of diagnostic code.
The complexity of diagnosis is additionally increased by legal requirements forcing automotive manufacturers to maintain the diagnosis of their cars for 15 years after the end of the car's series production. Clearly, maintaining these complex functions over such an extended time span is a difficult and tedious task. Since data from diagnosis incidents has been transferred back to the OEMs for some years, analysing this data with statistical techniques promises to greatly facilitate diagnosis maintenance. In this paper we use multivariate split analysis to filter diagnosis data for symptoms having a real impact on faults and their repair measures, thus detecting diagnosis functions which have to be updated as they contain irrelevant or erroneous observations and/or repair measurements. A key factor for performing an unbiased split analysis is to determine an ideally representative control data set for a given test data set showing some property whose influence is to be studied. In this paper, we present a performant algorithm for creating such a representative control data set out of a very large initial data collection. This approach facilitates the analysis and maintenance of diagnosis functions. It has been successfully evaluated on case studies and is part of BMW's continuous improvement process for automotive diagnosis.",2011,0, 4760,Safety Monitoring for ETCS with 4-valued LTL,"When verifying the safety of ETCS, testing and formal methods have limitations to some degree. Runtime verification is effective in detecting deviations between the current and the expected system behaviors. To improve the accuracy of runtime monitoring, 4-valued LTL (Linear Time Logic) semantics and a formula-rewriting-based algorithm are proposed. Furthermore, an approximation technique is presented for 4-valued LTL formulae to make the verification procedure highly efficient. Finally, the method is applied to the European Train Control System (ETCS) by monitoring several scenario traces. The experimental results show that the 4-valued LTL semantics are able to generate the most accurate verification outcomes. It can also be found that the approximation technique improves the verification efficiency noticeably in some cases.",2011,0, 4761,On the Utility of a Defect Prediction Model during HW/SW Integration Testing: A Retrospective Case Study,"Testing is an important and cost-intensive part of the software development life cycle. Defect prediction models try to identify error-prone components, so that these can be tested earlier or more in-depth, and thus improve the cost-effectiveness during testing. Such models have been researched extensively, but whether and when they are applicable in practice is still debated. The applicability depends on many factors, and we argue that it cannot be analyzed without a specific scenario in mind. In this paper, we therefore present an analysis of the utility for one case study, based on data collected during the hardware/software integration test of a system from the avionic domain. An analysis of all defects found during this phase reveals that more than half of them are not identifiable by a code-based defect prediction model. We then investigate the predictive performance of different prediction models for the remaining defects. The small ratio of defective instances results in relatively poor performance. Our analysis of the cost-effectiveness then shows that the prediction model is not able to outperform simple models, which order files either randomly or by lines of code.
Hence, in our setup, the application of defect prediction models does not offer any advantage in practice.",2011,0, 4762,Prioritizing Requirements-Based Regression Test Cases: A Goal-Driven Practice,"Any changes for maintenance or evolution purposes may break existing working features, or may violate the requirements established in the previous software releases. Regression testing is essential to avoid these problems, but it may end up executing many time-consuming test cases. This paper tries to address prioritizing requirements-based regression test cases. To this end, system-level testing is focused on two practical issues in industrial environments: i) addressing multiple goals regarding quality, cost and effort in a project, and ii) using non-code metrics due to the lack of detailed code metrics in some situations. This paper reports a goal-driven practice at Research In Motion (RIM) towards prioritizing requirements-based test cases regarding these issues. Goal-Question-Metric (GQM) is adopted in identifying metrics for prioritization. Two sample goals are discussed to demonstrate the approach: detecting bugs earlier and maintaining testing effort. We use two releases of a prototype Web-based email client to conduct a set of experiments based on the two mentioned goals. Finally, we discuss lessons learned from applying the goal-driven approach and experiments, and we propose a few directions for future research.",2011,0, 4763,On demand check pointing for grid application reliability using communicating process model,The objective of the work is to propose an on-demand asynchronous check pointing technique for the fault recovery of a grid application in a communicating process approach. The formal modelling of processes using LOTOS is done wherein the process features are declared in terms of possibilities of rollback and replicas permitted to accept the assigned tasks as decided by the scheduler. If any process tends to become faulty at run time it will be detected by the check pointing mechanism through the Task Dependency Graph (TDG) and the respective worst case execution time and deadline parameters are used to decide the schedulability. The Asynchronous Check Pointing On Demand (ACP-OD) approach is used to enhance the grid application reliability through the needed fault tolerant services. The scheduling of concurrent tasks can be done using the proposed Concurrent Task Scheduling Algorithm (CTSA) to recover from the faulty states using replication or rollback techniques. The check pointing and replication mechanisms have been used in which synchronization between communicating processes is needed to enhance the efficiency of the check pointing mechanism. The model is tested with a number of rollback variables treating the application as a Stochastic Activity Network (SAN) using Mobius.,2011,0, 4764,Blind Image Quality Assessment Using a General Regression Neural Network,"We develop a no-reference image quality assessment (QA) algorithm that deploys a general regression neural network (GRNN). The new algorithm is trained on and successfully assesses image quality, relative to human subjectivity, across a range of distortion types. The features deployed for QA include the mean value of the phase congruency image, the entropy of the phase congruency image, the entropy of the distorted image, and the gradient of the distorted image.
Image quality estimation is accomplished by approximating the functional relationship between these features and subjective mean opinion scores using a GRNN. Our experimental results show that the new method accords closely with human subjective judgment.",2011,0, 4765,SCIPS: An emulation methodology for fault injection in processor caches,"Due to the high level of radiation endured by space systems, fault-tolerant verification is a critical design step for these systems. Space-system designers use fault-injection tools to introduce system faults and observe the system's response to these faults. Since a processor's cache accounts for a large percentage of total chip area and is thus more likely to be affected by radiation, the cache represents a key system component for fault-tolerant verification. Unfortunately, processor architectures limit cache accessibility, making direct fault injection into cache blocks impossible. Therefore, cache faults can be emulated by injecting faults into data accessed by load instructions. In this paper, we introduce SPFI-TILE, a software-based fault-injection tool for many-core devices. SPFI-TILE emulates cache fault injections by randomly injecting faults into load instructions. In order to provide unbiased fault injections, we present the cache fault-injection methodology SCIPS (Smooth Cache Injection Per Skipping). Results from MATLAB simulation and integration with SPFI-TILE reveal that SCIPS successfully distributes fault-injection probabilities across load instructions, providing an unbiased evaluation and thus more accurate verification of fault tolerance in cache memories.",2011,0, 4766,Fault tolerance in ZigBee wireless sensor networks,"Wireless sensor networks (WSN) based on the IEEE 802.15.4 Personal Area Network standard are finding increasing use in the home automation and emerging smart energy markets. The network and application layers, based on the ZigBee 2007 PRO Standard, provide a convenient framework for component-based software that supports customer solutions from multiple vendors. This technology is supported by System-on-a-Chip solutions, resulting in extremely small and low-power nodes. The Wireless Connections in Space Project addresses the aerospace flight domain for both flight-critical and non-critical avionics. WSNs provide the inherent fault tolerance required for aerospace applications utilizing such technology. The team from Ames Research Center has developed techniques for assessing the fault tolerance of ZigBee WSNs challenged by radio frequency (RF) interference or WSN node failure.",2011,0, 4767,Data-driven framework for detecting anomalies in field failure data,"This paper discusses the design of a data-driven framework for detecting anomalies in the automotive field failure and repair data. The anomaly detection framework detects anomalies at two levels: 1) It detects anomalies in repair data using system-level fault model (or fault dependency-matrix) and diagnostic reasoner; 2) It detects anomalies in diagnostic trouble code (DTC) data using operating sensory parameter identifiers (PIDs) data mining. The system-level fault model provides a way to capture causal relationships between failures and symptoms of a given system. A repair is declared as anomalous if it does not match the repair recommended by the fault model and diagnostic reasoner. 
The PIDs data mining detects anomalies in DTC data by detecting patterns in the associated PIDs using various statistical techniques such as scatter plots, clustering and decision trees. The DTC anomalies could be either due to errors in the preconditions under which the DTCs are designed to set or errors while implementing them in the software. The PIDs data mining module provides focused feedback to engineers for detecting the errors in DTC software algorithms and enhancing the diagnostic design of DTCs during the early stages of vehicle production. We demonstrate the data-driven framework on an automobile fuel vapor pressure sensor problem.",2011,0, 4768,Automated generation of test cases from output domain and critical regions of embedded systems using genetic algorithms,"A primary issue in black-box testing is how to generate adequate test cases from the input domain of the system under test on the basis of the user's requirement specification. However, for some types of systems, including embedded systems, developing test cases from the output domain is more suitable than developing them from the input domain, especially when the output domain is smaller. Exhaustive testing of embedded systems in the critical regions is important as embedded systems must be basically fail-safe systems. The critical regions of the input space of the embedded systems can be pre-identified and supplied as seeds. In this paper, the authors present an Automated Test Case Generator (ATCG) that uses Genetic algorithms (GAs) to automate the generation of test cases from the output domain and the criticality regions of an embedded system. The approach is applied to a pilot project `Temperature monitoring and controlling of Nuclear Reactor System' (TMCNRS), which is an embedded system developed using a modified Cleanroom Software Engineering methodology. The ATCG generates test cases which are useful to conduct pseudo-exhaustive testing to detect single, double and several multimode faults in the system. The generator considers most of the combinations of outputs, and finds the corresponding inputs while optimizing the number of test cases generated.",2011,0, 4769,Predicting the software performance during feasibility study,"Performance is an important non-functional attribute to be considered for producing quality software. Software performance engineering (SPE) is a methodology having a significant role in software engineering to assess the performance of software systems early in the lifecycle. Gathering performance data is an essential aspect of the SPE approach. The authors have proposed a methodology to gather data during the feasibility study by exploiting the use case point approach, gearing factor and COCOMO model. The proposed methodology is used to estimate the performance data required for performance assessment in the integrated performance prediction process (IP3) model. The gathered data is used as the input for solving the two models, (i) the use case performance model and (ii) the system model. The methodology is illustrated with a case study of an airline reservation application. A regression analysis is carried out to validate the response time obtained in the use case performance model. The analysis shows that the proposed estimation can be used along with performance walkthrough in data gathering. The performance metrics are obtained by solving the system model, and the behaviour of the hardware resources is observed.
Bottleneck resources are identified and the performance parameters are optimised using sensitivity analysis.",2011,0, 4770,Effort estimation of component-based software development a survey,"Effort estimation of software development is an important sub-discipline in software engineering. It has been the focus of much research mostly over the last couple of decades. In recent years, software development turned into engineering through the introduction of component-based software development (CBSD). The industry has reported significant advantages in using CBSD over traditional software development paradigms. However, the introduction of CBSD has also brought a host of unique challenges to software effort estimation which are quite different from those associated with traditional software development. Owing to the increasing tendency to use the CBSD approach in recent years, it is clear that effort estimation of CBSD is particularly an important area of research with a direct relevance to industry. In this study, the authors survey the most up-to-date research work published on predicting the effort of CBSD. The authors analyse the surveyed approaches in terms of modelling technique, the type of data required for their use, the type of estimation provided, lifecycle activities covered and their level of acceptability with regard to any validation. The aim of this survey is to provide a better understanding of the cost and schedule estimation approaches for CBSD.",2011,0, 4771,Using lightweight virtual machines to achieve resource adaptation in middleware,"Current middleware does not offer enough support to cover the demands of emerging application domains, such as embedded systems or those featuring distributed multimedia services. These kinds of applications often have timeliness constraints and yet are highly susceptible to dynamic and unexpected changes in their environment. There is then a clear need to introduce adaptation in order for these applications to deal with such unpredictable changes. Resource adaptation can be achieved by using scheduling or allocation algorithms, for large-scale applications, but such a task can be complex and error-prone. Virtual machines (VMs) represent a higher-level approach, whereby resources can be managed without dealing with lower-level details, such as scheduling algorithms, scheduling parameters and so on. However, the overhead penalty imposed by traditional VMs is unsuitable for real-time applications. On the other hand, virtualisation has not been previously exploited as a means to achieve resource adaptation. This study presents a lightweight VM framework that exploits application-level virtualisation to achieve resource adaptation in middleware for soft real-time applications. Experimental results are presented to validate the approach.",2011,0, 4772,Investigation of automatic prediction of software quality,"The subjective nature of software code quality makes it a complex topic. Most software managers and companies rely on the subjective evaluations of experts to determine software code quality. Software companies can save time and money by utilizing a model that could accurately predict different code quality factors during and after the production of software. Previous research builds a model predicting the difference between bad and excellent software. This paper expands this to a larger range of bad, poor, fair, good, and excellent, and builds a model predicting these classes. 
This research investigates decision trees and ensemble learning from the machine learning tool Weka as primary classifier models predicting reusability, flexibility, understandability, functionality, extendibility, effectiveness, and total quality of software code.",2011,0, 4773,Towards near-real time data property specification and verification for Arctic hyperspectral sensor data,"Environmental scientists, especially those conducting studies in remote areas such as the Arctic, can benefit from assessing data quality from autonomous sensors in near-real time. The Data Assessment Run-Time (DART) framework was developed to allow environmental scientists to specify and verify data properties associated with autonomous sensors. Data properties are logical statements about data values associated with sensors and their relationship with other sensor output or properties derived from historical data. The properties can be verified at near-real time, i.e., as the data are being collected in the field, or through post-processing routines after the data has been collected. This paper describes a case study that evaluates the specification of data properties associated with hyperspectral sensor data and how the DART framework was used to verify these data in both near-real time and through post-processing.",2011,0, 4774,An End-to-End Virtual Path Construction System for Stable Live Video Streaming over Heterogeneous Wireless Networks,"In this paper, we propose an effective end-to-end virtual path construction system, which exploits path diversity over heterogeneous wireless networks. The goal of the proposed system is to provide a high quality live video streaming service over heterogeneous wireless networks. First, we propose a packetization-aware fountain code to integrate multiple physical paths efficiently and increase the fountain decoding probability over wireless packet switching networks. Second, we present a simple but effective physical path selection algorithm to maximize the effective video encoding rate while satisfying delay and fountain decoding failure rate constraints. The proposed system is fully implemented in software and examined over real WLAN and HSDPA networks.",2011,0, 4775,Fault Management of Robot Software Components Based on OPRoS,"Component-based robot development has been a vibrant research topic in robotics due to its reusability and interoperability benefits. However, robot application developers using robot components must invest non-trivial amount of time and effort applying fault tolerance techniques into their robot applications. Despite the need for a common, framework-level fault management, the majority of existing robot software frameworks has failed to provide systematic fault management features. In this paper, we propose a fault management method to detect, diagnose, isolate and recover faults based on the OPRoS software framework. The proposed method provides a collective, framework-level management for commonly encountered robot software faults, thereby reducing the application developers' efforts while enhancing the robot system reliability. To verify the effectiveness of the proposed approach, we have implemented a prototype reconnaissance robot using OPRoS components and injected different types of faults. 
The results of the experiments have shown that our approach effectively detects, diagnoses, and recovers component faults using the software framework.",2011,0, 4776,Intelligent Trend Indices in Detecting Changes of Operating Conditions,"Temporal reasoning is a very valuable tool to diagnose and control slow processes. Identified trends are also used in data compression and fault diagnosis. Although humans are very good at visually detecting such patterns, for control system software it is a difficult problem including trend extraction and similarity analysis. In this paper, an intelligent trend index is developed from scaled measurements. The scaling is based on monotonously increasing, nonlinear functions, which are generated with generalised norms and moments. The monotonous increase is ensured with constraint handling. Triangular episodes are classified with the trend index and the derivative of it. Severity of the situations is evaluated by a deviation index which takes into account the scaled values of the measurements.",2011,0, 4777,Software reliability model with bathtub-shaped fault detection rate,"This paper proposes a software reliability model with a bathtub-shaped fault detection rate. We discuss how the inherent characteristics of the software testing process support the three phases of the bathtub; the first phase with a decreasing fault detection rate arises from the removal of simple, yet frequent faults like syntax errors and typos; the second phase possesses a constant fault detection rate marking the beginning of functional requirements testing; the third and final code comprehension stage exhibits an increasing fault detection rate because testers are now familiar with the system and can focus their attention on the outstanding and as yet untested portions of code. We also discuss how eliminating one of the testing phases gives rise to the burn-in model, which is a special case of the bathtub model. We compare the performance of the bathtub and burn-in models with the three classical software reliability models using the Predictive Mean Square Error and Akaike Information Criterion, by applying these models to a data set in the literature. Our results suggest that the bathtub model best describes the observed data and also most precisely predicts the future data points compared to the other popular software reliability models. The bathtub model can thus be used to provide accurate predictions during the testing process and guide optimal release time decisions.",2011,0, 4778,Consequence Oriented Self-Healing and Autonomous Diagnosis for Highly Reliable Systems and Software,"Computing software and systems have become increasingly large and complex. As their dependability and autonomy are of great concern, self-healing is an ongoing challenge. This paper presents an innovative model and technology to realize the self-healing function under the real-time requirement. The proposed approach, different from existing technologies, is based on a new concept defined as consequence-oriented diagnosis and healing. Derived from the new concept, a prototype model for proactive self-healing actions is presented. Then, a hybrid diagnosis tool is proposed that takes advantages from the Multivariate Decision Diagram, Fuzzy Logic, and Neural Networks, achieving an efficient, effective, accurate, and intelligent result. The consequence-oriented diagnosis and self-healing function is also implemented. 
The experimental results exhibit that the innovative system is very effective and precise in predicting the consequence, and in preventing resulting software and system failures.",2011,0, 4779,Blind Image Quality Assessment: From Natural Scene Statistics to Perceptual Quality,"Our approach to blind image quality assessment (IQA) is based on the hypothesis that natural scenes possess certain statistical properties which are altered in the presence of distortion, rendering them un-natural; and that by characterizing this un-naturalness using scene statistics, one can identify the distortion afflicting the image and perform no-reference (NR) IQA. Based on this theory, we propose an (NR)/blind algorithm-the Distortion Identification-based Image Verity and INtegrity Evaluation (DIIVINE) index-that assesses the quality of a distorted image without need for a reference image. DIIVINE is based on a 2-stage framework involving distortion identification followed by distortion-specific quality assessment. DIIVINE is capable of assessing the quality of a distorted image across multiple distortion categories, as against most NR IQA algorithms that are distortion-specific in nature. DIIVINE is based on natural scene statistics which govern the behavior of natural images. In this paper, we detail the principles underlying DIIVINE, the statistical features extracted and their relevance to perception and thoroughly evaluate the algorithm on the popular LIVE IQA database. Further, we compare the performance of DIIVINE against leading full-reference (FR) IQA algorithms and demonstrate that DIIVINE is statistically superior to the often used measure of peak signal-to-noise ratio (PSNR) and statistically equivalent to the popular structural similarity index (SSIM). A software release of DIIVINE has been made available online: http://live.ece.utexas.edu/research/quality/DIIVINE_release.zip for public use and evaluation.",2011,0, 4780,Transmission line fault detection and classification,"Transmission line protection is an important issue in power system engineering because 85-87% of power system faults occur in transmission lines. This paper presents a technique to detect and classify the different shunt faults on transmission lines for quick and reliable operation of protection schemes. Discrimination among different types of faults on the transmission lines is achieved by application of evolutionary programming tools. PSCAD/EMTDC software is used to simulate different operating and fault conditions on a high voltage transmission line, namely single phase to ground fault, line to line fault, double line to ground and three phase short circuit. The discrete wavelet transform (DWT) is applied for decomposition of fault transients, because of its ability to extract information from the transient signal, simultaneously both in time and frequency domain. The data sets which are obtained from the DWT are used for training and testing the SVM architecture. After extracting useful features from the measured signals, a decision of fault or no fault on any phase or multiple phases of a transmission line is carried out using three SVM classifiers. The ground detection task is carried out by a proposed ground index. Gaussian radial basis kernel function (RBF) has been used, and performances of classifiers have been evaluated based on fault classification accuracy.
In order to determine the optimal parametric settings of an SVM classifier (such as the type of kernel function, its associated parameter, and the regularization parameter c), fivefold cross-validation has been applied to the training set. It is observed that an SVM with an RBF kernel provides better fault classification accuracy than an SVM with a polynomial kernel. It has been found that the proposed scheme is very fast and accurate and it proved to be a robust classifier for digital distance protection.",2011,0, 4781,Experimental study report on Opto-electronic sensor based gaze tracker system,The paper presents a smart assistive technology to improve the quality of life of people with severe mobility disorders by giving them independence of motion despite the difficulties in moving their limbs. The objective of this paper is to develop an Opto-electronic sensor based gaze tracker apparatus to detect the eye gaze direction based on the movement of the iris. Eventually this detection helps the user to control and steer the wheelchair by themselves through their eye gaze. The principle of operation of this system is based on the reflection of incident light in the iris and sclera regions of the eye. The user's gaze tracker is in the form of an eye-goggle embedded with an infrared source and detectors. The performance of various optical sources and detectors is tested and the results are graphically represented. A template database and an algorithm are generated to provide real-time computational analysis.,2011,0, 4782,Network architecture for smart grids,"The smart grid is designed to make the existing power grid system function spontaneously and independently without human intervention. Sensors are deployed to detect faults in the flow of power. In the proposed work, smart monitoring and control are done by intelligent electronic devices. IEDs (intelligent electronic devices) monitor and record the value of power generated, and its corresponding voltage and frequency, which in turn is fed into the demand-supply chain. A network architecture is designed to determine the flow of power from the generation end to the consumers. A demand-supply curve is embedded in the architecture to map the power generated with the supply. If the demand at a particular instant of time is higher than the supply, then a tariff is fixed and a warning is given to the users regarding the rate of payment, to meet the higher tariff. Trust-based authenticity is provided for security.",2011,0, 4783,A single-specification principle for functional-to-timing simulator interface design,"Microarchitectural simulators are often partitioned into separate, but interacting, functional and timing simulators. These simulators interact through some interface whose level of detail depends upon the needs of the timing simulator. The level of detail supported by the interface profoundly affects the speed of the functional simulator, therefore, it is desirable to provide only the detail that is actually required. However, as the microarchitectural design space is explored, these needs may change, requiring corresponding time-consuming and error-prone changes to the interface. Thus simulator developers are tempted to include extra detail in the interface """"just in case"""" it is needed later, trading off simulator speed for development time.
We show that this tradeoff is unnecessary if a single-specification design principle is practiced: write the simulator once with an extremely detailed interface and then derive less-detailed interfaces from this detailed simulator. We further show that the use of an Architectural Description Language (ADL) with constructs for interface specification makes it possible to synthesize simulators with less-detailed interfaces from a highly-detailed specification with only a few lines of code and minimal effort. The speed of the resulting low-detail simulators is up to 14.4 times the speed of high-detail simulators.",2011,0, 4784,Architectures for online error detection and recovery in multicore processors,"The huge investment in the design and production of multicore processors may be put at risk because the emerging highly miniaturized but unreliable fabrication technologies will impose significant barriers to the life-long reliable operation of future chips. Extremely complex, massively parallel, multi-core processor chips fabricated in these technologies will become more vulnerable to: (a) environmental disturbances that produce transient (or soft) errors, (b) latent manufacturing defects as well as aging/wearout phenomena that produce permanent (or hard) errors, and (c) verification inefficiencies that allow important design bugs to escape in the system. In an effort to cope with these reliability threats, several research teams have recently proposed multicore processor architectures that provide low-cost dependability guarantees against hardware errors and design bugs. This paper focuses on dependable multicore processor architectures that integrate solutions for online error detection, diagnosis, recovery, and repair during field operation. It discusses taxonomy of representative approaches and presents a qualitative comparison based on: hardware cost, performance overhead, types of faults detected, and detection latency. It also describes in more detail three recently proposed effective architectural approaches: a software-anomaly detection technique (SWAT), a dynamic verification technique (Argus), and a core salvaging methodology.",2011,0, 4785,Modeling manufacturing process variation for design and test,"For process nodes 22nm and below, a multitude of new manufacturing solutions have been proposed to improve the yield of devices being manufactured. With these new solutions come an increasing number of defect mechanisms. There is a need to model and characterize these new defect mechanisms so that (i) ATPG patterns can be properly targeted, (ii) defects can be properly diagnosed and addressed at design or manufacturing level. This presentation reviews currently available defect modeling and test solutions and summarizes open issues faced by the industry today. It also explores the topic of creating special test structures to expose manufacturing process parameters which can be used as input to software defect models to predict die specific defect locations for better targeting of test.",2011,0, 4786,Multi-level attacks: An emerging security concern for cryptographic hardware,"Modern hardware and software implementations of cryptographic algorithms are subject to multiple sophisticated attacks, such as differential power analysis (DPA) and fault-based attacks. 
In addition, modern integrated circuit (IC) design and manufacturing follows a horizontal business model where different third-party vendors provide hardware, software and manufacturing services, thus making it difficult to ensure the trustworthiness of the entire process. Such business practices make the designs vulnerable to hard-to-detect malicious modifications by an adversary, termed Hardware Trojans. In this paper, we show that a malicious nexus between multiple parties at different stages of the design, manufacturing and deployment makes the attacks on cryptographic hardware more potent. We describe the general model of such an attack, which we refer to as a Multi-level Attack, and provide an example of it on the hardware implementation of the Advanced Encryption Standard (AES) algorithm, where a hardware Trojan is embedded in the design. We then analytically show that the resultant attack poses a significantly stronger threat than that from a Trojan attack by a single adversary. We validate our theoretical analysis using power simulation results as well as hardware measurement and emulation on an FPGA platform.",2011,0, 4787,Markov Chain Based Monitoring Service for Fault Tolerance in Mobile Cloud Computing,"Mobile cloud computing is a combination of mobile computing and cloud computing, and provides a cloud computing environment through various mobile devices. Recently, due to the rapid expansion of the smart phone market and the wireless communication environment, mobile devices are considered as resources for large-scale distributed processing. But mobile devices have several problems, such as unstable wireless connections, limited power capacity, low communication bandwidth and frequent location changes. As resource providers, mobile devices can join and leave the distributed computing environment unpredictably. This interrupts the ongoing operation, and the delay or failure of completing the operation may cause a system failure. Because of low reliability and no guarantee of completing an operation, it is difficult to use a mobile device as a resource. That means that mobile devices are volatile. Therefore, we should consider volatility, one of the dynamic characteristics of mobile devices, for stable resource provision. In this paper, we propose a monitoring technique based on the Markov Chain model, which analyzes and predicts resource states. With the proposed monitoring technique and state prediction, a cloud system becomes more resistant to the fault problem caused by the volatility of mobile devices. The proposed technique diminishes the volatility of a mobile device through modeling the patterns of past states and making a prediction of the future state of a mobile device.",2011,0, 4788,Adaptive Genetic Algorithm for QoS-aware Service Selection,"An adaptive Genetic Algorithm is presented to select an optimal web service composite plan from a large number of composite plans on the basis of global Quality-of-Service (QoS) constraints. In this Genetic Algorithm, a population diversity measurement and an adaptive crossover strategy are proposed to further improve the efficiency and convergence of the Genetic Algorithm. The probability value of the crossover operation can be set according to the combination of population diversity and individual fitness. The algorithm can obtain a better composite service plan because it accords well with the characteristics of web service selection.
Some simulation results on web service selection with global QoS constraints have shown that the adaptive Genetic Algorithm can quickly obtain a better composite service plan that satisfies the global QoS requirements.",2011,0, 4789,Software faults prediction using multiple classifiers,"In recent years, the use of machine learning algorithms (classifiers) has proven to be of great value in solving a variety of problems in software engineering, including software faults prediction. This paper extends the idea of predicting software faults by using an ensemble of classifiers, which has been shown to improve classification performance in other research fields. Benchmarking results on two NASA public datasets show all the ensembles achieving higher accuracy rates compared with individual classifiers. In addition, boosting with AR and DT as components of an ensemble is more robust for predicting software faults.",2011,0, 4790,A systematic approach to assemble sequence diagrams from use case scenarios,"The critical task of developing executable system-level sequence diagrams to represent those scenarios remains a manual task that has to be entirely performed by the tester. This obviously is time-consuming, error-prone and very costly, since even the smallest systems can potentially have a large number of scenarios. In this paper, we propose an approach to semi-automate the construction of system-level sequence diagrams. The approach is based on a traceability framework, which allows its users to efficiently specify scenarios at a high level using use case descriptions, while systematically building the corresponding sequence diagrams. An ATM case study is presented to demonstrate the feasibility of the proposed approach.",2011,0, 4791,Phase-based tuning for better utilization of performance-asymmetric multicore processors,"The latest trend towards performance asymmetry among cores on a single chip of a multicore processor is posing new challenges. For effective utilization of these performance-asymmetric multicore processors, code sections of a program must be assigned to cores such that the resource needs of code sections closely match resource availability at the assigned core. Determining this assignment manually is tedious, error-prone, and significantly complicates software development. To solve this problem, we contribute a transparent and fully-automatic process that we call phase-based tuning, which adapts an application to effectively utilize performance-asymmetric multicores. Compared to the stock Linux scheduler we see a 36% average process speedup, while maintaining fairness and with negligible overheads.",2011,0, 4792,A comparison study of automatic speech quality assessors sensitive to packet loss burstiness,"The paper delves into the rating behavior of new emerging automatic quality assessors of VoIP calls subject to a bursty packet loss process. The examined speech quality assessment (SQA) algorithms are able to estimate the speech quality of live VoIP calls at run-time using control information extracted from the header content of received packets. They are especially designed to be sensitive to packet loss burstiness. The performance evaluation study is performed using a dedicated set-up software-based SQA framework. It offers a personalized packet killer and includes implementations of four SQA algorithms. A speech quality database, which covers a wide range of bursty packet loss conditions, has been created and then thoroughly analyzed.
Our important findings are the following: (1) all examined automatic bursty-loss aware speech quality assessors achieve a satisfactory correlation under upper (>20%) and lower (<10%) ranges of the packet loss process; (2) they exhibit a clear weakness in assessing speech quality under a moderate packet loss process; (3) the sequence-by-sequence accuracy of the examined SQA algorithms should be addressed in detail for further precision.",2011,0, 4793,Predicting upgrade failures using dependency analysis,"Upgrades in component based systems can disrupt other components. Being able to predict the possible consequence of an upgrade just by analysing inter-component dependencies can avoid errors and downtime. In this paper we precisely identify in a repository the components p whose upgrades force a large set of other components to be upgraded. We are also able to discriminate whether all the future versions of p have the same impact, or whether there are different classes of future versions that have different impacts. We perform our analysis on Debian, one of the largest FOSS distributions.",2011,0, 4794,Software reliability prediction model based on PSO and SVM,"Software reliability prediction classifies software modules as fault-prone modules and less fault-prone modules at an early stage of software development. To address the difficult problem of choosing parameters for the Support Vector Machine (SVM), this paper introduces Particle Swarm Optimization (PSO) to automatically optimize the parameters of the SVM, and constructs a software reliability prediction model based on PSO and SVM. Finally, the paper introduces the Principal Component Analysis (PCA) method to reduce the dimension of the experimental data, and inputs these reduced data into the software reliability prediction model to implement a simulation. The results show that the proposed prediction model surpasses the traditional SVM in prediction performance.",2011,0, 4795,Numerical simulation of FLD based on leaner strain path for fracture on auto panel surface,"In order to address the shortcomings of the traditional measure for auto body panels, we proposed a corrected FLD based on the linear path method and calculation methods. According to the above mentioned programs and the fracture defect diagnosis, we developed a CAE module for fracture defect analysis based on the VC++ environment, and solved problems which traditional sheet metal forming CAE software cannot accurately predict. Then a forming process of a hood is simulated by applying the proposed method and AUTOFORM. A comparison between the two simulation results is done, which shows that the proposed method is better, and then we introduced some methods to optimize the adjustment amount of metal flow and the stamping dies for fracture. Some suggestions are given by investigating the adjustment amount and modification of the stamping die.",2011,0, 4796,MoldFlow analysis and parameters optimization of button seat injecting,"The effects of the gate number and location, packing pressure and time on the filling cavity pressure, distribution of welding lines, warpage distortion quantity and shrinkage of the plastic part were analyzed by means of Moldflow software for a plastic electric button seat. Some possible defects in the products were predicted based on the numerical simulation. The optimal runner gate, the optimal technology scheme and the injecting process parameters were obtained.
The research shows that the analysis results can provide effective references for the injecting mold design. Practice has proved that Moldflow can be used to ensure the reasonableness and efficiency of mold design.",2011,0, 4797,Modeling of shift hydraulic system for automatic transmission,"The main functions of the shift hydraulic system for a stepped automatic transmission are to generate and maintain desired clutch pressures for shifting operation, as well as to initiate gear shifts and control shift quality. It consists of a supply line pressure regulation system, solenoid valve, pressure control valve (PCV), and wet clutch. This paper presents a dynamic model of the shift control system and conducts simulation based on AMESim software. The simulation model is then validated against experimental data. Because the model derived is complex, highly nonlinear and of high order, which is not suitable for use in a controller, the model simplification is carried out based on an energy-based model order reduction method. The results confirm that simulation analysis with AMESim can predict hydraulic system dynamic response accurately and the model simplification is instructive for controller design.",2011,0, 4798,Assessing Oracle Quality with Checked Coverage,"A known problem of traditional coverage metrics is that they do not assess oracle quality - that is, whether the computation result is actually checked against expectations. In this paper, we introduce the concept of checked coverage - the dynamic slice of covered statements that actually influence an oracle. Our experiments on seven open-source projects show that checked coverage is a sure indicator for oracle quality - and even more sensitive than mutation testing, its much more demanding alternative.",2011,0, 4799,An Empirical Evaluation of Assertions as Oracles,"In software testing, an oracle determines whether a test case passes or fails by comparing output from the program under test with the expected output. Since the identification of faults through testing requires that the bug is both exercised and the resulting failure is recognized, it follows that oracles are critical to the efficacy of the testing process. Despite this, there are few rigorous empirical studies of the impact of oracles on effectiveness. In this paper, we report the results of one such experiment in which we exercise seven core Java classes and two sample programs with branch-adequate, input-only (i.e., no oracle) test suites and collect the failures observed by different oracles. For faults, we use synthetic bugs created by the muJava mutation testing tool. In this study we evaluate two oracles: (1) the implicit oracle (or ""null oracle"") provided by the runtime system, and (2) runtime assertions embedded in the implementation (by others) using the Java Modeling Language. The null oracle establishes a baseline measurement of the potential benefit of rigorous oracles, while the assertions represent a more rigorous approach that is sometimes used in practice. The results of our experiments are interesting. First, on a per-method basis, we observe that the null oracle catches less than 11% of the faults, leaving more than 89% uncaught. Second, we observe that the runtime assertions in our subjects are effective at catching about 53% of the faults not caught by the null oracle.
Finally, by analyzing the data using data mining techniques, we observe that simple, code-based metrics can be used to predict which methods are amenable to the use of assertion-based oracles with a high degree of accuracy.",2011,0, 4800,EFindBugs: Effective Error Ranking for FindBugs,"Static analysis tools have been widely used to detect potential defects without executing programs. It helps programmers raise the awareness about subtle correctness issues in the early stage. However, static defect detection tools face the high false positive rate problem. Therefore, programmers have to spend a considerable amount of time on screening out real bugs from a large number of reported warnings, which is time-consuming and inefficient. To alleviate the above problem during the report inspection process, we present EFindBugs to employ an effective two-stage error ranking strategy that suppresses the false positives and ranks the true error reports on top, so that real bugs existing in the programs could be more easily found and fixed by the programmers. In the first stage, EFindBugs initializes the ranking by assigning predefined defect likelihood for each bug pattern and sorting the error reports by the defect likelihood in descending order. In the second stage, EFindbugs optimizes the initial ranking self-adaptively through the feedback from users. This optimization process is executed automatically and based on the correlations among error reports with the same bug pattern. Our experiment on three widely-used Java projects (AspectJ, Tomcat, and Axis) shows that our ranking strategy outperforms the original ranking in Find Bugs in terms of precision, recall and F1-score.",2011,0, 4801,Finding Software Vulnerabilities by Smart Fuzzing,"Nowadays, one of the most effective ways to identify software vulnerabilities by testing is the use of fuzzing, whereby the robustness of software is tested against invalid inputs that play on implementation limits or data boundaries. A high number of random combinations of such inputs are sent to the system through its interfaces. Although fuzzing is a fast technique which detects real errors, its efficiency should be improved. Indeed, the main drawbacks of fuzz testing are its poor coverage which involves missing many errors, and the quality of tests. Enhancing fuzzing with advanced approaches such as: data tainting and coverage analysis would improve its efficiency and make it smarter. This paper will present an idea on how these techniques when combined give better error detection by iteratively guiding executions and generating the most pertinent test cases able to trigger potential vulnerabilities and maximize the coverage of testing.",2011,0, 4802,Cost Optimizations in Runtime Testing and Diagnosis of Systems of Systems,"In practically all development processes tests are used to detect the presence of faults. This is not an exception for critical and high-availability systems. However, these systems cannot be taken offline or duplicated for testing in some cases. This makes runtime testing necessary. This paper presents work aimed at optimizing the three main sources of testing cost: preparation, execution and diagnosis. First, preparation cost is optimized by defining a metric of the runtime testability of the system, used to elaborate an implementation plan of preparative work for runtime testing. 
Second, the interrelated nature of test execution cost and diagnostic cost is highlighted and a new diagnostic test prioritization is introduced.",2011,0, 4803,Constraint generation for software-based post-silicon bug masking with scalable resynthesis technique for constraint optimization,"Due to the dramatic increase in design complexity, verifying the functional correctness of a circuit is becoming more difficult. Therefore, bugs may escape all verification efforts and be detected after tape-out. While most existing solutions focus on fixing the problem on the hardware, in this work we propose a different methodology that tries to generate constraints which can be used to mask the bugs using software. This is achieved by utilizing formal reachability analysis to extract the conditions that can trigger the bugs. By synthesizing the bug conditions, we can derive input constraints for the software so that the hardware bugs will never be exposed. In addition, we observe that such constraints have special characteristics: they have small onset terms and flexible minterms. To facilitate the use of our methodology, we also propose a novel resynthesis technique to reduce the complexity of the constraints. In this way, software can be modified to run correctly on the buggy hardware, which can improve system quality without the high cost of respin.",2011,0, 4804,Occurrence probability analysis of a path at the architectural level,"In this paper, we propose an algorithm to compute the occurrence probability for a given path precisely in an acyclic synthesizable VHDL or software code. This can be useful for the ranking of critical paths and in a variety of problems that include compiler-level architectural optimization and static timing analysis for improved performance. Functions that represent condition statements at the basic blocks are manipulated using Binary Decision Diagrams (BDDs). Experimental results show that the proposed method outperforms the traditional Monte Carlo simulation approach. The latter is shown to be non-scalable as the number of inputs increases.",2011,0, 4805,Improving the control-design process in naval applications using CHIL,"Different steps are necessary to develop power-electronic systems (PES): After the concept is chosen, the controller is designed using software simulations. The control is implemented and tested on the dedicated PES. However, a successful final test requires that the controller hardware is interacting with the PES properly. For this, a real-time simulator suits best, which allows verifying the function of the controller hard- and software in real-time. This is realized by simulating the entire controlled system - including the power electronics - by means of state-space equations. Furthermore, it is possible to verify the generation of the switching-signals independent of the simulation. The proposed real-time simulator offers the possibility to test the full control hardware. On the one hand, the control algorithms can be assessed regarding quality and time expense. On the other hand, the proper operation of the switching-signal generation can be tested without endangering the costly PES.",2011,0, 4806,CT Saturation Detection Based on Waveform Analysis Using a Variable-Length Window,"Saturation of current transformers (CTs) can lead to maloperation of protective relays. Using the waveshape differences between the distorted and undistorted sections of fault current, this paper introduces a novel method to quickly detect CT saturation.
First, a symmetrical variable-length window is defined for the current waveform. The least error squares technique is employed to process the current inside this window and make two estimations for the current samples exactly before and after the window. CT saturation can be identified based on the difference between these two estimations. The accurate performance of this method is independent of the CT parameters, such as CT remanence and its magnetization curve. Moreover, the proposed method is not influenced by the fault current characteristics, noise, etc., since it is based on the significant differences between the distorted and undistorted fault currents. Extensive simulation studies were performed using PSCAD/EMTDC software and the fast and reliable response of the proposed method for various conditions, including very fast and mild saturation events, was demonstrated.",2011,0, 4807,Fast Inter-Mode Decision Algorithm Based on Contextual Mode and Priority Information for H.264/AVC Video Encoding System,"The recent H.264/AVC video coding standard provides a higher coding efficiency than previous standards. H.264/AVC achieves a bit rate saving of more than 50% with many new technologies, but it shows very heavy computational complexity. In this paper, a fast mode decision scheme for inter-frame coding is proposed to reduce the computational complexity for the H.264/AVC video encoding system. To reduce the block mode decision complexity in inter-frame coding, we use the contextual information based on the co-located and neighboring macroblocks (MBs) to detect a proper MB that can be early stopped. Then, for the current MB, priority information of the context is suggested for adding more mode types adaptively. The proposed algorithm shows average speedup factors of 59.11 ~ 77.41% for various sequences with a negligible bit increment and a minimal loss of image quality, in the JM 11.0 reference software.",2011,0, 4808,Simulation Based Functional and Performance Evaluation of Robot Components and Modules,"This paper presents a simulation based test method for functional and performance evaluation of robotic components and modules. In the proposed test method, the function test procedure consists of unit, state, and interface tests which assess if the functional specifications of robot component 1 or module 2 are met. As for the performance test, the simulation environment provides a down scaled virtual work space for performance testing of a robot module accommodating virtual devices in conformity with the detailed performance specifications of real robot components. The proposed method can be used for verification of the reliability of robot modules and components, which prevents their faults before their usage in real applications. In addition, the developed test system can be extended to support various test conditions implying possible cost saving for additional tests.",2011,0, 4809,Modeling Variability from Requirements to Runtime,"In software product line (SPL) engineering, a software configuration can be obtained through a valid selection of features represented in a feature model (FM). With a strong separation between requirements and reusable components and a deep impact of high level choices on technical parts, determining and configuring a well-adapted software configuration is a long, cumbersome and error-prone activity.
This paper presents a modeling process in which variability sources are separated into different FMs and inter-related by propositional constraints, while consistency checking and propagation of variability choices are automated. We show how the variability requirements can be expressed and then refined at design time so that the set of valid software configurations to be considered at run time may be highly reduced. Software tools support the approach and some experimentations on a video surveillance SPL are also reported.",2011,0, 4810,Towards a MDE Transformation Workflow for Dependability Analysis,"In the last ten years, Model Driven Engineering (MDE) approaches have been extensively used for the analysis of extra-functional properties of complex systems, like safety, dependability, security, predictability, quality of service. To this purpose, engineering languages (like UML and AADL) have been extended with additional features to model the required non-functional attributes, and transformations have been used to automatically generate the analysis models to be solved by appropriate analysis tools. In most of the available works, however, the transformations are not integrated into a more general development process, aimed to support both domain-specific design analysis and verification of extra-functional properties. In this paper we explore this research direction presenting a transformation workflow for dependability analysis that is part of an industrial-quality infrastructure for the specification, analysis and verification of extra-functional properties, currently under development within the ARTEMIS-JU CHESS project. Specifically, the paper provides the following major contributions: i) definition of the required transformation steps to automatically assess the system dependability properties starting from the CHESS Modeling Language, ii) definition of a new Intermediate Dependability Model (IDM) acting as a bridge between the CHESS Modeling Language and the low-level analysis models, iii) definition of transformations from the CHESS Modeling Language to IDM models.",2011,0, 4811,Qualification and Selection of Off-the-Shelf Components for Safety Critical Systems: A Systematic Approach,"Mission critical systems are increasingly being developed by means of Off-The-Shelf (OTS) items since this allows reducing development costs. Crucial issues to be properly treated are (i) to assess the quality of each potential OTS item to be used and (ii) to select the one that better fits the system requirements. Despite the importance of these issues, the current literature lacks a systematic approach to perform the previous two operations. The aim of this paper is to present a framework that can overcome this lack. Reasoning from the available product assurance standards for certifying mission critical systems, the proposed approach is based on a customized quality model that describes the quality attributes. Such a quality model will guide a proper evaluation of OTS products, and the choice of which product to use is based on the outcomes of such an evaluation process.
This framework represents a key solution to have a dominant role in the market of mission critical systems due to the demanding request by manufacturers of such systems for an efficient qualification/certification process.",2011,0, 4812,Preserving the Exception Handling Design Rules in Software Product Line Context: A Practical Approach,"Checking the conformance between implementation and design rules is an important activity to guarantee quality on architecture and source code. To address the current needs of dependable systems it is also important to define design rules related to the exception handling behavior. The current approaches to automatically check design rules, however, do not provide suitable ways to define design rules related to the exception handling policy of a system. This paper proposes a practical approach to preserve the exception policy of a system or a family of systems along with its evolution, based on the definition and automatic checking of exception handling design rules that regulate how exceptions flow inside the system -- which exceptions should flow and which elements are responsible for signaling and handling them. This approach automatically generates the partial code of JUnit tests to check such rules, and uses the aspect-oriented technique to support such tests. The proposed approach was applied to define and check the exception handling rules of a software product line. Four different versions were evaluated (in both object-oriented and aspect-oriented implementations) in order to evaluate whether the exception handling policy was preserved during SPL evolution. Our experience shows that the proposed approach can be used to effectively detect violations of the exception handling policy of a software product line during its evolution.",2011,0, 4813,Discussion on questions about using artificial neural network for predicting of concrete property,"Several questions about predicting concrete property using a BP artificial neural network have been discussed, including the selection of network structure, the determination of sample capacity and grouping method, the protection from over-fitting, and the comparison on precision of prediction. For the network-structure, it has been found that directly applying the consumption of raw-material and other crucial quality indices as the units of input can bring about a satisfactory result of prediction, in which a single hidden layer holds 10 units and the workability along with the strength and durability formed two sub-networks simultaneously. For the sample capacity and grouping method, at least 100 sets of samples are necessary to find the intrinsic regularity, among them 1/3-1/4 should be taken as test samples. A new tactic for error-tracking has been proposed, which is verified to be effective in avoiding over-fitting. The comparison of effectiveness and feasibility between BP neural networks and the linear regression algorithm showed that BP neural networks have better performance in accuracy of prediction. Finally, an applicable software has been developed and used as examples to predict 163 sets of mixes for a ready-mixed concrete plant, to show its application in detail.",2011,0, 4814,Formal methods and automation for system verification,"Software and hardware systems are growing fast in both functionality and complexity and consequently, the probability of delicate faults existence in these systems is also increasing. Some of these faults may result in disastrous loss in both money and time.
One main goal of designing those systems is to construct better and more reliable systems, regardless of the level of their complexity. Formal methods can be used to specify such systems and be automated to verify them. In this paper, we introduce and show how we can use some of those formal methods, Propositional Logic (PL) and First Order Logic (FOL), in specifying and verifying the correctness of related system aspects.",2011,0, 4815,A car steering rod fatigue analysis based on user conditions,"Fatigue, one of the major mechanical failures, has caught more and more attention in vehicle reliability study. Fatigue test results of vehicles are usually discrete, which is caused mostly by the user's conditions. Considering the user's real purpose in the test can help the users to develop or design a suitable testing program, improve the test run quality and shorten the product development cycle from concept to bulk production. This article describes the ways and methods of investigating the user's conditions, deals with the strain gauge data which is obtained from the test of the right steering link by using the professional signal processing software according to the reference of the survey results, and assesses the fatigue life of the steering link according to the rain-flow counting method and local strain-life fatigue analysis effecting on the strain spectrum finally.",2011,0, 4816,Simulation of sensing field for electromagnetic tomography system,"Electromagnetic tomography has potential value in process measurement. The frontier of electromagnetic tomography system is the sensor array. Owing to the excited signal acting on the sensor array directly, the excited strategy and the frequency and amplitude of the signal affect the quality of the information that the detected coil acquired from the object space. Furthermore, it would affect the accuracy of the information post extracted from the object field. To improve the sensitivity of the sensor array on the changes of the object field distribution, upgrade the sensitivity and accuracy of the system and guarantee high precision and high stability of the experimental data, use the finite element simulation software COMSOL Multiphysics to analyze the excited strategy and the characteristic of the excitation frequency of electromagnetic tomography. Establish the foundation in optimal using of electromagnetic tomography system.",2011,0, 4817,A distributed cable harness tester based on CAN bus,"This paper discusses a distributed cable harness tester based on CAN bus, which has a few functions such as connection detecting of a wire, diode orientation testing and resistor's impedance testing. In this article, application layer protocol design is researched on in particular in order to improve the tester's performance, and the software design of the upper computer and the framework of hardware of tester nodes are introduced in detail. The tester is designed to ensure quality and reliability of the cable harness. Through detecting, early failure products such as breakage circuit, short circuit and wrong conductor arrangement can be rejected. So it improves the efficiency of detection.",2011,0, 4818,The design of circular saw blade dynamic performance detection of high-speed data acquisition system,"The dynamic properties of the circular saw blade when cutting at a high speed influence the cutting quality to a large extent.
This paper gives a brief introduction to the employment of advanced sensors and detection methods together with a self-detection software system to detect circular saw blade vibration quickly, accurately and non-destructively under high-speed rotation of the circular saw blade. The results of the actual measurement of the different carbide circular saw blades show that this detection system has the quality of high accuracy and high reliability. It is suitable for quality control in the production process of the circular saw blade body.",2011,0, 4819,Phase Asymmetry: A New Parameter for Detecting Single-Phase Earth Faults in Compensated MV Networks,"Traditionally, the detection of high-resistance earth faults has been a difficult task in compensated medium-voltage (MV) distribution networks, mainly due to their very low fault current. To date, several techniques have been proposed to detect them: using current injection in the neutral, superposing voltage signals, varying the value of the arc suppression coil, etc. These techniques use different detection parameters, such as fault resistance to earth, line asymmetries, or partial residual neutral voltages. In this paper, phase asymmetry is defined as a new parameter that can be used, together with the aforementioned techniques, in order to improve the reliability and efficiency of the detection process in single-phase earth faults, especially for compensated networks. The use of this parameter has been validated through extensive simulations of resistive faults up to 15 kΩ, with the use of RESFAL software, which is based on Matlab/Simulink.",2011,0, 4820,A Visualization Quality Evaluation Method for Multiple Sequence Alignments,"The multiple sequence alignments (MSA) method is a basic way for the analysis of biological sequences. As dozens of MSA algorithms appear, a reasonable and effective quality evaluation method is necessary. Gap-insertion is a common phenomenon after MSA, and should be an important factor in evaluating whether an MSA is successful. A new MSA score evaluation method was introduced in this paper, which was based on the combination of both the row distance of letters and the column consensus of sequences. It can be used to evaluate the quality of MSA globally and locally, justly and effectively. At the same time, two formulas for assessing the distance of different MSA algorithms were derived.",2011,0, 4821,On the Reliability and Availability of Systems Tolerant to Stealth Intrusion,"This paper considers the estimation of reliability and availability of intrusion-tolerant systems subject to non-detectable intrusions. Our motivation comes from the observation that typical techniques of intrusion tolerance may in certain circumstances worsen the non-functional properties they were meant to improve (e.g., dependability). We start by modeling attacks as adversarial efforts capable of affecting the intrusion rate probability of components of the system. Then, we analyze several configurations of intrusion-tolerant replication and pro-active rejuvenation, to find which ones lead to security enhancements. We analyze several parameterizations, considering different attack and rejuvenation models and taking into account the mission time of the overall system and the expected time to intrusion of its components. In doing so, we identify thresholds that distinguish between improvement and degradation.
We compare the effects of replication and rejuvenation and highlight their complementarity, showing improvements of resilience not attainable with any of the techniques alone, but possible only as a synergy of their combination. We advocate the need for more thorough system models, by showing fundamental vulnerabilities arising from incomplete specifications.",2011,0, 4822,Prediction of compression bound and optimization of compression architecture for linear decompression-based schemes,"On-chip linear decompression-based schemes have been widely adopted by industrial circuits nowadays to effectively reduce the ever increasing test data volume and test time. Though they can easily achieve a relatively high compression ratio, there is a bound on the effective compression ratio for these compression schemes. Prior work tried to address this problem by trying different compression architectures to identify this compression bound. However, they cannot predict this compression bound efficiently. In this paper, we will first analyze the correlation between the effective compression ratio and the compression architecture, thus to predict that compression bound efficiently. In addition, this paper will also propose how to design the compression architecture for a target effective compression ratio with one-pass calculation, which was usually done by a time-consuming trial-and-error process as well in the current DFT flow. Experimental results show the accuracy of the prediction and the effectiveness of the compression architecture design.",2011,0, 4823,A new methodology for realistic open defect detection probability evaluation under process variations,"CMOS IC scaling has provided significant improvements in electronic circuit performance. Advances in test methodologies to deal with new failure mechanisms and nanometer issues are required. Interconnect opens are an important defect mechanism that requires detailed knowledge of its physical properties. In nanometer processes, variability is predominant and considering only nominal values of parameters is not realistic. In this work, a model for computing a realistic coverage of via open defects that takes into account the process variability is proposed. Correlation between parameters of the affected gates is considered. Furthermore, spatial correlation of the parameters for those gates tied to the defective floating node can also influence the detectability of the defect. The proposed methodology is implemented in a software tool to determine the probability of detection of via opens for some ISCAS benchmark circuits. The proposed detection probability evaluation together with a test methodology to generate favorable logic conditions at the coupling lines can allow a better test quality leading to higher product reliability.",2011,0, 4824,Ultra-high-speed protection of parallel transmission lines using current travelling waves,"This study presents a protective algorithm for discrimination and classification of short-circuit faults in parallel circuit transmission lines. The proposed algorithm is based on the initial fault-induced current travelling waves (TWs) detected by the relay using the wavelet transform. In the case of internal faults on any of the parallel circuits, the detected current TWs in the corresponding phases of the parallel circuits are different, whereas, for external faults, the initial current TWs are almost similar. This feature is used to discriminate between internal and external faults.
A fault-type classification algorithm is also proposed to identify the faulted phases. The proposed algorithm only uses the initial current TWs caused by the fault and provides an ultra-high-speed protective technique. It also covers inter-circuit faults, in which phases of both parallel circuits get involved in the fault. The obtained simulation results using the PSCAD/EMTDC software show that the proposed algorithm is able to discriminate the internal faults and to select the faulted phases very rapidly and reliably. It is able to identify faults in just ~1 ms.",2011,0, 4825,Adding code generation to develop a simulation platform,"Efficient and accurate control technologies require extensive simulation capabilities to validate the control software and demonstrate the impact on the business and equipment. To create a platform for rapid development and simulation of complex dynamic models, the authors and their colleagues have designed an object-oriented architecture. A portion of the architecture framework is constructed using code generation based on XML component definitions. This paper describes the key aspects of the architecture of the control simulator platform and its code generation capabilities. The simulation platform consists of defining a collection of components represented by differential equations, the capability to select, configure and interconnect components, and the ability to solve the coupled set of equations. The code generation is custom-built, however, the generated code results in more consistency and improved reliability by eliminating error prone steps and allowing the simulation engineer to focus on component mathematical description. As the number of components that are generated increases, the investment in a custom-built code generation is quickly realized by significantly reducing the amount of time required to create or update the code template for each component. The paper also discusses the importance of handling updates as well as the initial creation of code files. The techniques leveraged within this project have been learned through the use of other code generation tools, including GUI development tools. In particular, care has been taken to minimize the accidental loss of manually introduced code and handle version updates of the code generator. Our team is using this framework to develop simulation tests for power plant optimizations and have plans to add custom code generation to other areas of the platform.",2011,0, 4826,CEDA: Control-Flow Error Detection Using Assertions,"This paper presents an efficient software technique, control-flow error detection through assertions (CEDA), for online detection of control-flow errors. Extra instructions are automatically embedded into the program at compile time to continuously update runtime signatures and to compare them against preassigned values. The novel method of computing runtime signatures results in a huge reduction in the performance overhead, as well as the ability to deal with complex programs and the capability to detect subtle control-flow errors. The widely used C compiler, GCC, has been modified to implement CEDA, and the SPEC benchmark programs were used as the target to compare with earlier techniques. Fault injection experiments were used to demonstrate the effect of control-flow errors on software and to evaluate the fault detection capabilities of CEDA.
Based on a new comparison metric, method efficiency, which takes into account both error coverage and performance overhead, CEDA is found to be much better than previously proposed methods.",2011,0, 4827,A Hybrid Multiagent Framework With Q-Learning for Power Grid Systems Restoration,"This paper presents a hybrid multiagent framework with a Q-learning algorithm to support rapid restoration of power grid systems following catastrophic disturbances involving loss of generators. This framework integrates the advantages of both centralized and decentralized architectures to achieve accurate decision making and quick responses when potential cascading failures are detected in power systems. By using this hybrid framework, which does not rely on a centralized controller, the single point of failure in power grid systems can be avoided. Further, the use of the Q-learning algorithm developed in conjunction with the restorative framework can help the agents to make accurate decisions to protect against cascading failures in a timely manner without requiring a global reward signal. Simulation results demonstrate the effectiveness of the proposed approach in comparison with the typical centralized and decentralized approaches based on several evaluation attributes.",2011,0, 4828,When to stop testing: A study from the perspective of software reliability models,"The important question often asked in the software industry is: when to stop testing and to release a software product? Unfortunately, in industrial practices, it is not easy for project manager and developers to be able to answer this question confidently. Software release time is a trade-off between capturing the benefits of an earlier market introduction and the deferral of a product release in order to enhance functionalities or to improve quality. The question has a lot to do with the time required to detect and correct faults in order to ensure a specified reliability goal. During testing, reliability measure is an important criterion in deciding when to release a software product. This study helps answer this question by presenting the perspectives from a study of software reliability models, with focuses on reliability paradigm, efficient management of resources and decision making under uncertainty.",2011,0, 4829,Towards a software quality assessment model based on open-source statical code analyzers,"In the context of software engineering, quality assessment is not straightforward. Generally, quality assessment is important since it can cut costs in the product life-cycle. A software quality assessment model based on open-source analyzers and quality factors facilitates quality measurement. Our quality assessment model is based on three principles: mapping rules to quality factors, computing scores based on the percentage of rule violating entities (classes, methods, instructions), assessing quality as a weighted mean of the rule attached scores.",2011,0, 4830,Product defect prediction model,"The prediction of software reliability can determine the current reliability of a product, using statistical techniques based on the failures data, obtained during testing or system usability. Software reliability growth models attempt to predict the number of defect using a correlation between exponential function and defect data. The purpose of this paper is to study the evolution of a real-life product over three releases, using the Rayleigh function in order to predict the number of defects. 
Our paper offers two possibilities for computing the model parameters, and then we should be able to decide which is better and what can be improved. Results from this study will be used to determine which approach is best to be used.",2011,0, 4831,A new method for High Impedance Faults detection in power system connected to the wind farm equipped with DFIGs,"This paper investigates the effect of a High Impedance Fault (HIF) on the operation of a wind turbine equipped with a Doubly Fed Induction Generator (DFIG). Consequently, a newly proposed method is used for HIF detection in a power system connected to a wind farm equipped with DFIGs by means of harmonic component analysis of the DFIG rotor current. The simulation has been done with PSCAD/EMTDC software.",2011,0, 4832,Employing S-transform for fault location in three terminal lines,"This paper has proposed a new fault location method for three terminal transmission lines. Once a fault occurs in a transmission line, high frequency transient components produced by the travelling waves appear at all terminals. In this paper, the S-transform is employed to detect the arrival time of these waves at the terminals. ATP-EMTP software is used for simulating various conditions, and MATLAB software is used to process the proposed fault location method. The results have shown the good performance of the algorithm in different conditions.",2011,0, 4833,A quantitative assessment method for simulation-based e-learnings,"For several years the software industry has been focused on improving its product's quality by implementing different frameworks, models and standards like CMMI and ISO. It has been discovered that training team members is a must within these quality frameworks. Given the vast technologies differentiations and new methodologies for developing software, it is imminent that alternative faster, effective and more customized ways of training people are needed. One alternative way in training people is using simulation-based e-learning technologies. Due to the vast e-learnings market's availability, evaluations on educational software must be done to verify the quality of the training that is being produced or acquired. This paper presents a method that provides a quantitative assessment of the training quality. The proposed method presents an approach towards assessing educational software through the quantitative evaluation of predefined attributes. A pilot experience is presented in this paper along with the method description and explanation.",2011,0, 4834,An improved software reliability model incorporating detection process and the total number of faults,"Software reliability growth models based on the NHPP are quite successful tools that have been proposed to assess the software reliability. Various NHPP-SRGMs have been built upon various assumptions such as the number of remaining faults, software failure rate, and software reliability. But in reality, the number of faults and the detection rate are not constants. They are time functions. In this paper, we aim to incorporate the total number function of faults and the detection rate function into conditional SRGMs. Experimental results indicate that the new model proposed in this paper has a fairly accurate prediction capability.",2011,0, 4835,An AMI based measurement and control system in smart distribution grid,"To realize some of the smart grid goals, for the distribution system of the rural area, with GPRS communication network, a feeder automation based on AMI is proposed.
The three parts of the system are introduced. Integrated with advanced communication and measurement technology, the proposed system can monitor the operating situation and status of breakers, and detect and locate faults on the feeders. The information from the system will help realize advanced distribution operations, such as power quality improvement, loss detection, state estimation and so on. The application case in Qingdao utilities in Shandong Province, PR China shows the effectiveness of the proposed system.",2011,0, 4836,Detecting matrix multiplication faults in many-core systems,"Many-core systems are characterized by a large number of components based on ever-shrinking circuit geometries. System reliability becomes an issue because of the system complexity, the large number of components and nanoscale issues due to soft errors. While information redundancy techniques can be used for fault tolerance, they occupy too much memory space and increase the memory and network bandwidth. Moreover, in many-cores, resources are plentiful encouraging the design of simple cores without hardware fault tolerance. Thus in the absence of information redundancy, software fault detection techniques become necessary to detect errors. Herein, we present fault detection techniques for 2×2 matrix multiplication which we extend to n×n matrix multiplication. These tests can detect transient and some intermittent and permanent hardware faults. These tests are also suitable for computing grids and distributed heterogeneous systems where the result-forming node may run tests in software to validate the sub-results submitted by the grid nodes.",2011,0, 4837,Empirical study of an intelligent argumentation system in MCDM,"An intelligent argumentation based collaborative decision making system assists stakeholders in a decision making group to assess various alternatives under different criteria based on the argumentation. A performance score of each alternative under every criterion in Multi-Criteria Decision Making (MCDM) is represented in a decision matrix and it denotes satisfaction of the criteria by that alternative. The process of determining the performance scores of alternatives in a decision matrix for a criterion could sometimes be controversial because of the subjective nature of the criterion. We developed a framework for acquiring performance scores in a decision matrix for multi-criteria decision making using an intelligent argumentation and collaborative decision support system we developed in the past [1]. To validate the framework empirically, we have conducted a study in a group of stakeholders by providing them access to use the intelligent argumentation based collaborative decision making tool over the Web. The objectives of the study are: 1) to validate the intelligent argumentation system for deriving performance scores in multi-criteria decision making, and 2) to validate the overall effectiveness of the intelligent argumentation system in capturing rationale of stakeholders. The results of the empirical study are analyzed in depth and they show that the system is effective in terms of collaborative decision support and rationale capturing.
In this paper, we present how the study was carried out and its empirical results.",2011,0, 4838,Software and communications architecture for Prognosis and Health Monitoring of ocean-based power generator,"This paper presents a communications and software architecture in support of Prognosis and Health Monitoring (PHM) applications for renewable ocean-based power generation. The generator/turbine platform is instrumented with various sensors (e.g. vibration, temperature) that generate periodic measurements used to assess the current system health and to project its future performance. The power generator platform is anchored miles offshore and uses a pair of wireless data links for monitoring and control. Since the link is expected to be variable and unreliable, being subject to challenging environmental conditions, the main functions of the PHM system are performed on a computing system located on the surface platform. The PHM system architecture is implemented using web services technologies following MIMOSA OSA-CBM standards. To provide sufficient Quality of Service for mission-critical traffic, the communications system employs application-level queue management with semantic-based filtering for the XML PHM messages, combined with IP packet traffic control and link quality monitoring at the network layer.",2011,0, 4839,The possibility of application the optical wavelength division multiplexing network for streaming multimedia distribution,"In this paper, simulation is used to investigate the possibility of streaming multimedia distribution over optical wavelength division multiplexing (WDM) network with optical burst switching (OBS) nodes. Simulation model is developed using software tool Delsi for Delfi 4.0. Pareto generator is used to model real-time multimedia stream. The wavelength allocation (WA) method and the deflection routing are implemented in OBS nodes. Performance measures, packet loss and delay, are estimated in order to investigate the quality of multimedia service. Statistical analysis of simulation output is performed by estimating the confidence intervals for a given degree according to the Student distribution. Obtained results indicate that optical WDM network manages multimedia contents distribution in efficient way to give a high quality of service.",2011,0, 4840,Applying source code analysis techniques: A case study for a large mission-critical software system,"Source code analysis has been and still is extensively researched topic with various applications to the modern software industry. In this paper we share our experience in applying various source code analysis techniques for assessing the quality of and detecting potential defects in a large mission-critical software system. The case study is about the maintenance of a software system of a Bulgarian government agency. The system has been developed by a third-party software vendor over a period of four years. The development produced over 4 million LOC using more than 20 technologies. Musala Soft won a tender for maintaining this system in 2008. Although the system was operational, there were various issues that were known to its users. So, a decision was made to assess the system's quality with various source code analysis tools. The expectation was that the findings will reveal some of the problems' cause, allowing us to correct the issues and thus improve the quality and focus on functional enhancements. 
Musala Soft had already established a special unit - Applied Research and Development Center - dealing with research and advancements in the area of software system analysis. Thus, a natural next step was for this unit to use the know-how and in-house developed tools to do the assessment. The team used various techniques that had been subject to intense research, more precisely: software metrics, code clone detection, defect and code smells detection through flow-sensitive and points-to analysis, software visualization and graph drawing. In addition to the open-source and free commercial tools, the team used internally developed ones that complement or improve what was available. The internally developed Smart Source Analyzer platform that was used is focused on several analysis areas: source code modeling, allowing easy navigation through the code elements and relations for different programming languages; quality audit through software metrics by aggregating various metrics into a more meaningful quality characteristic (e.g. maintainability); source code pattern recognition - to detect various security issues and code smells. The produced results presented information about both the structure of the system and its quality. As the analysis was executed in the beginning of the maintenance tenure, it was vital for the team members to quickly grasp the architecture and the business logic. On the other hand, it was important to review the detected quality problems as this guided the team to quick solutions for the existing issues and also highlighted areas that would impede future improvements. The tool IPlasma and its System Complexity View (Fig. 1) revealed where the business logic is concentrated, which are the most important and which are the most complex elements of the system. The analysis with our internal metrics framework (Fig. 2) pointed out places that need refactoring because the code is hard to modify on request or testing is practically impossible. The code clone detection tools showed places where copy and paste programming has been applied. PMD, Find Bugs and Klockwork Solo tools were used to detect various code smells (Fig. 3). There were a number of occurrences that were indeed bugs in the system. Although these results were productive for the successful execution of the project, there were some challenges that should be addressed in the future through more extensive research. The two aspects we consider the most important are usability and integration. As most of the tools require very deep understanding of the underlying analysis, the whole process requires tight cooperation between the analysis team and the maintenance team. For example, most of the metrics tools available provide specific values for a given metric without any indication what the value means and what is the threshold. Our internal metrics framework aggregates the met",2011,0, 4841,Automatic generation of system-level virtual prototypes from streaming application models,"Virtual prototyping is a more and more accepted technology to enable early software development in the design flow of embedded systems. Since virtual prototypes are typically constructed manually, their value during design space exploration is limited. On the other hand, system synthesis approaches often start from abstract and executable models, allowing for fast design space exploration, considering only predefined design decisions.
Usually, the output of these approaches is an ""ad hoc"" implementation, which is hard to reuse in further refinement steps. In this paper, we propose a methodology for automatic generation of heterogeneous MPSoC virtual prototypes starting with models for streaming applications. The advantage of the proposed approach lies in the fact that it is open to subsequent design steps. The applicability of the proposed approach to real-world applications is demonstrated using a Motion JPEG decoder application that is automatically refined into several virtual prototypes within seconds, which are correct by construction, instead of using error-prone manual refinement, which typically requires several days.",2011,0, 4842,A novel approach for spam detection using boosting pages,"Link spam techniques are widely used in commercial web pages to achieve higher ranking in search results, which may reduce the quality of the results. Most of the techniques require some boosting pages to increase the score of the target pages. In this paper, we aim to detect boosting pages and then spam pages to increase the quality of results. Our approach consists of two steps: (1) find boosting pages from a spam seed set, and (2) detect web spam from the discovered boosting pages. An experimental result shows that our approach can detect 93.20% of web spam pages with 79.96% precision.",2011,0, 4843,Pulse inversion linear bandpass filter for detecting subharmonic from microbubbles,"Subharmonic imaging promises to improve ultrasound imaging quality due to an increasing contrast-to-tissue ratio (CTR). However, to improve image quality, signal processing techniques for maximizing the subharmonic signal are needed. In this work, we present the pulse inversion linear bandpass filter (PILBF) method for detection of subharmonic components in order to separate signals from microbubble contrast agent echoes. This method is based on the combination between the pulse inversion (PI) and linear bandpass filter (LBF) methods. While PI without LBF produces a CTR of 54 dB, the PILBF produces an 82 dB enhancement. The high CTR value confirms the generation of a high quality image.",2011,0, 4844,An efficient approach for data-duplication detection based on RDBMS,"Data-duplication is one of the most important issues in the context of information system management. Instead of storing a single real-world object as an entity in an information system, duplication, storing more than one entity representing a single object, can occur. This problem can decrease the quality of service of information systems. In this paper, we propose an efficient approach to detect the duplication based on the RDBMS foundation. Our approach is based on the assumption that the data to be processed have been stored in the RDBMS in the first place. Thus, the proposed approach does not require the data to be imported/exported from the storage. Also, such an approach will benefit from the query optimizer of the RDBMS. The experiment results on the TPC-H dataset have been presented to validate such proposed work.",2011,0, 4845,Bad-smell prediction from software design model using machine learning techniques,Bad-smell prediction significantly impacts on software quality. It is beneficial if bad-smell prediction can be performed as early as possible in the development life cycle. We present a methodology for predicting bad-smells from a software design model. We collect 7 data sets from the previous literature which offer 27 design model metrics and 7 bad-smells.
They are learnt and tested to predict bad-smells using seven machine learning algorithms. We use cross-validation for assessing the performance and for preventing over-fitting. Statistical significance tests are used to evaluate and compare the prediction performance. We conclude that our methodology yields predictions close to the actual values.,2011,0, 4846,Towards a Rapid-Alert System for Security Incidents,"Predicting security incidents and forecasting risk are two essential duties when designing an enterprise security system. Based on a quantitative risk assessment technique arising from an attacker-defender model, we propose a Bayesian learning strategy to continuously update the quality of protection and forecast the decision-theoretic risk. Evidence for or against the security of particular system components can be obtained from various sources, including security patches, software updates, scientific or industrial research result notifications retrieved through RSS feeds. Using appropriate stochastic distribution models, we obtain closed-form expressions (formulas) for the times when to expect the next security incident and when a re-consideration of a security system or component becomes advisable.",2011,0, 4847,Apad: A QoS Guarantee System for Virtualized Enterprise Servers,"Today's data center often employs virtualization to allow multiple enterprise applications to share a common hosting platform in order to improve resource and space utilization. It could be a greater challenge when the shared server becomes overloaded or enforces priority of the common resource among applications in such an environment. In this paper, we propose Apad, a feedback-based resource allocation system which can dynamically adjust the resource shares to multiple virtual machine nodes in order to meet performance targets on shared virtualized infrastructure. To evaluate our system design, we built a test bed hosting several virtual machines which employed the Xen Virtual Machine Monitor (VMM), using the Apache server along with its workload-generating tool. Our experiment results indicate that our system is able to detect and adapt to resource requirements that change over time and allocate virtualized resources accordingly to achieve application-level Quality of Service (QoS).",2011,0, 4848,Identity attack and anonymity protection for P2P-VoD systems,"As P2P multimedia streaming services become more popular, it is important for P2P-VoD content providers to protect their servers' identity. In this paper, we first show that it is possible to launch an ""identity attack"": exposing and identifying servers of peer-to-peer video-on-demand (P2P-VoD) systems. The conventional wisdom of the P2P-VoD providers is that identity attack is very difficult because peers cannot distinguish between regular peers and servers in the P2P streaming process. We are the first to show that it is otherwise, and present an efficient and systematic methodology to perform P2P-VoD server detection. Furthermore, we present an analytical framework to quantify the probability that an endhost is indeed a P2P-VoD server. In the second part of this paper, we present a novel architecture that can hide the identity and provide anonymity protection for servers in P2P-VoD systems. To quantify the protective capability of this architecture, we use the ""fundamental matrix theory"" to show the high complexity of discovering all protective nodes so as to disrupt the P2P-VoD service.
We not only validate the model via extensive simulation, but also implement this protective architecture on PlanetLab and carry out measurements to reveal its robustness against identity attack.",2011,0, 4849,Online condition monitoring network for critical equipment at Holcim's STE. genevieve plant,"This paper describes a network architecture for remote monitoring and fault detection of critical rotating equipment. Holcim's Sphinx Monitoring System (SMS) is intended for equipment protection and early prediction of machine defects. SMS is based on a server-client model, where the server maintains a database of equipment configurations and evaluates each target with a unique set of rules that monitor the equipment's function and its integrated components' failure modes. Adaptive software algorithms monitor process and operational changes to tune the analysis criteria for evaluating vibration signals and detecting common machine faults. Network structure, software algorithms and other aspects of the system are further discussed and evaluated against samples of collected data, generated analysis results and physical inspections' outcomes.",2011,0, 4850,Formal Verification of Distributed Transaction Management in a SOA Based Control System,"In large scale, heavy workload systems, managing distributed transactions on multiple datasets becomes a challenging and error-prone task. Software systems based on service oriented architecture principles that manage critical infrastructures are typical environments where robust transaction management is one of the essential goals to achieve. The aim of this paper is to provide a formal description of the solution for transaction management and individual service component behavior in a SOA-based control system, and prove the correctness of the proposed design with the SMV formal verification tool. An atomic commitment protocol is used as a basis for solving the distributed transaction management problem. The SMV language and verification tool are utilized for formal description of the problem and verification of the necessary properties. The case study describes an application of the proposed approach in a commercial software system for electrical power distribution management. Verification of the given model properties has shown that the suggested solution is suitable for the described class of SOA-based systems.",2011,0, 4851,Method of safety critical requirements flow in product life cycle processes,"The safety-related requirements are a part of the system requirements which are inputs to the software life cycle processes. These system requirements are developed from the systems architecture. The system requirements are developed for each functional area application. The safety requirements are assessed at the individual functional areas. This white paper proposes a solution to develop safety requirements for functional area interfaces and a method to flow down software requirements throughout the product lifecycle. While developing requirements from system requirements to high level and low level requirements, the requirements would be completely analyzed from a safety perspective. The advantage of this proposal is that the functional modules' interface requirements are analyzed and captured from a safety perspective. Reuse of the product extracts specific safety related requirements at each functional level as the requirements are developed at each interface module. 
This leads to extensive verification of interface requirements along with system requirements which would lead to a safer product.",2011,0, 4852,Component level risk assessment in Grids: A probabilistic risk model and experimentation,"The current approaches to manage risk in Grid computing are a major step towards the provision of Quality of Service (QoS) to the end-user. However these approaches are based on node or machine level assessment. As a node may contain CPU(s), storage devices, connections for communication and software resources, a node failure may actually be a failure of any of these components. This paper proposes a probabilistic risk model at the component level; the probabilistic risk model encompasses series and parallel model(s). Our approach towards risk assessment is aimed at a granularity level of individual components as compared to previous efforts at node level. The benefit of this probabilistic approach is the provision of a detailed risk assessment to the Grid resource provider leading to risk aware scheduling and an efficient usage of resources. Grid failure data was analyzed and experimentation was conducted based on the proposed risk model. The results of the experiments provide detailed risk information at the component level for the nodes required in the SLA (Service Level Agreement).",2011,0, 4853,Test case generation for use case dependency fault detection,Testing is an important phase of quality control in software development. The use case diagram present in UML 2.0 plays a vital role in describing the behavior of a system and it is widely used for generating test cases. But to identify the dependency faults that occur between use cases is a challenge for the test engineers in a Model Based Testing (MBT) environment. This paper presents a novel approach for generating test cases to detect use case dependency faults in UML use case diagrams using multiway trees. Our approach includes transforming the UML use case diagrams into a tree representation called the Use Case Dependency Tree (UCDT). This is followed by a thorough traversal of the tree to generate the test cases so as to detect any existing intra and inter use case dependency faults among the various use cases being invoked by the different actors interacting with the software system.,2011,0, 4854,A model based prioritization technique for component based software retesting using uml state chart diagram,"Regression testing is the process of testing a modified system using the old test suite. As the test suite size is large, system retesting consumes a large amount of time and computing resources. This issue of retesting of software systems can be handled using a good test case prioritization technique. A prioritization technique schedules the test cases for execution so that the test cases with higher priority are executed before those with lower priority. The objective of test case prioritization is to detect faults as early as possible so that the debuggers can begin their work earlier. In this paper we propose a new prioritization technique to prioritize the test cases to perform regression testing for Component Based Software System (CBSS). The components and the state changes for a component based software system are represented by UML state chart diagrams which are then converted into a Component Interaction Graph (CIG) to describe the interrelation among components. 
Our prioritization algorithm takes this CIG as input along with the old test cases and generates a prioritized test suite taking into account the total number of state changes and the total number of database accesses, both direct and indirect, encountered due to each test case. Our algorithm is found to be very effective in maximizing the objective function and minimizing the cost of system retesting when applied to a few JAVA projects.",2011,0, 4855,Prediction of software project effort using fuzzy logic,"Software development effort estimation is a branch of forecasting that has received increased interest in academia as well as in the field of research and development. Predicting software effort with an acceptable degree of accuracy remains challenging. In this paper we have developed 2 different linear regression models using fuzzy function points (FFP) and non-fuzzy function points in order to predict the software project effort, and further we have also considered that all the projects are organic in nature, i.e., the project size lies between 2 and 50 KLOC. After obtaining the software effort, the project manager can control the cost and ensure the quality more accurately.",2011,0, 4856,Test case prioritization for regression testing based on fault dependency,"Test case prioritization techniques involve scheduling test cases for regression testing in an order that increases their effectiveness at meeting some performance goal. It is inefficient to re-execute all the test cases in regression testing following software modifications. Using information obtained from previous test case execution, prioritization techniques order the test cases for regression testing so that the most beneficial are executed first, thus allowing an improved effectiveness of testing. One performance goal, rate of dependency detected among faults, measures how quickly dependencies among faults are detected within the regression testing process. An improved rate of fault dependency detection can provide faster feedback on software and let developers start debugging on the severe faults that cause other faults to appear later. This paper presents a new metric for assessing the rate of fault dependency detection and an algorithm to prioritize test cases. Using the new metric, the effectiveness of this prioritization is shown by comparing it with non-prioritized test cases. Analysis shows that prioritized test cases are more effective in detecting dependency among faults.",2011,0, 4857,Automated generation of test cases from output domain of an embedded system using Genetic algorithms,"A primary issue in black-box testing is how to generate adequate test cases from the input domain of the system under test on the basis of the user's requirement specification. However, for some types of systems including embedded systems, developing test cases from the output domain is more suitable than developing them from the input domain, especially when the output domain is smaller. This approach ensures better reliability of the system under test. In this paper, the authors present a new approach to automate the generation of test cases from the output domain of a pilot project Temperature Monitoring and Controlling of Nuclear Reactor System (TMCNRS) which is an embedded system developed using a modified Cleanroom Software Engineering methodology. An Automated Test Case Generator (ATCG) that uses Genetic algorithms (GAs) extensively and generates test cases from the output domain is proposed. 
The ATCG generates test cases which are useful to conduct pseudo-exhaustive testing to detect single, double and several multimode faults in the system. The generator considers most of the combinations of outputs, and finds the corresponding inputs while optimizing the number of test cases generated. In order to investigate the effectiveness of this approach, test cases were generated by ATCG and the tests were conducted on the target embedded system at minimum cost and time. Experimental results show that this approach is very promising.",2011,0, 4858,MngRisk A decisional framework to measure managerial dimensions of legacy application for rejuvenation through reengineering,"Nowadays legacy system reengineering has emerged as a well-known system evolution technique. The goal of reengineering is to increase the productivity and quality of a legacy system through fundamental rethinking and radical redesigning of the system. A broad range of risk issues and concerns must be addressed to understand and model the reengineering process. Overall success of the reengineering effort requires considering three distinct but connected areas of interest, i.e., the system domain, the managerial domain and the technical domain. We present a hierarchical managerial domain risk framework MngRisk to analyze the managerial dimensions of a legacy system. The fundamental premise of the framework is to observe, extract and categorize the contextual perspective models and risk clusters of the managerial domain. This work contributes a decision-driven framework to identify and assess risk components of the managerial domain. The proposed framework provides guidance on interpreting the results obtained from the assessment to decide when evolution of a legacy system through reengineering is successful.",2011,0, 4859,Predicting the Reliability of Software Systems Using Fuzzy Logic,"The software industry faces many challenges in developing high quality, reliable software. Many factors affect its development such as the schedule, limited resources, uncertainty in the development environment and inaccurate requirement specification. Software Reliability Growth Models (SRGM) have been widely used to help in solving these problems by accurately predicting the number of faults in the software during both development and testing processes. The issue of building growth models has been the subject of much research work. In this paper, we explore the use of fuzzy logic to build a SRGM. The proposed fuzzy model consists of a collection of linear sub-models joined together smoothly using fuzzy membership functions to represent the fuzzy model. Results and analysis based on a data set developed by John Musa of Bell Telephone Laboratories are provided to show the potential advantages of using fuzzy logic in solving this problem.",2011,0, 4860,Bayesian Estimation of the Normal Mean from Censored Samples,"The normal distribution is often used as a model for reliability, and censored samples naturally arise in software reliability applications. Bayesian estimation methods have an advantage over the frequentist approach as they provide the user with a framework for incorporating important factors such as software complexity, operating system, and level and quality of verification and validation in the software reliability estimation process. Our goal in this paper is to compute the Bayes estimate of the mean of a normal population when the data set is censored. 
The proposed method is illustrated via several examples.",2011,0, 4861,An Empirical Study on the Impacts of Autonomy of Components on Qualities of Software Systems,"More and more autonomous computing entities are implemented and deployed on the Internet and they are supposed to be able to adapt to the unstable connections, decentralized control, dynamism and openness of the networking environment. Prior to implementing these autonomous computing entities, software engineers should decide on a range of acceptable autonomy of components to ensure the qualities of both individual components and the whole system. In order to qualitatively investigate how the autonomy of components impacts the qualities of the whole system and what other factors impact their relations, we conducted an experimental study based on the stochastic process. First, we give the definition of autonomy and an approach for measuring the autonomy degree of a component based on the general recognition of the academic community. Next, we build up a mathematical model for the relationship between autonomy degree and quality by using the stochastic process. Finally, we construct an intelligent traffic control simulation system composed of Autonomous Components to concretize the mathematical model and to draw some generic conclusions from the experimental system. By recording these qualities under different autonomy degrees and different environment complexities, we work out the probability density distribution of quality movement. Combining the mathematical model, we provide some guidelines for autonomous components to adjust their autonomy degrees automatically under different contexts.",2011,0, 4862,Portable electronic nose for beverage quality assessment,"A portable electronic nose (e-nose) was appropriately designed for investigating the quality of beverages such as juice or wine. The e-nose system comprises sample and reference containers, an air flow unit, a sensing unit and a data acquisition unit. All of the hardware units were controlled by in-house software under a LABVIEW program via the USB port of a DAQ card. The sensing unit includes eight different metal oxide gas sensors from Figaro Engineering Inc. Principal component analysis (PCA) was used as a statistical method in order to discriminate and assess the experimental data as defined by the percentage change in sensor resistances that correlates directly to differences in the aroma characteristics. A drift compensation model was applied to the raw data, which sometimes suffer from the effects of sensor drift. The constructed portable e-nose has been tested in the field at a winery to evaluate wine aroma during the process of wine bottling. The e-nose using the PCA algorithm can distinguish wine bottling under nitrogen from bottling under partial vacuum. We also demonstrated that the e-nose can be used to help the wine maker design an appropriate wine bottling process, achieving a high quality wine product.",2011,0, 4863,Evaluation of the functionality of a traditional setting policy applied on directional earth fault function,"High-impedance faults (HIFs) cannot be detected by the protective zones of distance relays. To overcome this problem, a protective function called the directional earth fault (DEF) function is included in numerical distance relays. To set the function, some extensive calculations and probably some exhaustive simulations should be done. However, protection engineers usually utilize a traditional setting policy to make the setting easy. 
In this paper, we evaluate this setting policy to find out whether or not it can result in proper operation of the DEF function in detecting HIFs. To do this, using MATLAB software, we simulate different HIFs on the Dogonbadan-Behbahan line (a line of the Iranian transmission grid). Then the functionality of the DEF function of the line relays, which is set on the basis of the traditional setting policy, will be evaluated in the presence of each HIF case. Finally, for all simulated HIFs the impedance measured by the distance function of the line relays will be presented to show how HIFs affect the impedance location with respect to the protective zones.",2011,0, 4864,Failure Avoidance through Fault Prediction Based on Synthetic Transactions,"System logs are an important tool in studying the conditions (e.g., environment misconfigurations, resource status, erroneous user input) that cause failures. However, production system logs are complex, verbose, and lack structural stability over time. These traits make them hard to use, and make solutions that rely on them susceptible to high maintenance costs. Additionally, logs record failures after they occur: by the time logs are investigated, users have already experienced the failures' consequences. To detect the environment conditions that are correlated with failures without dealing with the complexities associated with processing production logs, and to prevent failure-causing conditions from occurring before the system goes live, this research suggests a three-step methodology: (i) using synthetic transactions, i.e., simplified workloads, in pre-production environments that emulate user behavior, (ii) recording the result of executing these transactions in logs that are compact, simple to analyze, stable over time, and specifically tailored to the fault metrics of interest, and (iii) mining these specialized logs to understand the conditions that correlate to failures. This allows system administrators to configure the system to prevent these conditions from happening. We evaluate the effectiveness of this approach by replicating the behavior of a service used in production at Microsoft, and testing the ability to predict failures using a synthetic workload on a 650 million events production trace. The synthetic prediction system is able to predict 91% of real production failures using 50-fold fewer transactions and logs that are 10,000-fold more compact than their production counterparts.",2011,0, 4865,Fault Tree Analysis Based on Logic Grey Gate in Power System,"Because ambiguity exists in complex systems and the traditional fault tree has limitations, a novel fault tree analysis theory is introduced. In this theory, the probability grey number, which can express an event's subjective ambiguity and objective ambiguity, is introduced to express the degree and probability that the components go wrong; a dynamic envelope is applied to score the relation among components; and a new logic gate, the Grey-gate, is advanced for expressing the effect on system reliability when the components go wrong. Finally, the theory of fault diagnosis is applied to analyze the fault effect of the system with software and hardware.",2011,0, 4866,A systems engineering approach for crown jewels estimation and mission assurance decision making,"Understanding the context of how IT contributes to making missions more or less successful is a cornerstone of mission assurance. 
This paper describes a continuation of our previous work that used process modeling to allow us to estimate the impact of cyber incidents on missions. In our previous work we focused on developing a capability that could work as an online process to estimate the impacts of incidents that are discovered and reported. In this paper we focus instead on how our techniques and approach to mission modeling and computing assessments with the model can be used offline to help support mission assurance engineering. The heart of our approach involves using a process model of the system that can be run as an executable simulation to estimate mission outcomes. These models not only contain information about the mission activities, but also contain attributes of the process itself and the context in which the system operates. They serve as a probabilistic model and stochastic simulation of the system itself. Our contributions to this process modeling approach have been the addition of IT activity models that document in the model how various mission activities depend on IT supported processes and the ability to relate how the capabilities of the IT can affect the mission outcomes. Here we demonstrate how it is possible to evaluate the mission model offline and compute characteristics of the system that reflect its mission assurance properties. Using the models it is possible to identify the crown jewels, to expose the system's susceptibility to different attack effects, and evaluate how different mitigation techniques would likely work. Being based on an executable model of the system itself, our approach is much more powerful than a static assessment. Being based on business process modeling, and since business process analysis is becoming popular as a systems engineering tool, we also hope our approach will push mission assurance analysis tasks into a framework that allows them to become a standard systems engineering practice rather than the off-to-the-side activity they currently are.",2011,0, 4867,An evolutionary multiobjective optimization approach to component-based software architecture design,"The design of software architecture is one of the difficult tasks in modern component-based software development, which is based on the idea of developing software systems by assembling appropriate off-the-shelf components with a well-defined software architecture. Component-based software development has achieved great success and been extensively applied to a large range of application domains from real-time embedded systems to online web-based applications. In contrast to traditional approaches, it requires software architects to address a large number of non-functional requirements that can be used to quantify the operation of the system. Moreover, these quality attributes can be in conflict with each other. In practice, software designers try to come up with a set of different architectural designs and then identify good architectures among them. With the increasing scale of architecture, this process becomes time-consuming and error-prone. Consequently architects could easily end up with some suboptimal designs because of the large and combinatorial search space. In this paper, we introduce the AQOSA (Automated Quality-driven Optimization of Software Architecture) toolkit, which integrates modeling technologies, performance analysis techniques, and advanced evolutionary multiobjective optimization algorithms (i.e. 
NSGA-II, SPEA2, and SMS-EMOA) to improve non-functional properties of systems in an automated manner.",2011,0, 4868,Definition of Test Criteria Based on the Scene Graph for VR Applications,"Virtual Reality applications are becoming more popular. In general, the development of these applications does not include a testing phase, or, at best, the evaluation is conducted only with the users. The activity of software testing has received considerable attention from researchers and software engineers who recognize its usefulness in creating quality products. However, the tests are expensive and prone to errors, which imposes the need to systematize them and hence to define techniques to increase quality and productivity in conducting them. Several testing techniques have been developed and have been used, each with its own characteristics in terms of effectiveness, cost, implementation stages, etc. Moreover, these techniques can also be adapted. In this paper, testing criteria based on the scene graph are studied in order to ensure the quality of Virtual Reality application implementations. In addition, a proof of concept is presented, by using the defined criteria applied to a VR framework built to generate applications in the medical training area.",2011,0, 4869,A Configurable Approach to Tolerate Soft Errors via Partial Software Protection,"Compared with hardware-based methods, software-based methods, which incur no additional hardware costs, are regarded as efficient methods to tolerate soft errors. Software-based methods, which are implemented by software protection, sacrifice performance. This paper proposes a new configurable approach, whose purpose is to balance system reliability and performance, to tolerate soft errors via partial software protection. The unprotected software regions, motivated by soft error masking at the software level, are related to statically dead code, code whose probability of being executed is low, and some partially dead code. For the protected code, we copy every data item and perform every operation twice to ensure that the data stored into memory are correct. Additionally, we ensure every branch instruction can jump to the right address by checking the condition and destination address. Finally, our approach is implemented by modifying the compiler. System reliability and performance are evaluated with different configurations. Experimental results demonstrate our purpose to balance system reliability and performance.",2011,0, 4870,Combined methods for solving inductive coupling problems,"The problem of induced AC voltages on pipelines has always been with us, and the interference caused by power transmission lines to buried gas pipelines has been under investigation for many years. Situations where a pipeline is influenced by power lines in a right-of-way are more frequent nowadays. Even under normal operating conditions, voltages and currents are induced on the pipeline that may pose danger to working personnel or may accelerate the corrosion of the pipeline's metal. The aim of the paper is to evaluate the induced voltages and currents in the case of an underground gas pipeline which shares the same right of way with a high voltage transmission line, in order to detect the possibility of AC corrosion occurring in the pipeline. We choose to combine the electromagnetic field method and the conventional circuit method. 
Because the electrical equivalent circuit involves a high number of circuit elements that must be defined, a software code that generates this automatically was created. The considered complex problem is studied for different operating conditions of the power transmission line and different values of the coupling coefficient.",2011,0, 4871,Automatic generation of test data for path testing by adaptive genetic simulated annealing algorithm,"Software testing has become an important stage of the software developing process in recent years, and it is a crucial element of software quality assurance. Path testing has become one of the most important unit test methods, and it is a typical white box test. The generation of testing data is one of the key steps which have a great effect on the automation of software testing. GA is an adaptive heuristic search algorithm premised on the evolutionary ideas of natural selection and genetics. Because it is a robust search method requiring little information to search effectively in a large or poorly-understood search space, it is widely used to search and optimize, and also can be used to generate test data. In this article we put the annealing mechanism of the Simulated Annealing Algorithm into the genetic algorithm to decide whether or not to accept the new individuals, and we import dynamic selection to adaptively select individuals which can be copied to the next generation. Adaptive crossover probability, adaptive mutation probability and elitist preservation ensure that the best individuals cannot be destroyed. The experimental results show that the adaptive genetic simulated annealing algorithm is superior to the genetic algorithm in effectiveness and efficiency.",2011,0, 4872,Guaranteed Seamless Transmission Technique for NGEO Satellite Networks,"Non-geostationary (NGEO) satellite communication systems are able to provide global communication with reasonable latency and low terminal power requirements. However, the highly dynamic topology, large delay and error-prone links have been a matter of fact in satellite network studies. This paper proposes a novel Guaranteed Seamless Transmission Technique (GST), which is a Hop-by-Hop scheme enhanced with the End-to-End scheme and associated with a link algorithm, which updates the link load explicitly and sends it back to the sources that use the link. We analyze GST theoretically by adopting a simple fluid model. The good performance of GST, in terms of bandwidth utilization, effective transmission ratio and fairness, is verified via a set of simulations.",2011,0, 4873,A middleware for reliable soft real-time communication over IEEE 802.11 WLANs,"This paper describes a middleware layer for soft real-time communication in wireless networks devised and realized using standard WLAN hardware and software. The proposed middleware relies on a simple network architecture comprising a number of stations that generate real-time traffic and a particular station, called the Scheduler, that coordinates the transmission of the real-time packets using a polling mechanism. The middleware combines EDF scheduling with a dynamic adjustment of the maximum number of transmission attempts, so as to adapt the performance to fluctuations of the link quality, thus increasing the communication reliability, while taking deadlines into account. After describing the basic communication paradigm and the underlying concepts of the proposed middleware, the paper describes the target network configuration and the software architecture. 
Finally, the paper assesses the effectiveness of the proposed middleware in terms of PER, On-Time Throughput, and Deadline Miss Rate, presenting the results of measurements performed on real testbeds.",2011,0, 4874,Improving model-based verification of embedded systems by analyzing component dependences,"Embedded systems in automobiles become increasingly complex as they are intended to make vehicles even safer, more comfortable, and more efficient. International norms like ISO 26262 and IEC 61165 postulate methods for the development and verification of safety critical systems. These standards should ensure that the dependability and quality of the embedded systems is maintained while their complexity and interdependence increases. Yet, the standards do not contain concrete methods or tools for their fulfillment. Classic techniques for dependability analysis are based either on system analysis by means of Markov analysis or on reliability estimation from a usage perspective. Treating the system only from one perspective, however, is a drawback as the system analysis neglects functional or non-functional dependences of the system. These dependences can directly influence the reliability in field usage. In this paper we present our approach to combine component dependency models with usage models to overcome these deficiencies. It is possible to identify usage scenarios which aim for critical dependences and to analyze the interaction of components inside the system. On the other hand, usage scenarios can be assessed as to whether they meet the desired verification purpose. The component dependency models reveal dependences that were not identified before, because they allow the extraction of implications across functional and non-functional dependences like memory, timing and processor utilization.",2011,0, 4875,Towards runtime testing in automotive embedded systems,"Runtime testing is a common way to detect faults during normal system operation. To achieve a specific diagnostic coverage, runtime testing is also used in safety-critical automotive embedded systems. In this paper we propose a test architecture to consolidate the hardware resource consumption and timing needs of runtime tests and of application and system tasks in a hard real-time embedded system as applied to the automotive domain. Special emphasis is put on the timing requirements of embedded systems with respect to hard real-time and concurrent hardware resource accesses of runtime tests and tasks running on the target system.",2011,0, 4876,Control-flow error detection using combining basic and program-level checking in commodity multi-core architectures,"This paper presents a software-based technique to detect control-flow errors using basic-level control-flow checking and the inherent redundancy in commodity multi-core processors. The proposed detection technique is composed of two phases of basic and program-level control-flow checking. Basic-level control-flow error detection is achieved through inserting additional instructions into the program at design time according to the control-flow graph. Previous research shows that modern superscalar microprocessors already contain significant amounts of redundancy. Program-level control-flow checking can detect CFEs by leveraging existing microprocessor redundancy. Therefore, the cost of adding extra redundancy for fault tolerance is eliminated. 
In order to evaluate the proposed technique, three workloads (quick sort, matrix multiplication and linked list) were run on a multi-core processor, and a total of 6000 transient faults were injected into the processor. The advantages of the proposed technique in terms of performance and memory overheads and detection capability are compared with those of conventional control-flow error detection techniques.",2011,0, 4877,Regression Test Selection Techniques for Test-Driven Development,"Test-Driven Development (TDD) is characterized by repeated execution of a test suite, enabling developers to change code with confidence. However, running an entire test suite after every small code change is not always cost effective. Therefore, regression test selection (RTS) techniques are important for TDD. Particularly challenging for TDD is the task of selecting a small subset of tests that are most likely to detect a regression fault in a given small and localized code change. We present cost-bounded RTS techniques based on both dynamic program analysis and natural-language analysis. We implemented our techniques in a tool called Test Rank, and evaluated its effectiveness on two open-source projects. We show that using these techniques, developers can accelerate their development cycle, while maintaining a high bug detection rate, whether actually following TDD, or in any methodology that combines testing during development.",2011,0, 4878,A Principled Evaluation of the Effect of Directed Mutation on Search-Based Statistical Testing,"Statistical testing generates test inputs by sampling from a probability distribution that is carefully chosen so that the inputs exercise all parts of the software being tested. Sets of such inputs have been shown to detect more faults than test sets generated using traditional random and structural testing techniques. Search-based statistical testing employs a metaheuristic search algorithm to automate the otherwise labour-intensive process of deriving the probability distribution. This paper proposes an enhancement to this search algorithm: information obtained during fitness evaluation is used to direct the mutation operator to those parts of the representation where changes may be most beneficial. A principled empirical evaluation demonstrates that this enhancement leads to a significant improvement in algorithm performance, and so increases both the cost-effectiveness and scalability of search-based statistical testing. As part of the empirical approach, we demonstrate the use of response surface methodology as an effective and objective method of tuning algorithm parameters, and suggest innovative refinements to this methodology.",2011,0, 4879,Identifying Infeasible GUI Test Cases Using Support Vector Machines and Induced Grammars,"Model-based GUI software testing is an emerging paradigm for automatically generating test suites. In the context of GUIs, a test case is a sequence of events to be executed which may detect faults in the application. However, a test case may be infeasible if one or more of the events in the event sequence are disabled or made inaccessible by a previously executed event (e.g., a button may be disabled until another GUI widget enables it). These infeasible test cases terminate prematurely and waste resources, so software testers would like to modify the test suite execution to run only feasible test cases. 
Current techniques focus on repairing the test cases to make them feasible, but this relies on executing all test cases, attempting to repair the test cases, and then repeating this process until a stopping condition has been met. We propose avoiding infeasible test cases altogether by predicting which test cases are infeasible using two supervised machine learning methods: support vector machines (SVMs) and grammar induction. We experiment with three feature extraction techniques and demonstrate the success of the machine learning algorithms for classifying infeasible GUI test cases in several subject applications. We further demonstrate a level of robustness in the algorithms when training and classifying test cases of different lengths.",2011,0, 4880,Event-Based GUI Testing and Reliability Assessment Techniques -- An Experimental Insight and Preliminary Results,"It is widely accepted that graphical user interfaces (GUIs) highly affect - positively or negatively - the quality and reliability of human-machine systems. In spite of this fact, quantitative assessment of the reliability of GUIs is a relatively young research field. Existing software reliability assessment techniques attempt to statistically describe the software testing process and to determine and thus predict the reliability of the system under consideration (SUC). These techniques model the reliability of the SUC based on particular assumptions and preconditions on the probability distribution of the cumulative number of failures, the failure data observed, and the form of the failure intensity function, etc. We expect that the methods used for modeling a GUI and related frameworks used for testing it also affect the factors mentioned above, especially the failure data to be observed and the prerequisites to be met. Thus, the quality of the reliability assessment process, and ultimately also the reliability of the GUI, depends on the methods used for modeling and testing the SUC. This paper attempts to gain some experimental insight into this problem. GUI testing frameworks based on event sequence graphs and event flow graphs were chosen as examples. A case study drawn from a large commercial web-based system is used to carry out the experiments and discuss the results.",2011,0, 4881,Change Sensitivity Based Prioritization for Audit Testing of Webservice Compositions,"Modern software systems often have the form of Web service compositions. They take advantage of the availability of a variety of external Web services to provide rich and complex functionalities, obtained as the integration of external services. However, Web services change at a fast pace and while syntactic changes are easily detected as interface incompatibilities, other more subtle changes are harder to detect and may give rise to faults. They occur when the interface is compatible with the composition, but the semantics of the service response has changed. This typically involves undocumented or implicit aspects of the service interface. Audit testing of services is the process by which the service integrator makes sure that the service composition continues to work properly with the new versions of the integrated services. Audit testing of services is conducted under strict (sometimes extreme) time and budget constraints. Hence, prioritizing the audit test cases so as to execute the most important ones first becomes of fundamental importance. We propose a test case prioritization method specifically tailored for audit testing of services. 
Our method is based on the idea that the most important test cases are those that have the highest sensitivity to changes injected into the service responses (mutations). In particular, we consider only changes that do not violate the explicit contract with the service (i.e., the WSDL), but may violate the implicit assumptions made by the service integrator.",2011,0, 4882,An Evaluation of Mutation and Data-Flow Testing: A Meta-analysis,"Mutation testing is a fault-based testing technique for assessing the adequacy of test cases in detecting synthetic faulty versions injected into the original program. Empirical studies report the effectiveness of mutation testing. However, the inefficiency of mutation testing has been the major drawback of this testing technique. Though a number of studies compare mutation to data-flow testing, the summary statistics for measuring the order of magnitude of the effectiveness and efficiency of these two testing techniques have not been discussed in the literature. In addition, the validity of each individual study is subject to external threats, making it hard to draw any general conclusion based solely on a single study. This paper introduces a novel meta-analytical approach to quantify and compare mutation and data-flow testing techniques based on findings reported in research articles. We report the results of two statistical meta-analyses performed to compare and measure the effectiveness as well as the efficiency of mutation and data-flow testing based on relevant empirical studies. We focus on the results of three empirical research articles selected from premier venues with their focus on comparing these two testing techniques. The results show that mutation is at least two times more effective than data-flow testing, i.e., odds ratio = 2.27. However, mutation is three times less efficient than data-flow testing, i.e., odds ratio = 2.94.",2011,0, 4883,Test Case Generation from Mutants Using Model Checking Techniques,"Mutation testing is a powerful testing technique: a program is seeded with artificial faults and tested. Undetected faults can be used to improve the test bench. The problem of automatically generating test cases from undetected faults is typically not addressed by existing mutation testing systems. We propose a symbolic procedure, namely Sym BMC, for the generation of test cases from a given program using Bounded Model Checking (BMC) techniques. The Sym BMC procedure determines a test bench that detects all seeded faults affecting the semantics of the program, with respect to a given unrolling bound. We have built a prototype tool that uses a Satisfiability Modulo Theories (SMT) solver to generate test cases and we show initial results for ANSI-C benchmark programs.",2011,0, 4884,An Experience Report on Using Code Smells Detection Tools,"Detecting code smells in the code and consequently applying the right refactoring steps when necessary is very important to improve the quality of the code. Different tools have been proposed for code smell detection, each one characterized by particular features. The aim of this paper is to describe our experience in using different tools for code smell detection. We outline the main differences among them and the different results we obtained.",2011,0, 4885,Prioritising Refactoring Using Code Bad Smells,"We investigated the relationship between six of Fowler et al.'s Code Bad Smells (Duplicated Code, Data Clumps, Switch Statements, Speculative Generality, Message Chains, and Middle Man) and software faults. 
In this paper we discuss how our results can be used by software developers to prioritise refactoring. In particular we suggest that source code containing Duplicated Code is likely to be associated with more faults than source code containing the other five Code Bad Smells. As a consequence, Duplicated Code should be prioritised for refactoring. Source code containing Message Chains seems to be associated with a high number of faults in some situations. Consequently it is another Code Bad Smell which should be prioritised for refactoring. Source code containing only one of the Data Clumps, Switch Statements, Speculative Generality, or Middle Man Bad Smells is not likely to be fault-prone. As a result these Code Bad Smells could be given a lower refactoring priority.",2011,0, 4886,On Investigating Code Smells Correlations,"Code smells are characteristics of the software that may indicate a code or design problem that can make software hard to evolve and maintain. Detecting and removing code smells, when necessary, improves the quality and maintainability of a system. Usually detection techniques are based on the computation of a particular set of combined metrics, or standard object-oriented metrics or metrics defined ad hoc for the smell detection. The paper investigates the direct and indirect correlations existing between smells. If one code smell exists, this can imply the existence of another code smell; or if one smell exists, another one cannot be there; or perhaps one could observe that some code smells tend to go together.",2011,0, 4887,"Automatic Validation and Correction of Formalized, Textual Requirements","Nowadays requirements are mostly specified in unrestricted natural language so that each stakeholder understands them. To ensure high quality and to avoid misunderstandings, the requirements have to be validated. Because of the ambiguity of natural language and the resulting absence of an automatic mechanism, this has to be done manually. Such manual validation techniques are time-consuming, error-prone, and repetitive because hundreds or thousands of requirements must be checked. With an automatic validation the requirements engineering process can be faster and can produce requirements of higher quality. To realize an automatism, we propose a controlled natural language (CNL) for the documentation of requirements. On the basis of the CNL, a concept for an automatic requirements validation is developed for the identification of inconsistencies and incomplete requirements. Additionally, automated correction operations for such defective requirements are presented. The approach improves the quality of the requirements and therefore the quality of the whole development process.",2011,0, 4888,Model Based Test Specifications: Developing of Test Specifications in a Semi Automatic Model Based Way,Developing and implementing conformity tests is a time-consuming and fault-prone task. To reduce these efforts a new route must be taken. The current way of specifying tests and implementing them includes too many manual parts. Based on the experience of testing electronic smart cards in ID documents like passports or ID cards the author describes a new way of saving time to write new test specifications and to get test cases based on these specifications. With new technologies like model based testing (MBT) and domain specific languages (DSL) it is possible to improve the specification and implementation of tests significantly. 
The author describes his experience in using a DSL to define a new language for testing smart cards and to use this language to generate both documents and test cases that can be run in several test tools.,2011,0, 4889,"Scanstud: A Methodology for Systematic, Fine-Grained Evaluation of Static Analysis Tools","Static analysis of source code is considered to be a powerful tool for detecting potential security vulnerabilities. However, only limited information regarding the current quality of static analysis tools exists. A public assessment of the capabilities of the competing approaches and products is not available. Also, neither a common benchmark nor a standard evaluation procedure has yet been defined. In this paper, we propose a general methodology for systematically evaluating static analysis tools. We document the design of an automatic execution and evaluation framework to support iterative test case design and reliable result analysis. Furthermore, we propose a methodology for creating test cases which can assess the specific capabilities of static analysis tools on a very fine level of detail. We conclude the paper with a brief discussion of our experiences which we collected through a practical evaluation study of six commercial static analysis products.",2011,0, 4890,Introducing Test Case Derivation Techniques into Traditional Software Development: Obstacles and Potentialities,"In traditional development, extracting test cases manually is an effort-consuming and error-prone process. To examine whether automation techniques can be integrated into such traditional development, we implemented our previously proposed method in ""TesMa"", a test case generation tool. We conducted a case study to evaluate the effectiveness and the cost.",2011,0, 4891,Assessing the Impact of Using Fault Prediction in Industry,"Software developers and testers need realistic ways to measure the practical effects of using fault prediction models to guide software quality improvement methods such as testing, code reviews, and refactoring. Will the availability of fault predictions lead to discovery of different faults, or to more efficient means of finding the same faults? Or do fault predictions have no practical impact at all? In this challenge paper we describe the difficulties of answering these questions, and the issues involved in devising meaningful ways to assess the impact of using prediction models. We present several experimental design options and discuss the pros and cons of each.",2011,0, 4892,An Empirical Study on Object-Oriented Metrics and Software Evolution in Order to Reduce Testing Costs by Predicting Change-Prone Classes,"Software maintenance cost is typically more than fifty percent of the cost of the total software life cycle and software testing plays a critical role in reducing it. Determining the critical parts of a software system is an important issue, because they are the best place to start testing in order to reduce the cost and duration of tests. Software quality is an important key factor to determine critical parts since high quality parts of software are less error-prone and easier to maintain. As object oriented software metrics give important evidence about design quality, they can help software engineers to choose critical parts, which should be tested first and intensively. In this paper, we present an empirical study about the relation between object oriented metrics and changes in software. 
In order to obtain the results, we analyze modifications in software across the historical sequence of open source projects. Empirical results of the study indicate that the low-quality parts of a software system change frequently during the development and management process. Using this relation we propose a method that can be used to estimate change-prone classes and to determine parts which should be tested first and more deeply.",2011,0, 4893,Probabilistic Error Propagation Modeling in Logic Circuits,"A recent study has shown that accurate knowledge of the false negative rate (FNR) of tests can significantly improve the diagnostic accuracy of spectrum-based fault localization. To understand the principles behind FNR modeling, in this paper we study three error propagation probability (EPP) modeling approaches applied to a number of logic circuits from the 74XXX/ISCAS-85 benchmark suite. Monte Carlo simulations for randomly injected faults show that a deterministic approach that models gate behavior provides high accuracy (O(1%)), while probabilistic approaches that abstract from gate modeling generate higher prediction errors (O(10%)), which increase with the number of injected faults.",2011,0, 4894,Modeling the Diagnostic Efficiency of Regression Test Suites,"Diagnostic performance, measured in terms of the manual effort developers have to spend after faults are detected, is not the only important quality of a diagnosis. Efficiency, i.e., the number of tests and the rate of convergence to the final diagnosis, is a very important quality of a diagnosis as well. In this paper we present an analytical model and a simulation model to predict the diagnostic efficiency of test suites when prioritized with the information gain algorithm. We show that, besides the size of the system itself, an optimal coverage density and uniform coverage distribution are needed to achieve an efficient diagnosis. Our models allow us to decide whether using IG with our current test suite will provide a good diagnostic efficiency, and enable us to define criteria for the generation or improvement of test suites.",2011,0, 4895,Compression Strategies for Passive Testing,"Testing is one of the most widely used techniques to increase the confidence in the correctness of complex software systems. In this paper we extend our previous work on passive testing with invariants; this technique checks the logs, collected from the system under test, in order to detect faults. This new proposal is focused on how the collected logs can be compressed without losing information and how the invariants must be adapted with respect to the selected compression strategy, in a correct way. We show the soundness of this new methodology.",2011,0, 4896,A Diagnostic Point of View for the Optimization of Preparation Costs in Runtime Testing,"Runtime testing is emerging as the solution for the validation and acceptance testing of service-oriented systems, where many services are external to the organization, and duplicating the system's components and their context is too complex, if possible at all. In order to perform runtime tests, an additional expense in the test preparation phase is required, both in software development and in hardware. 
Preparation cost prioritization methods have been based on runtime testability (i.e., coverage) and do not consider whether a good runtime testability is sufficient for a good runtime diagnosis quality in case faults are detected, and whether this diagnosis will be obtained efficiently (i.e., with a low number of test cases). In this paper we show (1) the direct relationship between testability and diagnosis quality, (2) that these two properties do not guarantee an efficient diagnosis, and (3) a measurement that ensures better prediction of efficiency.",2011,0, 4897,Detection of Sleeping Cells in LTE Networks Using Diffusion Maps,"In mobile networks the emergence of failures is caused by various breakdowns of hardware and software elements. One of the serious failures in radio networks is a Sleeping Cell. In our work one of the possible root causes for the appearance of this network failure is simulated in a dynamic network simulator. The main aim of the research is to detect the presence of a Sleeping Cell in the network and to define its location. For this purpose the Diffusion Maps data mining technique is employed. The developed fault identification framework uses the performance characteristics of the network, collected during its regular operation, and for that reason it can be implemented in real Long Term Evolution (LTE) networks within the Self-Organizing Networks (SON) concept.",2011,0, 4898,Design Time Validation of Service Orientation Principles Using Design Diagrams,"Design principles of Services ensure reliability, scalability and reusability of software components. Services that follow the design principles are robust to changes and are largely reusable in multiple scenarios but in similar domains. To date there is no systematic approach to apply these design principles to Services design that will ensure Service quality. Errors in the Service design stage often pass through multiple levels of amplification to subsequent stages of development and maintenance. Early detection of Service design faults reduces the cumulative effect on succeeding stages of service development. It is important to validate that the Services follow the principles at design time. In this paper, we present a formal rigorous approach to check the adherence of the Services designed for an enterprise solution to the Service orientation principles using design diagrams. We introduce a set of ""mapping rules"" by which relevant aspects of design diagrams can be used for validating the Services' adherence to design principles. We also present the results of an empirical study to assess the feasibility of our new approach.",2011,0, 4899,Boundless memory allocations for memory safety and high availability,"Spatial memory errors (like buffer overflows) are still a major threat for applications written in C. Most recent work focuses on memory safety - when a memory error is detected at runtime, the application is aborted. Our goal is not only to increase the memory safety of applications but also to increase the application's availability. Therefore, we need to tolerate spatial memory errors at runtime. We have implemented a compiler extension, Boundless, that automatically adds the tolerance feature to C applications at compile time. We show that this can increase the availability of applications. Our measurements also indicate that Boundless has a lower performance overhead than SoftBound, a state-of-the-art approach to detect spatial memory errors. 
Our performance gains result from a novel way to represent pointers. Nevertheless, Boundless is compatible with existing C code. Additionally, Boundless provides a trade-off to reduce the runtime overhead even further: We introduce vulnerability-specific patching for spatial memory errors to tolerate only known vulnerabilities. Vulnerability-specific patching has an even lower runtime overhead than full tolerance.",2011,0, 4900,A combinatorial approach to detecting buffer overflow vulnerabilities,"Buffer overflow vulnerabilities are program defects that can cause a buffer to overflow at runtime. Many security attacks exploit buffer overflow vulnerabilities to compromise critical data structures. In this paper, we present a black-box testing approach to detecting buffer overflow vulnerabilities. Our approach is motivated by a reflection on how buffer overflow vulnerabilities are exploited in practice. In most cases the attacker can influence the behavior of a target system only by controlling its external parameters. Therefore, launching a successful attack often amounts to a clever way of tweaking the values of external parameters. We simulate the process performed by the attacker, but in a more systematic manner. A novel aspect of our approach is that it adapts a general software testing technique called combinatorial testing to the domain of security testing. In particular, our approach exploits the fact that combinatorial testing often achieves a high level of code coverage. We have implemented our approach in a prototype tool called Tance. The results of applying Tance to five open-source programs show that our approach can be very effective in detecting buffer overflow vulnerabilities.",2011,0, 4901,Fault injection-based assessment of aspect-oriented implementation of fault tolerance,"Aspect-oriented programming provides an interesting approach for implementing software-based fault tolerance as it allows the core functionality of a program and its fault tolerance features to be coded separately. This paper presents a comprehensive fault injection study that estimates the fault coverage of two software-implemented fault tolerance mechanisms designed to detect or mask transient and intermittent hardware faults. We compare their fault coverage for two target programs and for three implementation techniques: manual programming in C and two variants of aspect-oriented programming. We also compare the impact of different compiler optimization levels on the fault coverage. The software-implemented fault tolerance mechanisms investigated are: i) triple time-redundant execution with voting and forward recovery, and ii) a novel dual signature control flow checking mechanism. The study shows that the variations in fault coverage among the implementation techniques generally are small, while some variations for different compiler optimization levels are significant.",2011,0, 4902,Aaron: An adaptable execution environment,"Software bugs and hardware errors are the largest contributors to downtime, and can be permanent (e.g. deterministic memory violations, broken memory modules) or transient (e.g. race conditions, bitflips). Although a large variety of dependability mechanisms exist, only a few are used in practice. The existing techniques do not prevail for several reasons: (1) the introduced performance overhead is often not negligible, (2) the gained coverage is not sufficient, and (3) users cannot control and adapt the mechanism.
Aaron tackles these challenges by detecting hardware and software errors using automatically diversified software components. It uses these software variants only if CPU spare cycles are present in the system. In this way, Aaron increases fault coverage without incurring a perceivable performance penalty. Our evaluation shows that Aaron provides the same throughput as an execution of the original application while checking a large percentage of requests - whenever load permits.",2011,0, 4903,A new framework for call admission control in wireless cellular network,"Managing the limited amount of radio spectrum is an important issue given the increasing demand for it. In recent work, we have introduced MAS (Multi-agent System) for the channel assignment problem in wireless cellular networks. Instead of using a base station directly for negotiation, a multi-agent system comprising software agents was designed to work at the base station. The system consists of a collection of layers to take care of local and global scenarios. In this paper we propose the combination of MAS with a new call admission control (CAC) mechanism based on fuzzy control. This paper aims to provide improvement in QoS parameters using fuzzy control at the call admission level. From the simulation studies, it is observed that the combined approach of the multi-agent system and fuzzy control at the initial level improves channel allocation and other QoS factors in an effective and efficient manner. The simulation results are presented for a benchmark 49-cell environment with 70 channels and validate the performance of this approach.",2011,0, 4904,Automated vulnerability discovery in distributed systems,"In this paper we present a technique for automatically assessing the amount of damage a small number of participant nodes can inflict on the overall performance of a large distributed system. We propose a feedback-driven tool that synthesizes malicious nodes in distributed systems, aiming to maximize the performance impact on the overall behavior of the distributed system. Our approach focuses on the interface of interaction between correct and faulty nodes, clearly differentiating the two categories. We build and evaluate a prototype of our approach and show that it is able to discover vulnerabilities in real systems, such as PBFT, a Byzantine Fault Tolerant system. We describe a scenario generated by our tool, where even a single malicious client can bring a BFT system of over 250 nodes down to zero throughput.",2011,0, 4905,DynaPlan: Resource placement for application-level clustering,"Creating a reliable computing environment from an unreliable infrastructure is a common challenge. Application-Level High Availability (HA) clustering addresses this problem by relocating and restarting applications when failures are detected. Current methods of determining the relocation target(s) of an application are rudimentary in that they do not take into account the myriad factors that influence an optimal placement. This paper presents DynaPlan, a method that improves the quality of failover planning by allowing the expression of a wide and extensible range of considerations, such as multidimensional resource consumption and availability, architectural compatibility, security constraints, location constraints, and policy considerations, such as energy-favoring versus performance-favoring. DynaPlan has been implemented by extending the IBM PowerHA clustering solution running on a group of IBM System P servers.
In this paper, we describe the design, implementation, and preliminary performance evaluation of DynaPlan.",2011,0, 4906,Verification of embedded system by a method for detecting defects in source codes using model checking,We have proposed a method based on model checking for detecting hard-to-discover defects in enterprise systems. We apply our method to embedded system development to easily discover some defects caused by input/output data of the hardware which are influenced by the external environment before the software is integrated into the hardware. This paper discusses the effectiveness of our method using a case study to develop a line tracing robot.,2011,0, 4907,On sequence based interaction testing,"T-way strategies aim to generate effective test data for detecting faults due to interaction. Different levels of interaction possibilities have been considered as part of existing t-way strategies including that of uniform strength interaction, variable strength interaction as well as input-output based relations. Many t-way strategies have been developed as a result (e.g. GTWay, TConfig, AETG, Jenny and GA for uniform strength interaction; PICT, IPOG and ACS for variable strength interaction; and TVG, Density and ParaOrder for input-output based relations). Although useful, all aforementioned t-way strategies have assumed sequence-less interactions amongst input parameters. In the case of reactive systems, such an assumption is invalid as some parameter operations (or events) occur in sequence, hence creating a possibility of bugs or faults triggered by the order (or sequence) of input parameters. If t-way strategies are to be adopted in such a system, there is also a need to support test data generation based on sequence of interactions. In line with such a need, this paper discusses the sequence based t-way testing (termed sequence covering array) as an extension of existing t-way strategies. Additionally, this paper also highlights the current progress and achievements.",2011,0, 4908,An Architectural Approach to Support Online Updates of Software Product Lines,"Despite the successes of software product lines (SPL), managing the evolution of an SPL remains difficult and error-prone. Our focus of evolution is on the concrete tasks integrators have to perform to update deployed SPL products, in particular products that require runtime updates with minimal interruption. The complexity of updating a deployed SPL product is caused by multiple interdependent concerns, including variability, traceability, versioning, availability, and correctness. Existing approaches typically focus on particular concerns while making abstraction of others, thus offering only partial solutions. An integrated approach that takes into account the different stakeholder concerns is lacking. In this paper, we present an architectural approach for updating SPL products that supports multiple concerns. The approach comprises two complementary parts: (1) an update viewpoint that defines the conventions for constructing and using architecture views to deal with multiple update concerns, and (2) a supporting framework that provides an extensible infrastructure supporting integrators of an SPL. We evaluated the approach for an industrial SPL for logistic systems, providing empirical evidence for its benefits and recommendations.",2011,0, 4909,Industrial Architectural Assessment Using TARA,"Scenario-based architectural assessment is a well-established approach for assessing architectural designs.
However, scenario-based methods are not always usable in an industrial context, where they can be perceived as complicated and expensive to use. In this paper we explore why this may be the case and define a simpler technique called TARA, which has been designed for use in situations where scenario-based methods are unlikely to be successful. The method is illustrated through a case study that explains how it was applied to the assessment of two quantitative analysis systems.",2011,0, 4910,SOFAS: A Lightweight Architecture for Software Analysis as a Service,"Access to data stored in software repositories by systems such as version control, bug and issue tracking, or mailing lists is essential for assessing the quality of a software system. A myriad of analyses exploiting that data have been proposed throughout the years: source code analysis, code duplication analysis, co-change analysis, bug prediction, or detection of bug fixing patterns. However, easy and straightforward synergies between these analyses rarely exist. To tackle this problem we have developed SOFAS, a distributed and collaborative software analysis platform to enable a seamless interoperation of such analyses. In particular, software analyses are offered as Restful web services that can be accessed and composed over the Internet. SOFAS services are accessible through a software analysis catalog where any project stakeholder can, depending on the needs or interests, pick specific analyses, combine them, let them run remotely and then fetch the final results. That way, software developers, testers, architects, or quality assurance experts are given access to quality analysis services. They are shielded from many peculiarities of tool installations and configurations, but SOFAS offers them sophisticated and easy-to-use analyses. This paper describes in detail our SOFAS architecture, its considerations and implementation aspects, and the current set of implemented and offered Restful analysis services.",2011,0, 4911,Assessing Suitability of Cloud Oriented Platforms for Application Development,"Enterprise data centers and software development teams are increasingly embracing cloud-oriented and virtualized computing platforms and technologies. As a result, it is no longer straightforward to choose the most suitable platform which may satisfy a given set of Non-Functional Quality Attributes (NFQA) criteria that are significant for an application. Existing methods, such as Serial Evaluation and Consequential Choice, are inadequate as they fail to capture the objective measurement of various criteria that are important for evaluating the platform alternatives. In practice, these methods are applied in an ad-hoc fashion. In this paper we introduce three application development platforms: 1) Traditional non-cloud, 2) Virtualized, and 3) Cloud Aware. We propose a systematic method that allows the stakeholders to evaluate these platforms so as to select the optimal one by considering important criteria. We apply our evaluation method to these platforms by considering a certain (non-business) set of NFQAs.
We show that the pure cloud-oriented platforms fare no better than the traditional non-cloud and vanilla virtualized platforms for most NFQAs.",2011,0, 4912,A Low-Power High-Performance Concurrent Fault Detection Approach for the Composite Field S-Box and Inverse S-Box,"The high level of security and the fast hardware and software implementations of the Advanced Encryption Standard have made it the first choice for many critical applications. Nevertheless, transient and permanent internal faults or malicious faults aiming at revealing the secret key may reduce its reliability. In this paper, we present a concurrent fault detection scheme for the S-box and the inverse S-box as the only two nonlinear operations within the Advanced Encryption Standard. The proposed parity-based fault detection approach is based on the low-cost composite field implementations of the S-box and the inverse S-box. We divide the structures of these operations into three blocks and find the predicted parities of these blocks. Our simulations show that, except for the redundant units approach, which has hardware and time overheads of close to 100 percent, the fault detection capabilities of the proposed scheme for the burst and random multiple faults are higher than the previously reported ones. Finally, through ASIC implementations, it is shown that for the maximum target frequency, the proposed fault detection S-box and inverse S-box in this paper have the least areas, critical path delays, and power consumptions compared to their counterparts with similar fault detection capabilities.",2011,0, 4913,10-Gbps IP Network Measurement System Based on Application-Generated Packets Using Hardware Assistance and Off-the-Shelf PC,"Targeting high-bandwidth applications such as video streaming services, we discuss advanced measurement systems for high-speed 10-Gbps networks. To verify service stability in such high-speed networks, we need to detect network quality under real environmental conditions. For example, test traffic injected into networks under test for measurements should have the same complex characteristics as the video streaming traffic. For such measurements, we have built Internet protocol (IP) stream measurement systems by using our 10-Gbps network interface card with hardware-assisted active/passive monitor extensions based on low-cost off-the-shelf personal computers (PCs). After showing hardware requirements and our implementation of each hardware-assisted extension, we report how we build pre-service and in-service network measurement systems to verify the feasibility of our hardware architecture. A traffic-playback system captures packets and stores traffic characteristics data without sampling any packets and then sends them, precisely emulating the complex characteristics of the original traffic by using our hardware assistance. The generated traffic is useful as test traffic in pre-service measurement. A distributed in-service network monitoring system collects traffic characteristics at multiple sites by utilizing synchronized precise timestamps embedded in video streaming traffic. The results are presented on the operator's display.
We report on their effectiveness by measuring 1.5-Gbps uncompressed high-definition television traffic flowing in the high-speed testbed IP network in Japan.",2011,0, 4914,Automatic Correction of Registration Errors in Surgical Navigation Systems,"Surgical navigation systems are used widely among all fields of modern medicine, including, but not limited to, ENT and maxillofacial surgery. As a fundamental prerequisite for image-guided surgery, intraoperative registration, which maps image to patient coordinates, has been subject to many studies and developments. While registration methods have evolved from invasive procedures like fixed stereotactic frames and implanted fiducial markers toward surface-based registration and noninvasive markers fixed to the patient's skin, even the most sophisticated registration techniques produce an imperfect result. Due to errors introduced during the registration process, the projection of navigated instruments into image data deviates up to several millimeters from the actual position, depending on the applied registration method and the distance between the instrument and the fiducial markers. We propose a method that allows registration accuracy to be automatically and continually improved during intraoperative navigation after the actual registration process has been completed. The projections of navigated instruments into image data are inspected and validated by the navigation software. Errors in image-to-patient registration are identified by calculating intersections between the virtual instruments' axes and surfaces of hard bone tissue extracted from the patient's image data. The information gained from the identification of such registration errors is then used to improve registration accuracy by adding an additional pair of registration points at every location where an error has been detected. The proposed method was integrated into a surgical navigation system based on paired points registration with anatomical landmarks. Experiments were conducted, where registrations with deliberately misplaced point pairs were corrected with automatic error correction. Results showed an improvement in registration quality in all cases.",2011,0, 4915,Earthquake response of an arch bridge under near-fault ground motions,"During the Wenchuan earthquake in 2008, some of the arch bridges in the seismic zone were damaged. To assess the performance of an arch bridge under near-fault ground motions, both experimental and numerical studies are conducted in this paper. Firstly, an ambient vibration test of a real typical arch bridge is made to measure the dynamic characteristics of the structure and calibrate the finite-element model. Then the earthquake response of the bridge is analyzed using the FE software MIDAS based on ground motion recorded in the Wenchuan earthquake. The study shows that the joint between deck and tie at 1/4 span and 3/4 span, the joint between beam and arch, and the middle of the beam are weak links under near-fault ground motions.",2011,0, 4916,Software component quality-finite mixture component model using Weibull and other mathematical distributions,"Software component quality has a major influence on software development project performance, such as lead-time, time to market and cost. It also affects the other projects within the organization, the people assigned to the projects and the organization in general.
In this study, a finite mixture of several mathematical distributions is used to describe the fault occurrence in the system based on individual software component contribution. Several examples are selected to demonstrate model fitting and comparison between the models. Four case studies are presented and evaluated for modeling software quality in very large development projects within the AXE platform, BICC as a call control protocol in the Ericsson Nikola Tesla R&D.",2011,0, 4917,Interconnectable gadgets and web services usage in supervisory control of Unmanned Underwater vehicles,"Unmanned Underwater vehicles (UUVs) are routinely used for data collection during underwater research missions. UUV operators who perform advanced data collection are usually not qualified for data interpretation. On-the-fly adaptation of data collection methods based on interpreted data can increase data quality and lower the operator's effort. However, this requires the presence of an expert on site. In order to avoid this, a system of remote monitoring and control over the Internet is proposed. Closing the vehicle control loop over the Internet is problematic due to latency issues; therefore, a supervisory control approach is used. This requires only high-level commands to be sent over the Internet while closing the control loop locally. Service-oriented architecture (SOA) is used as an API for vehicle monitoring and mission control, while software gadgets are used to display collected data and to send commands for mission adaptation. Gadgets provide support for modifying and displaying data as well as defining and detecting logical conditions. Usage of connectible gadgets as building blocks eliminates the need for expertise in programming languages while increasing the scalability and flexibility of the system.",2011,0, 4918,Complex systems and risk management,"The risk management process, and in particular risk assessment, is a very tedious and error-prone process with no exact measure of how it progresses, or even the justification that it reflects the real situation. This is because the whole process heavily depends on the experience of the people doing it. Furthermore, simplifications are made that run just contrary to what the real systems are: complex systems! In this paper we argue that all this process has to be done with complexity in mind, as it is a complex system, and we outline a novel risk management method based on those premises. It is possible to automate the risk assessment process presented in this paper to a high degree. Also, the risk method has better justifications and is less dependent on the skills of the people doing risk assessment. Finally, progress can be measured by measuring the complexity of the model.",2011,0, 4919,Design of Three Phase Network Parameter Monitoring System based on 71M6513,"In this paper, the design process of a Three Phase Network Parameter Monitoring System is analyzed and the overall research scheme and design idea of the system are expounded. This paper mainly studies how to improve the accuracy of input voltage and current. The front-end voltage decreasing, current dropping circuit and remote data transmission circuit are designed. The programs for data acquisition and calculation are developed, and real-time detection of power network parameters, energy calculation and remote data transmission are achieved.
Based on the .Net platform, the host computer software is designed to receive the data and store them in the database; the mean values of power and energy parameters are calculated by day, week and month and are available to the user for inquiry. Many parameters such as three-phase voltage, three-phase current, frequency, active power, reactive power and voltage harmonics can be detected by the system, which will provide a reliable basis for power quality analysis.",2011,0, 4920,Online monitoring of transformer winding axial displacement and its extent using scattering parameters and k-nearest neighbour method,"The online monitoring of the transformer winding using a scattering parameters fingerprint is presented. As a test object, a simplified model of a transformer is used. The winding axial displacement is modelled on this test object. The scattering parameters of the test object are calculated using the high-frequency simulation software and measured using a network analyser. Two indices are defined based on the magnitude and phase of the scattering parameters for the detection of the axial displacement. A new algorithm for the estimation of the axial displacement extent is presented using the proposed indices and high-frequency modelling of the transformer. To detect this mechanical defect and its extent, the k-nearest neighbour (k-NN) regression is suggested.",2011,0, 4921,Reasoning about Faults in Aspect-Oriented Programs: A Metrics-Based Evaluation,"Aspect-oriented programming (AOP) aims at facilitating program comprehension and maintenance in the presence of crosscutting concerns. Aspect code is often introduced and extended as software projects evolve. Unfortunately, we still lack a good understanding of how faults are introduced in evolving aspect-oriented programs. More importantly, there is little knowledge of whether existing metrics are related to typical fault introduction processes in evolving aspect-oriented code. This paper presents an exploratory study focused on the analysis of how faults are introduced during maintenance tasks involving aspects. The results indicate a recurring set of fault patterns in this context, which can better inform the design of future metrics for AOP. We also pinpoint AOP-specific fault categories which are difficult to detect with popular metrics for fault-proneness, such as coupling and code churn.",2011,0, 4922,Adding Process Metrics to Enhance Modification Complexity Prediction,"Software estimation is used in various contexts including cost, maintainability or defect prediction. To make the estimate, different models are usually applied based on attributes of the development process and the product itself. However, often only one type of attribute is used, like historical process data or product metrics, and rarely their combination is employed. In this report, we present a project in which we started to develop a framework for such complex measurement of software projects, which can be used to build combined models for different estimations related to software maintenance and comprehension. First, we performed an experiment to predict modification complexity (cost of a unity change) based on a combination of process and product metrics. We observed promising results that confirm the hypothesis that a combined model performs significantly better than any of the individual measurements.",2011,0, 4923,Design Defects Detection and Correction by Example,"Detecting and fixing defects makes programs easier for developers to understand.
We propose an automated approach for the detection and correction of various types of design defects in source code. Our approach allows detection rules to be found automatically, thus relieving the designer from doing so manually. Rules are defined as combinations of metrics/thresholds that better conform to known instances of design defects (defect examples). The correction solutions, a combination of refactoring operations, should minimize, as much as possible, the number of defects detected using the detection rules. In our setting, we use genetic programming for rule extraction. For the correction step, we use a genetic algorithm. We evaluate our approach by finding and fixing potential defects in four open-source systems. For all these systems, we found, on average, more than 80% of known defects, a better result when compared to a state-of-the-art approach, where the detection rules are manually or semi-automatically specified. The proposed corrections fix, on average, more than 78% of detected defects.",2011,0, 4924,Satisfying Programmers' Information Needs in API-Based Programming,"Programmers encounter many difficulties in using an API to solve a programming task. To cope with these difficulties, they browse the Internet for code samples, tutorials, and API documentation. In general, it is time-consuming to find relevant help from the plethora of information on the web. While programmers can use search-based tools to help locate code snippets or applications that may be relevant to the APIs they are using, they still face the significant challenge of understanding and assessing the quality of the search results. We propose to investigate a proactive help system that is integrated into a development environment to provide contextual suggestions to the programmers as the code is being read and edited in the editor.",2011,0, 4925,Context and Vision: Studying Two Factors Impacting Program Comprehension,"Linguistic information derived from identifiers and comments has a paramount role in program comprehension. Indeed, very often, program documentation is scarce and, when available, it is almost always outdated. Previous research works showed that program comprehension is often solely grounded on identifiers and comments and that, ultimately, it is the quality of comments and identifiers that impacts the accuracy and efficiency of program comprehension. Previous works also investigated the factors influencing program comprehension. However, they are limited by the available tools used to establish relations between cognitive processes and program comprehension. The goal of our research work is to foster our understanding of program comprehension by better understanding its implied underlying cognitive processes. We plan to study vision as the fundamental means used by developers to understand code in the context of a given program. Vision is indeed the trigger mechanism starting any cognitive process, in particular in program comprehension. We want to provide supporting evidence that context guides the cognitive process toward program comprehension. Therefore, we will perform a series of empirical studies to collect observations related to the use of context and vision in program comprehension. Then, we will propose laws and then derive a theory to explain the observable facts and predict new facts.
The theory could be used in future empirical studies and will provide the relation between program comprehension and cognitive processes.",2011,0, 4926,Capturing Expert Knowledge for Automated Configuration Fault Diagnosis,"The process of manually diagnosing a software misconfiguration problem is time-consuming. Manually writing and updating rules to detect future problems is still the state of the practice. Consequently, there is a need for increased automation. In this paper, we propose a three-phase framework using machine learning techniques for automated configuration fault diagnosis. This system can also help in capturing expert knowledge of configuration troubleshooting. Our experiments on Apache web server configurations are generally encouraging, and non-experts can use this system to diagnose misconfigurations effectively.",2011,0, 4927,Towards model-driven safety analysis,"Model-based safety analysis allows very high-quality analysis of safety requirements. Both qualitative (i.e. what must go wrong for a system failure) and quantitative aspects (i.e. how probable is a system failure) are of great interest for safety analysis. Traditionally, the analysis of these aspects requires separate, tool-dependent formal models. However, building adequate models for each analysis requires a lot of effort and expertise. Model-driven approaches support this by automating the generation of analysis models. SAML is a tool-independent modeling framework that allows for the construction of models with both non-deterministic and probabilistic behavior. SAML models can automatically be transformed into the input language of different state-of-the-art formal analysis tools - while preserving the semantics - to analyze different aspects of safety. As a consequence both - qualitative and quantitative - model-based safety analysis can be done without any additional generation of models and with transferable results. This approach makes SAML an ideal intermediate language for a model-driven safety analysis approach. Every higher-level language that can be transformed into SAML can be analyzed with all targeted formal analysis tools. New analysis tools can be added and the user benefits from every advancement of the analysis tools.",2011,0, 4928,Power swing detection for correct distance relay operation using S-transform and neural networks,"This paper presents an advanced technique for detecting power swings for distance relay operation. It uses the derivative of voltage and a signal processing technique called the S-transform for feature extraction. Then an artificial neural network (ANN) is deployed in detecting the unstable swing in power systems. This approach overcomes the traditional relay scheme's drawback by distinguishing between a fault, a stable swing and an unstable swing. To illustrate the effectiveness of the proposed approach, simulations were carried out on the IEEE 39 bus test system using the PSS/E software. Test results show that the proposed approach can effectively differentiate the fault, stable swing and unstable swing with good accuracy.",2011,0, 4929,A cross platform intrusion detection system using inter server communication technique,"In recent years, web applications have become tremendously popular. However, vulnerabilities are pervasive, resulting in exposure of organizations and firms to a wide array of risks.
SQL injection attacks, which have been ranked at the top of web application attack mechanisms used by hackers, can potentially result in unauthorized access to confidential information stored in a backend database, and hackers can take advantage of flawed design, improper coding practices, improper validation of user input, configuration errors, or other weaknesses in the infrastructure. Using cross-site scripting techniques, miscreants can hijack Web sessions and craft credible phishing sites. In this paper we survey different techniques to prevent SQLi and XSS attacks and propose a solution to detect and prevent malicious attacks on the developer's Web Application written in programming languages like PHP, ASP.NET and JSP. We have also created an API (Application Programming Interface) in the native language through which transactions and interactions are sent to the IDS Server through an Inter Server Communication Mechanism. This IDS Server, which is developed from PHPIDS, a purely PHP based intrusion detection system with a system architecture meant only for PHP applications, detects and prevents attacks like SQLi (SQL Injection), XSS (Cross-site scripting), LFI (Local File Inclusion), and RFE (Remote File Execution), returns the result to the Web Application and logs the intrusions. In addition, the behavioural pattern of the Web logs is analysed using the WAPT (Web Access Pattern Tree) algorithm, which helps in recording the activity of the web application, examines any suspicious behaviour and uncommon patterns of behaviour over a period of time, and also monitors increased activity and known attack variants. Based on this, a report is generated dynamically using a P-Chart, which can help the Website owner to increase the security measures and can also be used to improve the quality of the Web Application.",2011,0, 4930,Black box test case prioritization techniques for semantic based composite web services using OWL-S,"Web services are the basic building blocks for business, which is different from web applications. Testing of web services is difficult and increases the cost due to the unavailability of source code. In existing research, web services are tested based on the syntactic structure using the Web Service Description Language (WSDL) for atomic web services. This paper proposes an automated testing framework for composite web services based on semantics, where the domain knowledge of the web services is described using the Protege tool and the behaviour of the entire business operation flow for the composite web service is described by the Ontology Web Language for services (OWL-S). Prioritization of test cases is performed based on various coverage criteria for composite web services. A series of experiments was conducted to assess the effectiveness of prioritization, and empirical results show that prioritization techniques perform well in detecting faults compared to traditional techniques.",2011,0, 4931,An integrated apparatus to measure Mallampati score for the characterization of Obstructive Sleep Apnea (OSA),"Obstructive Sleep Apnea (OSA) affects as many as 1 in every 5 adults and has the potential to cause serious long-term health complications such as cardiovascular disease, stroke, hypertension and the consequent reduced quality of life. Studies have shown that the probability of having OSA increases with a higher BMI irrespective of gender and that there is a definite link between the race of the patient and having OSA.
This paper describes the design of an integrated apparatus to collect Mallampati scores with little human intervention and perform automatic processing of various parameters. The system permits life-cycle studies on patients with OSA and other sleep disorders.",2011,0, 4932,Data mining: An application to the semiconductor industry,"The development project consists of the study and use of data mining techniques to analyze a dataset of high dimensionality and large volume generated during the manufacture of memory modules in the semiconductor industry. In this paper, we propose the use of a self-organizing map neural network as a technique for knowledge extraction, discuss the analysis phase and the preparation of the database for the mining process, and present the next steps of the research. It is expected that the results at the end of the project will identify relationships and/or patterns that may help in predicting possible factors causing failures during the production process, thus contributing to improving the quality of industrial processes, particularly in the semiconductor industry.",2011,0, 4933,Comparative analysis of virtual worlds,"This paper presents a comparative analysis of a set of virtual worlds in order to facilitate the process of selecting a virtual world to serve as a platform for application development. Based on exhaustive research in the area, we selected a set of criteria based on the work of Mannien in 2004 and Robbins in 2009. After this identification, we applied the Quantitative Evaluation Framework (QEF) developed by Squire in 2007 with the aim of quantitatively assessing the platforms under consideration. The results showed that Second Life, OpenSim and Active Worlds are platforms that offer more services and tools for developing applications with quality.",2011,0, 4934,"Classification of defect types in requirements specifications: Literature review, proposal and assessment","Requirements defects have a major impact throughout the whole software lifecycle. Having a specific defect classification for requirements is important to analyse the root causes of problems, build checklists that support requirements reviews and reduce risks associated with requirements problems. In our research we analyse several defect classifiers; select the ones applicable to requirements specifications, following rules to build defect taxonomies; and assess the classification validity in an experiment of requirements defect classification performed by graduate and undergraduate students. Not all subjects used the same type of defect to classify the same defect, which suggests that defect classification is not consensual. Considering our results, we give recommendations to industry and other researchers on the design of classification schemes and treatment of classification results.",2011,0, 4935,A programming tool to ease modular programming with C++,"Module management support is very rough in the C and C++ programming languages. Modules must be separated into interface and implementation files, which store declarations and definitions, respectively. Ultimately, only text substitution tools are available, by means of the C/C++ preprocessor, which is able to insert an interface file at a given point of a translation unit. This way of managing modules does not take into account aspects like duplicated inclusions, or proper separation of declarations and definitions, just to name a few.
While the seasoned programmer will find this characteristic of the language annoying and error-prone, students will find it no less than challenging. In this document, a tool specially designed for improving the support of modules in C++ is presented. Its main advantage is that it makes it easier to manage large, module-based projects, while still allowing the use of classic translation units. This tool is designed for students who have to learn modular programming; not only those in the computer science discipline, but also those in other engineering disciplines in which programming is part of the curriculum.",2011,0, 4936,Building the pillars for the definition of a data quality model to be applied to the artifacts used in the Planning Process of a Software Development Project,"The success of a software development project is mainly dependent on the quality of the artifacts used throughout the project; this quality is reliant on the contents of the artifacts, as well as the level of quality of the data values corresponding to the metadata that describe the artifacts. In order to assess both kinds of quality, the artifacts' structure and metadata should therefore be taken into account. This paper proposes a DQ model that can be used as a reference by project managers to assess and, if necessary, improve the quality of the data values corresponding to the metadata describing the artifacts used in the process of planning a software development project. For our research, we have identified the corresponding artifacts from those described as part of the Planning Process defined in international standard ISO / IEC 12207:2008. We have aligned these found artifacts with those proposed by PMBOK, in order to better depict their structure; and finally, we are to build our data quality model upon the DQ dimensions proposed by Strong, D. M., Y. W. Lee and Wang, R. in Data Quality in Context. Comm. of the ACM 1997 40 (5): 103-110. With all of these elements, we intend to optimize the performance of the software development process by improving the project management process.",2011,0, 4937,Real world oriented test functions,"The global optimal solutions of presently widely-used test functions are known or controllable, which leaves room for algorithm falsification. Moreover, most algorithmic results for optimizing test functions reported in papers are a one-way conversation without peer recognition, whose authenticity depends entirely on the authors' personal integrity. In this paper, in order to change this situation, two groups of real world oriented test functions and the resultative optimal solutions are given. The chief characteristic of these functions is that their global optimal solutions remain unknown to all people forever, which blocks off the loopholes of algorithmic cheating from the source. The comparison of the quality of two algorithmic models depends on which can find a better optimal solution coordinate. Therefore, a real world oriented test function can detect the actual level of an algorithm.",2011,0, 4938,Research of software tools for DO-254 projects,"Airborne Electronic Hardware (AEH) development relies deeply on the quality of the tools that help the hardware artifact implementation from requirement to entity. Electronic Design Automation (EDA) tools are made to test the logic, synthesize the circuits, and place and route the electronic elements and their connections prior to final implementation. A critical issue for EDA tools is their adequate safety for use in avionics.
The paper focuses on recent approaches from the EDA industry to using EDA tools in DO-254 projects, then explains the principles of assessment and qualification, and finally introduces methods to assess some EDA tools. The discussed contents will provide a guideline for the tool certification process.",2011,0, 4939,Dynamic fault tree analysis approach to Safety Analysis of Civil Aircraft,"As one of the key projects of China at present, the safety of civil aircraft is a major issue that is considered first in design and development. Fault-tolerant systems with redundancy are used more often for the design of modern advanced large aircraft. This is a new research area for safety analysis technology, which is not yet mature in China. A novel approach to Safety Analysis of Civil Aircraft based on Dynamic Fault Tree Analysis has been proposed in this paper. This paper takes the mature typical design of large aircraft and general aircraft in developed countries as the research object, performs safety analysis using dynamic fault trees, and optimizes the analysis of the fault tree using modularization thinking. Then, the failure probability of civil aircraft can be calculated in the newly-developed simulation software.",2011,0, 4940,An Evaluation of QoE in Cloud Gaming Based on Subjective Tests,"Cloud Gaming is a new kind of service, which combines the successful concepts of Cloud Computing and Online Gaming. It provides the entire game experience to the users remotely from a data center. The player is no longer dependent on a specific type or quality of gaming hardware, but is able to use common devices. The end device only needs a broadband internet connection and the ability to display High Definition (HD) video. While this may reduce hardware costs for users and increase the revenue for developers by leaving out the retail chain, it also raises new challenges for service quality in terms of bandwidth and latency for the underlying network. In this paper we present the results of a subjective user study we conducted into the user-perceived quality of experience (QoE) in Cloud Gaming. We design a measurement environment that emulates this new type of service, define tests for users to assess the QoE, and derive Key Influence Factors (KFI) and influences of content and perception from our results.",2011,0, 4941,Reliability of consecutive k-out-of-n: F system under incomplete information,"Most studies of the consecutive k-out-of-n: F system assume that the component lifetime distributions are known precisely; this implies that the reliability of the components is known. However, it is difficult to expect this assumption to be fulfilled (for example, for software and human-machine systems). Therefore the paper considers the reliability of the consecutive k-out-of-n: F system by using imprecise probability theory under the condition that there is only some partial information about the component lifetime distribution. The exact formulas are obtained when k is 2; an algorithm is put forward. Finally an example is illustrated.",2011,0, 4942,Software defect prediction based on stability test data,"Software defect prediction is an essential part of evaluating product readiness in terms of software quality prior to the software delivery. As a new software load with new features and bug fixes becomes available, stability tests are typically performed with a call load generator in a full configuration environment. Defect data from the stability test provides the most accurate information required for the software quality assessment.
This paper presents a software defect prediction model using defect data from stability tests. We demonstrate that test run duration in hours is a better measure than calendar time in days for predicting the number of defects in a software release. An exponential reliability growth model is applied to the defect data with respect to test run duration. We then address how to identify whether estimates of the model parameters are stable enough for assuring the prediction accuracy.",2011,0, 4943,Research on key techniques of simulation and detection for fire control system on the basis of database,"The fire control system is the core component of the x-type artillery weapon system. Because of the position distribution, work control process and signal transfer relationship of the units in the fire control system, traditional detection devices are limited when detecting it. This paper describes how to simulate the work environment of the fire control system, on the basis of which the equipment units are detected. The management of sending and receiving serial port information is realized by building the datagram management model and the communication protocol management model, adopting database techniques in the simulation system. Since the purpose is to detect the units, it is hardly necessary to realize the ballistic curve solving and manipulation & aiming solving according to the equipment principle. Also, database techniques are employed to build up the database of ballistic curve solving and manipulation & aiming solving, in line with which the functions of ballistic curve solving and manipulation & aiming solving are completed in the simulation system. Then the performance testing is realized by comparing the resolution results with the standard results in the database by querying the database.",2011,0, 4944,Study on the relevance of the warnings reported by Java bug-finding tools,"Several bug-finding tools have been proposed to detect software defects by means of static analysis techniques. However, there is still no consensus on the effective role that such tools should play in software development. Particularly, there is still no conclusive answer to the following question usually formulated by software developers and software quality managers: how relevant are the warnings reported by bug-finding tools? The authors first report an in-depth study involving the application of two bug-finding tools (FindBugs and PMD) in five stable versions of the Eclipse platform. Next, in order to check whether the initial conclusions are supported by other systems, the authors describe an extended case study with 12 systems. In the end, it has been concluded that rates of relevance superior to 50% can be achieved when FindBugs is configured in a proper way. On the other hand, in the best scenario considered in the research, only 10% of the warnings reported by PMD have been classified as relevant.",2011,0, 4945,Suffix tree-based approach to detecting duplications in sequence diagrams,"Models are core artefacts in software development and maintenance. Consequently, the quality of models, especially maintainability and extensibility, becomes a big concern for most non-trivial applications. For various reasons, software models usually contain some duplications. These duplications should be detected and removed because they may reduce the maintainability, extensibility and reusability of models.
As an initial attempt to address the issue, the author proposes an approach in this study to detecting duplications in sequence diagrams. With special preprocessing, the author converts 2-dimensional (2-D) sequence diagrams into a 1-D array. Then the author constructs a suffix tree for the array. With the suffix tree, duplications are detected and reported. To ensure that every duplication detected with the suffix tree can be extracted as a separate reusable sequence diagram, the author revises the traditional construction algorithm of suffix trees by proposing a special algorithm to detect the longest common prefixes of suffixes. The author also probes approaches to removing duplications. The proposed approach has been implemented in DuplicationDetector. With the implementation, the author evaluated the proposed approach on six industrial applications. Evaluation results suggest that the approach is effective in detecting duplications in sequence diagrams. The main contribution of the study is an approach to detecting duplications in sequence diagrams, a prototype implementation and an initial evaluation.",2011,0, 4946,From village greens to volcanoes: Using wireless networks to monitor air quality,"Summary form only given. Air quality mapping is a rapidly growing need as people become aware of the acute and long-term risks of poor air quality. Both spatial and temporal mapping is required to understand, predict and improve air quality. The collision of several technologies has led to an opportunity to create affordable wireless networks to monitor air quality: short-range radio on a chip; GPS and internet maps; GSM integrated chips; improved batteries and solar power; Python and other internet languages; and low-power, low-cost gas sensors with ppb resolution. Bringing these technologies together requires collaboration between electronics engineers, mathematicians, software programmers, atmospheric chemists and sensor technology providers. And add in local politicians, because wireless networks need permission. We will discuss implementation and results of successful trials and look at what is now underway: Newcastle, Cambridge and London trials (MESSAGE); Cambridge UK real time air quality mapping (2010); Heathrow airport air quality network (2011); volcanic ash mapping; landfill site monitoring networks; indoor air quality studies; and rural air measurements. Each application has its own requirements of power, wireless protocol, air monitoring needs, data analysis and presentation of results.",2011,0, 4947,An approach of mission completion success probability prediction for circuits based on Saber simulation,"To solve problems existing in the static fault simulation method, a dynamic one is proposed in this paper, and based on it, an approach for predicting the mission completion success probability of circuits is proposed. In this approach, the failure distributions of components are derived from the Reliability Prediction Handbook for Electronic Equipment, and a sampling algorithm is introduced to simulate the randomness of failure modes and fault occurrence times. Moreover, this approach introduces looping fault simulation and automatic fault judgment techniques to simulate large numbers of tests to predict the mission completion success probability of circuits.
For the realization of this approach in the Saber simulation software, this paper presents automatic fault injection based on MAST models of Saber components and automatic fault judgment techniques in the circuit simulation process, and predicts the mission completion success probability for a representative example with Saber based on the approach.",2011,0, 4948,Research on the definition and model of software testing quality,"The aim of software testing is to find software defects and evaluate software quality. In order to show that software testing can really improve software quality, it is necessary to assess the quality of software testing itself. This paper discusses the definition of software testing quality, and further builds the framework model of software testing quality (STQFM) and the metrics model of software testing quality (SFTMM). The availability of the models is verified by an example application.",2011,0, 4949,Study of software reliability prediction based on GR neural network,"The failures of safety-critical software may result in the serious loss of property and life, and thus software reliability requirements have become very demanding. As an important quantitative approach for estimating and predicting software reliability, the software reliability prediction technique is significantly useful for improving and ensuring software quality and testing efficiency. A novel software reliability prediction method based on the general regression neural network (GRNN) is proposed, which makes it feasible to predict software failures without constructing a statistical model, as classic software reliability models do, and without the difficulty of solving multivariate likelihood equations. It also incorporates test coverage, which increases prediction accuracy. By using the probability plot technique and least-squares fitting, the probability distribution functions of the original failure data can be determined. A large amount of data can then be simulated to make a reliable prediction, which provides a way of solving the inaccuracy problem caused by the small sample size of test failure data. A case study has also been done on a real failure data set. The results show that the proposed method can reflect the relationships among time, test coverage and the number of faults, and it can improve the prediction accuracy.",2011,0, 4950,The object-FMA based test case generation approach for GUI software exception testing,"Traditional exception testing usually utilizes the error-guessing approach or the equivalence class-partition method to generate test cases, which heavily depend on the experience of testers and easily make the exception testing prone to omissions. In order to solve this problem, this paper introduces SFMEA (Software Failure Mode Effect Analysis) to generate the exception test cases for GUI software by analyzing the failure modes of the controls and the control sets of the GUI software and then translating those failure modes directly into the exception test cases. In order to make the failure mode analysis sufficient, we first propose an object-based approach for the failure mode analysis (i.e. Object-FMA), and then utilize this approach to analyze the failure modes of the common controls in Windows and generate the database of the failure modes of the controls for guiding the design of the test cases. In an actual GUI software-testing project, a case study is presented through comparing four diverse exception test suites.
Three test groups with different experience use the error-guessing approach to generate three exception test suites respectively. Then one group is selected from these three groups to use the proposed Object-FMA approach to generate one exception test suite. The comparison results show that the exception test cases generated by the Object-FMA approach not only are more sufficient than the ones generated by the error-guessing approach, but also detect more exceptions. The proposed Object-FMA approach can avoid overreliance on the experience of testers when designing exception test cases. Moreover, this approach can ensure the quality of the exception test cases from a methodological viewpoint. Thus, the feasibility and validity of the proposed Object-FMA approach are confirmed.",2011,0, 4951,A dynamic software binary fault injection system for real-time embedded software,"Dynamic fault injection is an efficient technique for reliability evaluation. In this paper, a new fault injection system called DDSFIS (debug-based dynamic software fault injection system) is presented. DDSFIS is designed to be adaptable to real-time embedded systems. Injection locations are detected and faults are injected automatically without recompiling. The tool is highly portable between different platforms since it relies on the GNU tool chains. The effectiveness and performance of DDSFIS are validated by experiments.",2011,0, 4952,The development of a software dependability case based on GSN,"The GSN method is used to develop a dependability case to study software dependability on the basis of an extension of the safety case. Given the scalability of GSN, the dependability of an anti-icing software system is analyzed as a case by developing a software dependability case based on extending GSN from four aspects: the behaviors and results of the software can be predicted; the software behavior states can be monitored; the software behavior results can be assessed; and the software abnormal behaviors can be controlled.",2011,0, 4953,Design and implement of RS-485 bus fault injection,"Testability is a significant design feature of a system. Having gone through five stages, namely External Test, BIT (Built-in Test), Intelligent BIT, Comprehensive Diagnosis and PHM (Prognostics and Health Management), it has greatly improved the maintainability, reliability and availability of modern weapons, especially electronic systems. Meanwhile, in the process of research on systems and equipment, we need to inject faults to undertake actual tests on the prototype, in order to verify the correctness of the testability analysis and design, identify design flaws and inspect whether the products fully satisfy the requirements of the testability design. Nowadays, for complex electronic systems, researchers abroad widely use fault injection technology to find testability design flaws in electronic systems precisely and to assess the testability indexes of the systems.
In this paper, we mainly study an external-bus fault injection system to meet the requirements of PHM verification tests at the external bus, design and implement a fault injector based on the RS-485 bus, and design a simulated experimental environment to validate the authenticity and validity of the RS-485 bus fault injector.",2011,0, 4954,Cassini spacecraft post-launch malfunction correction success,"After the launch of the Cassini Mission-to-Saturn Spacecraft, the volume of subsequent mission design modifications was expected to be minimal due to the rigorous testing and verification of the Flight Hardware and Flight Software. For known areas of risk where faults could potentially occur, component redundancy and/or autonomous Fault Protection (FP) routines were implemented to ensure that the integrity of the mission was maintained. The goal of Cassini's FP strategy is to ensure that no credible Single Point Failure (SPF) prevents attainment of mission objectives or results in a significantly degraded mission, with the exception of the class of faults which are exempted due to low probability of occurrence. In the case of Cassini's Propulsion Module Subsystem (PMS) design, a waiver was approved prior to launch for failure of the prime regulator to properly close; a potentially mission catastrophic single point failure. However, one month after Cassini's launch, when the fuel and oxidizer tanks were pressurized for the first time, the prime regulator was determined to be leaking at a rate significant enough to require a considerable change in Main Engine (ME) burn strategy for the remainder of the mission. Crucial mission events such as the Saturn Orbit Insertion (SOI) burn task, which required a characterization exercise for the PMS system 30 days before the maneuver, were now impossible to achieve. This paper details the steps necessary to support the unexpected malfunction of the prime regulator and the introduction of new failure modes which required new FP design changes consisting of new/modified under-pressure and over-pressure algorithms; all of which had to be accomplished during the operational phase of the spacecraft, as a result of a presumed low probability waived failure which occurred after launch.",2011,0, 4955,Elman network voting system for cyclic system,"It is important to improve the voting system in current software fault tolerance research. In this paper, we propose an Elman network voting system. This is an application of the Elman network (a form of recurrent neural network). In a time-sequential environment, an Elman network can predict the next state by referencing the previous state. Thus, the Elman network is especially suitable for cyclic systems. The majority voting system is a classical voting system with an inherent safety mechanism. Combining the majority voting system and the Elman network yields both predictive and safety characteristics. Experiments show that the Elman network voting system can give appropriate advice in disagreement situations after training; this approach performs quite well under both small and large turbulence.",2011,0, 4956,Modified real-value negative selection algorithm and its application on fault diagnosis,"The drawbacks of the common real-value negative selection algorithm applied to fault diagnosis are analyzed, and a modified real-value negative selection algorithm is presented based on the corresponding improvements. Firstly, the fault detector set is partitioned into a remember-detector set covering the known-fault space and a random-detector set covering the unknown-fault space.
Secondly, taking all known states, including the normal state, as the self set in the training period, the random-detector set is obtained through negative selection and distribution optimization. Lastly, in order to avoid `Fail to Alarm' events caused by holes, a two-time-matching method is presented for the detection period which takes the normal state as the self set. A resistance circuit fault diagnosis experiment shows that, compared with the common real-value negative selection algorithm, the modified real-value negative selection algorithm can effectively avoid `Fail to Alarm' events and has higher diagnostic accuracy.",2011,0, 4957,Software fault detection using program patterns,"Effective detection of software faults is an important activity of the software development process. The main difficulty of software fault detection is finding faults in a large and complex software system. In this paper, we propose an approach that applies program patterns to detect and locate software faults so that programmers can fix bugs and increase software quality. The advantage of the proposed approach is that defect-prone code segments can be detected. To help the programmer detect program bugs, this approach also includes a graphical user interface to locate the defect-prone code segments.",2011,0, 4958,Research on quality of service in wireless sensor networks,"As a complex network consisting of sensing, processing and communication, a wireless sensor network is driven by various applications and strongly requires new Quality of Service guarantees. The application target of a WSN is to monitor events, and it pays more attention to the QoS of colony data packets. So, we present an improved QoS mechanism based on a weighted mapping table. Because of the limited computational capabilities of WSNs, the proposed mechanism uses a mapping table. The improved QoS mechanism finds the weight from the mapping table according to the level of events, and dynamically adjusts the transmit probability of each event. The mechanism considers the different event delay requirements, and adjusts the bandwidth and event delay reasonably for each event. The simulation shows that the improved mechanism is simple and effective; it can meet the application requirements of wireless sensor networks.",2011,0, 4959,An adaptive threshold segmentation method based on BP neural network for paper defect detection,"Threshold segmentation is the fastest method of defect detection in modern defect inspection systems based on computer vision. But in a real paper defect detection system, the segmentation thresholds usually change with the paper image luminance, which is influenced by many factors. In order to resolve this problem, an adaptive threshold segmentation method based on a BP neural network is proposed in this paper. For this method, BP neural network models are created and trained to obtain the segmentation thresholds according to the image luminance, and the defects are segmented with the thresholds obtained by the network. This method is especially suitable for detecting three typical types of paper defects: dark spot, light spot and hole. The experimental results indicate that this method is efficient and can be applied to modern paper defect inspection systems.",2011,0, 4960,Strategy-based two-level fault handling mechanism for composite service,"Service composition is becoming more and more popular in enterprise applications, but it is prone to errors when the composite service is complex.
Thus, the fault handling mechanism of composite services becomes an important challenge. However, most current solutions concentrate only on service-level fault handling, which mainly deals with service invocation failures. As processes become more and more complex and business requirements demand modifying the running process to handle faults, we need to provide a more flexible and effective fault handling mechanism for composite services. We therefore propose a strategy-based two-level fault handling mechanism for composite services that supports user-defined fault handling strategies. Besides traditional fault handling such as service redundancy, we draw on composite service evolution and present a process-level fault handling mechanism, so faults can be resolved by dynamically evolving the process. Based on this mechanism, we implement a BPMN-based composite service execution engine, SCENE BPMNEngine, which supports the above fault handling mechanism. We then test and verify the effectiveness of this mechanism using specific cases.",2011,0, 4961,Improving quality in a distributed IP telephony system by the use of multiplexing techniques,"Nowadays, many enterprises use Voice over Internet Protocol (VoIP) in their telephony systems. Companies with offices in different countries or geographical areas can build a centrally managed telephony system sharing the lines of their gateways in order to increase admission probability and to save costs on international calls. So it is convenient to introduce a system to ensure a minimum QoS (Quality of Service) for conferences, and one of these solutions is CAC (Call Admission Control). In this work we study the improvements in terms of admission probability and conversation quality (R-factor) which can be obtained when RTP multiplexing techniques are introduced, as in this scenario there will be multiple conferences with the same origin and destination offices. Simulations have been carried out in order to compare simple RTP and TCRTP (Tunneling Multiplexed Compressed RTP). The results show that these parameters can be improved while maintaining an acceptable quality for the conferences if multiplexing is used.",2011,0, 4962,Influence of traffic management solutions on Quality of Experience for prevailing overlay applications,"Different sorts of peer-to-peer (P2P) applications emerge every day and they are becoming more and more popular. The performance of such applications may be measured by means of Quality of Experience (QoE) metrics. In this paper, the factors that influence these metrics are surveyed. Moreover, the impact of economic traffic management solutions (e.g., proposed by IETF) on perceived QoE for the dominant overlay applications is assessed. The possible regulatory issues regarding QoE for P2P applications are also mentioned.",2011,0, 4963,Evaluating the efficiency of data-flow software-based techniques to detect SEEs in microprocessors,"There is a large set of software-based techniques that can be used to detect transient faults. This paper presents a detailed analysis of the efficiency of data-flow software-based techniques to detect SEU and SET in microprocessors. A set of well-known rules is presented and implemented automatically to transform an unprotected program into a hardened one. SEU and SET are injected in all sensitive areas of a MIPS-based microprocessor architecture. The efficiency of each rule and of combinations of them is tested.
Experimental results are used to analyze the overhead of data-flow techniques, allowing us to compare these techniques with respect to time, resources and efficiency in detecting this type of fault. This analysis allows us to implement an efficient fault tolerance method that combines the presented techniques in such a way as to minimize memory area and performance overhead. The conclusions can guide designers in developing more efficient techniques to detect these types of faults.",2011,0, 4964,Using an FPGA-based fault injection technique to evaluate software robustness under SEEs: A case study,"The robustness of microprocessor-based systems under Single Event Effects is a very current concern. A widely adopted solution to make a microprocessor-based system robust consists of modifying the software application by adding redundancy and fault detection capabilities. The efficiency of the selected software-based solution must be assessed. This evaluation process allows the designers to choose the more suitable robustness technique and check if the hardened system achieves the expected dependability levels. Several approaches with this purpose can be found in the literature, but their efficiency is limited in terms of the number of faults that can be injected, as well as the level of accuracy of the fault injection process. In this paper, we propose FPGA-based fault injection techniques to evaluate software robustness methods under Single Event Upset (SEU) as well as Single Event Transient (SET). Experimental results illustrate the benefits of using the proposed fault injection method, which is able to evaluate a large number of faults of both types of events.",2011,0, 4965,Automated testing of embedded automotive systems from requirement specification models,"Embedded software for modern automotive and avionic systems is increasingly complex. In early design phases, even when there is still uncertainty about the feasibility of the requirements, valuable information can be gained from models that describe the expected usage and the desired system reaction. The generation of test cases from these models indicates the feasibility of the intended solution and helps to identify scenarios for which the realization is hardly feasible or the intended system behavior is not properly defined. In this paper we present the formalization of requirements by models to simulate the expected field usage of a system. These so-called usage models can be enriched by information about the desired system reaction. Thus, they are the basis for all subsequent testing activities: First, they can be used to verify the first implementation models and design decisions w.r.t. the fulfillment of requirements and second, test cases can be derived in a random or statistical manner. The generation can be controlled with operational profiles that describe different classes of field usage. We have applied our approach at a large German car manufacturer in the early development phase of active safety functionalities. Test cases were generated from the usage models to assess the implementation models in MATLAB/Simulink. The parametrization of the systems could be optimized and a faulty transition in the implementation models was revealed. These design and implementation faults had not been discovered with the established test method.",2011,0, 4966,A cache based algorithm to predict HDL modules faults,"Verification is the most challenging and time consuming stage in the integrated circuit development cycle.
As design complexity doubles every two years, novel verification methodologies are needed. We propose an algorithm that dynamically builds and updates an HDL module error-proneness list. This list can be used to assist the development team in allocating resources during the verification stage. The algorithm is built on the idea that problematic modules usually hide many uncovered errors. Thus, our algorithm caches the most frequently modified and fixed modules. In an academic experiment composed of 17 modules, using a cache of size 3, we were able to correctly predict almost 80% of the fault occurrences.",2011,0, 4967,Event-driven monitoring and scheduling in express and logistics industry,"The express delivery industry is growing rapidly as the result of an explosive growth in e-commerce, telemarketing and TV home-shopping, especially in developing countries. On the other hand, parcel delay, loss and damage have become major challenges for the excessive number of express companies. In this paper we introduce an event-driven shipment monitoring and scheduling system, which utilizes the information flow and material flow together to monitor the events during the shipment lifecycle and detect any exception event. A centralized scheduling engine will generate dynamic routing for the shipment in case of exception. The system can efficiently avoid parcel loss and guarantee parcel service quality.",2011,0, 4968,Warpage simulation and optimization for the shell of color liquid crystal display monitor based on Moldflow,"The injection molding of the thin-walled shell of a color liquid crystal display (LCD) monitor is simulated by using Moldflow software and the warpage after molding is predicted. It is found that the warpage of plastic parts is mainly caused by uneven shrinkage. In order to reduce the warpage of the shell of the LCD monitor, different cooling systems of the mold are established, and optimal molding conditions are found by optimizing process parameters. The results provide the basis for product design and mold manufacture, which is significant for improving the efficiency and quality of the plastic parts.",2011,0, 4969,Research on indirect lighting protection for military power supply,"According to the lightning protection characteristics and the existing problems of military power supplies, electromagnetic transient models of the key parts in lightning protection, such as the generator, control unit and cables, are established based on the PSCAD/EMTDC power simulation software. The dispersed-continuous separation modeling method for power electronic devices in military power supplies is suggested. Indirect lightning protection for the military power system is presented, mainly through installing SPDs on the ports of the electric supply line, power supply line and signal line, and through shielding and filtering for protection against indirect lightning. Finally, a lightning protection module device with high reliability and security is developed, and the indirect lightning protection and all-weather flight capability of the military power supply are improved.",2011,0, 4970,Reasonable coal pillar size design in Xiaoming mine,"The pillar size has great influence not only on the stability of the surrounding rock, but also on the recovery rate of coal resources, which directly affects the economic benefits of coal.
According to the geological conditions of the Xiaoming mine of the Tiefa coal mining group and the intended use of the coal pillar, field measurement, numerical simulation and theoretical calculation methods are adopted to obtain a small coal pillar width and a reasonable coal pillar size. Rock stratum detection is used to find fissures, faults and broken areas; the coal pillar stress and displacement distributions for different pillar widths (3, 5, 8, 10, 15 and 20 m) are analyzed with the FLAC2D numerical simulation software, and a reasonable size is finally determined. This provides a guarantee of security for the exploitation of the mine.",2011,0, 4971,An Inquiry-based Ubiquitous Tour System,"An inquiry-based ubiquitous tour system is proposed in this paper. The main concept is to use the strategy of inquiry to achieve the purposes of guiding and learning by using PDAs in a ubiquitous environment. This system has the following characteristics: the inquiry learning theory is used to design the guide activities, the provision of clues is used to guide the inquiry activities of learners in the process of inquiry, interactive learning objects are used to enhance motivation and learning interest, and the system also uses GPS and instant messaging functions for cooperative learning. After the system is completed, a questionnaire based on the technology acceptance model is designed to assess the learning effectiveness of learners. The questionnaire includes system quality, content quality, environment quality, system mobility, perceived usefulness, perceived ease of use, and behavioral intentions of learners. There are a total of 9 hypotheses in the study questions, and a total of 17 items. The experimental results show that the research scale is highly reliable and all of the assumptions are valid.",2011,0, 4972,QoS Driven Web Services Evolution,"Loose coupling and on-demand integration are the fundamental characteristics of the Service Oriented Architecture (SOA), which have enforced rapid development of Web services. However, nonfunctional quality of service (QoS) attributes may evolve due to changes in network conditions and the locations of service users. Some real services may update their QoS properties on-the-fly, others may become unavailable. Thus, addressing the problem of uninformed QoS evolution of Web services has become a significant research issue. This paper proposes a dynamic evolution framework for Web services, which uses Collaborative Filtering (CF) to predict QoS values and enables the evolution of Web services. In this framework, the QoS values of current users can be predicted using the past QoS data of similar users, with no extra Web service invocations. About 1.5 million real-world QoS records are used for evaluation, and the experimental results show that the framework is a feasible and complementary approach to the dynamic evolution of Web services.",2011,0, 4973,Engineering SLS Algorithms for Statistical Relational Models,"We present high performing SLS algorithms for learning and inference in Markov Logic Networks (MLNs). MLNs are a state-of-the-art representation formalism that integrates first-order logic and probability. Learning MLN structure is hard due to the combinatorial space of candidates caused by the expressive power of first-order logic. We present current work on the development of algorithms for learning MLNs, based on the Iterated Local Search (ILS) metaheuristic.
Experiments in real-world domains show that the proposed approach improves accuracy and learning time over existing state-of-the-art algorithms. Moreover, MAP and conditional inference in MLNs are hard computational tasks too. This paper presents two algorithms for these tasks based on the Iterated Robust Tabu Search (IRoTS) schema. The first algorithm performs MAP inference by performing a RoTS search within an ILS iteration. Extensive experiments show that it improves over the state-of-the-art algorithm in terms of solution quality and inference times. The second algorithm combines IRoTS with simulated annealing for conditional inference, and we show through experiments that it is faster than the current state-of-the-art algorithm while maintaining the same inference quality.",2011,0, 4974,An Atomatic Fundus Image Analysis System for Clinical Diagnosis of Glaucoma,"Glaucoma is a serious ocular disease and leads to blindness if it is not detected and treated properly. The diagnostic criteria for glaucoma include intraocular pressure measurement, optic nerve head evaluation, retinal nerve fiber layer and visual field defects. The observation of the optic nerve head, cup to disc ratio and neural rim configuration is important for early detection of glaucoma in clinical practice. However, the broad range of cup to disc ratios makes it difficult to identify early changes of the optic nerve head, and different ethnic groups possess various features in their optic nerve head structures. Hence, it is still important to develop various detection techniques to assist clinicians in diagnosing glaucoma at early stages. In this study, we developed an automatic detection system which contains two major phases: the first phase performs a series of digital fundus retinal image analysis modules including vessel detection, vessel inpainting, cup to disc ratio calculation, and neuro-retinal rim analysis for the ISNT rule; the second phase determines the abnormal status of the retinal blood vessels from different aspects. In this study, the novel idea of integrating image inpainting and active contour model techniques successfully assisted the correct identification of cup and disc regions. Several clinical fundus retinal images, including normal and glaucoma cases, were applied to the proposed system for demonstration.",2011,0, 4975,Evolutionary Feature Construction for Ultrasound Image Processing and its Application to Automatic Liver Disease Diagnosis,"In this paper, the self-organization properties of genetic algorithms are employed to tackle the problem of feature selection and extraction in ultrasound images, which can facilitate early disease detection and diagnosis. Accurately identifying the aberrant features at a particular location of clinical ultrasound images is important to find possibly damaged tissues. Unfortunately, it is difficult to exactly detect the regions of interest (ROIs) in relatively low quality clinical ultrasound images. The presented evolutionary optimization algorithm offers a novel approach to building features for automatic liver cirrhosis diagnosis using a genetic algorithm. The extracted features provide several advantages over other feature extraction techniques, which include: automatic construction of feature sets and tuning of their parameters, the ability to integrate multiple feature sets to improve diagnosis accuracy, and the ability to find local ROIs and integrate their local features into effective global features.
Compared with past approaches, we open a new way to unify the processing steps in a clinical application using evolutionary optimization algorithms for ultrasound images. Experimental results show the effectiveness of the proposed method.",2011,0, 4976,Using Diversity to Design and Deploy Fault Tolerant Web Services,"Any software component, for instance a Web service, is prone to unexpected failures. To guarantee business process continuity despite these failures, component replication is usually put forward as a solution to make these components fault tolerant. In this paper we illustrate the limitations of replication and suggest diversity as an alternative solution. In the context of Web services, diversity stems from the semantic similarity of the functionalities of Web services. Building upon this similarity, a diversity group consisting of semantically similar Web services is built and then controlled using different execution models. Each model defines how the Web services in the diversity group collaborate and step in when one of them fails, to ensure operation continuity. An architecture showing the use of diversity to design and deploy fault tolerant Web services is presented in this paper.",2011,0, 4977,Execution Constraint Verification of Exception Handling on UML Sequence Diagrams,"Exception handling alters the control flow of the program. As such, errors introduced in exception handling code may influence the overall program in undesired ways. To detect such errors early and thereby decrease the programming costs, it is worthwhile to consider exception handling at the design level. Preferably, design models must be extended to incorporate exception handling behavior and the control flow must be verified accordingly. Common practices for verification require a formal model and semantics of the design. Defining semantics and manually converting design models to formal models are costly. We propose an approach for verifying exception handling in UML design models, where we extend UML with exception handling notations, define execution and exception handling semantics, and automatically transform UML models to a formal model. The formal model is used for generating execution paths. Constraints are specified (as temporal logic formulas) on execution paths and are verified.",2011,0, 4978,Runtime Verification of Domain-Specific Models of Physical Characteristics in Control Software,"The control logic of embedded systems is nowadays largely implemented in software. Such control software implements, among other things, models of physical characteristics, like heat exchange among system components. Due to the evolution of system properties and increasing complexity, faults can be left undetected in these models. Therefore, their accuracy must be verified at runtime. Traditional runtime verification techniques that are based on states and/or events in software execution are inadequate in this case. The behavior suggested by models of physical characteristics cannot be mapped to behavioral properties of software. Moreover, implementation in a general-purpose programming language makes these models hard to locate and verify. This paper presents a novel approach to explicitly specify models of physical characteristics using a domain-specific language, to define monitors for inconsistencies by detecting and exploiting redundancy in these models, and to realize these monitors using an aspect-oriented approach.
The approach is applied to two industrial case studies.",2011,0, 4979,Automatic Synthesis of Static Fault Trees from System Models,"Fault tree analysis (FTA) is a traditional reliability analysis technique. In practice, the manual development of fault trees could be costly and error-prone, especially in the case of fault tolerant systems due to the inherent complexities such as various dependencies and interactions among components. Some dynamic fault tree gates, such as Functional Dependency (FDEP) and Priority AND (PAND), are proposed to model the functional and sequential dependencies, respectively. Unfortunately, the potential semantic troubles and limitations of these gates have not been well studied before. In this paper, we describe a framework to automatically generate static fault trees from system models specified with SysML. A reliability configuration model (RCM) and a static fault tree model (SFTM) are proposed to embed system configuration information needed for reliability analysis and error mechanism for fault tree generation, respectively. In the SFTM, the static representations of functional and sequential dependencies with standard Boolean AND and OR gates are proposed, which can avoid the problems of the dynamic FDEP and PAND gates and can reduce the cost of analysis based on a combinatorial model. A fault-tolerant parallel processor (FTTP) example is used to demonstrate our approach.",2011,0, 4980,Evaluation of Experiences from Applying the PREDIQT Method in an Industrial Case Study,"We have developed a method called PREDIQT for model-based prediction of impacts of architectural design changes on system quality. A recent case study indicated feasibility of the PREDIQT method when applied on a real-life industrial system. This paper reports on the experiences from applying the PREDIQT method in a second and more recent case study - on an industrial ICT system from another domain and with a number of different system characteristics, compared with the previous case study. The analysis is performed in a fully realistic setting. The system analyzed is a critical and complex expert system used for management and support of numerous working processes. The system is subject to frequent changes of varying type and extent. The objective of the case study has been to perform an additional and more structured evaluation of the PREDIQT method and assess its performance with respect to a set of success criteria. The evaluation argues for feasibility and usefulness of the PREDIQT-based analysis. Moreover, the study has provided useful insights into the weaknesses of the method and suggested directions for future research and improvements.",2011,0, 4981,Dynamic Service Replacement to Improve Composite Service Reliability,"Service-oriented architecture (SOA) provides an ability to satisfy the increasing demand of the customer for complicated services in business environments via the composition of service components scattered on the Internet. Service composition is a mechanism to create a new service by the integration of several services to meet complex business goals. Web services are frequently exposed to unexpected service faults in network environments, because most SOA has been recently realized in the area of web services. Thus, services participating in the service composition cannot always be free of service faults, thereby decreasing the reliability of service composition. It is necessary to improve the reliability of the service composition to provide a reliable service. 
In this paper, we focus on the availability of a web service and propose a technique to improve service composition reliability using the web service-business process execution language (WS-BPEL) to support successful service composition. The proposed technique performs dynamic service replacement with a WS-BPEL extension, combined with the concept of aspect-oriented programming, when a web service fault is detected. Using our technique, we can prevent failures of the composite web service caused by unexpected service faults.",2011,0, 4982,Automatic Generation of Code for the Evaluation of Constant Expressions at Any Precision with a Guaranteed Error Bound,"The evaluation of special functions often involves the evaluation of numerical constants. When the precision of the evaluation is known in advance (e.g., when developing libms) these constants are simply precomputed once and for all. In contrast, when the precision is dynamically chosen by the user (e.g., in multiple precision libraries), the constants must be evaluated on the fly at the required precision and with a rigorous error bound. Often, such constants are easily obtained by means of formulas involving simple numbers and functions. In principle, it is not difficult to write multiple precision code for evaluating such formulas with a rigorous round-off analysis: one only has to study how round-off errors propagate through sub-expressions. However, this work is painful and error-prone and it is difficult for a human being to be perfectly rigorous in this process. Moreover, the task quickly becomes impractical when the size of the formula grows. In this article, we present an algorithm that takes as input a constant formula and automatically produces code for evaluating it in arbitrary precision with a rigorous error bound. It has been implemented in the Sollya free software tool and its behavior is illustrated on several examples.",2011,0, 4983,MiriaPOD A distributed solution for virtual network topologies management,"Virtualization technologies have grown steadily in testing and educational environments, owing to their low costs for deploying a wide variety of scenarios. Many applications that provide network node virtualization face scalability problems when larger topologies are created because of their high memory and processing requirements. Although such applications offer methods for load distribution on multiple machines, the distribution must be manually set up through a cumbersome and error-prone process. The miriaPOD distributed virtualization application provides an easy to use graphical topology editor that relies on a broker service to deploy virtual network nodes on multiple machines. The extensibility and scalability of the project enable its use for assessing migration scenarios and running network technology harmonization tests. We also propose integration possibilities with the Moodle platform, positioning the miriaPOD infrastructure as a vital tool for testing and teaching networking concepts.",2011,0, 4984,An intellectual property core to detect task schedulling-related faults in RTOS-based embedded systems,"The use of Real-Time Operating Systems (RTOSs) has become an attractive solution to simplify the design of safety-critical real-time embedded systems.
Due to their stringent constraints, such as battery-powered, high-speed and low-voltage operation, these systems are often subject to transient faults originating from a large spectrum of noise sources, among them conducted and radiated Electromagnetic Interference (EMI). As the major consequence, the system's reliability degrades. In this paper, we present a hardware-based intellectual property (IP) core, namely the RTOS-Guardian (RTOS-G), able to monitor the RTOS execution in order to detect faults that corrupt the tasks' execution flow in embedded systems based on a preemptive RTOS. Experiments based on the Plasma microprocessor IP core running different test programs that exercise several RTOS resources have been performed. During test execution, the proposed system was exposed to conducted EMI according to the international standard IEC 61.000-4-29 for voltage dips, short interruptions and voltage transients on the power supply lines of electronic systems. The obtained results demonstrate that the proposed approach is able to provide higher fault coverage and reduced fault latency when compared to the native fault detection mechanisms embedded in the kernel of the RTOS.",2011,0, 4985,RVC-based time-predictable faulty caches for safety-critical systems,"Technology and Vcc scaling lead to significant faulty bit rates in caches. Mechanisms based on disabling faulty parts have been shown to be effective for average performance but are unacceptable in safety-critical systems where worst-case execution time (WCET) estimations must be safe and tight. The Reliable Victim Cache (RVC) deals with this issue for a large fraction of the cache bits. However, replacement bits are not protected, thus keeping the probability of failure high. This paper proposes two mechanisms to tolerate faulty replacement bits and keep time-predictability by extending the RVC. Our solutions offer different tradeoffs between cost and complexity. In particular, the Extended RVC (ERVC) has low energy and area overheads while keeping complexity at a minimum. The Reliable Replacement Bits (RRB) solution has even lower overheads at the expense of some more wiring complexity.",2011,0, 4986,Software-based control flow error detection and correction using branch triplication,"The ever increasing use of commercial off-the-shelf (COTS) processors to reduce cost and time to market in embedded systems has brought significant challenges for the error detection and recovery methods employed in such systems. This paper presents a software-based control flow error detection and correction technique, the so-called branch TMR (BTMR), suitable for use in COTS-based embedded systems. In the BTMR method, each branch instruction is triplicated and a software interrupt routine is invoked to check the correctness of the branch instruction. During the execution of a program, when a branch instruction is executed, it is compared with the second redundant branch in the interrupt routine. If a mismatch is detected, the third redundant branch instruction is considered as the error-free branch instruction; otherwise, no error has occurred. The main advantages of BTMR over previously proposed control flow checking (CFC) methods are its ability to correct CFEs as well as its protection of indirect branch instructions. The BTMR method is evaluated on the LEON2 embedded processor.
The results show that the error correction coverage is about 96%, while the memory and performance overheads of BTMR are about 28% and 10%, respectively.",2011,0, 4987,Application research on temperature WSN nodes in switchgear assemblies based on TinyOS and ZigBee,"Temperature detection can discover potential faults in switchgear assemblies in a timely manner. Applying a Wireless Sensor Network (WSN) in an on-line monitoring system is an effective measure to realize Condition Based Maintenance (CBM) for intelligent switchgear assemblies. A design solution for a temperature wireless sensor network node with the CC2430 chip based on ZigBee is presented. A thermocouple is used as the main sensor unit to detect the temperature rise of key points in switchgear assemblies in real time, and a digital temperature sensor is adopted as a subsidiary sensor unit to detect the environment temperature, so that the temperature of points at high voltage and high current can be measured effectively and accurately. TinyOS, the embedded operating system adopted widely for WSN nodes, is studied and ported to the CC2430, so that the node software can be programmed and updated remotely. On this basis, temperature wireless sensor prototypes based on ZigBee and/or TinyOS with the CC2430 are designed, developed, and tested via the WSN experimental platform we built. The experimental results show that the temperature wireless sensor node can meet the functional requirements of the system, has features such as low power dissipation, small volume, stable performance and long lifespan, and can be broadly applied to on-line temperature measurement and monitoring of high-voltage and low-voltage switchgear assemblies.",2011,0, 4988,Towards improved survivability in safety-critical systems,"The performance demand of Critical Real-Time Embedded (CRTE) systems implementing safety-related system features grows at an exponential rate. Only modern semiconductor technologies can satisfy CRTE systems' performance needs efficiently. However, those technologies lead to high failure rates, thus lowering the survivability of chips to unacceptable levels for CRTE systems. This paper presents the SESACS architecture (Surviving Errors in SAfety-Critical Systems), a paradigm shift in the design of CRTE systems. SESACS is a new system design methodology consisting of three main components: (i) a multicore hardware/firmware platform capable of detecting and diagnosing hardware faults of any type with minimal impact on the worst-case execution time (WCET), recovering quickly from errors, and properly reconfiguring the system so that the resulting system exhibits a predictable and analyzable degradation in WCET; (ii) a set of analysis methods and tools to prove the timing correctness of the reconfigured system; and (iii) a white-box methodology and tools to prove the functional safety of the system and compliance with industry standards. This new design paradigm will deliver huge benefits to the embedded systems industry for several decades by enabling the use of more cost-effective multicore hardware platforms built on top of modern semiconductor technologies, thereby enabling higher performance, and reducing weight and power dissipation.
This new paradigm will further extend the life of embedded systems, thereby reducing warranty and early replacement costs.",2011,0, 4989,"Right dose, right care, every time: A distributed system for quality use of drugs","In hospitals, drug dosage calculation is a complex process which involves multiple steps when calculating a single dose. This multi-step process is error-prone, as it requires experience and the full attention of the physicians or nurses involved. The potential for error is high and the consequences are potentially serious. Software technologies can offer much added value in ensuring that patients receive the correct dose of a drug based on their individual needs. This paper describes a distributed drug management system that resolves critical drug dosing problems.",2011,0, 4990,A Parallel Vulnerability Detection Framework via MPI,"Open source applications have flourished in recent years. Meanwhile, security vulnerabilities in such applications have grown. Since manual code auditing is error-prone, time-consuming and costly, it is necessary to find automatic solutions. To address this problem we propose an approach that combines constraint-based analysis and model checking. Model checking technology as a constraint solver can be employed to solve the constraint-based system. CodeAuditor, the prototype implementation of our methods, is targeted at detecting vulnerabilities in C source code. With this tool, 9 previously unknown vulnerabilities in two open source applications were discovered, and the observed false positive rate was around 29%.",2011,0, 4991,Software Testing and Verification in Climate Model Development,"Over the past 30 years, most climate models have grown from relatively simple representations of a few atmospheric processes to complex multidisciplinary systems. Computer infrastructure over that period has gone from punchcard mainframes to modern parallel clusters. Model implementations have become complex, brittle, and increasingly difficult to extend and maintain. Verification processes for model implementations rely almost exclusively on some combination of detailed analyses of output from full climate simulations and system-level regression tests. Besides being costly in terms of developer time and computing resources, these testing methodologies are limited in the types of defects they can detect, isolate, and diagnose. Mitigating these weaknesses of coarse-grained testing with finer-grained unit tests has been perceived as cumbersome and counterproductive. Recent advances in commercial software tools and methodologies have led to a renaissance of systematic fine-grained testing. This opens new possibilities for testing climate-modeling-software methodologies.",2011,0, 4992,A dynamic workflow simulation platform,"In numerical optimization algorithms, errors at the application level considerably affect the performance of their execution on distributed infrastructures. Hours of execution can be lost due only to bad parameter configurations. Though current grid workflow systems have facilitated the deployment of complex scientific applications on distributed environments, the error handling mechanisms remain mostly those provided by the middleware. In this paper, we propose a collaborative platform for the execution of scientific experiments in which we integrate a new approach for treating application errors, using the dynamicity and exception handling mechanisms of the YAWL workflow management system.
Thus, application errors are correctly detected and appropriate handling procedures are triggered in order to save as much as possible of the work already executed.",2011,0, 4993,A dependable system based on adaptive monitoring and replication,"A multi-agent system (MAS) has recently gained public attention as a method to solve competition and cooperation in distributed systems. However, the MAS's vulnerability due to the propagation of failures prevents it from being applied to large-scale systems. This paper proposes a method to improve the reliability and efficiency of distributed systems. Specifically, the paper deals with the issue of fault tolerance. Distributed systems are characterized by a large number of agents, who interact according to complex patterns. The effects of a localized failure may spread across the whole network, depending on the structure of the interdependences between agents. The method monitors messages between agents to detect undesirable behaviors such as failures. By collecting this information, the method generates global information on the interdependence between agents and expresses it in a graph. This interdependence graph enables us to detect or predict undesirable behaviors. This paper also shows that the method can optimize the performance of a MAS and adaptively improve its reliability in a complicated and dynamic environment by applying the global information acquired from the interdependence graph to a replication system. The advantages of the proposed method are illustrated through simulation experiments based on a virtual auction market.",2011,0, 4994,A face super-resolution method based on illumination invariant feature,"Human faces in surveillance video images usually have low resolution and poor quality. They need to be reconstructed in super-resolution for identification. Traditional subspace-based face super-resolution algorithms are sensitive to light. To solve this problem, this paper proposes a face super-resolution method based on illumination invariant features. The method first extracts the illumination invariant features of an input low resolution image by using an adaptive L1-L2 total variation model and the self-quotient image in the logarithmic domain. Then it projects the features onto a non-negative basis obtained by Nonnegative Matrix Factorization (NMF) of a face image database. Finally it reconstructs the high resolution face images under the framework of Maximum A Posteriori (MAP) probability. Experimental results demonstrate that the proposed method outperforms the compared methods in both subjective and objective quality under poor light conditions.",2011,0, 4995,Sidescan sonar imagery processing software for underwater research,"Detailed submarine digital analysis of side scan sonar images significantly enhances the ability to assess seafloor features and artifacts in digital images. These images usually have poor resolution compared with optical images. There are commercial solutions that could solve this problem, such as the use of high resolution multibeam sidescan sonar or the use of bathymetric sonar. The present work shows an economical solution to avoid this kind of problem by using digital image processing techniques under the MATLAB environment. The application presented here is easy to use, has been developed with a user-friendly philosophy and can be operated by users at any level.
Two types of sonar surveys, seafloor mapping and submerged target searches (buried or not), each require different processing methods for data analysis. The results are comparable in quality with commercial hardware solutions. The present work is the first step toward a new general purpose tool that will be used in submerged object recognition.",2011,0, 4996,A Dynamic Fault Localization Technique with Noise Reduction for Java Programs,"Existing fault localization techniques combine various program features and similarity coefficients with the aim of precisely assessing the similarities among the dynamic spectra of these program features to predict the locations of faults. Many such techniques estimate the probability of a particular program feature causing the observed failures. They ignore the noise introduced by the other features on the same set of executions that may lead to the observed failures. In this paper, we propose both the use of chains of key basic blocks as program features and an innovative similarity coefficient that has a noise reduction effect. We have implemented our proposal in a technique known as MKBC. We have empirically evaluated MKBC using three real-life medium-sized programs with real faults. The results show that MKBC outperforms Tarantula, Jaccard, SBI, and Ochiai significantly.",2011,0, 4997,Context-Sensitive Interprocedural Defect Detection Based on a Unified Symbolic Procedure Summary Model,"Precise interprocedural analysis is crucial for defect detection in the presence of procedure calls. The procedure summary is an effective and classical technique to handle this problem. However, there is no general recipe to construct and instantiate procedure summaries with context-sensitivity. This paper addresses the above challenge by introducing a unified symbolic procedure summary model (PSM), which consists of three aspects: (1) the post-condition briefly records the invocation side effects on the calling context, (2) the feature captures some inner attributes that might cause both dataflow and control-flow transformations, and (3) the pre-condition implies some potential dataflow safety properties that should not be violated at the call site, otherwise defects exist. We represent each aspect of the PSM in a three-valued logic: <Conditional Constraints, Symbolic Expression, Abstract Value>. Moreover, by comparing the concrete call site context (CSC) with the conditional constraints (CC), we achieve context-sensitivity while instantiating the summary. Furthermore, we propose a summary transfer function for capturing the nesting call effect of a procedure, which transfers the procedure summary in a bottom-up manner. Algorithms are proposed to construct and instantiate the summary model at concrete call sites with context-sensitivity. Experimental results on 10 open source GCC benchmarks attest to the effectiveness of our technique in detecting null pointer dereference and out-of-bounds defects.",2011,0, 4998,Static Detection of Bugs Caused by Incorrect Exception Handling in Java Programs,"Exception handling is a vital but often poorly tested part of a program. Static analysis can spot bugs on exceptional paths without actually making the exceptions happen. However, traditional methods only focus on null dereferences on exceptional paths, but do not check the states of variables, which may be corrupted by exceptions.
In this paper we propose a static analysis method that combines forward flow-sensitive analysis and backward path feasibility analysis to detect bugs caused by incorrect exception handling in Java programs. We found 8 bugs in three open source server applications, 6 of which cannot be found by FindBugs. The experiments showed that our method is effective for finding bugs related to poorly handled exceptions.",2011,0, 4999,Model-Driven Design of Performance Requirements,"Obtaining the expected performance of a workflow is much simpler if the requirements for each of its tasks are well defined. However, most of the time, not all tasks have well-defined requirements, and these must be derived by hand. This can be an error-prone and time consuming process for complex workflows. In this work, we present an algorithm which can derive a time limit for each task in a workflow, using the available task and workflow expectations. The algorithm assigns the minimum time required by each task and distributes the slack according to the weights set by the user, while checking that the task and workflow expectations are consistent with each other. The algorithm avoids having to evaluate every path in the workflow by building its results incrementally over each edge. We have implemented the algorithm in a model handling language and tested it against a naive exhaustive algorithm which evaluates all paths. Our incremental algorithm reports equivalent results in much less time than the exhaustive algorithm.",2011,0, 5000,Towards Impact Analysis of Test Goal Prioritization on the Efficient Execution of Automatically Generated Test Suites Based on State Machines,"Test prioritization aims at reducing test execution costs. There are several approaches to prioritize test cases based on collected data of previous test runs, e.g., in regression testing. In this paper, we present a new approach to test prioritization for efficient test execution that is focused on the artifacts used in model-based test generation from state machines. We propose heuristics for test goal prioritization and evaluate them using two different test models. Our finding is that the prioritizations can have a positive impact on the test execution efficiency. This impact, however, is hard to predict for a concrete situation. Thus, the question of the general gain of test goal prioritization is still open.",2011,0, 5001,Improving the Modifiability of the Architecture of Business Applications,"In the current rapidly changing business environment, organizations must keep on changing their business applications to maintain their competitive edge. Therefore, the modifiability of a business application is critical to the success of organizations. Software architecture plays an important role in ensuring a desired modifiability of business applications. However, few approaches exist to automatically assess and improve the modifiability of software architectures. Generally speaking, existing approaches rely on software architects to design software architecture based on their experience and knowledge. In this paper, we build on our prior work on automatic generation of software architectures from business processes and propose a collection of model transformation rules to automatically improve the modifiability of software architectures. We extend a set of existing product metrics to assess the modifiability impact of the proposed model transformation rules and guide the quality improvement process.
Eventually, we can generate software architecture with the desired modifiability from business processes. We conduct a case study to illustrate the effectiveness of our transformation rules.",2011,0, 5002,A Hierarchical Security Assessment Model for Object-Oriented Programs,"We present a hierarchical model for assessing an object-oriented program's security. Security is quantified using structural properties of the program code to identify the ways in which `classified' data values may be transferred between objects. The model begins with a set of low-level security metrics based on traditional design characteristics of object-oriented classes, such as data encapsulation, cohesion and coupling. These metrics are then used to characterise higher-level properties concerning the overall readability and writability of classified data throughout the program. In turn, these metrics are then mapped to well-known security design principles such as `assigning the least privilege' and `reducing the size of the attack surface'. Finally, the entire program's security is summarised as a single security index value. These metrics allow different versions of the same program, or different programs intended to perform the same task, to be compared for their relative security at a number of different abstraction levels. The model is validated via an experiment involving five open source Java programs, using a static analysis tool we have developed to automatically extract the security metrics from compiled Java byte code.",2011,0, 5003,DRiVeR: Diagnosing Runtime Property Violations Based on Dependency Rules,"To ensure the reliability of complex software systems, runtime software monitoring is widely accepted to monitor and check system execution against formal property specifications at runtime. Runtime software monitoring can detect property violations, but it cannot explain why a violation has occurred. Diagnosing runtime property violations is still a challenging issue. In this paper, a novel diagnosis method based on dependency rules is constructed to diagnose runtime property violations in complex software systems. A set of rules is formally defined to isolate software faults from hardware faults; software faults are then localized by combining trace slicing and dicing. The method is implemented in the runtime software monitoring system SRMS, and experimental results demonstrate that the method can effectively isolate and locate the faults related to property violations.",2011,0, 5004,Static Data Race Detection for Interrupt-Driven Embedded Software,"Interrupt mechanisms are widely used to process multiple concurrent tasks in software without an OS abstraction layer in various cyber-physical systems (CPSs), such as space flight control systems. Data races caused by interrupt preemption frequently occur in those systems, leading to unexpected results or even severe system failures. In recent Chinese space projects, many software defects related to data races have been reported. How to detect interrupt-based data races is an important issue in the quality assurance of aerospace software. In this paper, we propose a tool named Race Checker that can statically detect data races for interrupt-driven software. Given the source code or binary code of interrupt-driven software, the tool aggressively infers information such as interrupt priority states, interrupt enable states and memory accesses at each program point using our extended interprocedural data flow analysis.
With the information above, it identifies the suspicious program points that may lead to data races. Race Checker is explicitly designed to find data race bugs in real-life aerospace software. Up to now, the tool has been applied in aerospace software V&V and found several severe data race bugs that may lead to system failures.",2011,0, 5005,Multi-layered Adaptive Monitoring in Service Robots,"Service failure in a service robot is an event that occurs when the delivered service deviates from the correct original service specified by the developers. Failures are caused by faults in the robot system, which can be detected based on a model. However, the monitoring task that compares the model with the system's behavior imposes a heavy overhead. In this study, we propose a multi-layered adaptive monitoring method that complements model-based fault detection. When target components are monitored adaptively according to their priority, the efficiency of fault detection is maintained while the overhead is reduced.",2011,0, 5006,A Reliability Model for Complex Systems,"A model of software complexity and reliability is developed. It uses an evolutionary process to transition from one software system to the next, while complexity metrics are used to predict the reliability for each system. Our approach is experimental, using data pertinent to the NASA satellite systems application environment. We do not use sophisticated mathematical models that may have little relevance for the application environment. Rather, we tailor our approach to the characteristics of the software to yield important defect-related predictors of quality. Systems are tested until the software passes defect presence criteria and is released. Testing criteria are based on defect count, defect density, and testing efficiency predictions exceeding specified thresholds. In addition, another type of testing efficiency - a directed graph representing the complexity of the software and the defects embedded in the code - is used to evaluate the efficiency of defect detection in NASA satellite system software. Complexity metrics were found to be good predictors of defects and testing efficiency in this evolutionary process.",2011,0, 5007,A Methodology of Model-Based Testing for AADL Flow Latency in CPS,"AADL (Architecture Analysis and Design Language) is a model-based real-time CPS (Cyber-Physical System) modeling language, which has been widely used in avionics and space areas. Current challenges concern how to dynamically test a CPS model described in AADL and find design faults at the design phase in order to iterate and refine the model architecture. This paper mainly tests the flow latency in the design model based on PDA (Push-Down Automata). It abstracts the properties of flow latency in the CPS model and translates them into PDA in order to assess the latency in simulation. Meanwhile, this paper presents a case study of a pilotless aircraft cruise control system to prove the feasibility of dynamic model-based testing of model performance and to achieve the aim of architecture iteration and refinement.",2011,0, 5008,Integrating DSL-CBI and NuSMV for Modeling and Verifying Interlocking Systems,"The Computer Based Interlocking System (CBI) is used to ensure safe train movements at a railway station. For a given station, all the train routes and the concrete safety rules associated with these are defined in the interlocking table.
Currently, the development and verification of interlocking tables is an entirely manual process, which is inefficient and error-prone due to the complexity of the CBI and human interference. Besides, the complexity and volume of the verification results tend to make them extremely difficult for users to understand. In order to tackle these problems, we introduce a toolset based on a Domain Specific Language for Computer Based Interlocking Systems (DSL-CBI) to automatically generate and verify the interlocking table, and then mark the conflicting routes in the railway station. In this paper, we also discuss the advantages of the toolset and its significant contribution to developing CBIs based on the proposed toolset.",2011,0, 5009,Towards Synthesizing Realistic Workload Traces for Studying the Hadoop Ecosystem,"Designing cloud computing setups is a challenging task. It involves understanding the impact of a plethora of parameters ranging from cluster configuration and partitioning to networking characteristics and the targeted applications' behavior. The design space, and the scale of the clusters, make it cumbersome and error-prone to test different cluster configurations using real setups. Thus, the community is increasingly relying on simulations and models of cloud setups to infer system behavior and the impact of design choices. The accuracy of the results from such approaches depends on the accuracy and realistic nature of the workload traces employed. Unfortunately, few cloud workload traces are available (in the public domain). In this paper, we present the key steps towards analyzing the traces that have been made public, e.g., from Google, and inferring lessons that can be used to design realistic cloud workloads as well as enable thorough quantitative studies of Hadoop design. Moreover, we leverage the lessons learned from the traces to undertake two case studies: (i) Evaluating Hadoop job schedulers, and (ii) Quantifying the impact of shared storage on Hadoop system performance.",2011,0, 5010,Enhancing the Effectiveness of Usability Evaluation by Automated Heuristic Evaluation System,"Usability defects that escape testing can have a negative impact on the success of software. It is quite common for projects to have a tight timeline. For these projects, it is crucial to ensure there are effective processes in place. One way to ensure project success is to improve the manual processes of the usability inspection via automation. An automated usability tool will enable the evaluator to reduce manual processes and focus on capturing more defects in a shorter period of time. This improves the effectiveness of the usability inspection and minimizes defect escapes. There exist many usability testing and inspection methods. The scope of this paper is the automation of Heuristic Evaluation (HE) procedures. The Usability Management System (UMS) was developed to automate as many manual steps as possible throughout the software development life cycle (SDLC). It is important for the various teams within the organization to understand the benefits of automation. The results show that with the help of automation more usability defects can be detected. Hence, enhancing the effectiveness of usability evaluation by an automated Heuristic Evaluation System is feasible.",2011,0, 5011,Transient Fault Representativenesses Comparison Analysis,"As semiconductor technology scales into the deep submicron regime, the transient fault vulnerability of both combinational and sequential logic increases rapidly.
It is predicted that in 2011 the transient fault rate of combinational logic will overtake that of sequential logic in processors. In this paper, particle radiation-induced multi-bit transient faults in the decoder unit, a representative combinational logic component of a SPARC processor, are simulated under different fault injection methods, namely simulation-based fault injection and compilation-supported static fault injection. Fault representativenesses observed in both fault injection experiments are analyzed, and the inaccuracy factors of the static error injection method are put forward.",2011,0, 5012,Design patterns and fault-proneness: a study of commercial C# software,"In this paper, we document a study of design patterns in commercial, proprietary software and determine whether design pattern participants (i.e. the constituent classes of a pattern) had a greater propensity for faults than non-participants. We studied a commercial software system for a 24-month period and identified design pattern participants by inspecting the design documentation and source code; we also extracted fault data for the same period to determine whether those participant classes were more fault-prone than non-participant classes. Results showed that design pattern participant classes were marginally more fault-prone than non-participant classes. The Adaptor, Method and Singleton patterns were found to be the most fault-prone of the thirteen patterns explored. However, the primary reason for this fault-proneness was the propensity of design pattern classes to be changed more often than non-design pattern classes.",2011,0, 5013,The Case for Software Health Management,"Software Health Management (SWHM) is a new field that is concerned with the development of tools and technologies to enable automated detection, diagnosis, prediction, and mitigation of adverse events due to software anomalies. Significant effort has been expended in the last several decades in the development of verification and validation (V&V) methods for software-intensive systems, but it is becoming increasingly apparent that this is not enough to guarantee that a complex software system meets all safety and reliability requirements. Modern software systems can exhibit a variety of failure modes which can go undetected in a verification and validation process. While standard techniques for error handling, fault detection and isolation can have significant benefits for many systems, it is becoming increasingly evident that new technologies and methods are necessary for the development of techniques to detect, diagnose, predict, and then mitigate the adverse events due to software that has already undergone significant verification and validation procedures. These software faults often arise due to the interaction between the software and the operating environment. Unanticipated environmental changes lead to software anomalies that may have significant impact on the overall success of the mission. Because software is ubiquitous, it is not sufficient that errors are detected only after they occur. Rather, software must be instrumented and monitored for failures before they happen. This prognostic capability will yield safer and more dependable systems for the future.
This paper addresses the motivation, needs, and requirements of software health management as a new discipline.",2011,0, 5014,System-Software Co-Engineering: Dependability and Safety Perspective,"The need for an integrated system-software co-engineering framework to support the design of modern space systems is pressing. The current tools and formalisms tend to be tailored to specific analysis techniques and are not amenable to the full spectrum of required system aspects such as safety, dependability and performability. Additionally, they cannot handle the intertwining of hardware and software interaction. As such, the current practices lack integration and coherence. We recently developed a coherent and multidisciplinary approach towards developing space systems at the architectural design level, linking all of the aforementioned aspects, and assessed it with several industrial evaluations. This paper reports on the approach, the evaluations and our perspective on current and future developments.",2011,0, 5015,Integrated Software and Sensor Health Management for Small Spacecraft,"Despite their size, small spacecraft have highly complex architectures with many sensors and computer-controlled actuators. At the same time, size, weight, and budget constraints often dictate that small spacecraft are designed as single-string systems, which means that there are no or few redundant systems. Thus, all components, including software, must operate as reliably as possible. Faults, if present, must be detected as early as possible to enable (usually limited) forms of mitigation. Telemetry bandwidth for such spacecraft is usually very limited. Therefore, fault detection and diagnosis must be performed on-board. Further restrictions include low computational power and small memory. In this paper, we discuss the use of Bayesian networks (BNs) to monitor the health of on-board software and sensor systems, and to perform advanced on-board diagnostic reasoning. Advanced compilation techniques are used to obtain a compact SSHM (Software and Sensor Health Management) system with a powerful reasoning engine, which can run in an embedded software environment and is amenable to V&V. We successfully demonstrate our approach using an OSEK-compliant operating system kernel, and discuss in detail several nominal and fault scenarios for a small satellite simulation with a simple bang-bang controller.",2011,0, 5016,Designing of expert system for troubleshooting diagnosis on Gas Chromatography GC-2010 by means of inference method,"Gas Chromatography (GC) is used to analyze various products of volatile materials such as natural gas, oils, pharmaceuticals, and foods. Quality control of the products is needed until they fulfill the requirements to be marketable. Therefore, maintenance is required to keep the GC system running well. Expert system software for simplifying troubleshooting diagnosis on the GC-2010 was developed by means of an inference method. Based on the analysis, the faults that occur most often are fluctuating pressure and high noise. The simulation results show that the total time effectiveness using the expert system software is 51.3% and the average step effectiveness to solve a problem in the GC system is 21.5%.",2011,0, 5017,Preparation for structural path analysis from plain text input output matrix,"We are dealing with a matrix of size 136 (formerly 79) known as the financial social accounting matrix (FSAM).
Inside a financial SAM is a square 107-row by 107-column subset known as the SAM. We are augmenting command-line-based software with more functionality. Our additions include: a database for the output, bar chart output, table generation, and graphical presentation of paths. Users want more versatile input, not just a single proprietary plain text format. We take over tedious input preparation such as: moving exogenous rows to the bottom of the matrix, error-prone command-line typing, filling blanks with zeroes, serial numbering of rows, and separation of numbers and names. This is presented graphically with a use case model.",2011,0, 5018,Modeling and simulation of quality analysis of internal cracks based on FTA in continuous casting billet,"To improve the internal quality of the continuous casting billet, a new approach for quality analysis and optimum control is proposed based on fault tree analysis (FTA). The interrelation between different hypotheses of the formation mechanism of internal cracks is studied to find out the root causes. According to the features of FTA and expertise, an FTA model of internal cracks is developed; the software VC++6.0 is then introduced to form the visual-FTA system, which can generate and analyze the fault tree automatically. Applying the analysis system to the steel Q235 supplied by a steel-maker, the qualitative/quantitative analysis results basically conform to the on-line statistical ones, and an optimum control scheme is then proposed to improve the billet quality.",2011,0, 5019,Test control&communication of hierarchical design-for-testability for testing dynamically reconfigurable computer,"The dynamically reconfigurable computer is a novel type of computer. Test control & communication of hierarchical Design-For-Testability for testing a dynamically reconfigurable computer is presented in this paper. It can be used to detect faults both in the circuits of the test targets and in the test controller itself. Using this method, a fault in the test target can be detected in time, and the test result can be reported to the user immediately. By adopting the hierarchical Design-For-Testability technique, the safety and stability of the system are enhanced, and the test result becomes more dependable.",2011,0, 5020,SkelCL - A Portable Skeleton Library for High-Level GPU Programming,"While CUDA and OpenCL made general-purpose programming for Graphics Processing Units (GPU) popular, using these programming approaches remains complex and error-prone because they lack high-level abstractions. The especially challenging systems with multiple GPUs are not addressed at all by these low-level programming models. We propose SkelCL - a library providing so-called algorithmic skeletons that capture recurring patterns of parallel computation and communication, together with an abstract vector data type and constructs for specifying data distribution. We demonstrate that SkelCL greatly simplifies programming GPU systems. We report the competitive performance results of SkelCL using both a simple Mandelbrot set computation and an industrial-strength medical imaging application.
Because the library is implemented using OpenCL, it is portable across GPU hardware of different vendors.",2011,0, 5021,Detection and Correction of Silent Data Corruption for Large-Scale High-Performance Computing,"Faults have become the norm rather than the exception for high-end computing on clusters with 10s/100s of thousands of cores, and this situation will only become more dire as we reach exascale computing. Exacerbating this situation, some of these faults will not be detected, manifesting themselves as silent errors that will corrupt memory while applications continue to operate but report incorrect results. This paper introduces RedMPI, an MPI library residing in the profiling layer of any standards-compliant MPI implementation. RedMPI is capable of both online detection and correction of soft errors that occur in MPI applications without requiring changes to application source code. By providing redundancy, RedMPI is capable of transparently detecting corrupt messages from MPI processes that become faulted during execution. Furthermore, with triple redundancy RedMPI 'votes out' MPI messages of a faulted process by replacing corrupted results with corrected results from unfaulted processes. We present an evaluation of RedMPI on an assortment of applications to demonstrate the effectiveness and assess associated overheads. Fault injection experiments establish that RedMPI is not only capable of successfully detecting injected faults, but can also correct these faults while carrying a corrupted application to successful completion without propagating invalid data.",2011,0, 5022,Capability Diagnostics of Enterprise Service Architectures Using a Dedicated Software Architecture Reference Model,"The SOA Innovation Lab - an innovation network of industry leaders in Germany and Europe - investigates the practical use of vendor platforms in a service-oriented context. For this purpose an original service-oriented ESA-Enterprise-Software-Architecture-Reference-Model, an associated ESA-Architecture-Maturity-Framework and an ESA-Pattern-Language for supporting architecture evaluation and optimization have been researched, leveraging and extending CMMI and TOGAF, as well as other service-oriented state-of-the-art frameworks and methods. Current approaches for assessing the architecture quality and maturity of service-oriented enterprise software architectures are rarely validated and were intuitively developed, having sparse reference model, metamodel or pattern foundations. This is a real problem because enterprise and software architects should know how advanced architecture quality concepts can successfully be used and what a stable foundation for introducing service-oriented enterprise architectures for adaptive systems looks like. Our idea and contribution is to extend existing enterprise and software architecture reference models and maturity frameworks to accord with a sound metamodel approach. We have developed and are presenting the idea of a pattern language for assessing the architecture quality of adaptable service-oriented enterprise systems. Our approach is based on ESARC -- an Enterprise Software Architecture Reference Model we have developed.",2011,0, 5023,Predicting Software Service Availability: Towards a Runtime Monitoring Approach,"This paper presents a prediction model for software service availability measured by the mean-time-to-repair (MTTR) and mean-time-to-failure (MTTF) of a service.
The prediction model is based on the experimental identification of probabilistic predictors for the variables that affect MTTR/MTTF, based on service monitoring data collected at runtime.",2011,0, 5024,Progressive Reliability Forecasting of Service-Oriented Software,"Reliability is an essential quality requirement for service-oriented systems. A number of models have been developed for predicting the reliability of traditional software, in which code-based defects are the main concern as the causes of failures. Service-oriented software, however, shares many common characteristics with distributed systems and web applications. In addition to residual defects, the reliabilities of these types of systems can be affected by their execution context, message transmission media, and their usages. We present a case study to demonstrate that the reliability of a service varies on an hourly basis, and reliability forecasts should be recalibrated accordingly. In this study, the failure behavior of a required external service, used by a provided service, was monitored for two months to compute the initial estimates, which are then continuously re-computed based on learning of new failure patterns. These reliabilities are integrated with the reliability of the component in the provided service. The results show that with this progressive re-calibration we provide more accurate reliability forecasts for the service.",2011,0, 5025,Reputation-Driven Web Service Selection Based on Collaboration Network,"Most trustworthy web service selection approaches simply focus on individual reputation and ignore the collaboration reputation between services. To enhance collaboration trust during web service selection, a reputation model called collaboration reputation is proposed. The reputation model is built on a web service collaboration network (WSCN), which is constructed from the composite service execution log. Thus, the WSCN aims to maintain the trustworthy collaboration alliance among web services. In the WSCN, the collaboration reputation can be assessed by two metrics: one, called invoking reputation, is computed from recommendations selected from the community structure hidden in the WSCN; the other is assessed by the invoked web service. In addition, a web service selection method based on the WSCN is designed.",2011,0, 5026,Measuring robustness of Feature Selection techniques on software engineering datasets,"Feature Selection is a process which identifies irrelevant and redundant features from a high-dimensional dataset (that is, a dataset with many features), and removes these before further analysis is performed. Recently, the robustness (e.g., stability) of feature selection techniques has been studied, to examine the sensitivity of these techniques to changes in their input data. In this study, we investigate the robustness of six commonly used feature selection techniques as the magnitude of change to the datasets and the size of the selected feature subsets are varied. All experiments were conducted on 16 datasets from three real-world software projects. The experimental results demonstrate that Gain Ratio shows the least stability on average while two different versions of ReliefF show the most stability.
Results also show that making smaller changes to the datasets has less impact on the stability of feature ranking techniques applied to those datasets.",2011,0, 5027,Application of quantitative fault tree analysis based on fuzzy synthetic evaluation to software development for space camera,"Software reliability analysis is an important means to guide software reliability design. The software fault tree method is now used more and more to carry out software reliability analysis, but for many reasons it can only perform qualitative analysis. For example, the scale of software is growing, software architectures are becoming more and more complex, the causes of software faults are difficult to analyze, software differs from hardware, and quantitative failure data are very hard to obtain. Space cameras have extremely high reliability requirements, so qualitative analysis alone cannot satisfy the analysis needs. In view of the characteristics of the space camera, a method of fault tree analysis based on fuzzy synthetic evaluation is proposed. The results show that the new method makes quantitative software fault tree analysis possible and optimizes it. It can also handle software components that cannot be quantified precisely.",2011,0, 5028,Fast mode decision algorithm for enhancement layer of spatial and CGS scalable video coding,"Scalable video coding is an effective solution to fulfil the different requirements in modern video transmission systems. Though its coding efficiency is high, its computational complexity is expensive. This paper presents an algorithm to reduce the encoding time of the enhancement layer through probability analysis. Firstly, two classes are defined, and a Bayes classifier decides whether the current macroblock belongs to the Big Block class or the Small Block class. Then an early termination strategy is checked for ignoring some inter modes. When inter-layer residual prediction is used, the optimal inter mode is set as the best inter mode of the co-located macroblock in the base layer. For the intra modes, the INTRA_1616 mode is skipped based on statistical data. Experimental results show that the proposed algorithm reduces computational complexity with negligible video quality loss and bit rate increment when compared with the reference software and other methods.",2011,0, 5029,Negotiation towards Service Level Agreements: A Life Cycle Based Approach,"Service Based Systems (SBSs) are composed of loosely coupled services. Different stakeholders in these systems, e.g. service providers, service consumers, and business decision makers, have different types of concerns which may be dissimilar or inconsistent. Service Level Agreements (SLAs) play a major role in ensuring the quality of SBSs. They stipulate the availability, reliability, and quality levels required for an effective interaction between service providers and consumers. It has been noticed that, because of conflicting priorities and concerns, conflicts arise between service providers and service consumers while negotiating over the functionality of potential services. Since these stakeholders are involved with different phases of the life cycle, it is really important to take these life cycle phases into consideration when proposing any kind of SLA negotiation methodology.
In this research, we propose a stakeholder negotiation strategy for Service Level Agreements, which is based on prioritizing stakeholder concerns according to their frequency at each phase of the SBS development life cycle. We make use of a Collaxa BPEL Orchestration Server Loan service example to demonstrate the applicability of the proposed approach. In addition, we simulate the negotiation priority values to predict their potential impact on the cost of the SLA negotiation.",2011,0, 5030,Automatic Library Generation for BLAS3 on GPUs,"High-performance libraries, the performance-critical building blocks for high-level applications, will assume greater importance on modern processors as they become more complex and diverse. However, automatic library generators are still immature, forcing library developers to manually tune libraries to meet their performance objectives. We are developing a new script-controlled compilation framework to help domain experts reduce much of the tedious and error-prone nature of manual tuning, by enabling them to leverage their expertise and reuse past optimization experiences. We focus on demonstrating improved performance and productivity obtained through using our framework to tune BLAS3 routines on three GPU platforms: up to 5.4x speedups over CUBLAS achieved on NVIDIA GeForce 9800, 2.8x on GTX285, and 3.4x on Fermi Tesla C2050. Our results highlight the potential benefits of exploiting domain expertise and the relations between different routines (in terms of their algorithms and data structures).",2011,0, 5031,Research on available resource management model of cognitive networks based on intelligence agent,"An available resource management model (ARMM) of cognitive networks based on intelligent agents (IA) is proposed in this paper. In a cognitive network (CN) environment, it is important to master the network's resource information accurately; on the reliable foundation of complete, consistent and stable resource information, the model evaluates user quality of service (QoS) and the network condition, monitors the use of allocated resources, checks the amount of resource allocation, calculates the remaining resources, and then manages network resources autonomously. It can not only record the current use of system resources, but can also predict the future network state according to the remaining resources and redistribute resources, addressing the defects of inefficient network resource allocation, improving resource utilization, enabling centralized, intelligent and visualized management of network resources, and meeting the resource-related functional requirements of cognitive networks. The ARMM can be deployed in CNs as well as in current network environments. It could enhance the QoS of the network and its users because of its cognitive functions.",2011,0, 5032,RSSI vector attack detection method for wireless sensor networks,"In contrast to traditional networks, Wireless Sensor Networks (WSNs) are more vulnerable to attacks such as DOS, eavesdropping, tampering, node compromise, wormhole and Sybil attacks. To provide security protection for WSNs, many attack prevention and detection methods have been proposed. As the second line of defence, intrusion detection can resist attacks when cryptographic technologies are unavailable, and provide strategies for network recovery. This paper proposes a novel attack detection method based on RSSI technology.
It collects multiple observed RSSI values to form vectors, and uses a mean-vector confidence-interval test of a multivariate normal population to detect malicious packets. Real experiments showed that the vector-based detection approach has better detection sensitivity and fault tolerance than the single-value detection approach.",2011,0, 5033,Efficient testing model based on Pareto distribution,"Regression testing is a very important part of the program development cycle, as it should be executed each time a feature is integrated or bugs are fixed. There are hundreds of test cases in the case library for even a moderately complex product, so it is impossible to perform all of them in regression testing every time. Test engineers often select regression cases by experience only; most of the selected cases are executed several times without finding any issues, while many defects escape because the cases that would reveal them have not been selected to run. A Pareto distribution model is introduced to assist test engineers in selecting efficient regression cases that are expected to find most of the defects at the incipient stage of testing, according to the quantitative efficiency of the test cases. This model also helps test engineers forecast the progress of fault finding.",2011,0, 5034,The application of artificial neural network in wastewater treatment,"In this paper, the applications of artificial neural networks (ANN) in wastewater treatment quality control are reviewed. The main applications include three fields: predicting the effluent quality, such as the concentration of COD, N and P; soft-sensing parameters that are hard to measure on site, such as BOD; and measuring some parameters more accurately, such as heavy metals. In combination with current research trends, the future development direction of ANN is analyzed.",2011,0, 5035,PAD: Policy Aware Data center network,"Middle boxes provide security in Data Center Networks (DCNs). Together with the growth of services and applications in DCNs, flexible and scalable middle box deployment is highly required. The current middle box deployment methods are error-prone. In this paper we propose Policy Aware Data center network (PAD), a flexible and scalable middle box provisioning architecture. PAD supports traditional Ethernet DCNs based on the IP protocol. It allows DCN users to freely define their traversal sequences of middle boxes without any complex configuration. PAD also makes DCN topology changes easier, so VM migration, network expansion and so on can be easily implemented in PAD. PAD uses Policy Routing Information Injection (PRII) to control middle box traversal, and simulation shows that PRII will bring no more than 6% throughput loss in practical utilization.",2011,0, 5036,A public cryptosystem from R-LWE,"Recently, Vadim Lyubashevsky et al. built the LWE problem on rings and proposed a public cryptosystem based on R-LWE, which, to a certain extent, solved the large public key defect of this kind of scheme, but it did not offer detailed parameter selection and performance analysis. In this paper an improved scheme is proposed by sharing a ring polynomial vector, which makes the public key as small as 1/m of the original scheme's in multi-user environments. In addition, we introduce a parameter r to control both the private key space size and the decryption error probability, which greatly enhances flexibility and practicality.
The correctness, security and efficiency are analyzed in detail and the choice of parameters is studied; finally, concrete parameters are recommended for the new scheme.",2011,0, 5037,Verification and validation of UML 2.0 sequence diagrams using colored Petri nets,"One of the major challenges in the software development process is improving error detection in the early phases of the software life cycle. If software errors are detected at the design phase, before implementation, software quality increases considerably. For this purpose, the Verification and Validation of UML diagrams plays a very important role in detecting flaws at the design phase. This paper presents a Verification and Validation technique for one of the most popular UML diagrams: sequence diagrams. The proposed approach creates an executable model from UML interactions expressed in sequence diagrams using colored Petri nets and uses CPN Tools to simulate the execution and to verify properties written in standard ML. In the proposed approach, we have used the sequence diagram elements, including messages, send/receive events and the source/destination of messages, and have written properties in terms of Boolean expressions over these elements. The main contribution of this work is to provide an efficient mechanism to track the execution state of an interaction in a sequence diagram. The obtained results show that the proposed approach impressively reduces the probability of errors appearing at the software implementation phase; therefore, software can be more reliable at the end of the software development process.",2011,0, 5038,Online course quality maturity model based on evening university and correspondence education(OCQMM),"The contradiction between working time and learning time for adults in evening university and correspondence education has become more and more serious. Anecdotal evidence suggests that traditional education (with no online courses) is not able to ease this contradiction. It has been found that online courses can resolve the conflict between working and learning for adult students in evening university and correspondence education. Therefore, many academies engaged in adult education have brought in the online course model. However, how to ensure the implementation quality of online courses has become a vital problem. In this paper, an Online Course Quality Maturity Model Based on Evening University and Correspondence Education (OCQMM) was built. This model is not only for assessing the implementation quality of online courses in evening university and correspondence education; more importantly, it can guide institutions engaged in adult education to improve the implementation process, so that the implementation quality of online courses will be improved.",2011,0, 5039,Application of virtual instrument technology in electric courses teaching,"This paper analyzes the problems existing in the practical teaching of electric courses and puts forward a new approach, which is to apply virtual instrument (VI) technology in these courses. On the basis of a simple introduction to LabVIEW's VI design function, an instance designed in LabVIEW to emulate and detect harmonic signals is described in detail. The instance has been used in the power quality course and obtained good results.
More and more instances have been designed based on VI and used in electric courses; this can promote the combination of theory and practice and improve the teaching level of these courses.",2011,0, 5040,Localized approach to distributed QoS routing with a bandwidth guarantee,"Localized Quality of Service (QoS) routing has been recently proposed as a promising alternative to the currently deployed global routing schemes. In localized routing schemes, routing decisions are taken solely based on locally collected statistical information rather than global state information. This approach significantly reduces the overheads associated with maintaining global state information at each node, which in turn improves the overall routing performance. In this paper we introduce a Localized Distributed QoS Routing algorithm (LDR), which integrates the localized routing scheme into distributed routing. We compare the performance of our algorithm against another existing localized routing algorithm, namely Credit Based Routing (CBR), and against the contemporary global routing algorithm, the Widest Shortest Path (WSP). With the aid of simulations, we show that the proposed algorithm outperforms the others in terms of the overall network blocking probability.",2011,0, 5041,Performance comparison of Multiple Description Coding and Scalable video coding,"For a video server, providing a good quality of service to highly diversified users is a challenging task because different users have different link conditions and different quality requirements and demands. Multiple Description Coding (MDC) and Scalable video coding (SVC) are two technical methods for quality adaptation that operate over a wide range of quality of service under heterogeneous requirements. Both are techniques of coding a video sequence in such a way that multiple levels of quality can be obtained depending on the parts of the video bit stream that are received. For scalable video coding, special protection is made for the base layer using forward error protection, while the streams (descriptions) in multiple description coding have been tested to simulate the advantages of diversity systems, where each sub-stream has an equal probability of being correctly decoded. In this paper, the performance comparison of the two coding approaches is made by using DCT coefficients to generate the base layer and enhancement layers for SVC and the descriptions for MDC, with respect to their respective achievements in image quality and compression ratio. Simulation results show that MDC outperforms SVC in both cases.",2011,0, 5042,Statistical prediction modeling for software development process performance,"With the advent of the information age and more intense competition among IT companies, it is increasingly important to assess the quality of software development processes and products by not only measuring outcomes but also predicting outcomes. On the basis of analysis and experiments on software development processes and products, a study of process performance modeling has been conducted by statistically analyzing past and current process performance data. In this paper, we present a statistical prediction model for software development process performance. For predicting delivered defects effectively, a simple case study is illustrated, and several suggestions on planning and controlling management derived from this model are analyzed in detail.
Finally, conclusions and a discussion of future research considerations are presented.",2011,0, 5043,Applications of MODFLOW in quantitative analysis of coal seam floor water inrush conditions,"In north China, the main coal mines are underlain by the thin Taiyuan group limestone aquifers and the thick Ordovician limestone aquifers; the Ordovician limestone aquifers are the main supply aquifer, with high pressure and rich dynamic and static water storage, and vertical water-filled geological structures (such as fault zones and karst columns) act as the main supply channels. Taking an application case, the simulation software Visual MODFLOW was used to construct a three-dimensional (3D) numerical model to simulate the limestone aquifers and vertical geological structures. After calibrating the model, some quantitative understanding of the mining hydro-geological conditions was obtained from the perspective of groundwater systems theory. Finally, the pumping rate at the different mining levels, which is controlled by the vertical water-filled geological structures, was predicted. For example, at the 0 m level, the pumping rate is only 145.83 m3/h when there is no vertical water-filled geological structure (scenario 1), and the pumping rate should reach 416 m3/h when a vertical geological structure conducts limestone aquifer water to the mining level (scenario 2); at the -150 m level, the scenario 2 pumping rate is more than 1200 m3/h.",2011,0, 5044,Automatic testing framework for Internet applications,"The methods and systems for testing Internet applications are hot areas in current mobile device development. However, the current test methods for Internet applications (mainly manual) are inefficient and error-prone. An automatic testing framework at the presentation layer is introduced for Internet applications, as well as its implementation for the Internet application Data Synchronization. By separating the presentation layer, the general testing framework can thoroughly test the Internet application's presentation layer automatically and exactly reconstruct the scene that activates a bug in the application. Using the testing framework, the cycle of presentation stack development is dramatically accelerated.",2011,0, 5045,High-precision detection device of motor speed sensor based on image recognition research,"A device is developed in this paper to detect whether a motor speed sensor meets its technical specifications; it can display the current actual speed in real time and perform automatic centering. In order to achieve high-precision automatic centering, image recognition technology is used in the device, and an ARM processor and servo control system perform detection, recognition, high-precision position control and high-speed completion. It provides strong protection for speed sensors entering the market, gives a solid technical basis for motor manufacturing and motor efficiency, and offers reliable technical support and quality supervision for the metrology and quality inspection departments in China.",2011,0, 5046,Underground drainage system based on artificial intelligence control,"By analyzing common artificial intelligence technologies, the paper introduces the software design and structure of an underground coal mine drainage system based on artificial intelligence control.
By detecting the water level in the sump and other parameters, the system controls the pumps to work in turns and starts the standby pump at the proper time, so as to schedule pump operation reasonably; it also has a fault alarming function, which greatly decreases workers' labor intensity and improves the utilization rate of the devices. The system also has good expansibility and is fit for different fields.",2011,0, 5047,An improved credibility method based on matrix weight,"While the credibility method has been applied successfully in many fields, it still has many shortcomings. This paper presents a credibility method based on matrix weight analysis, an improvement on the credibility method based on Euclidean distance analysis within the normal distribution. It not only resolves the contradictory conclusions that arise when the prior probability and the posterior probability are numerically too close, but also reduces the negative effects of unimportant evidence, better reflecting the positive effect of important evidence and eliminating the defects of traditional credibility methods in these two respects.",2011,0, 5048,Set diagnosis on the power station's devices by fuzzy neural network,"Energy systems consisting of thermal power stations currently face the problem of diagnosing generator fault sets without breakdown maintenance. In ageing thermal power plants, the main driver for retrofitting or upgrading large turbo generators has been poor reliability associated with increasing maintenance costs. With the introduction of higher competition in the power market and of new environmental constraints in the energy system, extending the lifetime of installed plants with overall performance improvement through upgrade or retrofit of main components is today valuable. Fault set diagnosis during the lifetime and predictive maintenance can be defined as collecting information from machines as they operate to aid in making decisions about their health, repair and possible improvements, in order to reach maximum reliability before any unplanned breakdown. When a turbo-generator fault set occurs, sensors placed on its bearings detect vibration signals for extracting fault symptoms, but the relationships between faults and fault symptoms are too complex to achieve sufficient accuracy for industrial application. In this paper, a new diagnosis method based on fuzzy neural networks is proposed and a fuzzy neural network system is structured by associating fuzzy set theory with neural network technology.",2011,0, 5049,R-largest order statistics for the prediction of bursts and serious deteriorations in network traffic,"Predicting bursts and serious deteriorations in Internet traffic is important. It enables service providers and users to define robust quality of service metrics to be negotiated in service level agreements (SLAs). Traffic exhibits the heavy tail property, for which extreme value theory is the perfect setting for analysis and modeling. Traditionally, methods from EVT, such as block maxima and peaks over threshold, were applied, each treating a different aspect of the prediction problem. In this work, the r-largest order statistics method is applied to the problem. This method is an improvement over the block maxima method and makes more efficient use of the available data by selecting the r largest values from each block to model.
As expected, the quality of estimation increased with the use of this method; however, the fit diagnostics cast some doubt on the applicability of the model, possibly due to the dependence structure in the data.",2011,0, 5050,Project Management Methodologies: Are they sufficient to develop quality software,"This paper considers whether the use of the project management methodology PRINCE is sufficient to achieve quality information systems without the additional use of effort estimation methods and of measures that can predict and help control quality during system development. How to ensure that software quality management is effective is a critical issue that software development organizations have to face. Software quality management is a series of activities to direct and control software quality, including establishment of the quality policy and quality goals, quality planning, quality control, quality assurance and quality improvement. Professional bodies have paid more attention to software standards. Meanwhile, many countries are participating in significant consolidation and coordination efforts.",2011,0, 5051,Automated generation of FRU devices inventory records for xTCA devices,"The Advanced Telecommunications Computing Architecture (ATCA) and Micro Telecommunications Computing Architecture (MicroTCA) standards, intended for high-performance applications, offer an array of features that are compelling from the industry use perspective, like high reliability (99.999%) or hot-swap support. The standards incorporate the Intelligent Platform Management Interface (IPMI) for the purpose of advanced diagnostics and operation control. This standard imposes support for non-volatile Field Replaceable Unit (FRU) information for specific components of an ATCA/MicroTCA-based system, which would typically include a description of a given component. The Electronic Keying (EK) mechanism is capable of using this information for ensuring more reliable cooperation of the components. The FRU Information for the ATCA/MicroTCA implementation elements may have a sophisticated structure. This paper focuses on a software tool facilitating the process of assembling this information, the goal of which is to make it more effective and less error-prone.",2011,0, 5052,"Virtual instrument based online monitoring, real-time detecting and automatic diagnosis management system for multi-fiber lines","With the growing popularity of optical fiber communication, a virtual instrument based online monitoring, real-time detecting and automatic diagnosis management system for multi-fiber lines is proposed in this paper. To manage fiber lines and landmarks, a simplified landmark map is presented based on the landmark list. Multiple fiber lines, which may have different or identical parameters, are monitored online. When the optical power of any monitored line exceeds the set threshold, the subsystem gives a low-power alarm, and the detecting subsystem starts automatically to detect the alarmed line. After processing and analyzing the detection results, it draws the result curve in the virtual instrument panel. The user can locate points on the curve and zoom in on the event curve based on the event list. It utilizes the detection results, event list, landmark list and simplified landmark map synthetically, and presents a comprehensive diagnosis conclusion based on the map. To avoid faults, the system predicts future faults through analysis of a period of historical detection results.
This system has the advantages of excellent stability, powerful analysis, friendly interface, and convenient operation.",2011,0, 5053,Strategic management system for academic world: Expert system based on composition of cloud computing and case based reasoning system,"The work presented in this paper explores the relationship between Information Technology (IT) and Process Redesign (PR) in the academic world. Existing processes for collecting students' and faculties' vital data require a great deal of labor to collect, input and analyze the information. These processes are usually slow and error-prone, introducing a latency that prevents real-time data accessibility. We propose a solution to automate this process by using documents attached to existing faculty/student datasheets that are interconnected to an exchange service. The proposal is based on the concepts of utility computing and cloud computing networks. The information becomes available in the cloud, from where it can be processed by expert systems and/or distributed to administrative staff. The proof-of-concept design applies commodity computing integrated with legacy education devices, ensuring cost effectiveness and simple integration. In this paper, the author suggests that IT also has a stronger role in the software realization of an expert system. The author would like to assemble experts' experience in a dumb box (personal computer) as a knowledge base.",2011,0, 5054,Strategic e-commerce model driven-architecture for e-Learning: TQM & e-ERP Perspective,"Innovation and value proposition in distance-learning programs (e-learning programs) is a multifaceted activity with enormous dimensionality. The implementation of information communication technology (ICT) and enhanced enterprise resource planning (eERP) for smart campuses has emerged as an inevitability for industrial competitiveness in smart factories as per the dictates of total quality management (TQM). The philosophy of productivity management and concurrent engineering stipulates a competitive framework for e-learning for the dissemination and absorption of knowledge by the intelligentsia, technologists and students. Conversely, competitiveness in today's global village demands an innovative and systems approach for improving the learning curves in a virtual environment. E-learning programs have numerous inherent perplexities, since in every virtual classroom there is a much higher probability of interacting with orthogonal cultures, and with multiple intelligences as well. This paper proposes a conceptual strategic planning framework for the diffusion of innovation in a distance learning program (Course-Technovation). The framework is based on a literature review and field visits, encompassing an integrated e-commerce model-driven architecture embedded with business intelligence and coupled with eERP design functionalities.",2011,0, 5055,Detecting outliers in sliding window over categorical data streams,"Outlier mining is an important and active research issue in anomaly detection. However, it is a difficult problem since categorical data arrive at a fast rate, some data may be outdated and the outliers identified are likely to change. In this paper, we propose an efficient algorithm for mining outliers from categorical data streams, which first discovers closed frequent patterns in a sliding window.
Then WCFPOF (Weighted Closed Frequent Pattern Outlier Factor) is introduced to measure the complete categorical data, and the corresponding candidate outliers are stored in QIS (Query Indexed Structure). By employing a decay function, the outdated outliers are faded out to generate the final outliers. Experimental results show that our algorithm has higher detection precision than FindFPOF. Moreover, our algorithm has better scalability with different data sizes.",2011,0, 5056,An evaluation of source code mining techniques,"This paper reviews the tools and techniques which rely only on data mining methods to determine patterns from source code such as programming rules, copy-paste code segments, and API usage. The work provides a comparison and evaluation of the current state of the art in source code mining techniques. Furthermore, it identifies the essential strengths and weaknesses of individual tools and techniques to make an evaluation indicative of future potential. Previous related works only focus on one specific pattern being mined, such as a special kind of bug detection. Thus, multiple tools are needed to test software and find potential information in it, which increases the cost and time of development. Hence there is a strong need for a tool that helps in developing quality software by automatically detecting different kinds of bugs in one pass and also provides code reusability for the developers.",2011,0, 5057,A fault-tolerant permanent magnet synchronous motor drive with integrated voltage source inverter open-circuit faults diagnosis,"This paper presents a variable speed ac drive based on a permanent magnet synchronous motor, supplied by a three-phase fault-tolerant power converter. In order to achieve this, beyond the main routines, the control system integrates a reliable and simple algorithm for real-time diagnostics of inverter open-circuit faults. This algorithm plays an important role since it is able to detect an inverter malfunction and gives information about the faulty phase. Then, the control system acts in order to first isolate the fault and then to proceed to a hardware and software reconfiguration. By doing this, a fully automated fault-tolerant variable speed drive can be achieved. Simulation and experimental results are presented showing the effectiveness of the proposed system under several operating conditions.",2011,0, 5058,An activity-based genetic algorithm approach to multiprocessor scheduling,"In parallel and distributed computing, the development of an efficient static task scheduling algorithm for directed acyclic graph (DAG) applications is an important problem. The static task scheduling problem is NP-complete in its general form. The complexity of the problem increases when task scheduling is to be done in a heterogeneous environment, where the processors in the network may not be identical and take different amounts of time to execute the same task. This paper presents an activity-based genetic task scheduling algorithm for tasks that run on a network of heterogeneous systems and are represented by Directed Acyclic Graphs (DAGs). First, a list scheduling algorithm is incorporated in the generation of the initial population of a GA to represent feasible operation sequences and diminish the coding space when compared to a permutation representation.
Second, the algorithm assigns an activity to each task that is assigned to a processor, and then the quality of the solution is improved by incorporating the activity and a random probability into the crossover and mutation operators. The performance of the algorithm is illustrated by comparison with existing effective scheduling algorithms.",2011,0, 5059,Evolutionary generation of test data for path coverage with faults detection,"The aim of software testing is to find faults in the program under test. Previous methods of path-oriented test data generation can generate test data traversing target paths, but they may not guarantee to find faults in the program. We present a method of evolutionary generation of test data for path coverage with faults detection in this paper. First, we establish a mathematical model of the problem considered in this paper, in which the number of faults detected in the path traversed by test data and the risk level of faults are optimization objectives, and the approach level of the traversed path from the target one is a constraint. Then, we generate test data using a multi-objective evolutionary optimization algorithm with constraints. Finally, we apply the proposed method to a benchmark program, bubble sort, and an industrial program, totinfo, and compare it with the traditional method. The experimental results confirm that our method can generate test data that not only traverse the target path but also detect faults in it. Our achievement provides a novel way to generate test data for path coverage with faults detection.",2011,0, 5060,Review on integrated health management for aerospace plane,"Domestic and foreign references are summarized in this paper. This paper introduces the basic concept and application significance of the aerospace plane, the main differences between the aerospace plane and general aircraft, the integrated health management system of the aerospace plane and its composition, and the development status of health management for the aerospace plane at home and abroad. In particular, it introduces special technical experiments in America, Japan and Italy, and analyzes their health management approaches. Finally, it also describes the main fault diagnosis methods. Integrated health management of the aerospace plane generally involves a series of activities, including signal processing, monitoring, health assessment, failure prediction, decision support, human-computer interaction, restoring, and so on. In addition, this paper predicts the development direction of health management for the aerospace plane.",2011,0, 5061,Improving system health monitoring with better error processing,"To help identify unexpected software events and impending hardware failures, developers typically incorporate error-checking code in their software to detect and report them. Unfortunately, implementing checks with reporting capabilities that give the most useful results comes at a price. Such capabilities should report the exact nature of impending failures and additionally limit reporting to only the first occurrence of an error to prevent flooding the error log with the same message. They must report when an existing error or fault is replaced by another error of a different nature or value. They must recognize what makes occasional faults allowable and they must reset themselves upon recovery from a reported failure so the checking process can begin anew.
They must also report recovery from previously reported failures that appear to have healed themselves. Since the price associated with providing all these features is limited by budget and schedule, system reliability and health monitoring often suffer. However, there are practical techniques that can simplify the effort associated with incorporating such error detection and reporting. When done properly, they can greatly improve system reliability and health monitoring by finding potentially hidden problems during development and can also greatly improve system maintainability by providing concise running descriptions of problems when things go wrong, particularly when minor errors might otherwise go unnoticed. In addition, preventative maintenance can be greatly aided by applying error detection techniques to performance monitoring in the absence of errors. Many of the techniques described in this paper take advantage of simple classes to do bookkeeping tasks such as updating and tracking statistical analysis of errors and error reporting. The paper highlights several of these classes and gives examples from actual applications.",2011,0, 5062,Automatic loading control circuit fault diagnosis system,"The automatic loading control circuit is an important component of the automatic loading system; its reliability directly affects the system's operational effectiveness and safety, so its fault diagnosis is important for improving the reliability of the automatic loading system. After describing the structure and function of the fault diagnosis system for the automatic loading control circuit, the CPU reset circuit of the automatic loading control circuit is taken as an example: the fault tree method is introduced, a fault dictionary is established, and the corresponding software diagnostic process is given. Finally, according to the probabilities of the bottom events, the CPU failure probability and the importance of the bottom events are identified. Experiments show that the method is correct and effective.",2011,0, 5063,Numerical simulation of springback based on formability index for auto panel surface,"To address the shortcomings of traditional measurement of auto body panels, we propose a method based on a formability index. Based on the above-mentioned programs and the springback defect diagnosis, we developed a CAE module for springback defect analysis in the VC++ environment, which solves problems that traditional sheet metal forming CAE software cannot predict accurately. A U-beam forming process is then simulated by applying the proposed method, which shows that the method is accurate, and some methods are introduced to optimize the adjustment amount of metal flow and the stamping dies for springback. Some suggestions are given by investigating the adjustment amount and the modification of the stamping die.",2011,0, 5064,cPLC - A cryptographic programming language and compiler,"Cryptographic two-party protocols are used ubiquitously in everyday life. While some of these protocols are easy to understand and implement (e.g., key exchange or transmission of encrypted data), many of them are much more complex (e.g., e-banking and e-voting applications, or anonymous authentication and credential systems). For a software engineer without appropriate cryptographic skills the implementation of such protocols is often difficult, time consuming and error-prone. For this reason, a number of compilers supporting programmers have been published in recent years.
However, they are either designed for very specific cryptographic primitives (e.g., zero-knowledge proofs of knowledge), or they only offer a very low level of abstraction and thus again demand substantial mathematical and cryptographic skills from the programmer. Finally, some of the existing compilers do not produce executable code, but only metacode which has to be instantiated with mathematical libraries, encryption routines, etc. before it can actually be used. In this paper we present a cryptographically aware compiler which is equally useful to cryptographers who want to benchmark protocols designed on paper, and to programmers who want to implement complex security sensitive protocols without having to understand all subtleties. Our tool offers a high level of abstraction and outputs well-structured and documented Java code. We believe that our compiler can contribute to shortening the development cycles of cryptographic applications and to reducing their error-proneness.",2011,0, 5065,A property based security risk analysis through weighted simulation,"The estimation of security risks in complex information and communication technology systems is an essential part of risk management processes. A proper computation of risks requires a good knowledge about the probability distributions of different upcoming events or behaviours. Usually, technical risk assessment in Information Technology (IT) systems is concerned with threats to specific assets. However, for many scenarios it can be useful to consider the risk of the violation of particular security properties. The set of suitable qualities comprises authenticity of messages or non-repudiability of actions within the system but also more general security properties like confidentiality of data. Furthermore, as current automatic security analysis tools are mostly confined to a technical point of view and thereby miss implications on an application or process level, it is of value to facilitate a broader view including the relation between actions within the IT system and their external influence. The property based approach aims to help in assessing risks in a process-oriented or service level view of a system and also to derive a more detailed estimation on a technical level. Moreover, as systems' complexities are growing, it becomes less feasible to calculate the probability of all patterns of a system's behaviour. Thus, a model based simulation of the system is advantageous in combination with a focus on precisely defined security properties. This paper introduces the first results supporting a simulation based risk analysis tool that enables a security property oriented view of risk. The developed tool is based on an existing formal validation, verification and simulation tool, the Simple Homomorphism Verification Tool (SHVT). The new simulation software provides a graphical interface for a monitor automaton which facilitates the explicit definition of security properties to be investigated during the simulation cycles. Furthermore, in order to model different likelihoods of actions in a system, weighting factors can be used to sway the behaviour where the occurrence of events is not evenly distributed. These factors provide a scheme for weighting classes of transitions.
Therefore, the tool facilitates probabilistic simulation, providing information about the probability distribution of satisfaction or violation of specified properties.",2011,0, 5066,Research on UAV health evaluation based on PNN,"In view of UAV system development and the need for UAV avionics system health evaluation, this paper establishes a UAV avionics system state evaluation model based on PNN (Probabilistic Neural Networks), and designs and implements UAV avionics system evaluation software. Finally, the simulation result indicates that this algorithm is reasonable and effective. This method can help people to design the avionics system, evaluate the UAV avionics system online and locate avionics system faults.",2011,0, 5067,Statistical method based computer fault prediction model for large computer centers,"Computer laboratories in large computer centers play an important role in computing education in higher education institutions. Due to heavy usage, there are often hardware or software problems on these computers. Thus computer laboratory management and maintenance is a very important and long-term task. For a computer laboratory, the computers' monthly usage time follows a normal distribution, which is verified by the chi-square goodness-of-fit hypothesis test. According to probability distributions based on the normal distribution, a statistical method based computer fault prediction model for public computer laboratories is presented. Computers' health state is divided into four classes: the Normal state, the Concerning state, the Warning state, and the Fault state. For different levels of computer fault, different maintenance methods are used. The experiment and application were conducted in a university computer center with hundreds of computers in 9 computer laboratories. The experiment and application results show that it is an effective way of finding faulty computers. It can be used to guide computer maintenance. The computer maintenance workload is greatly reduced by using this fault prediction model.",2011,0, 5068,A Fast and Effective Control Scheme for the Dynamic Voltage Restorer,"A novel control scheme for the dynamic voltage restorer (DVR) is proposed to achieve fast response and effective sag compensation capabilities. The proposed method controls the magnitude and phase angle of the injected voltage for each phase separately. Fast least error squares digital filters are used to estimate the magnitude and phase of the measured voltages. The utilized least error squares estimation filters considerably reduce the effects of noise, harmonics, and disturbances on the estimated phasor parameters. This enables the DVR to detect and compensate voltage sags accurately, under linear and nonlinear load conditions. The proposed control system does not need any phase-locked loops. It also effectively limits the magnitudes of the modulating signals to prevent overmodulation. Besides, separately controlling the injected voltage in each phase enables the DVR to regulate the negative- and zero-sequence components of the load voltage as well as the positive-sequence component.
Results of the simulation studies in the PSCAD/EMTDC software environment indicate that the proposed control scheme 1) compensates balanced and unbalanced voltage sags in a very short time period, without phase jump, and 2) performs satisfactorily under linear and nonlinear load conditions.",2011,0, 5069,Automatic Generation of Efficient Predictable Memory Patterns,"Verifying firm real-time requirements gets increasingly complex, as the number of applications in embedded systems grows. Predictable systems reduce the complexity by enabling formal verification. However, these systems require predictable software and hardware components, which is problematic for resources with highly variable execution times, such as SDRAM controllers. A predictable SDRAM controller has been proposed that addresses this problem using predictable memory patterns, which are precomputed sequences of SDRAM commands. However, the memory patterns are derived manually, which is a time-consuming and error-prone process that must be repeated for every memory device, and may result in inefficient use of scarce and expensive bandwidth. This paper addresses this issue by proposing three algorithms for automatic generation of efficient memory patterns that provide different trade-offs between run-time of the algorithm and the bandwidth guaranteed by the controller. We experimentally evaluate the algorithms for a number of DDR2/DDR3 memories and show that an appropriate choice of algorithm reduces run-time to less than a second and increases the guaranteed bandwidth by up to 10.2%.",2011,0, 5070,Optimizing the Product Derivation Process,"Feature modeling is widely used in software product-line engineering to capture the commonalities and variabilities within an application domain. As feature models evolve, they can become very complex with respect to the number of features and the dependencies among them, which can cause the product derivation based on feature selection to become quite time consuming and error prone. We address this problem by presenting techniques to find good feature selection sequences that are based on the number of products that contain a particular feature and the impact of a selected feature on the selection of other features. Specifically, we identify a feature selection strategy, which brings up highly selective features early for selection. By prioritizing feature selection based on the selectivity of features our technique makes the feature selection process more efficient. Moreover, our approach helps with the problem of unexpected side effects of feature selection in later stages of the selection process, which is commonly considered a difficult problem. We have run our algorithm on the e-Shop and Berkeley DB feature models and also on some automatically generated feature models. The evaluation results demonstrate that our techniques can shorten the product derivation processes significantly.",2011,0, 5071,Finding software fault relevant subgraphs: a new graph mining approach for software debugging,"In this paper, a new approach for analyzing program behavioral graphs to detect fault-suspicious subgraphs is presented. The existing graph mining approaches for bug localization merely detect discriminative subgraphs between failing and passing runs, which are not applicable when the context of a failure does not appear in a discriminative pattern. In our proposed method, the suspicious transitions are identified by contrasting nearest neighbor failing and passing dynamic behavioral graphs.
The technique takes advantage of null hypothesis testing, and a new formula for ranking edges is presented. To construct the most bug-relevant subgraph, the high-ranked edges are applied and presented to the debugger. The experimental results on the Siemens test suite and the Space program reveal the effectiveness of the proposed method on weighted dynamic graphs for locating bugs in comparison with other methods.",2011,0, 5072,Unit test case design metrics in test driven development,"Testing is a validation process that determines the conformance of the software's implementation to its specification. It is an important phase in unit test case design and is even more important in object-oriented systems. We want to develop test case design criteria that give confidence in unit testing of object-oriented systems. The main aim of testing can be viewed as assessing the existing quality of the software, probing the software for defects and fixing them. We also want to develop and execute our test cases automatically, since this decreases the effort (and cost) of the software development cycle (maintenance) and provides re-usability in a Test Driven Development framework. We believe such an approach is necessary for reaching the levels of confidence required in unit testing. The main goal of this paper is to assist developers/testers in improving the quality of the ATCUT by accurately designing the test cases for unit testing of object-oriented software based on the test results. A blend of unit testing assisted by the domain knowledge of the test case designer is used in this paper to improve the design of test cases. This paper outlines a solution strategy for deriving Automated Test Case for Unit Testing (ATCUT) metrics from object-oriented metrics via the TDD concept.",2011,0, 5073,Unified framework for developing Two Dimensional software reliability growth models with change point,"In order to assure software quality and assess software reliability, many software reliability growth models (SRGMs) have been proposed. In one-dimensional software reliability growth models, researchers used one factor such as Testing-Time, Testing-Effort or Coverage for designing the model, but in two-dimensional software reliability growth models the process depends on two types of reliability growth factors, such as Testing-time and Testing-effort, Testing-time and Testing-Coverage, or any combination of these factors. Also, in more realistic situations, the failure distribution can be affected by many factors, such as the running environment, testing strategy and resource allocation. Once these factors are changed during the testing phase, the result can be a failure intensity function that increases or decreases non-monotonically, and the time point corresponding to such abrupt fluctuations is called the change point. In this paper, we discuss a generalized framework for two-dimensional SRGMs with change point for software reliability assessment. The models developed have been validated on a real data set.",2011,0, 5074,Systemic assessment of risks for projects: A systems and Cybernetics approach,"The current and past success rate of software projects is poor. This is mainly due to the reductionist/analytic methods used in project risk assessments. The risk assessment practices adhered to today, whether based on standards or otherwise, are non-systemic. These assessments cannot provide or deal with the systemic view of project risks. They handle the project context and complex issues in projects inadequately.
The issue with reductionist thinking is that it leads to a reductionist approach to problem solving, and history has shown us that project failures continue to happen. This paper explains the Systemic Assessment of Risks (SAR) methodology that is proposed for the assessment of project risks by considering the project as a system. This methodology uses the Cybernetics Risk Influence Diagramming (CRID) technique for the identification of probable interconnected, interrelated and emergent risks. SAR's application in a software development project in a telecommunications enterprise demonstrates the methodology, with the project risks assessed systemically.",2011,0, 5075,A System for Nuclear Fuel Inspection Based on Ultrasonic Pulse-Echo Technique,"Nuclear Pressurized Water Reactor (PWR) technology has been widely used for electric energy generation. The follow-up of plant operation has pointed out the most important items to optimize the safety and operational conditions. The identification of nuclear fuel failures is in this context. The adoption of this operational policy is due to recognition of the detrimental impact that fuel failures have on operating cost, plant availability, and radiation exposure. In this scenario, defect detection in rods, before fuel reloading, has become an important issue. This paper describes a prototype of an ultrasonic pulse-echo system designed to inspect failed rods (with water inside) from PWRs. This system combines development of hardware (ultrasonic transducer, mechanical scanner and pulser-receiver instrumentation) as well as of software (data acquisition control, signal processing and data classification). The ultrasonic system operates at a center frequency of 25 MHz and failed rod detection is based on the envelope amplitude decay of successive echoes reverberating inside the clad wall. The echoes are classified by three different methods. Two of them (Linear Fisher Discriminant and Neural Network) have presented a 93% probability of identifying failed rods, which is above the currently accepted level of 90%. These results suggest that a combination of a reliable data acquisition system with powerful classification methods can improve the overall performance of the ultrasonic method for failed rod detection.",2011,0, 5076,Test-Driving Static Analysis Tools in Search of C Code Vulnerabilities,"Recently, a number of tools for automated code scanning came into the limelight. Due to the significant costs associated with incorporating such a tool in the software lifecycle, it is important to know what defects are detected and how accurate and efficient the analysis is. We focus specifically on popular static analysis tools for C code defects. Existing benchmarks include the actual defects in open source programs, but they lack systematic coverage of possible code defects and the coding complexities in which they arise. We introduce a test suite implementing the discussed requirements for frequent defects selected from public catalogues. Four open source and two commercial tools are compared in terms of the effectiveness and efficiency of their detection capability. A wide range of C constructs is taken into account and appropriate metrics are computed, which show how the tools balance inherent analysis tradeoffs and efficiency.
The results are useful for identifying the appropriate tool, in terms of cost-effectiveness, while the proposed methodology and test suite may be reused.",2011,0, 5077,Error-Based Software Testing and Analysis,"An approach to error-based testing is described that uses simple programmer error models and focus-directed methods for detecting the effects of errors. Errors are associated with forgetting, ignorance, bandwidth and perversity. The focus-directed approach was motivated by the observation that focus is more important than methodology in detecting such errors. The strengths and weaknesses of error-based versus more methodological methods are compared using three underlying assumptions called the faith, coincidence and hindsight effects. The weaknesses of error-based testing are compensated for by establishment of an expertise-based foundation that uses research from the study of natural decision making. Examples of the application of error-based methods are given from projects in which the author had access to the programmers, making it possible to track failure back to both defect and error. The relationship of error-based testing to contemporary methods, such as context-driven and exploratory testing, is described.",2011,0, 5078,Towards Rapid Creation of Test Adaptation in On-line Model-Based Testing,"Model-based Testing (MBT) is an approach for generating test cases automatically from abstract models of the system under test (SUT). The resulting test cases are also abstract and they have to be concretized before being applied to the SUT. This task is typically delegated to the test adaptation layer. The test adaptation is usually created manually which is tedious and error prone. In this paper, we present an approach in which we take advantage of an existing test execution framework for implementing the test adaptation between an on-line MBT tool and the SUT. The approach allows the reuse of message libraries and automatic concretization/abstraction of tests in on-line testing. In addition, we also discuss ways to automate the building of the test adaptation from other artifacts of the testing process. We exemplify our approach with excerpts from a telecom case study.",2011,0, 5079,Statistical Evaluation of Test Sets Using Mutation Analysis,"Evaluation of the ability of test sets for fault detection, and indirectly also evaluation of the quality of test techniques that generate those test sets, have become more of an issue in software testing. Based on mutation analysis, this paper evaluates and compares fault detection ability of test sets using statistical techniques. In this process also different mutant types (and indirectly different fault types) are considered. A case study, drawn from a large commercial web-based system, validates the approach and analyzes its characteristics.",2011,0, 5080,Predicting Timing Performance of Advanced Mechatronics Control Systems,"Embedded control is a key product technology differentiator for many high-tech industries, including ASML. The strong increase in complexity of embedded control systems, combined with the occurrence of late changes in control requirements, results in many timing performance problems showing up only during the integration phase. The fallout of this is extremely costly design iterations, severely threatening the time-to-market and time-to-quality constraints. This paper reports on the industrial application at ASML of the Y-chart method to attack this problem. 
Through the largely automated construction of executable models of a wafer scanner's mechatronics control application and platform, ASML was able to obtain a high-level overview early on in the development process. The system-wide insight into timing bottlenecks gained this way resulted in more than a dozen improvement proposals yielding significant performance gains. These insights also led to a new development roadmap for the mechatronics control execution platform.",2011,0, 5081,On the Consensus-Based Application of Fault Localization Techniques,"A vast number of software fault localization techniques have been proposed recently with the growing realization that manual debugging is time-consuming, tedious and error-prone, and fault localization is one of the most expensive debugging activities. While some of these techniques perform better than one another on a large number of data sets, they do not do so on all data sets and therefore, the actual quality of fault localization can vary considerably by using just one technique. This paper proposes the use of a consensus-based strategy that combines the results of multiple fault localization techniques, to consistently provide high quality performance, irrespective of data set. Empirical evidence based on case studies conducted on three sets of programs (the seven programs of the Siemens suite, and the gzip and make programs) and three different fault localization techniques suggests that the consensus-based strategy holds merit and generally provides close to the best, if not the best, results. Additionally the consensus-based strategy makes use of techniques that all operate on the same set of input data, minimizing the overhead. It is also simple to include or exclude techniques from consensus, making it an easily extensible, or alternatively, tractable strategy.",2011,0, 5082,Ontology-Based Reliability Evaluation for Web Service,"Reliability has become a major quality metric for Web services. However, current reliability evaluation approaches lack a formal semantic representation and support for incomplete or uncertain information. We propose a Web service reliability ontology (WSRO) serving as a basis to characterize the knowledge of Web services. Based on WSRO, a mapping to a probabilistic graphical model is constructed. The Web service reliability evaluation results are obtained by causal reasoning. Some evaluation results reveal that our approach is applicable and effective.",2011,0, 5083,Reliability and Accuracy of the Estimation Process - Wideband Delphi vs. Wisdom of Crowds,"This research paper addresses the reliability of estimation techniques based on technical knowledge possessed by software engineers. The goal was to identify weaknesses and limitations in the estimation practices based on the Wideband Delphi method and to propose an alternative solution. The initial experiment highlights the results of employing a different estimation method based on the Wisdom of Crowds approach and compares them to the standard Wideband Delphi approach. The estimation was focused on the expected quality of the software release -- predicting the distribution pattern of defects among the system components which would be found throughout the development and test phases (before reaching the customer).
The final results for both estimation techniques were evaluated against real system data (the software product release development in the public safety domain).",2011,0, 5084,Quality Model Driven Dynamic Analysis,"Release managers often face a dilemma about the quality of software under delivery before a release. The presence of run-time errors such as memory leaks, buffer overflows, and deadlocks affects quality attributes such as efficiency, security, and reliability. Such errors are detected using dynamic analysis methods in practice. However, the dynamic analysis methods employed in practice are by and large ad hoc. It is essential to use dynamic analysis focusing on finding the right set of run-time errors in a software component that have the maximum impact on quality. There exists a need to identify quality attributes such as reliability, efficiency, and security that are important for a software component or a system. In this paper, a quality model driven dynamic analysis methodology is proposed. Various run-time errors that can arise during the execution of programs written in a language such as C++ are mapped to the respective quality attributes, thereby forming a basis for run-time error classification. Our experiences in the application of dynamic analysis on real projects are reported. The methodology reports the error findings mapped to the quality attributes along with their distributions. The reported findings help management understand quality problems and take appropriate corrective action.",2011,0, 5085,Software Reliability Prediction for Open Source Software Adoption Systems Based on Early Lifecycle Measurements,"Various OSS (Open Source Software) products are being modified and adopted into software products with their own quality level. However, it is difficult to measure the quality of an OSS before use and to select the proper one. These difficulties come from OSS features such as a lack of bug information, unknown development schedules, and variable documentation. Conventional software reliability models are not adequate to assess the reliability of a software system in which an OSS is being adopted as a new add-on feature because the OSS can be modified while Commercial Off-The-Shelf (COTS) software cannot. This paper provides an approach to assessing the software reliability of an OSS-adopted software system in the early stage of the software life cycle. We identify the software factors that affect the reliability of the software system using the COCOMO II modeling methodology and define the module usage as a module coupling measure. We build fault count models using multivariate linear regression and perform model evaluation. Early software reliability assessment in OSS adoption helps to make effective development and testing strategies for improving the reliability of the whole system.",2011,0, 5086,Software-based analysis of the effects of electrostatic discharge on embedded systems,"This paper illustrates the use of software for monitoring and recording the effects of electrostatic discharge (ESD) on the operation of embedded systems, with the goal of facilitating root-cause analysis of resulting failures. Hardware-based scanning techniques are typically used for analyzing the effect of ESD on systems by identifying physical coupling paths. This paper proposes software techniques that monitor registers and flags associated with peripherals of embedded systems to detect faults associated with the effects of ESD.
A lightweight, cost-effective, and non-intrusive software tool has been developed that monitors and records the status of all registers associated with a designated peripheral under test, identifying the fault propagation caused by ESD in the system, and visually presenting the resulting errors. The tool has been used to detect and visually summarize ESD-induced errors on the SD card peripheral of the S3C2440 development board, using local injection and system-level scanning. Root-cause analysis of these faults can potentially assist in identification of coupling paths of electromagnetic interference, as well as determination of areas of the hardware that are more vulnerable to ESD.",2011,0, 5087,Security Monitoring of Components Using Aspects and Contracts in Wrappers,"The re-usability and modularity of components reduce the cost and complexity of the software design. It is difficult to predict run-time scenarios covering all possible circumstances to ensure that the components are fully compatible with the system. Given that, monitoring run-time behaviours of components presents a close view of the component qualities. The existing monitoring approaches either implement applications with built-in monitoring features, or observe the external resources and events to predict the status of the components. In this paper, we propose an approach to monitor the runtime behaviours of components using aspect-oriented wrappers and contracts. We design monitoring wrappers to encapsulate the monitored components. We use contracts to define the mutual obligations of two interacting components. The policies implemented in contracts are woven into component wrappers as separate aspect modules. If the component contains any flaws or vulnerabilities, the wrappers can monitor some behaviours and prevent failures propagating into the wrapped components and the rest of the system. This approach assures that the system is running in a safe environment with the erroneous behaviours detected appropriately. We conducted experiments on the run-time monitoring of SQL Injection, Cross Site Scripting attacks, and access control policies. The results show that the framework is very flexible to impose separate policies as aspects on component wrappers without the modifications of the underlying components.",2011,0, 5088,Usage-Based Online Testing for Proactive Adaptation of Service-Based Applications,"Increasingly, service-based applications (SBAs) are composed of third-party services available over the Internet. Even if third-party services have shown to work during design-time, they might fail during the operation of the SBA due to changes in their implementation, provisioning, or the communication infrastructure. As a consequence, SBAs need to dynamically adapt to such failures during run-time to ensure that they maintain their expected functionality and quality. Ideally the need for an adaptation is proactively identified, i.e., failures are predicted before they can lead to consequences such as costly compensation and roll-back activities. Currently, approaches to predict failures are based on monitoring. Due to its passive nature, however, monitoring might not cover all relevant service executions, which can diminish the ability to correctly predict failures. In this paper we demonstrate how online testing, as an active approach, can improve failure prediction by considering a broader range of service executions. 
Specifically, we introduce a framework and prototypical implementation that exploits synergies between monitoring, online testing and quality prediction. For online test selection and assessment we adapt usage-based testing strategies. We experimentally evaluate the strengths of our approach in predicting the need for an adaptation of an SBA.",2011,0, 5089,Characterizing the Implementation of Software Non-functional Requirements from Probabilistic Perspective,"Non-functional requirements are quality concerns of the software envisioned. As an effective treatment, goal-oriented methods can capture NFR-related knowledge so that an evaluation of a specific implementation strategy can be provided. This paper makes a meaningful attempt to observe the implementation strategies of non-functional requirements in a probabilistic way, and to obtain the probabilistic result for each satisficing status. The contribution of our work is to give a clear justification of whether there exists a proper implementation strategy for multiple non-functional requirements such that they can be guaranteed the specific satisficing statuses, and, if so, how large the probability is.",2011,0, 5090,Evaluating an Interactive-Predictive Paradigm on Handwriting Transcription: A Case Study and Lessons Learned,"Transcribing handwritten text is a laborious task which currently is carried out manually. As the accuracy of automatic handwritten text recognizers improves, post-editing the output of these recognizers could be foreseen as a possible alternative. Alas, the state-of-the-art technology is not suitable to perform this kind of work, since current approaches are not accurate enough and the process is usually both inefficient and uncomfortable for the user. As an alternative, an interactive-predictive paradigm has recently gained increasing popularity, mainly due to promising empirical results that estimate considerable reductions of user effort. In order to assess whether these empirical results can indeed lead to actual benefits, we developed a working prototype and conducted a field study remotely. Thirteen regular computer users tested two different transcription engines through the above-mentioned prototype. We observed that the interactive-predictive version allowed users to transcribe better (fewer errors and fewer iterations to achieve a high-quality output) in comparison to the manual engine. Additionally, participants ranked such an interactive-predictive system higher in a usability questionnaire. We describe the evaluation methodology and discuss our preliminary results. While acknowledging the known limitations of our experimentation, we conclude that the interactive-predictive paradigm is an efficient approach for transcribing handwritten text.",2011,0, 5091,Quantifying Usability and Security in Authentication,"Substantial research has been conducted in developing sophisticated security methods with authentication mechanisms placed in the front line of defense. Since these mechanisms are based on user conduct, they may not accomplish the intended objectives with improper use. Despite the influence of usability, little research has been focused on the balance between usability and security in authentication mechanisms when evaluating the effectiveness of these systems. In this paper we present a quantification approach for assessing usable security in authentication mechanisms.
The purpose of this approach is to guide the evaluation process of authentication mechanisms in a given environment by balancing usability and security and defining quantifiable quality criteria.",2011,0, 5092,"Does ""Depth"" Really Matter? On the Role of Model Refinement for Testing and Reliability","Model-based testing attempts to generate test cases from a model focusing on relevant aspects of a given system under consideration (SUC). When the SUC becomes too large to be modeled in a single step, existing design techniques usually require a modularization of the modeling process. Thereby, the refinement process results in a decomposition of the model into several hierarchical layers. Conventional testing requires that the refined components be completely replaced by these subcomponents for test case generation. Mostly, this resolution of components leads to an oversized, large model where test case generation becomes very costly, and the generated test case set is very large, leading to infeasibly long test execution times. To solve these problems, we present a new strategy to reduce (i) the number of test cases, and (ii) the costs of test case generation and test execution. For determining the trade-off due to this cost reduction, the reliability achieved by the new approach is compared with the reliability of the conventional approach. A case study based on a large web-based commercial system validates the approach and discusses its characteristics. We found out that the new approach could detect about 80% of the faults for about 20% of the test effort compared with the conventional approach.",2011,0, 5093,Semantic-Based Test Oracles,"The test oracle is one of the most difficult parts of test automation. For software with a large number of test cases, it is always both expensive and error prone to develop and maintain test oracles. This research is motivated by industry needs for automated testing of software with standard interfaces in an open system architecture. To counter test oracle challenges, it proposes an innovative method to represent and calculate test oracles based on the semantic model of the standard interface service specification of the software under test (SUT). The semantic model provides well-defined domain knowledge of service data, functionalities and constraints. Rules are created to model the expected SUT behavior in terms of antecedents and consequents. For each service, it captures both direct input-output relations and service interactions, that is, how the execution of a service may be affected by (pre-condition) or impact (post-condition) the SUT system state. As rule languages are neutral to programming languages, oracles specified in this way are independent of SUT implementations and can be reused across different systems conforming to the same interface standards. With the support of semantic techniques and tools like an ontology modeler and a rule engine, the proposed approach can enhance test oracle automation based on a sophisticated, well-defined domain model. Experiments and analysis show promising improvements in test productivity and quality.",2011,0, 5094,SoftWare IMmunization (SWIM) - A Combination of Static Analysis and Automatic Testing,"Static program analysis uses many checkers to discover a very large number of programming issues, but with a high false alarm rate. With the aid of dynamic automatic testing, the actual severe defects can be confirmed by failures of test cases. After defects are fixed, similar types of defects tend to recur.
In this paper, we propose a SoftWare IMmunization (SWIM) method to combine static analysis and automatic testing results for detecting severe defects and preventing similar defects from recurring, i.e., to have the software immunized against the same type of defects. Three industrial trials of the technology demonstrated the feasibility and defect detection accuracy of the SWIM technology.",2011,0, 5095,The impact of fault models on software robustness evaluations,"Following the design and in-lab testing of software, the evaluation of its resilience to actual operational perturbations in the field is a key validation need. Software-implemented fault injection (SWIFI) is a widely used approach for evaluating the robustness of software components. Recent research [24, 18] indicates that the selection of the applied fault model has considerable influence on the results of SWIFI-based evaluations, thereby raising the question of how to select appropriate fault models (i.e. those that provide justified robustness evidence). This paper proposes several metrics for comparatively evaluating fault models' abilities to reveal robustness vulnerabilities. It demonstrates their application in the context of OS device drivers by investigating the influence (and relative utility) of four commonly used fault models, i.e. bit flips (in function parameters and in binaries), data type dependent parameter corruptions, and parameter fuzzing. We assess the efficiency of these models at detecting robustness vulnerabilities during the SWIFI evaluation of a real embedded operating system kernel and discuss application guidelines for our metrics alongside.",2011,0, 5096,Assessing programming language impact on development and maintenance: a study on C and C++,"Billions of dollars are spent every year on building and maintaining software. To reduce these costs we must identify the key factors that lead to better software and more productive development. One such key factor, and the focus of our paper, is the choice of programming language. Existing studies that analyze the impact of the choice of programming language suffer from several deficiencies with respect to methodology and the applications they consider. For example, they consider applications built by different teams in different languages, hence fail to control for developer competence, or they consider small-sized, infrequently-used, short-lived projects. We propose a novel methodology which controls for development process and developer competence, and quantifies how the choice of programming language impacts software quality and developer productivity. We conduct a study and statistical analysis on a set of long-lived, widely-used, open source projects - Firefox, Blender, VLC, and MySQL. The key novelties of our study are: (1) we only consider projects which have considerable portions of development in two languages, C and C++, and (2) a majority of developers in these projects contribute to both C and C++ code bases. We found that using C++ instead of C results in improved software quality and reduced maintenance effort, and that code bases are shifting from C to C++. Our methodology lays a solid foundation for future studies on comparative advantages of particular programming languages.",2011,0, 5097,Socio-technical developer networks: should we trust our measurements?,"Software development teams must be properly structured to provide effective collaboration to produce quality software.
Over the last several years, social network analysis (SNA) has emerged as a popular method for studying the collaboration and organization of people working in large software development teams. Researchers have been modeling networks of developers based on socio-technical connections found in software development artifacts. Using these developer networks, researchers have proposed several SNA metrics that can predict software quality factors and describe the team structure. But do SNA metrics measure what they purport to measure? The objective of this research is to investigate if SNA metrics represent socio-technical relationships by examining if developer networks can be corroborated with developer perceptions. To measure developer perceptions, we developed an online survey that is personalized to each developer of a development team based on that developer's SNA metrics. Developers answered questions about other members of the team, such as identifying their collaborators and the project experts. A total of 124 developers responded to our survey from three popular open source projects: the Linux kernel, the PHP programming language, and the Wireshark network protocol analyzer. Our results indicate that connections in the developer network are statistically associated with the collaborators whom the developers named. Our results substantiate that SNA metrics represent socio-technical relationships in open source development projects, while also clarifying how the developer network can be interpreted by researchers and practitioners.",2011,0, 5098,Run-time efficient probabilistic model checking,"Unpredictable changes continuously affect software systems and may have a severe impact on their quality of service, potentially jeopardizing the system's ability to meet the desired requirements. Changes may occur in critical components of the system, clients' operational profiles, requirements, or deployment environments. The adoption of software models and model checking techniques at run time may support automatic reasoning about such changes, detect harmful configurations, and potentially enable appropriate (self-)reactions. However, traditional model checking techniques and tools may not be simply applied as they are at run time, since they hardly meet the constraints imposed by on-the-fly analysis, in terms of execution time and memory occupation. This paper precisely addresses this issue and focuses on reliability models, given in terms of Discrete Time Markov Chains, and probabilistic model checking. It develops a mathematical framework for run-time probabilistic model checking that, given a reliability model and a set of requirements, statically generates a set of expressions, which can be efficiently used at run-time to verify system requirements. An experimental comparison of our approach with existing probabilistic model checkers shows its practical applicability in run-time verification.",2011,0, 5099,Detecting software modularity violations,"This paper presents Clio, an approach that detects modularity violations, which can cause software defects, modularity decay, or expensive refactorings. Clio computes the discrepancies between how components should change together based on the modular structure, and how components actually change together as revealed in version history. We evaluated Clio using 15 releases of Hadoop Common and 10 releases of Eclipse JDT. 
The results show that hundreds of violations identified using Clio were indeed recognized as design problems or refactored by the developers in later versions. The identified violations exhibit multiple symptoms of poor design, some of which are not easily detectable using existing approaches.",2011,0, 5100,Bringing domain-specific languages to digital forensics,"Digital forensics investigations often consist of analyzing large quantities of data. The software tools used for analyzing such data are constantly evolving to cope with a multiplicity of versions and variants of data formats. This process of customization is time consuming and error prone. To improve this situation we present DERRIC, a domain-specific language (DSL) for declaratively specifying data structures. This way, the specification of structure is separated from data processing. The resulting architecture encourages customization and facilitates reuse. It enables faster development through a division of labour between investigators and software engineers. We have performed an initial evaluation of DERRIC by constructing a data recovery tool. This so-called carver has been automatically derived from a declarative description of the structure of JPEG files. We compare it to existing carvers, and show it to be in the same league both with respect to recovered evidence, and runtime performance.",2011,0, 5101,Building and using pluggable type-checkers,"This paper describes practical experience building and using pluggable type-checkers. A pluggable type-checker refines (strengthens) the built-in type system of a programming language. This permits programmers to detect and prevent, at compile time, defects that would otherwise have been manifested as run-time errors. The prevented defects may be generally applicable to all programs, such as null pointer dereferences. Or, an application-specific pluggable type system may be designed for a single application. We built a series of pluggable type checkers using the Checker Framework, and evaluated them on 2 million lines of code, finding hundreds of bugs in the process. We also observed 28 first-year computer science students use a checker to eliminate null pointer errors in their course projects. Along with describing the checkers and characterizing the bugs we found, we report the insights we had throughout the process. Overall, we found that the type checkers were easy to write, easy for novices to productively use, and effective in finding real bugs and verifying program properties, even for widely tested and used open source projects.",2011,0, 5102,Characterizing the differences between pre- and post- release versions of software,"Many software producers utilize beta programs to predict post-release quality and to ensure that their products meet quality expectations of users. Prior work indicates that software producers need to adjust predictions to account for usage environments and usage scenarios differences between beta populations and post-release populations. However, little is known about how usage characteristics relate to field quality and how usage characteristics differ between beta and post-release. In this study, we examine application crash, application hang, system crash, and usage information from millions of Windows users to 1) examine the effects of usage characteristics differences on field quality (e.g. which usage characteristics impact quality), 2) examine usage characteristics differences between beta and post-release (e.g. 
do impactful usage characteristics differ), and 3) report experiences adjusting field quality predictions for Windows. Among the 18 usage characteristics that we examined, the five most important were: the number of applications executed, whether the machine was pre-installed by the original equipment manufacturer, two sub-populations (two language/geographic locales), and whether Windows was 64-bit (not 32-bit). We found each of these usage characteristics to differ between beta and post-release, and by adjusting for the differences, accuracy of field quality predictions for Windows improved by ~59%.",2011,0, 5103,An industrial case study on quality impact prediction for evolving service-oriented software,"Systematic decision support for architectural design decisions is a major concern for software architects of evolving service-oriented systems. In practice, architects often analyse the expected performance and reliability of design alternatives based on prototypes or former experience. Model-driven prediction methods claim to uncover the tradeoffs between different alternatives quantitatively while being more cost-effective and less error-prone. However, they often suffer from weak tool support and focus on single quality attributes. Furthermore, there is limited evidence on their effectiveness based on documented industrial case studies. Thus, we have applied a novel, model-driven prediction method called Q-ImPrESS on a large-scale process control system consisting of several million lines of code from the automation domain to evaluate its evolution scenarios. This paper reports our experiences with the method and lessons learned. Benefits of Q-ImPrESS are the good architectural decision support and comprehensive tool framework, while one drawback is the time-consuming data collection.",2011,0, 5104,Positive effects of utilizing relationships between inconsistencies for more effective inconsistency resolution: NIER track,"State-of-the-art modeling tools can help detect inconsistencies in software models. Some can even generate fixing actions for these inconsistencies. However such approaches handle inconsistencies individually, assuming that each single inconsistency is a manifestation of an individual defect. We believe that inconsistencies are merely expressions of defects. That is, inconsistencies highlight situations under which defects are observable. However, a single defect in a software model may result in many inconsistencies and a single inconsistency may be the result of multiple defects. Inconsistencies may thus be related to other inconsistencies and we believe that during fixing, one should consider clusters of such related inconsistencies. This paper provides first evidence and emerging results that several inconsistencies can be linked to a single defect and show that with such knowledge only a subset of fixes need to be considered during inconsistency resolution.",2011,0, 5105,Automated usability evaluation of parallel programming constructs: nier track,"Multicore computers are ubiquitous, and proposals to extend existing languages with parallel constructs mushroom. While everyone claims to make parallel programming easier and less error-prone, empirical language usability evaluations are rarely done in-the-field with many users and real programs. Key obstacles are costs and a lack of appropriate environments to gather enough data for representative conclusions. 
This paper discusses the idea of automating the usability evaluation of parallel language constructs by gathering subjective and objective data directly in every software engineer's IDE. The paper presents an Eclipse prototype suite that can aggregate such data from potentially hundreds of thousands of programmers. Mismatch detection in subjective and objective feedback as well as construct usage mining can improve language design at an early stage, thus reducing the risk of developing and maintaining inappropriate constructs. New research directions arising from this idea are outlined for software repository mining, debugging, and software economics.",2011,0, 5106,DyTa: dynamic symbolic execution guided with static verification results,"Software-defect detection is an increasingly important research topic in software engineering. To detect defects in a program, static verification and dynamic test generation are two important proposed techniques. However, both of these techniques face their respective issues. Static verification produces false positives, and on the other hand, dynamic test generation is often time consuming. To address the limitations of static verification and dynamic test generation, we present an automated defect-detection tool, called DyTa, that combines both static verification and dynamic test generation. DyTa consists of a static phase and a dynamic phase. The static phase detects potential defects with a static checker; the dynamic phase generates test inputs through dynamic symbolic execution to confirm these potential defects. DyTa reduces the number of false positives compared to static verification and performs more efficiently compared to dynamic test generation.",2011,0, 5107,The quamoco tool chain for quality modeling and assessment,"Continuous quality assessment is crucial for the long-term success of evolving software. On the one hand, code analysis tools automatically supply quality indicators, but do not provide a complete overview of software quality. On the other hand, quality models define abstract characteristics that influence quality, but are not operationalized. Currently, no tool chain exists that integrates code analysis tools with quality models. To alleviate this, the Quamoco project provides a tool chain to both define and assess software quality. The tool chain consists of a quality model editor and an integration with the quality assessment toolkit ConQAT. Using the editor, we can define quality models ranging from abstract characteristics down to operationalized measures. From the quality model, a ConQAT configuration can be generated that can be used to automatically assess the quality of a software system.",2011,0, 5108,Using software evolution history to facilitate development and maintenance,"Much research in software engineering has been focused on improving software quality and automating the maintenance process to reduce software costs and mitigating complications associated with the evolution process. Despite all these efforts, there is still high cost and effort associated with software bugs and software maintenance, software still continues to be unreliable, and software bugs can wreak havoc on software producers and consumers alike. 
My dissertation aims to advance the state-of-the-art in software evolution research by designing tools that can measure and predict software quality and to create integrated frameworks that help in improving software maintenance and research that involves mining software repositories.",2011,0, 5109,Test blueprint: an effective visual support for test coverage,"Test coverage is about assessing the relevance of unit tests against the tested application. It is widely acknowledged that software with a """"good"""" test coverage is more robust against unanticipated execution, thus lowering the maintenance cost. However, ensuring a coverage of good quality is challenging, especially since most of the available test coverage tools do not discriminate software components that require a """"strong"""" coverage from the components that require less attention from the unit tests. HAPAO is an innovative test coverage tool, implemented in the Pharo Smalltalk programming language. It employs an effective and intuitive graphical representation to visually assess the quality of the coverage. A combination of appropriate metrics and relations visually shapes methods and classes, which indicates to the programmer whether more effort on testing is required. This paper presents the essence of HAPAO using a real world case study.",2011,0, 5110,Programming safety requirements in the REFLECT design flow,"The common approach to include non-functional requirements in tool chains for hardware/software embedded systems requires developers to manually change the software code and/or the hardware, in an error-prone and tedious process. In the REFLECT research project we explore a novel approach where safety requirements are described using an aspect- and strategy-oriented programming language, named LARA, currently under development. The approach considers that the weavers in the tool chain use those safety requirements specified as aspects and strategies to produce final implementations according to specific design patterns. This paper presents our approach including LARA-based examples using an avionics application targeting the FPGA-based embedded systems consisting of a general purpose processor (GPP) coupled to custom computing units.",2011,0, 5111,Reputation measure approach of web service for service selection,"In choosing web services with quality of service (QoS), the reputation attribute of QoS is very important for users to obtain reliable services in service selection. However, existing approaches rely on feedback ratings, which usually lead to the subjectivity and unfairness of reputation measure. The authors propose a reputation measure approach for web services. The approach employs three phases (i.e. feedback checking, feedback adjustment and malicious feedback detection) to enhance the reputation measure accuracy. A user survey form was first established to check the feedback ratings from these users who are lacking in feedback ability. Then the feedback ratings are adjusted with different user feedback preferences by calculating feedback similarity. Finally, the authors detect malicious feedback ratings by adopting the cumulative sum method. 
Simulation results show that the proposed approach is effective and can greatly improve service selection process in service-oriented business applications.",2011,0, 5112,Reliability-Aware Design Optimization for Multiprocessor Embedded Systems,"This paper presents an approach for the reliability-aware design optimization of real-time systems on multi-processor platforms. The optimization is based on an extension of well accepted fault- and process-models. We combine utilization of hardware replication and software re-execution techniques to tolerate transient faults. A System Fault Tree (SFT) analysis is proposed, which computes the system-level reliability in presence of the hardware and software redundancy based on component failure probabilities. We integrate the SFT analysis with a Multi-Objective Evolutionary Algorithm (MOEA) based optimization process to perform efficient reliability-aware design space exploration. The solution resulting from our optimization contains the mapping of tasks to processing elements (PEs), the exact task and message schedule and the fault-tolerance policy assignment. The effectiveness of the approach is illustrated using several case studies.",2011,0, 5113,Modular Fault Injector for Multiple Fault Dependability and Security Evaluations,"The increasing level of integration and decreasing size of circuit elements leads to greater probabilities of operational faults. More sensible electronic devices are also more prone to external influences by energizing radiation. Additionally not only natural causes of faults are a concern of today's chip designers. Especially smart cards are exposed to complex attacks through which an adversary tries to extract knowledge from a secured system by putting it into an undefined state. These problems make it increasingly necessary to test a new design for its fault robustness. Several previous publications propose the usage of single bit injection platforms, but the limited impact of these campaigns might not be the right choice to provide a wide fault attack coverage. This paper first introduces a new in-system fault injection strategy for automatic test pattern injection. Secondly, an approach is presented that provides an abstraction of the internal fault injection structures to a more generic high level view. Through this abstraction it is possible to support the task separation of design and test-engineers and to enable the emulation of physical attacks on circuit level. The controller's generalized interface provides the ability to use the developed controller on different systems using the same bus system. The high level of abstraction is combinable with the advantage of high performance autonomous emulations on high end FPGA-platforms.",2011,0, 5114,Compatibility Study of Compile-Time Optimizations for Power and Reliability,"Historically compiler optimizations have been used mainly for improving embedded systems performance. However, for a wide range of today's power restricted, battery operated embedded devices, power consumption becomes a crucial problem that is addressed by modern compilers. Biomedical implants are one good example of such embedded systems. In addition to power, such devices need to also satisfy high reliability levels. Therefore, performance, power and reliability optimizations should all be considered while designing and programming implantable systems. Various software optimizations, e.g., during compilation, can provide the necessary means to achieve this goal. 
Additionally the system can be configured to trade-off between the above three factors based on the specific application requirements. In this paper we categorize previous works on compiler optimizations for low power and fault tolerance. Our study considers differences in instruction count and memory overhead, fault coverage and hardware modifications. Finally, the compatibility of different methods from both optimization classes is assessed. Five compatible pairs that can be combined with few or no limitations have been identified.",2011,0, 5115,Developing Mobile Applications for Multiple Platforms,"Developing software for mobile devices requires special attention, and it still requires a larger effort compared to software development for desktop computers and servers. With the introduction and popularity of wireless devices, the diversity of the platforms has also been increased. There are different platforms and tools from different vendors such as Microsoft, Sun, Nokia, SonyEricsson and many more. Because of the relatively low-level programming interface, software development (e.g. for the Symbian platform) is a tiresome and error prone task, whereas Android and Windows Mobile contain higher level structures. This keynote introduces the problem of the software development for incompatible mobile platforms. Moreover, it provides a Model-Driven Architecture (MDA) and Domain Specific Modeling Language (DSML)-based solution. We will also discuss the relevance of the model-based approach that facilitates a more efficient software development because the reuse and the generative techniques are key characteristics of model-based computing. In the presented approach, the platform-independence lies in the model transformation. This keynote illustrates the creation of model compilers on a metamodeling basis by a software package called Visual Modeling and Transformation System (VMTS), which is a multipurpose modeling and metamodel-based transformation system. A case study is also presented on how model compilers can be used to generate user interface handler code for different mobile platforms from the same platform-independent input models.",2011,0, 5116,Modeling Contextual Concerns in Enterprise Architecture,"Enterprise Architecture approaches are used to provide rigorous descriptions of the organization-wide environment, manage the alignment of deployed services to the organization's mission, and ensure a clear separation of the concerns addressed in an architecture. Thus, an effective Enterprise Architecture approach assists in the management of relations and dependencies of any components of the organization environment and supports the integration and evolution of the architecture. However, the quality of that approach is strongly influenced by the precision of the architecture context description, a fact which is not always recognized. This paper focuses on the architecture context description and addresses the gap between the stakeholders' concerns and the resulting architecture. Based on a combination of established references and standards, we show how an explicit integration of the architecture context into the architecture model improves the linking of concerns and key elements of the architecture vision. We apply our approach to a subject of increasing concern in the Information Systems area: longevity of information. 
Digital preservation is an interdisciplinary problem, but existent initiatives address it in a very domain-centric way, making it impossible to integrate documented knowledge into an overall organization architecture. We analyze several references and models and derive a description of the architecture context and a capability model that supports incremental development through an explicit distinction between systems and their capabilities. The presented approach allows not just any organization to assess their current digital preservation awareness and evolve their architectures to address this challenge, but in particular demonstrates the added value of an explicit architecture context model in an Enterprise Architecture approach.",2011,0, 5117,Design of nuclear measuring instrument fault diagnosis system based on circuit characteristic test,"The circuits of nuclear measuring instruments are very complicated, and the testing and fault inspection of the nuclear measuring instrument system is difficult. To solve this problem, the fault diagnosis system was designed by combining circuit characteristic test technology with virtual instrument, virtual testing, test data analysis and data management technology. The method based on circuit characteristic test, which compares the characteristic curves of the tested circuit with those of the corresponding normal circuit, is applied to the fault diagnosis of nuclear measuring instruments. The fault diagnosis system consists of a hardware part and a software part. The hardware part includes computer, data acquisition module, arbitrary waveform generator (AWG) module, interface circuit module, electric relay module, etc; the software is made up of computer management software module, control software module and testing functional software module and so on. With the fault diagnosis system, it is possible to test the circuit characteristic of any circuit module of the nuclear measuring instruments, diagnose and find the faulted parts of the nuclear measuring instruments quickly, and show the diagnosis results on the display.",2011,0, 5118,The study on stability and reliability of the nuclear detector,"Because of the measurement environment and its own performance defects, the poor stability of nuclear radiation detectors used on portable nuclear instruments leads to spectrum drift and a decrease of energy resolution, which causes a reduction of the detection efficiency. This results in lower measurement precision and accuracy of the system. Generally, the stabilization method based on hardware and software was applied. It is difficult to solve the problem of spectrum drift caused by multiple factors. To solve the nonlinear problems caused by interference sources, the Multi-sensor Data Fusion Technique is adopted to design the intelligent nuclear detector with the Self-compensation Technique. It shows that the results of the project can improve the stability and reliability of fieldwork measurement in nuclear instrument systems. In addition, it will improve the adaptive ability of the nuclear instruments to the measuring environment.",2011,0, 5119,Optimal Model-Based Policies for Component Migration of Mobile Cloud Services,"Two recent trends are major motivators for service component migration: the upcoming use of cloud-based services and the increasing number of mobile users accessing Internet-based services via wireless networks. 
While cloud-based services target the vision of Software as a Service, where services are ubiquitously available, mobile use leads to varying connectivity properties. In spite of temporary weak connections and even disconnections, services should remain operational. This paper investigates service component migration between the mobile client and the infrastructure-based cloud as a means to avoid service failures and improve service performance. Hereby, migration decisions are controlled by policies. To investigate component migration performance, an analytical Markov model is introduced. The proposed model uses a two-phased approach to compute the probability to finish within a deadline for a given reconfiguration policy. The model itself can be used to determine the optimal policy and to quantify the gain that is obtained via reconfiguration. Numerical results from the analytic model show the benefit of reconfigurations and the impact of different reconfigurations applied to three service types. As immediate reconfigurations are in many cases not optimal, a threshold on the time before reconfiguration can take place is introduced to control reconfiguration.",2011,0, 5120,The Study of OFDM ICI Cancellation Schemes in 2.4 GHz Frequency Band Using Software Defined Radio,"In Orthogonal Frequency Division Multiplexing (OFDM), frequency offset is a common problem that causes inter-carrier-interference (ICI) that degrades the quality of the transmitted signal. Many theoretical studies of the different ICI-cancellation schemes have been reported earlier by many authors. The need for experimental verification of the theoretically predicted results in the 2.4 GHz frequency band is important. One of the most widely used systems is Wi-Fi (IEEE 802.11b), which makes use of this frequency band for short range wireless communication with throughput as high as 11 Mbps. In this work, several new ICI cancellation schemes have been tested in the 2.4 GHz band using open source Software Defined Radio (SDR), namely GNU Radio. The GNU Radio system used in the experiment had two Universal Software Radio Peripheral (USRP N210) modules connected to a computer. Both the USRP units had one daughterboard (XCVR2450) each for transmission and reception of radio signals. The input data to the USRP was prepared in compliance with the IEEE-802.11b specification. The experimental results were compared with the theoretical results of the new Inter-Carrier Interference (ICI) cancellation schemes. The comparison of the results revealed that the new schemes are suitable for high performance transmission. The results of this paper open up new opportunities of using OFDM in the heavily congested 2.4 GHz and 5 GHz bands (WiFi5: IEEE 802.11a) for error free data transmission. The schemes also can be used in other frequencies where channels are heavily congested.",2011,0, 5121,The Research and Implement of Air Quality Monitoring System Based on ZigBee,"Health, Safety and Environment management system (HSE) is a general management system of the international oil and gas industry. In order to comply with the HSE management system, an air quality monitoring system is researched based on ZigBee wireless sensor technology, which is applied in industrial sites. 
The system includes: detecting terminal, wireless router, wireless gateway, software of field devices and monitoring equipment; the system can measure a variety of gas parameters, such as: CO2 concentration, CO concentration, air quality level, temperature and humidity; the system features high accuracy, quick sensitivity, a wide monitoring range, etc.",2011,0, 5122,A New Face Detection Method with GA-BP Neural Network,"In this paper, the BP neural network improved by the genetic algorithm (GA) is applied to the problem of human face detection. GA is used to optimize the initial weights of the BP neural network to make full use of its global optimization and local accurate searching of the BP algorithm. Matlab Software and its neural network toolbox are used to simulate and compute. The experiment results show that the GA-BP neural network has a good performance for face detection. Furthermore, compared with the conventional BP algorithm, the GA-BP learning algorithm has more rapid convergence and better assessment accuracy of detecting quality.",2011,0, 5123,A self-healing architecture for web services based on failure prediction and a multi agent system,"Failures during web service execution may depend on a wide variety of causes. One of those is loss of Quality of Service (QoS). Failures during web service execution impose heavy costs on services-oriented architecture (SOA). In this paper, we seek to achieve a self-healing architecture to reduce failures in web services. We believe that failure prediction prevents the occurrence of failures and enhances the performance of SOA. The proposed architecture consists of three agents: Monitoring, Diagnosis and Repair. The Monitoring agent measures quality parameters at the communication level and predicts future values of quality parameters by Time Series Forecasting (TSF) with the help of a Neural Network (NN). The Diagnosis agent analyzes current and future QoS parameter values to diagnose web service failures. Based on its algorithm, the Diagnosis agent detects failures and faults in web service executions. The Repair agent manages repair actions by using a Selection agent.",2011,0, 5124,TMM Appraisal Assistant Tool,"Software testing is an important component in the software development life cycle, which leads to high quality software products. Therefore, the software industry has focused on improving the testing process for better performance. The Testing Maturity Model (TMM) is one choice to apply for improving the testing process. It guides organizations on a framework for software testing. The TMM Assessment Model (TMM-AM) is a test process assessment model following the TMM. The TMM-AM consists of processes to assess the test capability of organizations. Currently, each organization has various limitations such as cost, effort, time, and know-how. The assessment process lacks tools to be performed. How to help them to improve their testing process is our key point. This paper proposes a supporting tool based on the TMM-AM with which each organization can assess its testing process by itself. The tool can identify the test maturity level of an organization and suggest procedures to reach its goal.",2011,0, 5125,A Design and Implementation of a Terrestrial Magnetism and Acceleration Sensor Device for Worker's Motion Tracing System,"Guaranteeing the quality of industrial products is based on confirming whether materials, parts assembly and processing satisfy regulated standards or not. 
In some assembly processes, it is hard to confirm that the regulations are satisfied once the work for the process has been completed. For example, to fix a part using some screws, the order in which to fasten the screws is regulated as a standard procedure to warrant accuracy. However, if the screws are fastened in the wrong order, the part will still be fixed, but the incorrect order cannot be detected once this work has been completed. We have conducted a long-term experiment in a fuel tank attaching process in an automobile assembly factory. In this experiment, the reliability of the sensor device we used was not sufficient. Also, to apply our system to other processes, improvement of the reliability of the sensor is required, so we designed and implemented new sensor hardware. In this paper, we describe our newly developed terrestrial magnetism sensor.",2011,0, 5126,Next-generation massively parallel short-read mapping on FPGAs,"The mapping of DNA sequences to huge genome databases is an essential analysis task in modern molecular biology. Having linearized reference genomes available, the alignment of short DNA reads obtained from the sequencing of an individual genome against such a database provides a powerful diagnostic and analysis tool. In essence, this task amounts to a simple string search tolerating a certain number of mismatches to account for the diversity of individuals. The complexity of this process arises from the sheer size of the reference genome. It is further amplified by current next-generation sequencing technologies, which produce a huge number of increasingly short reads. These short reads hurt established alignment heuristics like BLAST severely. This paper proposes an FPGA-based custom computation, which performs the alignment of short DNA reads in a timely manner by the use of tremendous concurrency for reasonable costs. The special measures to achieve an extremely efficient and compact mapping of the computation to a Xilinx FPGA architecture are described. The presented approach also surpasses all software heuristics in the quality of its results. It guarantees to find all alignment locations of a read in the database while also allowing a freely adjustable character mismatch threshold. On the contrary, advanced fast alignment heuristics like Bowtie and Maq can only tolerate small mismatch maximums with a quick deterioration of the probability to detect existing valid alignments. The performance comparison with these widely used software tools also demonstrates that the proposed FPGA computation achieves its guaranteed exact results in very competitive time.",2011,0, 5127,Design of a high performance FPGA based fault injector for real-time safety-critical systems,"Fault injection methods have long been used to assess fault tolerance and safety. However, many conventional fault injection methods face significant shortcomings, which hinder their ability to execute fault injections on target real-time safety-critical systems. We demonstrate a novel fault injection system implemented on a commercial Field-Programmable Gate Array board. The fault injector is unobtrusive to the target system as it utilizes only standardized On-Chip-Debugger (OCD) interfaces present on most current processors. This effort resulted in faults being injected orders of magnitude faster than by utilizing a commercial OCD debugger, while incorporating novel features such as concurrent injection of faults into distinct target processors. 
The effectiveness of this high performance fault injector was successfully demonstrated on a tightly synchronized commercial real-time safety-critical system used in nuclear power applications.",2011,0, 5128,Temporal aspects of scoring in the user based quality evaluation of HD video,The paper deals with the temporal properties of a scoring session when assessing the subjective quality of full HD video sequences using the continuous video quality tests. The performed experiment uses a modification of the standard test methodology described in ITU-R Rec. BT.500. It focuses on the reactive times and the time needed for the user ratings to stabilize at the beginning of a video sequence.,2011,0, 5129,Networked fault detection of nonlinear systems,"This paper addresses Fault Detection (FD) problem of a class of nonlinear systems which are monitored via the communications networks. A sufficient condition is derived which guarantees exponential mean-square stability of the proposed nonlinear NFD systems in the presence of packet drop, quantization error and unwanted exogenous inputs such as disturbance and noise. A Linear Matrix Inequality (LMI) is obtained for the design of the fault detection filter parameters. Finally, the effectiveness of the proposed NFD technique is extensively assessed by using an experimental testbed that has been built for performance evaluation of such systems with the use of IEEE 802.15.4 Wireless Sensor Networks (WSNs) technology. An algorithm is presented to handle floating point calculus when connecting the WSNs to the engineering design softwares such as Matlab.",2011,0, 5130,Detecting and diagnosing application misbehaviors in on-demand virtual computing infrastructures,"Numerous automated anomaly detection and application performance modeling and management tools are available to detect and diagnose faulty application behavior. However, these tools have limited utility in `on-demand' virtual computing infrastructures because of the increased tendencies for the applications in virtual machines to migrate across un-comparable hosts in virtualized environments and the unusually long latency associated with the training phase. The relocation of the application subsequent to the training phase renders the already collected data meaningless and the tools need to re-initiate the learning process on the new host afresh. Further, data on several metrics need to be correlated and analyzed in real time to infer application behavior. The multivariate nature of this problem makes detection and diagnosis of faults in real time all the more challenging as any suggested approach must be scalable. In this paper, we provide an overview of a system architecture for detecting and diagnosing anomalous application behaviors even as applications migrate from one host to another and discuss a scalable approach based on Hotelling's T2 statistic and MYT decomposition. We show that unlike existing methods, the computations in the proposed fault detection and diagnosis method is parallelizable and hence scalable.",2011,0, 5131,Interactive requirements validation for reactive systems through virtual requirements prototype,"Adequate requirements validation can prevent errors from propagating into later development phases, and eventually improve the quality of software systems. However, validating natural language requirements is often difficult and error-prone. 
An effective means of requirements validation for embedded software systems has been to build a working model of the requirements in the form of a physical prototype that stakeholders can interact with. However, physical prototyping can be costly, and time consuming, extending the time it takes to obtain and implement stakeholder feedback. We have developed a requirements validation technique, called Virtual Requirements Prototype (VRP), that reduces cost and stakeholder feedback time by allowing stakeholders to validate embedded software requirements through the interaction with a virtual prototype.",2011,0, 5132,Streamlining scenario modeling with Model-Driven Development: A case study,"Scenario modeling can be realized through different perspectives. In UML, scenarios are often modeled with activity models, in an early stage of development. Later, sequence diagrams are used to detail object interactions. The migration from activity diagrams to sequence diagrams is a repetitive and error-prone task. Model-Driven Development (MDD) can help streamlining this process, through transformation rules. Since the information in the activity model is insufficient to generate the corresponding complete sequence model, manual refinements are required. Our goal is to compare the relative effort of building the sequence diagrams manually with that of building them semi-automatically. Our results show a decrease in the number of operations required to build and refine the sequence model of approximately 64% when using MDD, when compared to the manual approach.",2011,0, 5133,Probabilistic fault detection and handling algorithm for testing stability control systems with a drive-by-wire vehicle,"This paper presents a probabilistic fault detection and handling algorithm (PFDH) for redundant and deterministic X-by-wire systems. The algorithm is specifically designed to guarantee safe operation of an experimental drive-by-wire vehicle used as test platform and development tool in research projects focusing on vehicle dynamics. The required flexibility of the overall system for use as a test bed influences significantly the redundancy structure of the onboard network. A black box approach to integrate newly developed user algorithms is combined with a hot-standby architecture controlled by PFDH. This way, functional redundancy for basic driving operations can be achieved despite unknown software components. PFDH is based on monitoring multiple criteria over time, including vehicle dynamics and relative error probabilities of hard- and software components provided by experts or statistical data.",2011,0, 5134,Evaluating the use of model-based requirements verification method: A feasibility study,"Requirements engineering is one of the most important and critical phases in the software development life cycle, and should be carefully performed to build high quality and reliable software. However, requirements are typically gathered through various sources and represented in natural language (NL), making requirements engineering a difficult, fault prone, and a challenging task. To address this challenge, we propose a model-based requirements verification method called NLtoSTD, which transforms NL requirements into a state transition diagram (STD) that can be verified through automated reasoning. This paper analyzes the effect of NLtoSTD method in improving the quality of requirements. 
To do so, we conducted an empirical study at North Dakota State University in which the participants employed the NLtoSTD method during the inspection of requirement documents to identify the ambiguities and incompleteness of requirements. The experiment results show that the proposed method is capable of finding ambiguities and missing functionalities in a set of NL requirements, and provided us with insights and feedback to improve the method. The results are promising and have motivated the refinement of the NLtoSTD method and future empirical evaluation.",2011,0, 5135,Assessing think-pair-square in distributed modeling of use case diagrams,"In this paper, we propose a new method for the modeling of use case diagrams in the context of global software development. It is based on think-pair-square, a widely used cooperative method for active problem solving. The validity of the developed technology (i.e., the method and its supporting environment) has been assessed through two controlled experiments. In particular, the experiments have been conducted to compare the developed technology with a brainstorming session based on face-to-face interaction. The comparison has been performed with respect to the time needed to model use case diagrams and the quality of the produced models. The data analysis indicates a significant difference in favor of the brainstorming session for the time, with no significant impact on the requirements specification.",2011,0, 5136,Assesing the understandability of collaborative systems requirements notations: An empirical study,"As for single user systems, a proper specification of software requirements is a very important issue to achieve the quality of collaborative systems. Nevertheless, many of these requirements are of a non-functional nature because they are related to the user's need of being aware of other users, that is, the workspace awareness. In order to model this special kind of requirements, CSRML, an extension of i*, has been proposed. In this paper, we present a controlled experiment to assess the understandability of this notation compared to i*. The specification of two different systems was used as experimental material and undergraduate students of Computer Science with an average of two years of experience in Requirements Engineering were the experimental subjects.",2011,0, 5137,Precise is better than light a document analysis study about quality of business process models,"Business process modelling is often used in the initial phases of traditional software development to reduce faulty requirements and as a starting point for building SOA based applications. Often, modellers produce business process models without following recognized guidelines and opt for light models where nodes representing the actions are simply decorated with natural language text. The potential consequence of this practice is that the quality of built business process models may be low. In this paper, we propose a method based on manual transformations to detect flaws in light business process models expressed as activity diagrams. Using that method we have executed a document analysis study with 14 business process models taken from books and websites. 
Preliminary results of this study show that almost all the analysed business process models contain errors and style violations (precisely 92% of them).",2011,0, 5138,A Framework to Manage Knowledge from Defect Resolution Process,"This paper presents a framework for the management, the processing and the reuse, of information relative to defects. This framework is based on the fact that each defect triggers a resolution process in which information about the detected incident (i.e. the problem) and about the applied protocol to resolve it (i.e. the solution) is collected. These different types of information are the cornerstone of the optimization of corrective and preventive processes for new defects. Experimentations show that our prototype provides a very satisfactory quality of results with good performances.",2011,0, 5139,Critical-Path-Guided Interactive Parallelisation,"With the prevalence of multi-core processors, it is essential that legacy programs are parallelised effectively and efficiently. However, compilers have not been able to automatically extract sufficient parallelism in general programs. One of the major reasons, we argue, is that algorithms are often implemented sequentially in a way that unintentionally precludes efficient parallelisation. As manual parallelisation is usually tedious and error-prone, we propose a profiling-based interactive approach to program parallelisation, by presenting a tool-chain with two main components: Embla 2, a dependence-profiler that estimates the amount of task-level parallelism in programs, and Woolifier, a source-to-source transformer that uses Embla 2's output to parallelise programs using Wool, a Cilk-like API, to express parallelism. Based on profiled dependences, our tool-chain (i) performs an automatic best-effort parallelisation and (ii) presents remaining critical paths in a concise graphical form to the programmer, who can then quickly locate and refactor parallelism bottlenecks. Using case studies from the SPEC CPU 2000 benchmarks, we demonstrate how this tool-chain enables us to efficiently parallelise legacy sequential programs, achieving significant speed-ups on commodity multi-core processors.",2011,0, 5140,Virtual Machine Provisioning Based on Analytical Performance and QoS in Cloud Computing Environments,"Cloud computing is the latest computing paradigm that delivers IT resources as services in which users are free from the burden of worrying about the low-level implementation or system administration details. However, there are significant problems that exist with regard to efficient provisioning and delivery of applications using Cloud-based IT resources. These barriers concern various levels such as workload modeling, virtualization, performance modeling, deployment, and monitoring of applications on virtualized IT resources. If these problems can be solved, then applications can operate more efficiently, with reduced financial and environmental costs, reduced under-utilization of resources, and better performance at times of peak load. In this paper, we present a provisioning technique that automatically adapts to workload changes related to applications for facilitating the adaptive management of system and offering end-users guaranteed Quality of Services (QoS) in large, autonomous, and highly dynamic environments. We model the behavior and performance of applications and Cloud-based IT resources to adaptively serve end-user requests. 
To improve the efficiency of the system, we use analytical performance (queueing network system model) and workload information to supply intelligent input about system requirements to an application provisioner with limited information about the physical infrastructure. Our simulation-based experimental results using production workload models indicate that the proposed provisioning technique detects changes in workload intensity (arrival pattern, resource demands) that occur over time and allocates multiple virtualized IT resources accordingly to achieve application QoS targets.",2011,0, 5141,Intelligent agent based micro grid control,"Massive interconnection of power networks has posed an operational challenge. The concept of intelligent control for regulating the power network variables has been realized. The intelligent agent based control can be a solution in today's power network to maintain the dynamics such as adequate power balance along with quality voltage under changing system conditions such as load and power injection. The technology with multi-agent intelligent control may be a main module of the Smart Grid architecture. This paper presents a concept of multi-agent intelligent grid control. A case study has been done to demonstrate the functionality in the Matlab-Simulink environment. The multi-agent system is implemented by using an open source agent building toolkit, the Java Agent Development framework (JADE). Finally, both the micro grid simulation and the multi-agent system are connected together via the MACSimJX toolbox. The simulation results indicate that the proposed multi-agent system may facilitate the seamless transition from grid connected to an island mode when upstream outages are detected. This reveals the intelligence of the multi-agent system for controlling the micro grid operation.",2011,0, 5142,A declarative approach to hardening services against QoS vulnerabilities,"The Quality of Service (QoS) in a distributed service-oriented application can be negatively affected by a variety of factors. Network volatility, hostile exploits, poor service management, all can prevent a service-oriented application from delivering its functionality to the user. This paper puts forward a novel approach to improving the reliability, security, and availability of service-oriented applications. To counter service vulnerabilities, a special service detects vulnerabilities as they emerge at runtime, and then hardens the applications by dynamically deploying special components. The novelty of our approach lies in using a declarative framework to express both vulnerabilities and hardening strategies in a domain-specific language, independent of the service infrastructure in place. Thus, our approach will make it possible to harden service-oriented applications in a disciplined and systematic fashion.",2011,0, 5143,Safe software processing by concurrent execution in a real-time operating system,"The requirements for safety-related software systems increase rapidly. To detect arbitrary hardware faults, there are applicable coding mechanisms that add redundancy to the software. In this way it is possible to replace conventional multi-channel hardware and so reduce costs. Arithmetic codes are one possibility of coded processing and are used in this approach. A further approach to increase fault tolerance is the multiple execution of certain critical parts of software. This kind of time redundancy is easily realized by the parallel processing in an operating system. Faults in the program flow can be monitored. 
No special compilers, that insert additional generated code into the existing program, are required. The usage of multi-core processors would further increase the performance of such multi-channel software systems. In this paper we present the approach of program flow monitoring combined with coded processing, which is encapsulated in a library of coded data types. The program flow monitoring is indirectly realized by means of an operating system.",2011,0, 5144,On human analyst performance in assisted requirements tracing: Statistical analysis,"Assisted requirements tracing is a process in which a human analyst validates candidate traces produced by an automated requirements tracing method or tool. The assisted requirements tracing process splits the difference between the commonly applied time-consuming, tedious, and error-prone manual tracing and the automated requirements tracing procedures that are a focal point of academic studies. In fact, in software assurance scenarios, assisted requirements tracing is the only way in which tracing can be at least partially automated. In this paper, we present the results of an extensive 12 month study of assisted tracing, conducted using three different tracing processes at two different sites. We describe the information collected about each study participant and their work on the tracing task, and apply statistical analysis to study which factors have the largest effect on the quality of the final trace.",2011,0, 5145,Simulating and optimising design decisions in quantitative goal models,Making decisions among a set of alternative system designs is an essential activity of requirements engineering. It involves evaluating how well each alternative satisfies the stakeholders' goals and selecting one alternative that achieves some optimal tradeoffs between possibly conflicting goals. Quantitative goal models support such activities by describing how alternative system designs - expressed as alternative goal refinements and responsibility assignments - impact on the levels of goal satisfaction specified in terms of measurable objective functions. Analyzing large numbers of alternative designs in such models is an expensive activity for which no dedicated tool support is currently available. This paper takes a first step towards providing such support by presenting automated techniques for (i) simulating quantitative goal models so as to estimate the levels of goal satisfaction contributed by alternative system designs and (ii) optimising the system design by applying a multi-objective optimisation algorithm to search through the design space. These techniques are presented and validated using a quantitative goal model for a well-known ambulance service system.,2011,0, 5146,NSTX Power Supply configuration control upgrade,"The National Spherical Torus Experiment (NSTX) is in its second decade of operation at PPPL. NSTX has a total of 15 coil systems (which include the coils, their dedicated power supplies and associated auxiliary equipment) that create and control the plasma per the experimental objectives. Each coil system is individually controllable via the NSTX Power Supply Real Time Controller (PSRTC) software code written in C language. The NSTX has great flexibility in both the configuration of its coil system and in the operating envelope afforded by the connected power supplies. 
To ensure proper operation and to minimize the probability of lost runtime due to system faults, the project has developed a procedure that governs system configuration. The Integrated System Test Procedure (ISTP-001) documents the NSTX machine parameters, experiment configuration limits, machine protection settings and device settings. This paper will describe calculations for the ISTP 001 methodology and system protection settings; record keeping of the various configuration revisions and the upgrade in progress to improve readability and calculation capabilities.",2011,0, 5147,On performance of combining methods for three-node half-duplex cooperative diversity network,"We analyze the performance of the ad-hoc network with a base station, a mobile and a third station acting as a relay. Three combining methods for the Amplify-and-Forward (AF) protocol and the Decode-and-Forward (DF) protocol are compared. Simulations indicate that the Amplify-and-Forward (AF) protocol beats the Decode-and-Forward (DF) protocol under all these three combining methods. To combine the incoming signals, the channel quality should be estimated as well as possible; more estimation accuracy requires more resources. A very simple combining method can approximately obtain the performance of optimal combining methods. At the same time, all three combining methods for both diversity protocols can achieve the maximum diversity order.",2011,0, 5148,Automatic measurement of electrical parameters of signal relays,"The manufacturing process of Metal to Carbon relays used in railway signaling systems for configuring various circuits of signals / points / track circuits etc. consists of seven phases from raw material to finished goods. To ensure in-process quality, the electrical parameters are measured manually after each stage. The manual measurement process is tedious, error prone and involves a lot of time, effort and manpower. Besides, it is susceptible to manipulation and may lead to inferior quality products being passed, either due to deliberation or due to malefic intentions. Due to erroneous measurement of electrical parameters, the functional reliability of relays is adversely affected. To enhance the trustworthiness of measurement of electrical parameters & to make the process faster, an automated measurement system having proprietary application software and a testing jig attachment has been developed. When the relay is fixed on the testing jig, the software scans all the relay contacts and measures all the electrical parameters viz. operating voltage / current, contact resistance, release voltage / current, coil resistance etc. The results are displayed on the computer screen and stored in a database file.",2011,0, 5149,An integrated Automatic Test Generation and executing system,"This paper presents an integrated Automatic Test Generation (ATG) and Automatic Test Executing/Equipment (ATE) system for complex boards. We developed an ATG technique called Behavior-Based Automatic Test Generation technique (namely BBATG). BBATG uses the device behavior fault model and represents a circuit board as interconnection of devices. A behavior of a device is a set of functions with timing relations on its in/out pins. When used for a digital circuit board test generation, BBATG utilizes device behavior libraries to drive behavior error signals and sensitize paths along one or multiple vectors so that a heavy and complicated iterating process can be avoided for sequential circuit test deductions. 
We have developed a complete set of test executing software and test supporting hardware for the ATE which can use the BBATG generated test data directly to detect behavior faults and diagnose faults at the device level for complex circuit boards. In addition, we have proposed and implemented useful technique, especially Design For Testability (DFT) [1][2] application technique on the integrated system, so the test generating/executing for complex boards with VLSI can be further simplified and optimized.",2011,0, 5150,Comparing software design for testability to hardware DFT and BIST,"Software is replacing hardware whenever possible, and this trend is increasing. Software faults are every bit as pervasive and difficult to deal with as hardware faults. Debugging software faults is manual, time consuming, often elusive and since they affect all systems deployed, most often they are critical. Design for Debugging would ensure that a software package can be readily debugged for any software fault. A comprehensive software test, however, is intended to eliminate the need for ad hoc debugging and ideally all bugs (we call software faults) would be caught and identified by the software test. Thus, it is imperative that the software community adopt means to ensure that software components are designed in a way that will detect and isolate software faults. This requirement is familiar to designers of hardware systems. Could the discipline of hardware design for testability (DFT) and Built-In [Self] Test (BIST) apply to software design for testability? The purpose of this paper is to discuss how many of the testability requirements and techniques for hardware DFT can be applied to software.",2011,0, 5151,Software tools: A key component in the successful implementation of the ATML standards,"This paper examines the IEEE Automatic Test Markup Language (ATML) family of standards and some of the impediments which must be overcome to successfully implement these standards. The paper specifically focuses on how software tools can help alleviate these issues and increase the benefits of using these new standards in Automatic Test System (ATS) related applications. The ATML standards provide a common exchange format for test data adhering to the Extensible Markup Language (XML) standard. ATML promises to provide interoperability between tools and multiple test platforms through the standardization of common test related data. The ATML standards have now been published through the IEEE Standards Coordinating Committee 20 (SCC20) committee and are beginning to exhibit considerable interest in the ATS community and are now a requirement on some new Department of Defense (DoD) ATS programs. Different aspects of ATML related tools shall be discussed such as ATML Development tools which assist in the generation of ATML compliant instance files, new ATS related tools which use ATML data in their applications and the modification of existing ATS tools to utilize ATML Data. This paper also examines the work in progress of a Small Business Innovative Research (SBIR) Naval Air Systems Command (NAVAIR) sponsored program to develop ATML and test diagram tools. Utilizing ATML standards without the benefit of tools can be a labor-intensive, error-prone process, and requires an intimate knowledge of the ATML and XML standards. 
Employing the ATML standards on ATS programs promises to significantly reduce costs and schedule; the use of software tools is a key component in the success of these implementations and will help promote the use of ATML throughout the test industry.",2011,0, 5152,Risk minimization in modernization projects of plant automation A knowledge-based approach by means of semantic web technologies,"In high-wage countries the number of Greenfield projects for plant automation is decreasing. In contrast to this, plant modernization becomes more and more important. The estimation of the costs for a re-engineering of the existing plant automation is an error-prone task which has to be done in the bidding phase of a modernization project. This article describes a knowledge-based approach to reduce the risk potential in the bidding phase of plant modernization projects. Based on a concept for rough plant modeling in CAEX and technologies of the semantic web, a concept for a software assistance system is presented.",2011,0, 5153,Resolving state inconsistency in distributed fault-tolerant real-time dynamic TDMA architectures,"State consistency in safety-critical distributed systems is mandatory for synchronizing distributed decisions as found in dynamic time division multiple access (TDMA) schedules in the presence of faults. A TDMA schedule that supports networked systems making decisions at run time is sensitive to transient faults, because stations can make incorrect local decisions at run time and cause state inconsistency and collisions. We refer to this type of TDMA schedule as a dynamic TDMA schedule. Faulty decisions are especially undesirable for safety-critical systems with hard real-time constraints. Hence, real-time communication schedules must have the capability of detecting state inconsistency within a fixed amount of time. In this paper, we show through experimentation that state inconsistency is a real problem, and we propose a solution for resolving state inconsistency in TDMA schedules.",2011,0, 5154,Large-Scale Simulator for Global Data Infrastructure Optimization,"IT infrastructures in global corporations are appropriately compared with nervous systems, in which body parts (interconnected datacenters) exchange signals (request responses) in order to coordinate actions (data visualization and manipulation). A priori inoffensive perturbations in the operation of the system or the elements composing the infrastructure can lead to catastrophic consequences. Downtime disables the capability of clients reaching the latest versions of the data and/or propagating their individual contributions to other clients, potentially costing millions of dollars to the organization affected. The imperative need of guaranteeing the proper functioning of the system not only forces operators to pay particular attention to network outages, hot-objects or application defects, but also slows down the deployment of new capabilities, features and equipment upgrades. Under these circumstances, decision cycles for these modifications can be extremely conservative, and be prolonged for years, involving multiple authorities across departments of the organization. Frequently, the solutions adopted are years behind state-of-the-art technologies or phased out compared to leading research in the IT infrastructure field.
In this paper, the utilization of a large-scale data infrastructure simulator is proposed, in order to evaluate the impact of ""what if"" scenarios on the performance, availability and reliability of the system. The goal is to provide data center operators with a tool that allows understanding and predicting the consequences of the deployment of new network topologies, hardware configurations or software applications in a global data infrastructure, without affecting the service. The simulator was constructed using a multi-layered approach, providing a granularity down to the individual server component and client action, and was validated against a downscaled version of the data infrastructure of a Fortune 500 company.",2011,0, 5155,The Fading Boundary between Development Time and Run Time,"Summary form only given. Modern software applications are often embedded in highly dynamic contexts. Changes may occur in the requirements, in the behavior of the environment in which the application is embedded, in the usage profiles that characterize interactive aspects. Changes are difficult to predict and anticipate, and are out of control of the application. Their occurrence, however, may be disruptive, and therefore the software must also change accordingly. In many cases, changes to the software cannot be handled off-line, but require the software to self react by adapting its behavior dynamically, in order to continue to ensure the required quality of service. The big challenge in front of us is how to achieve the necessary degrees of flexibility and dynamism required in this setting without compromising dependability of the applications. To achieve dependability, a software engineering paradigm shift is needed. The traditional focus on quality, verification, models, and model transformations must extend from development time to run time. Not only are software development environments (SDEs) important for the software engineer to develop better software. Feature-full Software Run-time Environments (SREs) are also key. SREs must be populated by a wealth of functionalities that support on-line monitoring of the environment, inferring significant changes through machine learning methods, keeping models alive and updating them accordingly, reasoning on models about requirements satisfaction after changes occur, and triggering model-driven self-adaptive reactions, if necessary. In essence, self adaptation must be grounded on the firm foundations provided by formal methods and tools in a seamless SDE-SRE setting. The talk discusses these concepts by focusing on non-functional requirements-reliability and performance-that can be expressed in quantitative probabilistic requirements. In particular, it shows how probabilistic model checking can help reasoning about requirements satisfaction and how it can be made run-time efficient. The talk reports on some results of research developed within the SMScom project, funded by the European Commission, Programme IDEAS-ERC, Project 227977 (http://www.erc-smscom.org/).",2011,0, 5156,A Novel Energy-Aware Fault Tolerance Mechanism for Wireless Sensor Networks,"Sensors in a Wireless Sensor Network (WSN) are prone to failure, due to energy depletion, hardware failures, etc. Fault tolerance is one of the critical issues in WSNs. The existing fault tolerant mechanisms either consume significant extra energy to detect and recover from the failures or need to use additional hardware and software resources.
In this paper, we propose a novel energy-aware fault tolerance mechanism for WSN, called Informer Homed Routing (IHR). In our IHR, the non-cluster-head nodes limit and select the target of their data transmission. Therefore, it consumes less energy. Our experiments show that our proposed protocol can dramatically reduce energy consumption, compared to two existing protocols, LEACH and DHR.",2011,0, 5157,"Digital microfluidic biochips: Functional diversity, More than Moore, and cyberphysical systems","Summary form only given. The 2010 International Technology Roadmap for Semiconductors (ITRS) predicted that bio-medical chips will soon revolutionize the healthcare market. These bio-medical chips should be able to sense and actuate, store and manipulate data, and transmit information. To realize such bio-medical chips, the integration of embedded systems and microfluidics inevitably leads to a new research dimension for More than Moore and beyond. This tutorial will introduce attendees to the emerging technology of digital microfluidics, which is poised to play a key role in the transformation of healthcare and the interplay between biochemistry and embedded systems. Advances in droplet-based digital microfluidics have led to the emergence of biochip devices for automating laboratory procedures in biochemistry and molecular biology. These devices enable the precise control of nanoliter-volume droplets of biochemical samples and reagents. Therefore, integrated circuit (IC) technology can be used to transport chemical payloads in the form of micro/nanofluidic droplets. As a result, non-traditional biomedical applications and markets (e.g., high-throughput DNA sequencing, portable and point-of-care clinical diagnostics, protein crystallization for drug discovery), and fundamentally new uses are opening up for ICs and systems. However, continued growth (and larger revenues resulting from technology adoption by pharmaceutical and healthcare companies) depends on advances in chip integration and design-automation tools. In particular, design-automation tools are needed to ensure that biochips are as versatile as the macro-labs that they are intended to replace. This is therefore an opportune time for the semiconductor industry and circuit/system designers to make an impact in this emerging field. This tutorial offers attendees an opportunity to bridge the semiconductor ICs/systems industry with the biomedical and pharmaceutical industries. The audience will see how a biochip compiler can translate protocol descriptions provided by an end user (e.g., a chemist or a nurse at a doctor's clinic) to a set of optimized and executable fluidic instructions that will run on the underlying digital microfluidic platform. Testing techniques will be described to detect faults after manufacture and during field operation. Sensor integration and close coupling between the underlying hardware and the control software in a cyberphysical framework will also be described. A number of case studies based on representative assays and laboratory procedures will be interspersed in appropriate places throughout the tutorial. Commercial devices and advanced prototypes from the major company in this market segment (Advanced Liquid Logic, Inc.) will be described, and ongoing activity on newborn screening using digital microfluidic biochips at several large hospitals in Illinois will be highlighted.
The topics covered in the tutorial include the following: 1) Technology and application drivers: Motivation and background, actuation methods, electrowetting and digital microfluidics, review of micro-fabrication processes, applications to biochemistry, medicine, and laboratory procedures. 2) System-level design automation: Synthesis techniques: scheduling of fluidic operations, resource binding (mapping of operations to on-chip resources), module placement. 3) Physical-level design automation: droplet routing, defect tolerance, chip-level design, and design of pin-constrained biochips. 4) Testing and design-for-testability: Defects, fault modeling, test planning, reconfiguration techniques, sensor integration and cyberphysical system design.",2011,0, 5158,Spectrum-Based Health Monitoring for Self-Adaptive Systems,"An essential requirement for the operation of self-adaptive systems is information about their internal health state, i.e., the extent to which the constituent software and hardware components are still operating reliably. Accurate health information enables systems to recover automatically from (intermittent) failures in their components through selective restarting, or self-reconfiguration. This paper explores and assesses the utility of Spectrum-based Fault localisation (SFL) combined with automatic health monitoring for self-adaptive systems. Their applicability is evaluated through simulation of online diagnosis scenarios, and through implementation in an adaptive surveillance system inspired by our industrial partner. The results of the studies performed confirm that the combination of SFL with online monitoring can successfully provide health information and locate problematic components, so that adequate self-* techniques can be deployed.",2011,0, 5159,A New Approach for a Fault Tolerant Mobile Agent System,"Improving the survivability of mobile agents in the presence of agent server failures with unreliable underlying networks is a challenging issue. In this paper, we address a fault tolerance approach of deploying cooperating agents to detect agent failures as well as to recover services in mobile agent systems. Three types of agents are involved, which are the actual agent, the supervisor agent and the replicas. We introduce a failure detection and recovery protocol by employing a message-passing mechanism among these three kinds of agents. Different failure scenarios and their corresponding recovery procedures are discussed. We choose the Fatomas approach as a basic method. The message complexity of this approach is O(m^2). Cooperative agents have not been considered in this approach. We are going to improve this message complexity in a system of cooperative agents.",2011,0, 5160,On-line detection of stator and rotor faults occurring in induction machine diagnosis by parameters estimation,"The authors propose a diagnosis method for on-line detection of inter-turn short-circuited windings and broken bars by parameter estimation. For predictive detection, the Kalman filtering algorithm has been adapted to take into account the on-line parameter deviations in the faulty case. An experimental rig is used to validate the on-line identification of stator faults. Within the framework of rotor defect diagnosis, it is difficult to conduct experimental tests to validate the on-line identification of such faults. For this reason, we propose an on-line technique to detect rotor broken bars. This technique was validated by using finite element software (Flux2D).
Estimation results show a good agreement and demonstrate the possibility of on-line stator and rotor fault detection.",2011,0, 5161,Implicit SIP proxy overload detection mechanism based on response behavior,"Detecting overload in telecommunication networks is an important goal, because it allows reacting to it and reducing traffic smartly to ensure constant user-experienced quality of service. This applies not only to media such as voice and video, but also to signaling, which is responsible for setting up these media. An overloaded signaling network increases the response delay and reduces the successfully processed service requests and therefore the revenue. We propose an implicit overload detection mechanism for SIP networks that allows detecting overloaded components by their response behavior. This mechanism realizes maximum throughput with a marginal response delay increase in the case of congestion, without protocol modifications or extensions, and therefore ensures proper operation of SIP networks in case of overload.",2011,0, 5162,SIP proxy high-load detection by continuous analysis of response delay values,"The 3GPP has chosen the Session Initiation Protocol as the signalling protocol for the IP Multimedia Subsystem; therefore, it is expected that telecom operators will widely use it for their systems. SIP relies on an underlying transport protocol, such as TCP, UDP or SCTP. In the case of UDP, SIP has to ensure itself that messages will be reliably delivered. For this purpose, retransmission timers within the SIP transaction state machine are used. On the other hand, retransmissions can lead to congestion or even cause a congestion collapse if traffic load becomes too high, and services of the operator may become unavailable. It is therefore important to detect an imminent collapse and to act accordingly in order to keep the users' perceived quality high. We propose to continuously measure response delay values to detect high-load situations that can lead to a collapse, in order to be able to reduce the traffic load early enough to avoid congestion situations. We validate this approach by means of dedicated simulations.",2011,0, 5163,From Boolean to quantitative synthesis,"Motivated by improvements in constraint-solving technology and by the increase of routinely available computational power, partial-program synthesis is emerging as an effective approach for increasing programmer productivity. The goal of the approach is to allow the programmer to specify a part of her intent imperatively (that is, give a partial program) and a part of her intent declaratively, by specifying which conditions need to be achieved or maintained. The task of the synthesizer is to construct a program that satisfies the specification. As an example, consider a partial program where threads access shared data without using any synchronization mechanism, and a declarative specification that excludes data races and deadlocks. The task of the synthesizer is then to place locks into the program code in order for the program to meet the specification. In this paper, we argue that quantitative objectives are needed in partial-program synthesis in order to produce higher-quality programs, while enabling simpler specifications. Returning to the example, the synthesizer could construct a naive solution that uses one global lock for shared data.
This can be prevented either by constraining the solution space further (which is error-prone and partly defeats the point of synthesis), or by optimizing a quantitative objective that models performance. Other quantitative notions useful in synthesis include fault tolerance, robustness, resource (memory, power) consumption, and information flow.",2011,0, 5164,A Method for Software Process Capability / Maturity Models Customization to Specific Domains,"Software Process Capability/Maturity Models (SPCMMs) are repositories of best practices for software processes suitable for assessing and/or improving processes in software intensive organizations. Each software development domain, however, presents particular needs, which has led to the tendency of SPCMMs customization for specific domains, which has often been undertaken in an unsystematic way. This paper presents a method for the customization of SPCMMs for specific domains, developed based on standards development, process modeling and knowledge engineering techniques as well as experiences reported in the literature. Formative evaluations of the method have taken place through case studies and summative evaluation has been conducted through an Expert Panel. The observed results reveal early evidence that the method is suitable for SPCMMs customization.",2011,0, 5165,Contributions and Perspectives in Architectures of Software Testing Environments,"Producing high quality software systems has been one of the most important software development concerns. In this perspective, Software Architecture and Software Testing are two important research areas that have contributed in that direction. The attention given to the software architecture has played a significant role in determining the success of software systems. Otherwise, software testing has been recognized as a fundamental activity for assuring the software quality; however, it is an expensive, error-prone, and time consuming activity. For this reason, a diversity of testing tools and environments has been developed; however, they have been almost always designed without an adequate attention to their evolution, maintenance, reuse, and mainly to their architectures. Thus, this paper presents our main contributions to systematize the development of testing tools and environments, aiming at improving their quality, reuse, and productivity. In particular, we have addressed architectures for software testing tools and environments and have also developed and made available testing tools. We also state perspectives of research in this area, including open research issues that must be treated, considering the unquestionable relevance of testing automation to the testing activity.",2011,0, 5166,On the Interplay between Structural and Logical Dependencies in Open-Source Software,"Structural dependencies have long been explored in the context of software quality. More recently, software evolution researchers have investigated logical dependencies between artifacts to assess failure-proneness, detect design issues, infer code decay, and predict likely changes. However, the interplay between these two kinds of dependencies is still obscure. By mining 150 thousand commits from the Apache Software Foundation repository and employing object-oriented metrics reference values, we concluded that 91% of all established logical dependencies involve non-structurally related artifacts. Furthermore, we found some evidence that structural dependencies do not lead to logical dependencies in most situations. 
These results suggest that dependency management methods and tools should rely on both kinds of dependencies, since they represent different dimensions of software evolvability.",2011,0, 5167,Analyzing Refactorings on Software Repositories,"Currently analysis of refactoring in software repositories is either manual or only syntactic, which is time-consuming, error-prone, and non-scalable. Such analysis is useful to understand the dynamics of refactoring throughout development, especially in multi-developer environments, such as open source projects. In this work, we propose a fully automatic technique to analyze refactoring frequency, granularity and scope in software repositories. It is based on SAFEREFACTOR, a tool that analyzes transformations by generating tests to detect behavioral changes - it has found a number of bugs in refactoring implementations within some IDEs, such as Eclipse and Netbeans. We use our technique to analyze five open source Java projects (JHotDraw, ArgoUML, SweetHome 3D, HSQLDB and jEdit). From more than 40,723 software versions, 39 years of software development, 80 developers and 1.5 TLOC, we have found that: 27% of changes are refactorings. Regarding the refactorings, 63,83% are Low level, and 71% have local scope. Our results indicate that refactorings are frequently applied before likely functionality changes, in order to better prepare design for accommodating additions.",2011,0, 5168,A Model for the Evaluation of Educational Games for Teaching Software Engineering,"Teaching software engineering through educational games is expected to have several benefits. Various games have already been developed in this context, yet there is still a lack of assessment models to measure the real benefits and quality of these educational resources. This article presents the development of a model for assessing the quality of educational games for teaching software engineering. The model has been systematically derived from literature and evaluated in terms of its applicability, usefulness, validity and reliability through a series of case studies, applying educational board games in software engineering courses. Early results indicate that the model can be used to assess the aspects of motivation, user experience and learning of educational SE games.",2011,0, 5169,Tuning Static Data Race Analysis for Automotive Control Software,"Implementation of concurrent software systems is difficult and error-prone. Race conditions can cause intermittent failures, which are rarely found during testing. In safety-critical applications, the absence of race conditions should be demonstrated before deployment of the system. Several static analysis techniques to show the absence of data races are known today. In this paper, we report on our experiences with a static data race detector. We define a basic analysis based on classical lockset analysis and present three enhancements to that algorithm. We evaluate and compare the effectiveness of the basic and enhanced analysis algorithms empirically for an automotive embedded system. We find that the number of warnings could be reduced by more than 40% and that the ratio of true positives per total number of warnings could be doubled.",2011,0, 5170,Are the Clients of Flawed Classes (Also) Defect Prone?,"Design flaws are those characteristics of design entities (e.g., methods, classes) which make them harder to maintain. Existing studies show that classes revealing particular design flaws are more change and defect prone than the other classes. 
Since various collaborations are found among the instances of classes, classes are not isolated within the source code of object-oriented systems. In this paper we investigate if classes using classes revealing design flaws are more defect prone than classes which do not use classes revealing design flaws. We detect four design flaws in three releases of Eclipse and investigate the relation between classes that use/do not use flawed classes and defects. The results show that classes that use flawed classes are defect prone and this does not depend on the number of the used flawed classes. These findings show a new type of correlation between design flaws and defects, bringing evidence related to an increased likelihood of exhibiting defects for classes that use classes revealing design flaws. Based on the provided evidence, practitioners are advised once again about the negative impact design flaws have at a source code level.",2011,0, 5171,Distortion Measurement for Automatic Document Verification,"Document forgery detection is important as techniques to generate forgeries are becoming widely available and easy to use even for untrained persons. In this work, two types of forgeries are considered: forgeries generated by re-engineering a document and forgeries that are generated by scanning and printing a genuine document. An unsupervised approach is presented to automatically detect forged documents of these types by detecting the geometric distortions introduced during the forgery process. Using the matching quality between all pairs of documents, outlier detection is performed on the summed matching quality to identify the tampered document. Quantitative evaluation is done on two public data sets, reporting a true positive rate from 0.7 to 1.0.",2011,0, 5172,"Risk Management in Global Software Development Projects: Challenges, Solutions, and Experience","The benefits of using globally distributed sites for the development, maintenance, and operation of software-based systems and services are obvious. But global development also bears large risks. What seems at first to be economically reasonable often proves to be too expensive. Missing adjustment of communication and processes between different sites and insufficient knowledge of suitable management practices and organizational skills often lead to insufficient product quality. Global development and maintenance processes are difficult to control and often additional costs arise, especially for quality assurance and follow-up activities. Mastering global software projects requires, on the one hand, suitable tailoring of software development tasks and their distribution to different sites based on multiple criteria (not only cost!). On the other hand, appropriate process and management practices need to be established. Quantitative models can then be used to assess cost, schedule goals, and quality risks. I will introduce fundamental techniques for the establishment of well-understood and manageable distributed development processes and discuss different ways for managing risks. Based on a technique for splitting up development tasks and a multidisciplinary decision model for ""smart"" task distribution to different sites, I will demonstrate how distributed development processes can be organized in a productive way. This will be done by using examples from industry projects.
Additionally, I will present upcoming topics such as cloud-supported global software development or the software factory, a research and development infrastructure at the University of Helsinki that supports systematic testing of novel distributed development techniques. Finally, I will show how, in order to avoid global development risks, the application of fundamental software engineering principles must be emphasized.",2011,0, 5173,Integrating Early V&V Support to a GSE Tool Integration Platform,"The ever-growing market pressure and complex products demand high quality work and effectiveness from software practitioners. This also applies to the methods and tools they use for the development of software-intensive systems. Validation and verification (V&V) are the cornerstones of the overall quality of a system. By performing efficient V&V activities to detect defects during the early phases of development, the developers are able to save time and effort required for fixing them. Tool support is available for all types of V&V activities, especially testing, model checking, syntactic verification, and inspection. In distributed development the role of tools is even more relevant than in single-site development, and tool integration is often imperative for ensuring the effectiveness of work. In this paper, we discuss how a tool integration framework was extended to support early V&V activities via continuous integrations. We find that integrating early V&V supporting tools is feasible and useful, and makes a tool integration framework even more beneficial.",2011,0, 5174,Requirement Development Life Cycle: The Industry Practices,"Requirements engineering activities act as a backbone of software development. The more effort devoted to requirements engineering activities, the better the resulting software product. Appropriate selection of requirements has been a challenge for the software industry. This selection will increase the probability of success of the software product. Each year many cases are registered against companies for not fulfilling product requirements appropriately. Product failure mostly results from either missing important requirements or capturing irrelevant requirements. The SDLC consists of stages through which software progresses from scratch to a refined product. The Requirements Development Life Cycle (RDLC) consists of stages where requirements get initiated, raised, refined, forcefully changed, implemented and validated. The processes to capture requirements vary from industry to industry. This paper presents several requirements engineering processes used in industry during the development of requirements. These processes will identify appropriate requirements and develop a quality product within budget and on time. These practices are captured within the Pakistani software industry. This paper also explains the motivations for selecting particular methods within a company during requirements development and the results associated with it. The processes captured in this paper, from different companies, can be an education for the software industry.",2011,0, 5175,Wiring harness assembly detection system based on image processing technology,"This paper describes an image processing based wiring harness assembly detection system. The detected harness's color, location and other information are collected in real time by image processing, and image processing software determines whether the harness is mounted incorrectly.
First, the spatial filter method with a median filter is used to eliminate noise and interference in the collected images in order to obtain a clear picture. Then the Otsu method is used for image thresholding, and the Sobel operator is used for edge detection to determine the exact location of the wiring harness in the image. Finally, a color matching algorithm is used to match and compare the image to determine the wiring harness eligibility. The testing accuracy of the system is greatly improved compared with the previous manual detection, improving test efficiency, product quality and productivity.",2011,0, 5176,Correct Implementation of Open Real-Time Systems,"Correct and efficient implementation of open real-time systems is still a costly and error-prone process. We present a rigorous model-based implementation method of such systems based on the use of two models: (i) an abstract model representing the interactions between the environment and the application and its timing behavior without considering any execution platform, (ii) a physical model representing the behavior of the abstract model running on a given platform by taking into account execution times. We define an Execution Engine that performs the online computation of schedules for a given application so as to meet its timing constraints. In contrast to standard event-driven programming techniques, our method allows static analysis and online checking of essential properties such as time-safety and time-robustness. We implemented the Execution Engine for BIP programs and validated our method for a module of an autonomous rover.",2011,0, 5177,Autonomic Configuration Adaptation Based on Simulation-Generated State-Transition Models,"Configuration management is a complex task, even for experienced system administrators, which makes self-managing systems a particularly desirable solution. This paper describes a novel contribution to self-managing systems, including an autonomic configuration self-optimization methodology. Our solution involves a systematic simulation method that develops a state-transition model of the behavior of a service-oriented system in terms of its configuration and performance. At run time, the system's behavior is monitored and classified in one of the model states. If this state may lead to futures that violate service level agreements, the system configuration is changed toward a safer future state. Similarly, a satisfactory state that is over-provisioned may be transitioned to a more economical satisfactory state. Aside from the typical benefits of self-optimization, our approach includes an intuitive, explainable decision model, the ability to predict the future with some accuracy avoiding trial-and-error, offline training, and the ability to improve the model at run-time. We demonstrate this methodology in an experiment where Amazon EC2 instances are added and removed to handle changing request volumes to a real service-oriented application. We show that a knowledge base generated entirely in simulation can be used to make accurate changes to a real-world application.",2011,0, 5178,E-Quality: A graph based object oriented software quality visualization tool,"Recently, with increasing maintenance costs, studies on software quality are becoming increasingly important and widespread because high quality software means more easily maintainable software. Measurement plays a key role in quality improvement activities and metrics are the quantitative measurement of software design quality.
In this paper, we introduce a graph based object-oriented software quality visualization tool called ""E-Quality"". E-Quality automatically extracts quality metrics and class relations from Java source code and visualizes them in a graph-based interactive visual environment. This visual environment effectively simplifies comprehension and refactoring of complex software systems. Our approach assists developers in understanding software quality attributes by level categorization and intuitive visualization techniques. Experimental results show that the tool can be used to detect software design flaws and refactoring opportunities.",2011,0, 5179,Turn-to-turn fault detection in transformers using negative sequence currents,"This paper presents a new, simple and efficient protection technique which is based on negative sequence currents. Using this protection technique, it is possible to detect minor internal turn-to-turn faults in power transformers. Also, it can differentiate between internal and external faults. The discrimination is achieved by comparing the phase shift between two phasors of total negative sequence current. The new protection technique has been studied via an extensive simulation study using PSCAD/EMTDC software in a three-phase power system and also has been compared with a traditional differential algorithm. The results indicate that the new technique can provide a fast and sensitive approach for identifying minor internal turn-to-turn faults in power transformers.",2011,0, 5180,Computing indicators of creativity,"Currently, the most common measurement of creativity is based on tests of divergence. These creativity tests include divergent thinking, divergent feeling, etc. In most cases the evaluation criterion is a subjective appraisal by a trained ""rater"" to assess the amount of divergence from the ""norm"" a particular submitted solution has to a presented or discovered task. The larger the divergence from the standard, the more creative the solution is. Although the quality and quantity of the solutions to the task must be considered, divergence from the accepted ""norm"" is a significant indicator of creativity. Using the current model for showing creative divergence, a method for evaluating the divergence of programming solutions as compared to the standard tutorial solution, in order to indicate creativity, should be in line with current creativity research. Instead of subjective ""rater evaluations"", a method of calculating numerical divergence from programming solutions was devised. This method was employed on three separate class conditions and yielded three separate divergence patterns, indicating that the divergence calculation appears to demonstrate, not only that creativity can be shown to exist in programming solutions, but that the calculation is sensitive enough to differentiate between different class learning conditions of the same teacher. So based on the idea that creativity can be shown through divergence in thinking and feeling, it stands to reason that creativity in programming could be revealed through a similar divergence to a standard norm through calculating the divergence to that norm.
Consequently, this divergence calculation method shows promising indicators to inform the measurement of creativity within programming and possibly other scientific areas.",2011,0, 5181,Reliable State Retention-Based Embedded Processors Through Monitoring and Recovery,"State retention power gating and voltage-scaled state retention are two effective design techniques, commonly employed in embedded processors, for reducing idle circuit leakage power. This paper presents a methodology for improving the reliability of embedded processors in the presence of power supply noise and soft errors. A key feature of the method is low cost, which is achieved through reuse of the scan chain for state monitoring, and it is effective because it can correct single and multiple bit errors through hardware and software, respectively. To validate the methodology, ARM CortexTM-M0 embedded microprocessor (provided by our industrial project partner) is implemented in field-programmable gate array and further synthesized using 65-nm technology to quantify the cost in terms of area, latency, and energy. It is shown that the proposed methodology has a small area overhead (8.6%) with less than 4% worst-case increase in critical path and is capable of detecting and correcting both single bit and multibit errors for a wide range of fault rates.",2011,0, 5182,Detection and classification device for malaria parasites in thick-blood films,"In Thailand, malaria diagnosis still relies primarily on microscopic examination of Giemsa-stained thick and thin blood films. However, the method requires vigorously trained technicians to correctly identify the disease, and is known to be error-prone due to human fatigue. The limited number of such technicians further reduces the effectiveness of the attempt to control malaria. Thus, this project aims to develop an automated system to identify and analyze parasite species on thick blood films by image analysis techniques. The system comprises two main components: (1) Image acquisition unit and (2) Image analysis module. In our work, we have developed an image acquisition system that can be easily mounted on most conventional light microscopes. It automatically controls the movement of microscope stage in 3-directional planes. The vertical adjustment (focusing) can be made in a nanometer range (7-9 nm). Images are acquired with a digital camera that is installed at the top of microscope. The captured images are analyzed by our image analysis software which utilizes the state-of-the-art algorithms to detect and identify malaria parasites.",2011,0, 5183,A novel approach to sentence alignment from comparable corpora,This paper introduces a new technique to select candidate sentences for alignment from bilingual comparable corpora. Tests were done utilizing Wikipedia as a source for bilingual data. Our test languages are English and Chinese. A high quality of sentence alignment is illustrated by a machine translation application.,2011,0, 5184,Impact of attribute selection on defect proneness prediction in OO software,"Defect proneness prediction of software modules always attracts the developers because it can reduce the testing efforts as well as software development time. In the current context, with the piling up of constraints like requirement ambiguity and complex development process, developing fault free reliable software is a daunting task. 
To deliver reliable software, software engineers are required to execute exhaustive test cases, which becomes tedious and costly for software enterprises. To ameliorate the testing process one can use a defect prediction model so that testers can focus their efforts on defect prone modules. Building a defect prediction model becomes a very complex task when the number of attributes is very large and the attributes are correlated. It is not easy even for a simple classifier to cope with this problem. Therefore, while developing a defect proneness prediction model, one should always be careful about feature selection. This research analyzes the impact of attribute selection on a Naive Bayes (NB) based prediction model. Our results are based on the Eclipse and KC1 bug databases. On the basis of experimental results, we show that a careful combination of attribute selection and machine learning is apparently useful and, on the Eclipse data set, yields reasonably good performance with an 88% probability of detection and a 49% false alarm rate.",2011,0, 5185,A Mobile Camera Tracking System Using GbLN-PSO with an Adaptive Window,"The availability of high quality and inexpensive video cameras, as well as the increasing need for automated video analysis, is leading to a great deal of interest in numerous applications. However, video tracking systems still have many open problems. Thus, some research activities in video tracking systems are still being explored. Generally, most researchers have used a static camera in order to track an object's motion. However, the use of a static camera system for detecting and tracking the motion of an object is only capable of capturing a limited view. Therefore, to overcome the above mentioned problem in a large view space, researchers may use several cameras to capture images. Thus, the cost will increase with the number of cameras. To overcome the cost increment, a mobile camera is employed with the ability to track a wide field of view in an environment. Conversely, mobile camera technologies for tracking applications have faced several problems: simultaneous motion (when an object and camera are concurrently movable), distinguishing objects in occlusion, and dynamic changes in the background during data capture. In this study we propose a new method of Global best Local Neighborhood Oriented Particle Swarm Optimization (GbLN-PSO) to address these problems. The advantages of tracking using GbLN-PSO are demonstrated in experiments for intelligent human and vehicle tracking systems in comparison to a conventional method. The comparative study of the method is provided to evaluate its capabilities at the end of this paper.",2011,0, 5186,"Fault Injection, A Fast Moving Target in Evaluations","Differential Fault Analysis has been known since 1996 (Dan Boneh, Richard A. DeMillo and Richard J. Lipton, ""The Bellcore Attack"") [1]. Before that, the implementations of cryptographic functions were developed without awareness of fault analysis attacks. The first fault injection set-ups produced single voltage glitches or single light flashes at a single location on the silicon. A range of countermeasures has been developed and applied in cryptographic devices since. But while the countermeasures against perturbation attacks were being developed, attack techniques also evolved.
The accuracy of the timing was improved, multiple light flashes were used to circumvent double checks, perturbation attacks were being combined with side channels such as power consumption and detection methods developed to prevent chips from blocking after they detected the perturbation attempt. Against all these second generation attack methods new countermeasures were developed. This raised the level of security of secure microcontroller chips to a high level, especially compared to products of ten years ago. The certification schemes are mandating more and more advanced tests to keep secure systems secure in the future. One of the latest requirements is light manipulation test using power consumption waveform based triggering with multiple light flashes at multiple locations on the silicon. If attack scenarios that are as complicated as this one are in scope where will it end? The equipment necessary for the attack is expensive and special software is required. The perturbation attacks that are performed outside security labs and universities are of a different level.",2011,0, 5187,Piggy-Backing Link Quality Measurements to IEEE 802.15.4 Acknowledgements,"In this paper we present an approach to piggyback link quality measurements to IEEE 802.15.4 acknowledgement frames by generating acknowledgements in software instead of relying on hardware support. We show that the software-generated ACKs can be sent meeting the timing constraints of IEEE 802.15.4. This allows for a standard conforming, energy neutral dissemination of link quality related information in IEEE 802.15.4 networks. This information is available at no cost when transmitting data and can be used as input for various schemes for adaptive transmission power control and to assess the current channel quality.",2011,0, 5188,Dangers and Joys of Stock Trading on the Web: Failure Characterization of a Three-Tier Web Service,"Characterizing latent software faults is crucial to address dependability issues of current three-tier systems. A client should not have a misconception that a transaction succeeded, when in reality, it failed due to a silent error. We present a fault injection-based evaluation to characterize silent and non-silent software failures in a representative three-tier web service, one that mimics a day trading application widely used for benchmarking application servers. For failure characterization, we quantify distribution of silent and non-silent failures, and recommend low cost application-generic and application-specific consistency checks, which improve the reliability of the application. We inject three variants of null-call, where a callee returns null to the caller without executing business logic. Additionally, we inject three types of unchecked exceptions and analyze the reaction of our application. Our results show that 49% of error injections from null-calls result in silent failures, while 34% of unchecked exceptions result in silent failures. Our generic-consistency check can detect silent failures in null-calls with an accuracy as high as 100%. Non-silent failures with unchecked exceptions can be detected with an accuracy of 42% with our application-specific checks.",2011,0, 5189,Analyzing Performance of Lease-Based Schemes under Failures,"Leases have proved to be an effective concurrency control technique for distributed systems that are prone to failures. However, many benefits of leases are only realized when leases are granted for approximately the time of expected use. 
Correct assessment of lease duration has proven difficult for all but the simplest of resource allocation problems. In this paper, we present a model that captures a number of lease styles and semantics used in practice. We consider a few performance characteristics for lease-based systems and analytically derive how they are affected by lease duration. We confirm our analytical findings by running a set of experiments with the OO7 benchmark suite using a variety of workloads and fault loads.",2011,0, 5190,FastFIX: An approach to self-healing,"The EU FP7 FastFIX project tackles issues related to remote software maintenance. In order to achieve this, the project considers approaches relying on context elicitation, event correlation, fault-replication and self-healing. Self-healing helps systems return to a normal state after the occurrence of a fault or vulnerability exploitation has been detected. The problem is intuitively appealing as a way to automate the different maintenance type processes (corrective, adaptive and perfective) and forms an interesting area of research that has inspired many research initiatives. In this paper, we propose a framework for automating corrective maintenance and present its early stage development, based on software control principles. Our approach automates the engineering of self-healing systems as it does not require the system to be designed in a specific way. Instead it can be applied to legacy systems and automatically equips them with observation and control points. Moreover, the proposed approach relies on a sound control theory developed for Discrete Event Systems. Finally, this paper contributes to the field by introducing challenges for effective application of this approach to relevant industrial systems.",2011,0, 5191,Assessing risk for network resilience,"Communication networks and the Internet, in particular, have become a critical infrastructure for daily life, business and governance. Various challenging conditions can render networks or parts thereof unusable, with severe consequences. Protecting a network from all possible challenges is infeasible because of monetary, hardware and software constraints. Hence, a methodology to measure the risk imposed by the various challenges threatening the system is a necessity. In this paper, we present a risk assessment process to identify the challenges with the highest potential impact to a network and its users. The result of this process is a prioritised list of challenges and associated system faults, which can guide network engineers towards the mechanisms that have to be built into the network to ensure network resilience, whilst meeting cost constraints. Furthermore, we discuss how outcomes from the intermediate steps of our risk assessment process can be used to inform network resilience design. A better understanding of these aspects and a way to determine reliable measures are open issues, and represent important new research items in the context of resilient and survivable networks.",2011,0, 5192,Measurement methods for QoS in VoIP review,"In the last years there is a merging trend towards unified communication systems over IP protocols from desktop to handheld devices. However this trend brings forth the limited QoS control existing in these types of networks. The lack of cost-effective QoS strategies is felt negatively directly by the end user in terms of both communication quality and increasing costs. 
Therefore, this paper analyses the main methods available for measuring QoS in VoIP networks for both audio and video calls and how neural networks can be used to predict the quality as perceived from the end user perspective.",2011,0, 5193,The design of a software fault prone application using evolutionary algorithm,"Most of the current project management software packages utilize resources on developing areas in software projects. This is considerably essential in view of the meaningful impact towards time- and cost-effective development. One of the major areas is fault proneness prediction, which is used to find out the impact areas by using several approaches, techniques and applications. A software fault proneness application is an application based on a computer-aided approach to predict the probability that the software contains faults. The application will use object oriented metrics and count metrics values from open source software as input values to the genetic algorithm for generation of the rules to classify the software modules into the categories of Faulty and Non Faulty modules. At the end of the process, the result will be visualized using a genetic algorithm applet, bar chart and pie chart. This paper will discuss the detailed design of the software fault proneness application using a genetic algorithm based on the object oriented approach, and the design will be presented using the Unified Modeling Language (UML). The aim of the proposed design is to develop an automated tool for software development groups to discover the software modules most likely to be highly problematic in the future.",2011,0, 5194,Low-Complexity Encoding Method for H.264/AVC Based on Visual Perception,"The H.264/AVC standard achieves excellent encoding performance at the cost of increased computational complexity and falling encoding speed. In order to overcome the poor real-time encoding performance of H.264/AVC and to reduce computing redundancy, a low-complexity H.264/AVC video coding scheme is implemented in this paper, based on the integration of a visual selective attention mechanism with low-complexity encoding of information analysis and visual perception, and making use of the distribution of motion vectors and the relationship between mode decision probability and human visual attention. The simulation results show that the approach effectively resolves the conflict between coding accuracy and speed, saving about 80% of coding time on average, while effectively maintaining good video quality and improving the overall encoding performance of H.264/AVC.",2011,0, 5195,Got Issues? Do New Features and Code Improvements Affect Defects?,"There is a perception that when new features are added to a system, the added and modified parts of the source code are more fault prone. Many have argued that new code and new features are defect prone due to immaturity, lack of testing, as well as unstable requirements. Unfortunately, most previous work does not investigate the link between a concrete requirement or new feature and the defects it causes; in particular, the feature, the changed code and the subsequent defects are rarely investigated. In this paper we investigate the relationship between improvements, new features and defects recorded within an issue tracker. A manual case study is performed to validate the accuracy of these issue types.
We combine defect issues and new feature issues with the code from version-control systems that introduces these features; we then explore the relationship of new features with the fault-proneness of their implementations. We describe properties and produce models of the relationship between new features and fault proneness, based on the analysis of issue trackers and version-control systems. We find, surprisingly, that neither improvements nor new features have any significant effect on later defect counts, when controlling for size and total number of changes.",2011,0, 5196,An Empirical Validation of the Benefits of Adhering to the Law of Demeter,"The Law of Demeter formulates the rule-of-thumb that modules in object-oriented program code should ""only talk to their immediate friends"". While it is said to foster information hiding for object-oriented software, solid empirical evidence confirming the positive effects of following the Law of Demeter is still lacking. In this paper, we conduct an empirical study to confirm that violating the Law of Demeter has a negative impact on software quality, in particular that it leads to more bugs. We implement an Eclipse plugin to calculate the number of violations of both the strong and the weak form of the law in five Eclipse sub-projects. Then we examine the correlation between violations of the law and bug-proneness and perform a logistic regression analysis of three sub-projects. We also combine the violations with other OO metrics to build up a model for predicting the bug-proneness of a given class. Empirical results show that violations of the Law of Demeter indeed highly correlate with the number of bugs and are an early predictor of software quality. Based on this evidence, we conclude that obeying the Law of Demeter is a straightforward approach for developers to reduce the number of bugs in their software.",2011,0, 5197,Assessing Software Quality by Program Clustering and Defect Prediction,"Many empirical studies have shown that defect prediction models built on product metrics can be used to assess the quality of software modules. So far, most methods proposed in this direction predict defects by class or file. In this paper, we propose a novel software defect prediction method based on functional clusters of programs to improve the performance, especially the effort-aware performance, of defect prediction. In the method, we use proper-grained and problem-oriented program clusters as the basic units of defect prediction. To evaluate the effectiveness of the method, we conducted an experimental study on Eclipse 3.0. We found that, compared with class-based models, cluster-based prediction models can significantly improve the recall (from 31.6% to 99.2%) and precision (from 73.8% to 91.6%) of defect prediction. According to the effort-aware evaluation, the effort needed to review code to find half of the total defects can be reduced by 6% if using cluster-based prediction models.",2011,0, 5198,Modularization Metrics: Assessing Package Organization in Legacy Large Object-Oriented Software,"There exist many large object-oriented software systems consisting of several thousands of classes that are organized into several hundreds of packages. In such software systems, classes cannot be considered as units for software modularization. In such a context, packages are not simply class containers, but they also play the role of modules: a package should focus on providing well identified services to the rest of the software system.
Therefore, understanding and assessing package organization is essential for software maintenance tasks. Although many works propose metrics for the quality of a single class and/or the quality of inter-class relationships, few works deal with aspects of the quality of package organization and relationships. We believe that additional investigations are required for assessing package modularity aspects. The goal of this paper is to provide a complementary set of metrics that assess some modularity principles for packages in large legacy object-oriented software: the Information-Hiding, Changeability and Reusability principles. Our metrics are defined with respect to object-oriented dependencies that are caused by inheritance and method calls. We validate our metrics theoretically through a careful study of the mathematical properties of each metric.",2011,0, 5199,ImpactScale: Quantifying change impact to predict faults in large software systems,"In software maintenance, both product metrics and process metrics are required to predict faults effectively. However, process metrics cannot always be collected in practical situations. To enable accurate fault prediction without process metrics, we define a new metric, ImpactScale. ImpactScale is the quantified value of change impact, and the change propagation model for ImpactScale is characterized by probabilistic propagation and relation-sensitive propagation. To evaluate ImpactScale, we predicted faults in two large enterprise systems using effort-aware models and Poisson regression. The results showed that adding ImpactScale to existing product metrics increased the number of detected faults at 10% effort (LOC) by over 50%. ImpactScale also improved the prediction model using existing product metrics and dependency network measures.",2011,0, 5200,Identifying distributed features in SOA by mining dynamic call trees,"The distributed nature of web service computing imposes new challenges on the software maintenance community for localizing different software features and maintaining proper quality of service as the services change over time. In this paper, we propose a new approach for identifying the implementation of web service features in a service oriented architecture (SOA) by mining dynamic call trees that are collected from distributed execution traces. The proposed approach addresses the complexities of SOA-based systems that arise from: features whose locations may change due to changes of input parameters; execution traces that are scattered throughout different service provider platforms; and trace files that contain interleaving of execution traces related to different concurrent service users. In this approach, we execute different groups of feature-specific scenarios and mine the resulting dynamic call trees to spot paths in the code of a service feature, which correspond to a specific user input and system state. This allows us to focus on the implementation of a specific feature in a distributed SOA-based system for different maintenance tasks such as bug localization, structure evaluation, and performance analysis. We define a set of metrics to assess structural properties of a SOA-based system.
The effectiveness and applicability of our approach is demonstrated through a case study consisting of two service-oriented banking systems.",2011,0, 5201,Structural conformance checking with design tests: An evaluation of usability and scalability,"Verifying whether a software meets its functional requirements plays an important role in software development. However, this activity is necessary, but not sufficient to assure software quality. It is also important to check whether the code meets its design specification. Although there exists substantial tool support to assure that a software does what it is supposed to do, verifying whether it conforms to its design remains as an almost completely manual activity. In a previous work, we proposed design tests - test-like programs that automatically check implementations against design rules. Design test is an application of the concept of test to design conformance checking. To support design tests for Java projects, we developed DesignWizard, an API that allows developers to write and execute design tests using the popular JUnit testing framework. In this work, we present a study on the usability and scalability of DesignWizard to support structural conformance checking through design tests. We conducted a qualitative usability evaluation of DesignWizard using the Think Aloud Protocol for APIs. In the experiment, we challenged eleven developers to compose design tests for an open-source software project. We observed that the API meets most developers' expectations and that they had no difficulties to code design rules as design tests. To assess its scalability, we evaluated DesignWizard's use of CPU time and memory consumption. The study indicates that both are linear functions of the size of software under verification.",2011,0, 5202,Graph-based detection of library API imitations,"It has been a common practice nowadays to employ third-party libraries in software projects. Software libraries encapsulate a large number of useful, well-tested and robust functions, so that they can help improve programmers' productivity and program quality. To interact with libraries, programmers only need to invoke Application Programming Interfaces (APIs) exported from libraries. However, programmers do not always use libraries as effectively as expected in their application development. One commonly observed phenomenon is that some library behaviors are re-implemented by client code. Such re-implementation, or imitation, is not just a waste of resource and energy, but its failure to abstract away similar code also tends to make software error-prone. In this paper, we propose a novel approach based on trace subsumption relation of data dependency graphs to detect imitations of library APIs for achieving better software maintainability. Furthermore, we have implemented a prototype of this approach and applied it to ten large real-world open-source projects. The experiments show 313 imitations of explicitly imported libraries with high precision average of 82%, and 116 imitations of static libraries with precision average of 75%.",2011,0, 5203,A probabilistic software quality model,"In order to take the right decisions in estimating the costs and risks of a software change, it is crucial for the developers and managers to be aware of the quality attributes of their software. Maintainability is an important characteristic defined in the ISO/IEC 9126 standard, owing to its direct impact on development costs. 
Although the standard provides definitions for the quality characteristics, it does not define how they should be computed. Not being tangible notions, these characteristics are hardly expected to be representable by a single number. Existing quality models do not deal with ambiguity coming from subjective interpretations of characteristics, which depend on experience, knowledge, and even intuition of experts. This research aims at providing a probabilistic approach for computing high-level quality characteristics, which integrate expert knowledge, and deal with ambiguity at the same time. The presented method copes with goodness functions, which are continuous generalizations of threshold based approaches, i.e. instead of giving a number for the measure of goodness, it provides a continuous function. Two different systems were evaluated using this approach, and the results were compared to the opinions of experts involved in the development. The results show that the quality model values change in accordance with the maintenance activities, and they are in a good correlation with the experts' expectations.",2011,0, 5204,Predicting post-release defects using pre-release field testing results,"Field testing is commonly used to detect faults after the in-house (e.g., alpha) testing of an application is completed. In the field testing, the application is instrumented and used under normal conditions. The occurrences of failures are reported. Developers can analyze and fix the reported failures before the application is released to the market. In the current practice, the Mean Time Between Failures (MTBF) and the Average usage Time (AVT) are metrics that are frequently used to gauge the reliability of the application. However, MTBF and AVT cannot capture the whole pattern of failure occurrences in the field testing of an application. In this paper, we propose three metrics that capture three additional patterns of failure occurrences: the average length of usage time before the occurrence of the first failure, the spread of failures to the majority of users, and the daily rates of failures. In our case study, we use data derived from the pre-release field testing of 18 versions of a large enterprise software for mobile applications to predict the number of post-release defects for up to two years in advance. We demonstrate that the three metrics complement the traditional MTBF and AVT metrics. The proposed metrics can predict the number of post-release defects in a shorter time frame than MTBF and AVT.",2011,0, 5205,Using source code metrics to predict change-prone Java interfaces,"Recent empirical studies have investigated the use of source code metrics to predict the change- and defect-proneness of source code files and classes. While results showed strong correlations and good predictive power of these metrics, they do not distinguish between interface, abstract or concrete classes. In particular, interfaces declare contracts that are meant to remain stable during the evolution of a software system while the implementation in concrete classes is more likely to change. This paper aims at investigating to which extent the existing source code metrics can be used for predicting change-prone Java interfaces. We empirically investigate the correlation between metrics and the number of fine-grained source code changes in interfaces of ten Java open-source systems. Then, we evaluate the metrics to calculate models for predicting change-prone Java interfaces. 
Our results show that the external interface cohesion metric exhibits the strongest correlation with the number of source code changes. This metric also improves the performance of prediction models to classify Java interfaces into change-prone and not change-prone.",2011,0, 5206,Industrial experiences with automated regression testing of a legacy database application,"This paper presents a practical approach and tool (DART) for functional black-box regression testing of complex legacy database applications. Such applications are important to many organizations, but are often difficult to change and consequently prone to regression faults during maintenance. They also tend to be built without particular considerations for testability and can be hard to control and observe. We have therefore devised a practical solution for functional regression testing that captures the changes in database state (due to data manipulations) during the execution of a system under test. The differences in changed database states between consecutive executions of the system under test, on different system versions, can help identify potential regression faults. In order to make the regression test approach scalable for large, complex database applications, classification tree models are used to prioritize test cases. The test case prioritization can be applied to reduce test execution costs and analysis effort. We report on how DART was applied and evaluated on business critical batch jobs in a legacy database application in an industrial setting, namely the Norwegian Tax Accounting System (SOFIE) at the Norwegian Tax Department (NTD). DART has shown promising fault detection capabilities and cost-effectiveness and has contributed to identify many critical regression faults for the past eight releases of SOFIE.",2011,0, 5207,A clustering approach to improving test case prioritization: An industrial case study,"Regression testing is an important activity for controlling the quality of a software product, but it accounts for a large proportion of the costs of software. We believe that an understanding of the underlying relationships in data about software systems, including data correlations and patterns, could provide information that would help improve regression testing techniques. We conjecture that if test cases have common properties, then test cases within the same group may have similar fault detection ability. As an initial approach to investigating the relationships in massive data in software repositories, in this paper, we consider a clustering approach to help improve test case prioritization. We implemented new prioritization techniques that incorporate a clustering approach and utilize code coverage, code complexity, and history data on real faults. To assess our approach, we have designed and conducted empirical studies using an industrial software product, Microsoft Dynamics Ax, which contains real faults. Our results show that test case prioritization that utilizes a clustering approach can improve the effectiveness of test case prioritization techniques.",2011,0, 5208,Code Hot Spot: A tool for extraction and analysis of code change history,"Commercial software development teams have limited time available to focus on improvements to their software. These teams need a way to quickly identify areas of the source code that would benefit from improvement, as well as quantifiable data to defend the selected improvements to management. 
Past research has shown that mining configuration management systems for change information can be useful in determining faulty areas of the code. We present a tool named Code Hot Spot, which mines change records out of Microsoft's TFS configuration management system and creates a report of hot spots. Hot spots are contiguous areas of the code that have higher values of metrics that are indicators of faulty code. We present a study where we use this tool to study projects at ABB to determine areas that need improvement. The resulting data have been used to prioritize areas for additional code reviews and unit testing, as well as identifying change prone areas in need of refactoring.",2011,0, 5209,Relating developers' concepts and artefact vocabulary in a financial software module,"Developers working on unfamiliar systems are challenged to accurately identify where and how high-level concepts are implemented in the source code. Without additional help, concept location can become a tedious, time-consuming and error-prone task. In this paper we study an industrial financial application for which we had access to the user guide, the source code, and some change requests. We compared the relative importance of the domain concepts, as understood by developers, in the user manual and in the source code. We also searched the code for the concepts occurring in change requests, to see if they could point developers to code to be modified. We varied the searches (using exact and stem matching, discarding stop-words, etc.) and present the precision and recall. We discuss the implication of our results for maintenance.",2011,0, 5210,"Precise detection of un-initialized variables in large, real-life COBOL programs in presence of unrealizable paths","Using variables before assigning any values to them are known to result in critical failures in an application. Few compilers warn about the use of some, but not all uses of un-initialized variables. The problem persists, especially in COBOL systems, due to lack of reliable program analysis tools. A critical reason is the presence of large number of control flow paths due to the use of un-structured constructs of the language. We present the problems faced by one of our big clients in his large, COBOL based software system due to the use of un-initialized variables. Using static data and control-flow analysis to detect them, we observed large number of false positives (imprecision) introduced due to the unrealizable paths in the un-structured COBOL code. We propose a solution to address the realizability issue. The solution is based on the summary based function analysis, which is adapted for COBOL Paragraphs and Sections, to handle the perform-through and fall-through control-flow, and is significantly engineered to scale for large programs (single COBOL program extending to tens of thousands of lines). Using this technique, we noted very large reduction, 45% on an average, in the number of false positives for the un-initialized variables.",2011,0, 5211,Source code comprehension strategies and metrics to predict comprehension effort in software maintenance and evolution tasks - an empirical study with industry practitioners,"The goal of this research was to assess the consistency of source code comprehension strategies and comprehension effort estimation metrics, such as LOC, across different types of modification tasks in software maintenance and evolution. 
We conducted an empirical study with software development practitioners using source code from a small paint application written in Java, along with four semantics-preserving modification tasks (refactoring, defect correction) and four semantics-modifying modification tasks (enhancive and modification). Each task has a change specification and corresponding source code patch. The subjects were asked to comprehend the original source code and then judge whether each patch meets the corresponding change specification in the modification task. The subjects recorded the time to comprehend and described the comprehension strategies used and their reason for the patch judgments. The 24 subjects used similar comprehension strategies. The results show that the comprehension strategies and effort estimation metrics are not consistent across different types of modification tasks. The recorded descriptions indicate the subjects scanned through the original source code and the patches when trying to comprehend patches in the semantics-modifying tasks while the subjects only read the source code of the patches in semantics-preserving tasks. An important metric for estimating comprehension efforts of the semantics-modifying tasks is the Code Clone Subtracted from LOC(CCSLOC), while that of semantics-preserving tasks is the number of referred variables.",2011,0, 5212,Evidence-based software process recovery: A post-doctoral view,"Software development processes are often viewed as a panacea for software quality: prescribe a process and a quality project will emerge. Unfortunately this has not been the case, as practitioners are prone to push against processes that they do not perceive as helpful, often much to the dismay of stakeholders such as their managers. Yet practitioners still tend to follow some sort of software development processes regardless of the prescribed processes. Thus if a team wants to recover the software development processes of a project or if team is trying to achieve a certification such as ISO9000 or CMM, the team will be tasked with describing their development processes. Previous research has tended to focus on modifying existing projects in order to extract process related information. In contrast, our approach of software process recovery attempts to analyze software artifacts extracted from software repositories in order to infer the underlying software development processes visible within these software repositories.",2011,0, 5213,Practical combinatorial (t-way) methods for detecting complex faults in regression testing,"Regression testing can be among the most challenging of software assurance tasks because program changes often introduce faults, including unexpected interactions among different parts of the code. Unanticipated interactions may also occur when software is modified for a new platform. Techniques such as pairwise testing are not sufficient for detecting these faults, because empirical evidence shows that some errors are triggered only by the interaction of three, four, or more parameters. However, new algorithms and tools make it possible to generate tests that cover complex combinations of values (2-way to 6-way), or to analyze existing test suites and automatically generate tests that provide combinatorial coverage. 
The key advantage of this approach is that it produces better testing using a fraction of the tests required by other methods.",2011,0, 5214,A novel software-based defect-tolerance approach for application-specific embedded systems,"Traditional approaches for improving yield are based on the use of hardware redundancy (HR), and their benefits are limited for high defect densities due to increasing layout complexities and diminishing return effects. This research is based on the observation that completely correct operation of user programs can be guaranteed while using chips with one or more unrepairable memory modules if software-level techniques satisfy two conditions: (1) defects only affect a few memory cells rather than cause malfunction of the entire memory module, and (2) either we do not use any part of the memory affected by the un-repaired defect, or we do use the affected part, but only in a manner that does not excite the un-repaired defect to cause errors. This paper proposes a software-based defect-tolerance (SBDT) approach in combination with HR to utilize defective memory chips for application-specific systems. The proposed approach requires a known and fixed program and information about defective locations for each memory module; hence this paper focuses on SoCs and other application-specific systems built around processors, such as DSP and graphics processors. We model an application program and defective memory copies as described next.",2011,0, 5215,A case study on the application of an artefact-based requirements engineering approach,"[Background:] Nowadays, industries are facing the problem that the Requirements Engineering (RE) process is highly volatile, since it depends on project influences from the customer's domain or from the process models used. Artefact-based approaches promise to provide guidance in the creation of consistent artefacts in volatile project environments, because these approaches concentrate on the artefacts and their dependencies, instead of prescribing processes. Yet missing, however, is empirical evidence on the advantages of applying artefact-based RE approaches in real projects. [Aim:] We developed a customisable artefact-based RE approach for the domain of business information systems. Our goal is to investigate the advantages and limitations of applying this customisable approach in an industrial context. [Method:] We conduct a case study with our artefact-based RE approach and its customisation procedure. For this, we apply it in a software development project at Siemens following the steps of the customisation procedure. We assess our approach in direct comparison with the previously used RE approach, considering possible improvements in the process and in the quality of the produced artefacts. [Results:] We show that our approach is flexible enough to respond to the individual needs in the analysed project environment. Although the approach is not rated to be more productive, we find an improvement in the syntactic and the semantic quality of the created artefacts. [Conclusions:] We close a gap in the RE literature by giving empirical evidence on the advantages of artefact orientation in RE in an industrial setting.",2011,0, 5216,An empirical validation of FindBugs issues related to defects,"Background: Effective use of bug-finding tools promises to speed up the process of source code verification and to move a portion of discovered defects from the testing to the coding phase.
However, many problems related to their usage, especially the large number of false positives, could easily hinder the potential benefits of such tools. Aims: Assess the percentage and type of issues of a popular bug-finding tool (FindBugs) that are actual defects. Method: We analyzed 301 Java projects developed at a university with FindBugs, collecting the issues signalled on the source code. Afterwards, we checked the precision of the issues with information on changes, and we ranked and validated them using both manual inspection and validation with test failures. Results: We observed that a limited set of issues have high precision and, conversely, we identified those issues characterized by low precision. We compared our findings first with our previous experiment and then with related work: the results are consistent with both of them. Conclusions: Since our and other empirical studies demonstrated that few issues are related to real defects with high precision, developers could enable only them (or prioritize them), reducing the information overload of FindBugs and having the possibility to discover defects earlier. Furthermore, the technique presented in the paper can be applied to other tools on a code base with tests to find issues with high precision that can be checked on code in production to find defects earlier.",2011,0, 5217,Empirical data-based modeling of teaching material sharing network dynamics,"Teaching material sharing networks (TMSN) may enrich teachers' teaching capacity and quality through sharing. To effectively manage a network and its evolution, managers have to characterize members' behaviors, such as joining/leaving the network (membership) and uploading/downloading teaching materials (sharing), and network state dynamics of membership, teaching material (TM) quantity and quality. The challenge presented in this paper is to design a methodology for modeling individual behaviors and network dynamics to predict network evolution based on empirical data of the network. SCTNet, a TMSN among elementary school teachers, serves as an exemplary network for designing the modeling methodology. The novelty of the design is threefold. i) Probabilistic individual behaviors are modeled to capture individual differences. In particular, the features of the probabilities with respect to states from data are slow start, fast growth, and saturation; thus the Bass diffusion model is adopted to model how the probabilities are affected by network states. ii) How network states evolve over time with respect to the current states and individual behavior probabilities is then described by a set of Bass-Model embedded difference equations. iii) Because of limited empirical data for modeling, a quasi-bootstrap based nonlinear least squares (NLS) method is used to estimate Bass model parameters of the behavior probabilities along with the network evolution. User behaviors and network dynamics thus obtained were validated via an agent-based simulation (ABS), and the results show that the accuracy of membership evolution reproduced by ABS matches the empirical data of SCTNet by more than 95%. This proven modeling accuracy sheds light on better TMSN management.",2011,0, 5218,Detecting emergent behavior in distributed systems using an ontology based methodology,"The lack of central control makes the design of distributed software systems a challenging task because of possible unwanted behavior at runtime, commonly known as emergent behavior.
Developing methodologies to detect emergent behavior prior to the implementation stage of the system can lead to huge savings in time and cost. However, manual review of requirements and design documents for real-life systems is inefficient and error prone; thus automation of analysis methodologies is considered greatly beneficial. This paper proposes the utilization of an ontology-based approach to analyze system requirements expressed by a set of message sequence charts (MSC). This methodology involves building a domain-specific ontology of the system, and examines the requirements based on this ontology. The advantages of this approach in comparison with other methodologies are its consistency and increased level of automation. The effectiveness of this approach is explained using a case study of an IntelliDrive system.",2011,0, 5219,Predicting software defects: A cost-sensitive approach,"Finding software defects is a complex and slow task that consumes most of the development budget. In order to reduce the cost of test activities, many studies have used machine learning to predict whether a module is defect-prone or not. Defect detection is a cost-sensitive task whereby a misclassification is more costly than a correct classification. Yet, most of these studies do not consider classification costs in the prediction models. This paper introduces an empirical method based on COCOMO (COnstructive COst MOdel) that aims to assess the cost of each classifier decision. This method creates a cost matrix that is used in conjunction with a threshold-moving approach in a ROC (Receiver Operating Characteristic) curve to select the best operating point regarding cost. Public data sets from the NASA (National Aeronautics and Space Administration) IV&V (Independent Verification & Validation) Facility Metrics Data Program (MDP) are used to train the classifiers and to provide some development effort information. The experiments are carried out through a methodology that complies with validation and reproducibility requirements. The experimental results show that the proposed method is efficient and allows the interpretation of classifier performance in terms of tangible cost values.",2011,0, 5220,Calculating the strength of ties of a social network in a semantic search system using hidden Markov models,"The Web of information has grown to millions of independently evolved decentralized information repositories. Decentralization of the web has advantages such as no single point of failure and improved scalability. Decentralization introduces challenges such as ontological, communication and negotiation complexity. This has given rise to research to enhance the infrastructure of the Web by adding semantics to search systems. In this research we view semantic search as an enabling technique for general Knowledge Management (KM) solutions. We argue that semantic integration, semantic search and agent technology are fundamental components of an efficient KM solution. This research aims to deliver a proof-of-concept for semantic search. A prototype agent-based semantic search system supported by ontological concept learning and content annotation is developed.
In this prototype, software agents deploy ontologies to organize contents in their corresponding repositories; improve their own search capability by finding relevant peers and learning new concepts from each other; conduct searches on behalf of users and deliver customized results to them; and encapsulate the complexity of the search and concept learning process from the users. A unique feature of this system is that the semantic search agents form a social network. We use a Hidden Markov Model (HMM) to calculate the tie strengths between agents and their corresponding ontologies. The query is forwarded to the agents with stronger ties, and relevant documents are returned. We have shown that this improves the search quality. In this paper, we illustrate the factors that affect the strength of the ties and how these factors can be used by the HMM to calculate the overall tie strength.",2011,0, 5221,Design and creation of Dysarthric Speech Database for development of QoLT software technology,"In this paper we introduce the creation of a speech database for developing speech technology for disabled persons, carried out as part of a national program to improve the quality of life of Korean people. We report on the creation of a speech database of a total of 160 persons, covering the prompting items, design, etc. needed to develop an embedded keyword-spotting speech recognition system tailored for persons with articulation disabilities. The created database is being used by the technology development team in the national program to study the phonetic characteristics of the different types of disabled persons, develop an automatic method to assess degrees of disability, investigate the phonetic features of speech of the disabled, and design and implement a software prototype for personal embedded speech recognition systems adapted to disabled persons.",2011,0, 5222,Intelligent alarm management,"An ergonomic problem for plant operators has appeared in modern electronic control systems, in which configuring an alarm is very easy. We present a methodology and an intelligent software tool to manage alarms and perform early fault detection and diagnosis in industrial processes, integrating three techniques to detect and diagnose faults. The three techniques use information available in industrial environments: the alarms of the electronic control system; the fault knowledge base of the process, formulated in terms of rules; and a simplified model used to detect disturbances in the process. A prototype in a Fluid Catalytic Cracking process is shown.",2011,0, 5223,Towards identifying OS-level anomalies to detect application software failures,"The next generation of critical systems, namely complex Critical Infrastructures (LCCIs), require efficient runtime management, reconfiguration strategies, and the ability to take decisions on the basis of current and past behavior of the system.
In this paper we investigate the impact of the OS and the monitored resources on the quality of the detection, by executing experiments on Windows Server 2008. Experimental results allow us to identify which of the two operating systems provides monitoring facilities best suited to implement the anomaly detection algorithms that we have considered. Moreover, a numerical sensitivity analysis of the detector parameters is carried out to understand the impact of their setting on the performance.",2011,0, 5225,A multi agent system model for evaluating quality service of clinical engineering department,"Biomedical technology is strategically important to the operational effectiveness of healthcare facilities. As a consequence, clinical engineers have become an essential figure in the hospital environment: their role in the maintenance, support, evaluation, integration, and assessment of new, advanced and complex technologies, from the point of view of patient safety and cost reduction, has become indispensable. For this reason, nations have begun to establish Clinical Engineering Departments, but, unfortunately, in a very diversified and fragmented way. So, a tool able to evaluate and improve the quality of current services is needed. Hence, this work builds a model that acts as a reference tool to assess the quality of an existing Clinical Engineering Department, underlining its deficient aspects and suggesting improvements.",2011,0, 5226,Mosaicing of optical microscope imagery based on visual information,"Tools for high-throughput high-content image analysis can simplify and expedite different stages of biological experiments, by processing and combining information taken at different times and in different areas of the culture. Among the most important tools in this field, image mosaicing methods provide the researcher with a global view of the biological sample in a unique image. Current approaches rely on known motorized x-y stage offsets and work in batch mode, thus jeopardizing the interaction between the microscopic system and the researcher during the investigation of the cell culture. In this work we present an approach for mosaicing of optical microscope imagery, based on local image registration and exploiting visual information only. To our knowledge, this is the first approach suitable to work on-line with non-motorized microscopes. To assess our method, the quality of the resulting mosaics is quantitatively evaluated through purpose-built image metrics.
Experimental results show the importance of model selection issues and confirm the soundness of our approach.",2011,0, 5227,SOC HW/SW co-verification technology for application of FPGA test and diagnosis,"Process of configuration and fault scan is required to be repeated many times before all resources of a FPGA-under-test are tested and diagnosed. Both FPGA test system and test schemes have been studied and presented in the keynote. Construction of the in-house developed FPGA test system is based on SOC HW/SW co-verification technology. Algorithms for FPGA test and diagnosis covering all FPGA resources such as, configurable logic blocks (CLBs), interconnect resources (IRs), input/output blocks (IOBs), wide edge decoder, et al with minimum configuration numbers are also discussed. Not only multiple faults in FPGA can be detected, but location and type of the multiple faults can also be determined by the FPGA test system and associated test schemes. Furthermore, 100% fault coverage can be achieved in experiment.",2011,0, 5228,Combating class imbalance problem in semi-supervised defect detection,"Detection of defect-prone software modules is an important topic in software quality research, and widely studied under enough defect data circumstance. An improved semi-supervised learning approach for defect detection involving class imbalanced and limited labeled data problem has been proposed. This approach employs random under-sampling technique to resample the original training set and updating training set in each round for co-train style algorithm. In comparison with conventional machine learning approaches, our method has significant superior performance in the aspect of AUC (area under the receiver operating characteristic) metric. Experimental results also show that with the proposed learning approach, it is possible to design better method to tackle the class imbalanced problem in semi-supervised learning.",2011,0, 5229,On the Effectiveness of Contracts as Test Oracles in the Detection and Diagnosis of Race Conditions and Deadlocks in Concurrent Object-Oriented Software,"The idea behind Design by Contract (DbC) is that a method defines a contract stating the requirements a client needs to fulfill to use it, the precondition, and the properties it ensures after its execution, the post condition. Though there exists ample support for DbC for sequential programs, applying DbC to concurrent programs presents several challenges. We have proposed a solution to these challenges in the context of Java as programming language and the Java Modeling language as specification language. This paper presents our findings when applying our DbC technique on an industrial case study to evaluate the ability of contract-based, runtime assertion checking code at detecting and diagnosing race conditions and deadlocks during system testing. The case study is a highly concurrent industrial system from the telecommunications domain, with actual faults. It is the first work to systematically investigate the impact of contract assertions for the detection of race conditions and deadlocks, along with functional properties, in an industrial system.",2011,0, 5230,A Qualitative Study of Open Source Software Development: The Open EMR Project,"Open Source software is competing successfully in many areas. The commercial sector is recognizing the benefits offered by Open Source development methods that lead to high quality software. Can these benefits be realized in specialized domains where expertise is rare? 
This study examined discussion forums of an Open Source project in a particular specialized application domain - electronic medical records - to see how development roles are carried out, and by whom. We found through a qualitative analysis that the core developers in this system include doctors and clinicians who also use the product. We also found that the size of the community associated with the project is an order of magnitude smaller than predicted, yet still maintains a high degree of responsiveness to issues raised by users. The implication is that a few experts and a small core of dedicated programmers can achieve success using an Open Source approach in a specialized domain.",2011,0, 5231,Exploring Software Measures to Assess Program Comprehension,"Software measures are often used to assess program comprehension, although their applicability is discussed controversially. Often, their application is based on plausibility arguments, which, however, is not sufficient to decide whether software measures are good predictors for program comprehension. Our goal is to evaluate whether and how software measures and program comprehension correlate. To this end, we carefully designed an experiment. We used four different measures that are often used to judge the quality of source code: complexity, lines of code, concern attributes, and concern operations. We measured how subjects understood two comparable software systems that differ in their implementation, such that one implementation promised considerable benefits in terms of better software measures. We did not observe a difference in program comprehension of our subjects as the software measures suggested it. To explore how software measures and program comprehension could correlate, we used several variants of computing the software measures. This brought them closer to our observed result, however, not as close as to confirm a relationship between software measures and program comprehension. Having failed to establish a relationship, we present our findings as an open issue to the community and initiate a discussion on the role of software measures as comprehensibility predictors.",2011,0, 5232,Network Versus Code Metrics to Predict Defects: A Replication Study,"Several defect prediction models have been proposed to identify which entities in a software system are likely to have defects before its release. This paper presents a replication of one such study conducted by Zimmermann and Nagappan on Windows Server 2003 where the authors leveraged dependency relationships between software entities captured using social network metrics to predict whether they are likely to have defects. They found that network metrics perform significantly better than source code metrics at predicting defects. In order to corroborate the generality of their findings, we replicate their study on three open source Java projects, viz., JRuby, ArgoUML, and Eclipse. Our results are in agreement with the original study by Zimmermann and Nagappan when using a similar experimental setup as them (random sampling). However, when we evaluated the metrics using setups more suited for industrial use -- forward-release and cross-project prediction -- we found network metrics to offer no vantage over code metrics. 
Moreover, code metrics may be preferable to network metrics considering the data is easier to collect and we used only 8 code metrics compared to approximately 58 network metrics.",2011,0, 5233,Measuring Architectural Change for Defect Estimation and Localization,"While there are many software metrics measuring the architecture of a system and its quality, few are able to assess architectural change qualitatively. Given the sheer size and complexity of current software systems, modifying the architecture of a system can have severe, unintended consequences. We present a method to measure architectural change by way of structural distance and show its strong relationship to defect incidence. We show the validity and potential of the approach in an exploratory analysis of the history and evolution of the Spring Framework. Using other, public datasets, we corroborate the results of our analysis.",2011,0, 5234,Handling Estimation Uncertainty with Bootstrapping: Empirical Evaluation in the Context of Hybrid Prediction Methods,"Reliable predictions are essential for managing software projects with respect to cost and quality. Several studies have shown that hybrid prediction models combining causal models with Monte Carlo simulation are especially successful in addressing the needs and constraints of today's software industry: They deal with limited measurement data and, additionally, make use of expert knowledge. Moreover, instead of providing merely point estimates, they support the handling of estimation uncertainty, e.g., estimating the probability of falling below or exceeding a specific threshold. Although existing methods do well in terms of handling uncertainty of information, we can show that they leave uncertainty coming from imperfect modeling largely unaddressed. One of the consequences is that they probably provide over-confident uncertainty estimates. This paper presents a possible solution by integrating bootstrapping into the existing methods. In order to evaluate whether this solution does not only theoretically improve the estimates but also has a practical impact on the quality of the results, we evaluated the solution in an empirical study using data from more than sixty projects and six estimation models from different domains and application areas. The results indicate that the uncertainty estimates of currently used models are not realistic and can be significantly improved by the proposed solution.",2011,0, 5235,Inferring Skill from Tests of Programming Performance: Combining Time and Quality,"The skills of software developers are important to the success of software projects. Also, when studying the general effect of a tool or method, it is important to control for individual differences in skill. However, the way skill is assessed is often ad hoc, or based on unvalidated methods. According to established test theory, validated tests of skill should infer skill levels from well-defined performance measures on multiple, small, representative tasks. In this respect, we show how time and quality, which are often analyzed separately, can be combined as task performance and subsequently be aggregated as an approximation of skill. Our results show significant positive correlations between our proposed measures of skill and other variables, such as seniority, lines of code written, and self-evaluated expertise. 
The method for combining time and quality is a promising first step to measuring programming skill in both industry and research settings.",2011,0, 5236,Modeling the Number of Active Software Users,"More and more software applications are developed within a software ecosystem (SECO), such as the Face book ecosystem and the iPhone AppStore. A core asset of a software ecosystem is its users, and the behavior of the users strongly affects the decisions of software vendors. The number of active users reflects user satisfaction and quality of the applications in a SECO. However, we can hardly find any literature about the number of active software users. Because software users are one of the most important assets of a software business, this information is very sensitive. In this paper, we analyzed the traces of software application users within a large scale software ecosystem with millions of active users. We identified useful patterns of user behavior, and proposed models that help to understand the number of active application users. The model we proposed better predicts the number of active users than just looking at the traditional retention rate. It also provides a fast way to monitor user satisfaction of online software applications. We have therefore provided an alternative way for SECO platform vendors to identify rising or falling applications, and for third party application vendors to identify risks and opportunity of their products.",2011,0, 5237,What are Problem Causes of Software Projects? Data of Root Cause Analysis at Four Software Companies,"Root cause analysis (RCA) is a structured investigation of a problem to detect the causes that need to be prevented. We applied ARCA, an RCA method, to target problems of four medium-sized software companies and collected 648 causes of software engineering problems. Thereafter, we applied grounded theory to the causes to study their types and related process areas. We detected 14 types of causes in 6 process areas. Our results indicate that development work and software testing are the most common process areas, whereas lack of instructions and experiences, insufficient work practices, low quality task output, task difficulty, and challenging existing product are the most common types of the causes. As the types of causes are evenly distributed between the cases, we hypothesize that the distributions could be generalizable. Finally, we found that only 2.5% of the causes are related to software development tools that are widely investigated in software engineering research.",2011,0, 5238,Obtaining Thresholds for the Effectiveness of Business Process Mining,"Business process mining is a powerful tool to retrieve the valuable business knowledge embedded in existing information systems. The effectiveness of this kind of proposal is usually evaluated using recall and precision, which respectively measure the completeness and exactness of the retrieved business processes. Since the effectiveness assessment of business process mining is a difficult and error-prone activity, the main hypothesis of this work studies the possibility of obtaining thresholds to determine when recall and precision values are appropriate. The business process mining technique under study is MARBLE, a model-driven framework to retrieve business processes from existing information systems. The Bender method was applied to obtain the thresholds of the recall and precision measures. 
The experimental data used as input were obtained from a set of 44 business processes retrieved with MARBLE through a family of case studies carried out over the last two years. The study provides thresholds for the recall and precision measures, which facilitates the interpretation of their values by means of five linguistic labels that range from low to very high. As a result, recall must be high (with at least a medium precision above 0.56), and precision must also be high (with at least a low recall of 0.70) to ensure that business processes were recovered (by using MARBLE) with an effectiveness value above 0.65. The thresholds allowed us to ascertain with more confidence whether MARBLE can effectively mine business processes from existing information systems. In addition, the provided results can be used as reference values to compare MARBLE with other similar business process mining techniques.",2011,0, 5239,Predicting software black-box defects using stacked generalization,"Defect number prediction is essential to make a key decision on when to stop testing. For more applicable and accurate prediction, we propose an ensemble prediction model based on stacked generalization (PMoSG), and use it to predict the number of defects detected by third-party black-box testing. Taking the characteristics of black-box defects and the causal relationships among factors which influence defect detection into account, Bayesian networks and other numeric prediction models are employed in our ensemble models. Experimental results show that our PMoSG model achieves a significant improvement in the accuracy of numeric defect prediction over any individual model, and achieves the best prediction accuracy when using the LWL (Locally Weighted Learning) method as the level-1 model.",2011,0, 5240,Automated extraction of architecture-level performance models of distributed component-based systems,"Modern enterprise applications have to satisfy increasingly stringent Quality-of-Service requirements. To ensure that a system meets its performance requirements, the ability to predict its performance under different configurations and workloads is essential. Architecture-level performance models describe performance-relevant aspects of software architectures and execution environments, allowing the evaluation of different usage profiles as well as system deployment and configuration options. However, building performance models manually requires a lot of time and effort. In this paper, we present a novel automated method for the extraction of architecture-level performance models of distributed component-based systems, based on monitoring data collected at run-time. The method is validated in a case study with the industry-standard SPECjEnterprise2010 Enterprise Java benchmark, a representative software system executed in a realistic environment. The obtained performance predictions match the measurements on the real system within an error margin of mostly 10-20 percent.",2011,0, 5241,Iterative mining of resource-releasing specifications,"Software systems commonly use resources such as network connections or external file handles. Once they finish using these resources, software systems must release them by explicitly calling specific resource-releasing API methods. Failing to release resources properly could result in resource leaks or even outright system failures. Existing verification techniques can analyze software systems to detect defects related to failing to release resources.
However, these techniques require resource-releasing specifications for specifying which API method acquires/releases certain resources, and such specifications are not well documented in practice, due to the large amount of manual effort required to document them. To address this issue, we propose an iterative mining approach, called RRFinder, to automatically mining resource-releasing specifications for API libraries in the form of (resource-acquiring, resource-releasing) API method pairs. RRFinder first identifies resource-releasing API methods, for which RRFinder then identifies the corresponding resource-acquiring API methods. To identify resource-releasing API methods, RRFinder performs an iterative process including three steps: model-based prediction, call-graph-based propagation, and class-hierarchy-based propagation. From heterogeneous information (e.g., source code, natural language), the model-based prediction employs a classification model to predict the likelihood that an API method is a resource-releasing method. The call-graph-based and class-hierarchy-based propagation propagates the likelihood information across methods. We evaluated RRFinder on eight open source libraries, and the results show that RRFinder achieved an average recall of 94.0% with precision of 86.6% in mining resource-releasing specifications, and the mined specifications are useful in detecting resource leak defects.",2011,0, 5242,Towards dynamic backward slicing of model transformations,"Model transformations are frequently used means for automating software development in various domains to improve quality and reduce production costs. Debugging of model transformations often necessitates identifying parts of the transformation program and the transformed models that have causal dependence on a selected statement. In traditional programming environments, program slicing techniques are widely used to calculate control and data dependencies between the statements of the program. Here we introduce program slicing for model transformations where the main challenge is to simultaneously assess data and control dependencies over the transformation program and the underlying models of the transformation. In this paper, we present a dynamic backward slicing approach for both model transformation programs and their transformed models based on automatically generated execution trace models of transformations.",2011,0, 5243,Observations on the connectedness between requirements-to-code traces and calling relationships for trace validation,"Traces between requirements and code reveal where requirements are implemented. Such traces are essential for code understanding and change management. Unfortunately, the handling of traces is highly error prone, in part due to the informal nature of requirements. This paper discusses observations on the connectedness between requirements-to-code traces and calling relationships within the source code. These observations are based on the empirical evaluation of four case study systems covering 150 KLOC and 59 sample requirements. We found that certain patterns of connectedness have high or low likelihoods of occurring. 
These patterns can thus be used to confirm or reject existing traceability - hence they are useful for validating requirements-to-code traces.",2011,0, 5244,Stateful testing: Finding more errors in code and contracts,"Automated random testing has shown to be an effective approach to finding faults but still faces a major unsolved issue: how to generate test inputs diverse enough to find many faults and find them quickly. Stateful testing, the automated testing technique introduced in this article, generates new test cases that improve an existing test suite. The generated test cases are designed to violate the dynamically inferred contracts (invariants) characterizing the existing test suite. As a consequence, they are in a good position to detect new faults, and also to improve the accuracy of the inferred contracts by discovering those that are unsound. Experiments on 13 data structure classes totalling over 28,000 lines of code demonstrate the effectiveness of stateful testing in improving over the results of long sessions of random testing: stateful testing found 68.4% new faults and improved the accuracy of automatically inferred contracts to over 99%, with just a 7% time overhead.",2011,0, 5245,Generating essential user interface prototypes to validate requirements,"Requirements need to be validated at an early stage of analysis to address inconsistency and incompleteness issues. Capturing requirements usually involves natural language analysis, which is often imprecise and error prone, or translation into formal models, which are difficult for non-technical stakeholders to understand and use. Users often best understand proposed software systems from the likely user interface they will present. To this end we describe novel automated tool support for capturing requirements as Essential Use Cases and translating these into Essential User Interface low-fidelity rapid prototypes. We describe our automated tool supporting requirements capture, lo-fi user interface prototype generation and consistency management.",2011,0, 5246,CloneDifferentiator: Analyzing clones by differentiation,"Clone detection provides a scalable and efficient way to detect similar code fragments. But it offers limited explanation of differences of functions performed by clones and variations of control and data flows of clones. We refer to such differences as semantic differences of clones. Understanding these semantic differences is essential to correctly interpret cloning information and perform maintenance tasks on clones. Manual analysis of semantic differences of clones is complicated and error-prone. In the paper, we present our clone analysis tool, called Clone-Differentiator. Our tool automatically characterizes clones returned by a clone detector by differentiating Program Dependence Graphs (PDGs) of clones. CloneDifferentiator is able to provide a precise characterization of semantic differences of clones. It can provide an effective means of analyzing clones in a task oriented manner.",2011,0, 5247,Automatically detecting the quality of the query and its implications in IR-based concept location,"Concept location is an essential task during software maintenance and in particular program comprehension activities. One of the approaches to this task is based on leveraging the lexical information found in the source code by means of Information Retrieval techniques. All IR-based approaches to concept location are highly dependent on the queries written by the users. 
An IR approach, even though good on average, might fail when the input query is poor. Currently there is no way to tell when a query leads to poor results for IR-based concept location, unless a considerable effort is put into analyzing the results after the fact. We propose an approach based on recent advances in the field of IR research, which aims at automatically determining the difficulty a query poses to an IR-based concept location technique. We plan to evaluate several models and relate them to IR performance metrics.",2011,0, 5248,Generating program inputs for database application testing,"Testing is essential for quality assurance of database applications. Achieving high code coverage of the database application is important in testing. In practice, there may exist a copy of live databases that can be used for database application testing. Using an existing database state is desirable since it tends to be representative of real-world objects' characteristics, helping detect faults that could cause failures in real-world settings. However, to cover a specific program code portion (e.g., block), appropriate program inputs also need to be generated for the given existing database state. To address this issue, in this paper, we propose a novel approach that generates program inputs for achieving high code coverage of a database application, given an existing database state. Our approach uses symbolic execution to track how program inputs are transformed before appearing in the executed SQL queries and how the constraints on query results affect the application's execution. One significant challenge in our problem context is the gap between program-input constraints derived from the program and from the given existing database state; satisfying both types of constraints is needed to cover a specific program code portion. Our approach includes novel query formulation to bridge this gap. Our approach is loosely integrated into Pex, a state-of-the-art white-box testing tool for .NET from Microsoft Research. Empirical evaluations on two real database applications show that our approach assists Pex to generate program inputs that achieve higher code coverage than the program inputs generated by Pex without our approach's assistance.",2011,0, 5249,Prioritizing tests for fault localization through ambiguity group reduction,"In practically all development processes, regression tests are used to detect the presence of faults after a modification. If faults are detected, a fault localization algorithm can be used to reduce the manual inspection cost. However, while using test case prioritization to enhance the rate of fault detection of the test suite (e.g., statement coverage), the diagnostic information gain per test is not optimal, which results in needless inspection cost during diagnosis. We present RAPTOR, a test prioritization algorithm for fault localization, based on reducing the similarity between statement execution patterns as the testing progresses. Unlike previous diagnostic prioritization algorithms, RAPTOR does not require false negative information, and is much less complex. 
Experimental results from the Software Infrastructure Repository's benchmarks show that RAPTOR is the best technique under realistic conditions, with average cost reductions of 40% with respect to the next best technique, with negligible impact on fault detection capability.",2011,0, 5250,System design for PCB defects detection based on AOI technology,"A design of PCB automatic defects detection system based on AOI technology is presented. The hardware design is emphatically introduced including illumination module, image acquisition module, motion control unit, PC, graphic display device and operation unit. Simultaneously, the software design is briefly explained. This design is a non-contact PCB defects detection technology which can not only detect open circuit and short circuit defects, but also can detect wire gaps, voids, scratch defects etc. The highest resolution of the design is 15m and the detection success rate is over 95%.",2011,0, 5251,New approach to determine the critical number of failure in software systems,"Software-Engineering is very important today. In industry (specifically by software critical system) it is important to produce high reliable software, i.e. software with low proportion of faults. To produce such reliable software, a long handling process is required, and because this process consumes a large amount of time and resources to achieve the desired reliability goals it is useful to use Software Reliability Stochastic Models to predict the required software testing time. In this paper a new approach to reflecting the residual number of critical failures in software-systems is introduced. There are currently very few processes enabling us to predict the reliability of the critical failures or the critical failure rate for critical systems. Furthermore, we will focus on distinguishing the critical failures in the software. We will thus distinguish both critical as well as non-critical failures in the Software. Therefore it is important to divide the process into two classes, detection- and correction class. To develop an approach it is necessary to determine corresponding distribution functions and model assumptions.",2011,0, 5252,Texture feature based fingerprint recognition for low quality images,"Fingerprint-based identification is one of the most well-known and publicized biometrics for personal identification. Extracting features out of poor quality prints is the most challenging problem faced in this area. In this paper, the texture feature based approach for fingerprint recognition using Discrete Wavelet Transform (DWT) is developed to identify the low quality fingerprint from inked-printed images on paper. The fingerprint image from paper is very poor quality image and sometimes it is complex with fabric background. Firstly, a center point area of the fingerprint is detected and keeping the Core Point as center point, the image of size w x w is cropped. Gabor filtering is applied for fingerprint enhancement over the orientation image. Finally, the texture features are extracted by analyzing the fingerprint with Discrete Wavelet Transform (DWT) and Euclidean distance metric is used as similarity measure. The accuracy is improved up to 98.98%.",2011,0, 5253,System Monitoring with Metric-Correlation Models,"Modern software systems expose management metrics to help track their health. Recently, it was demonstrated that correlations among these metrics allow errors to be detected and their causes localized. 
Prior research shows that linear models can capture many of these correlations. However, our research shows that several factors may prevent linear models from accurately describing correlations, even if the underlying relationship is linear. Common phenomena we have observed include relationships that evolve, relationships with missing variables, and heterogeneous residual variance of the correlated metrics. Usually these phenomena can be discovered by testing for heteroscedasticity of the underlying linear models. Such behaviour violates the assumptions of simple linear regression, which thus fail to describe system dynamics correctly. In this paper we address the above challenges by employing efficient variants of Ordinary Least Squares regression models. In addition, we automate the process of error detection by introducing the Wilcoxon Rank-Sum test after proper correlations modeling. We validate our models using a realistic Java-Enterprise-Edition application. Using fault-injection experiments we show that our improved models capture system behavior accurately.",2011,0, 5254,Thermal analysis and experimental validation on cooling efficiency of thin film transistor liquid crystal display (TFT-LCD) panels,"This research explored the thermal analysis and modeling of a 32 thin film transistor liquid crystal display (TFT-LCD) panel, in the purpose of making possible improvements in cooling efficiencies. The illumination of the panel was insured by 180 light emitting diodes (LEDs) located at the top and bottom edges of the panels. These LEDs dissipate high heat flux at low thermal resistance. Hence, in order to insure good image quality in panels and long service life, an adequate thermal management is necessary. For this purpose, a commercially available computational fluid dynamics (CFD) simulation software FloEFD was used to predict the temperature distribution. This thermal prediction by computational method was validated by an experimental thermal analysis by attaching 10 thermocouples on the back cover of the panel and measuring the temperatures. Also, thermal camera images of the panel by FLIR Thermacam SC 2000 test device were also analyzed.",2011,0, 5255,A Self Healing Action Composition Agent,"The establishment of a self-healing agent has received much interest in multiple domains such as : Web services, production supply chain, transport systems, etc. This agent has a set of actions. Its role is to respond to user request with a plan of composed actions, to on-line diagnose the status of the plan execution and to automatically repair the plan when a fault is detected during the plan's execution. To this end, three main areas are studied and modeled for the establishment of such an agent : composition, diagnosis and repair.",2011,0, 5256,Machine-Learning Models for Software Quality: A Compromise between Performance and Intelligibility,"Building powerful machine-learning assessment models is an important achievement of empirical software engineering research, but it is not the only one. Intelligibility of such models is also needed, especially, in a domain, software engineering, where exploration and knowledge capture is still a challenge. Several algorithms, belonging to various machine-learning approaches, are selected and run on software data collected from medium size applications. Some of these approaches produce models with very high quantitative performances, others give interpretable, intelligible, and """"glass-box"""" models that are very complementary. 
We consider that the integration of both, in automated decision-making systems for assessing software product quality, is desirable to reach a compromise between performance and intelligibility.",2011,0, 5257,Impact of Data Sampling on Stability of Feature Selection for Software Measurement Data,"Software defect prediction can be considered a binary classification problem. Generally, practitioners utilize historical software data, including metric and fault data collected during the software development process, to build a classification model and then employ this model to predict new program modules as either fault-prone (fp) or not-fault-prone (nfp). Limited project resources can then be allocated according to the prediction results by (for example) assigning more reviews and testing to the modules predicted to be potentially defective. Two challenges often come with the modeling process: (1) high-dimensionality of software measurement data and (2) skewed or imbalanced distributions between the two types of modules (fp and nfp) in those datasets. To overcome these problems, extensive studies have been dedicated towards improving the quality of training data. The commonly used techniques are feature selection and data sampling. Usually, researchers focus on evaluating classification performance after the training data is modified. The present study assesses a feature selection technique from a different perspective. We are more interested in studying the stability of a feature selection method, especially in understanding the impact of data sampling techniques on the stability of feature selection when using the sampled data. Some interesting findings are found based on two case studies performed on datasets from two real-world software projects.",2011,0, 5258,Automatic Construction of Deployment Descriptors for Web Applications,"Web applications are a kind of component based distributed systems, and these components are deployed in various containers and engines. In web applications, runtime deployment descriptors are corresponding to vendor specific platforms. Due to the complexities of applications and environments, it is tedious and error-prone for deployers to create runtime deployment descriptors manually. In this paper, we propose a generalized approach to promote the automation degree in runtime deployment descriptors construction. We view deployment descriptor schemas as models and create transformation relations between schema elements by a comprehensive matching method. Transformation code, in form of XSLT, can be generated base on parameterized templates. We implement a prototype and evaluate the effects of this approach with some experiments finally.",2011,0, 5259,Computing Properties of Large Scalable and Fault-Tolerant Logical Networks,"As the number of processors embedded in high performance computing platforms becomes higher and higher, it is vital to force the developers to enhance the scalability of their codes in order to exploit all the resources of the platforms. This often requires new algorithms, techniques and methods for code development that add to the application code new properties: the presence of faults is no more an occasional event but a challenge. Scalability and Fault-Tolerance issues are also present in hidden part of any platform: the overlay network that is necessary to build for controlling the application or in the runtime system support for messaging which is also required to be scalable and fault tolerant. 
In this paper, we focus on the computational challenges to experiment with large scale (many millions of nodes) logical topologies. We compute Fault-Tolerant properties of different variants of Binomial Graphs (BMG) that are generated at random. For instance, we exhibit interesting properties regarding the number of links regarding some desired Fault-Tolerant properties and we compare different metrics with the Binomial Graph structure as the reference structure. A software tool has been developed for this study and we show experimental results with topologies containing 21000 nodes. We also explain the computational challenge when we deal with such large scale topologies and we introduce various probabilistic algorithms to solve the problems of computing the conventional metrics.",2011,0, 5260,Device register classes for embedded systems,"A device register is the view any peripheral device presents to the software world. Low-level routines in typical embedded systems, e.g., device drivers, communicate with devices by reading and writing device registers. Many processors use memory-mapped I/O, which assigns device registers to fixed addresses in conventional memory. To high level languages like C or C++, memory mapped devices behave like ordinary data objects to some extent. Programs use assignment operators to read values from or write values to memory mapped device registers. Unfortunately, traditional approaches for organizing and accessing memory-mapped devices are inconvenient and error prone. In this paper, a new way of writing and using C++ classes which encapsulate memory mapped device registers is described. The principle is extended to handle I/O mapped device registers for coprocessors. A Device Register Class description language is also described.",2011,0, 5261,Design and development of the CO2 enriched Seawater Distribution System,"The kinetics of the reaction that occurs when CO2 and seawater are in contact is a complex function of temperature, alkalinity, final pH and TCO2 which taken together determine the time required for complete equilibrium. This reaction is extremely important to the study of Ocean Acidification (OA) and is the critical technical driver in the Monterey Bay Aquarium Research Institute's (MBARI) Free Ocean CO2 Enrichment (FOCE) experiments. The deep water FOCE science experiments are conducted at depths beyond scuba diver reach and demand that a valid perturbation experiment operate at a stable yet naturally fluctuating lower pH condition and avoid large or rapid pH variation as well as incomplete reactions, when we expose an experimental region or sample. Therefore, the technical requirement is to create a CO2 source in situ that is stable and well controlled. After extensive research and experimentation MBARI has developed the ability to create an in situ source of CO2 enriched seawater (ESW) for distribution and subsequent use in an ocean acidification experiment. The system mates with FOCE, but can be used in conjunction with other CO2 experimental applications in deep water. The ESW system is completely standalone from FOCE. While the chemical changes induced by the addition of fossil fuel CO2 on the ocean are well known and easily predicted, the biological consequences are less clear and the subject of considerable debate. 
Experiments have been successfully carried out on land to investigate the effects of elevated atmospheric CO2 levels in various areas around the globe, but only limited work on CO2 impacts to ocean environmental systems has been carried out to date. With rising concern over the long-term reduction in ocean pH, there is a need for viable in situ techniques to carry out experiments on marine biological systems. Previous investigations have used aquaria that compromise these studies because of reduced ecological complexity and buffering capacity. Additionally, aquaria use tightly controlled experimental conditions such as temperature, artificial light, and water quality that do not represent the natural ocean variability. In order to study the future effects of ocean acidification, scientists and engineers at MBARI have developed a technique and apparatus for an in situ perturbation experiment, the FOCE experimental platform. At the time of this writing, the FOCE system and associated ESW are attached to the Monterey Accelerated Research System (MARS) cabled observatory. Engineering validation and tuning experiments using remote control and real time experimental feedback are underway. Additionally, an extensive instrumentation suite provides all of the necessary data for pH calculation and experimental control. The ESW is a separately deployed system that stores and distributes CO2 enriched seawater. It receives power and communications via an underwater mateable electrical tether. The CO2 enriched seawater is pumped into the FOCE sections from the ESW. This paper describes the design, development, and testing of the underwater ESW Distribution System as well as the software control algorithms as applied to FOCE. The paper covers the initial prototype, lessons learned, and the final operational version.",2011,0, 5262,A preliminary investigation towards test suite optimization approach for enhanced State-Sensitivity Partitioning,"Testing is crucial in software development. Continuous research is being done to discover effective approaches in testing that are capable of detecting faults while reducing cost. Previous work on the State-Sensitivity Partitioning (SSP) technique, which is based on the all-transition coverage criterion, has been introduced to avoid exhaustively testing the entire data states of a module by partitioning it based on a state's sensitivity towards events, conditions and actions. The test data for that particular module testing is in the form of event sequences (or test sequences), and sets of test sequences in test cases form the SSP test suite. The problem that occurs in the SSP test suite is data state redundancy, which leads to suite growth. This paper aims to discuss an initial step of our ongoing research in enhancing the prior SSP test suite. Our work will try to find the best way to remove redundant data states in order to minimize the suite size while remaining as capable as the original suite of effectively detecting faults introduced by five selective mutation operators.",2011,0, 5263,3-dimensional analysis of Ground Penetrating Radar image for non-destructive road inspection,"Regular maintenance of highways is an important issue to ensure the safety of the vehicles using the road. Most existing methods of highway inspection are destructive and take considerable time, effort, and cost. In this paper, we propose GPR (Ground Penetrating Radar) imaging to detect possible defects of the road.
GPR scanning on a plane parallel to the road yields 3D images, so that slice-by-slice images can be generated for a comprehensive evaluation. First, we simulate the subsurface-scanning with GPR-Max software, by setting up the parameters similar to expected real-condition. Then, we set up the experiment in our GPR Test-Range, in which a Network Analyzer is employed as a GPR. We compare and analyze both of the simulation and Test-Range results, including slice analysis, to asses the quality of the method. Our results indicates implementability of such 3D GPR imaging for road inspection.",2011,0, 5264,Input-input relationship constraints in T-way testing,"T-way testing is designed to detect faults due to interaction. In order to be effective, all t combinations of input parameters must be tested. While many t-way strategies can be used to generate the t-way test data (e.g. IPOG, AETG, GT-Way, Jenny, TVG and MIPOG), most do not ensure that all t combinations of input parameters can be practically tested. Addressing this issue, this paper highlights a new type of constraints that might prevent some t-way parameter interactions from being tested (and hence compromising the effectiveness of t-way testing), termed input-input relationship constraints. Apart from ensuring all t combinations are properly tested, input-input relationship constraints can further optimize the generated test data since all impossible combinations are completely ignored. In addition, this paper also introduces a new strategy that supports input-input relationship constraints and demonstrates the correctness of the strategy as well as the effectiveness of test data with input-input relationship.",2011,0, 5265,ConfigChecker: A tool for comprehensive security configuration analytics,"Recent studies show that configurations of network access control is one of the most complex and error prone network management tasks. For this reason, network misconfiguration becomes the main source for network unreachablility and vulnerability problems. In this paper, we present a novel approach that models the global end-to-end behavior of access control configurations of the entire network including routers, IPSec, firewalls, and NAT for unicast and multicast packets. Our model represents the network as a state machine where the packet header and location determines the state. The transitions in this model are determined by packet header information, packet location, and policy semantics for the devices being modeled. We encode the semantics of access control policies with Boolean functions using binary decision diagrams (BDDs). We then use computation tree logic (CTL) and symbolic model checking to investigate all future and past states of this packet in the network and verify network reachability and security requirements. Thus, our contributions in this work is the global encoding for network configurations that allows for general reachability and security property-based verification using CTL model checking. We have implemented our approach in a tool called ConfigChecker. While evaluating ConfigChecker, we modeled and verified network configurations with thousands of devices and millions of configuration rules, thus demonstrating the scalability of this approach. 
We also present a SCAP-based tool on top of ConfigChecker that integrates host and network configuration compliance checking in one model and allows for executing comprehensive analysis queries in order to verify security and risk requirements across the end-to-end network as a single system.",2011,0, 5266,Measuring firewall security,"In the recent years, more attention is given to firewalls as they are considered the corner stone in Cyber defense perimeters. The ability to measure the quality of protection of a firewall policy is a key step to assess the defense level for any network. To accomplish this task, it is important to define objective metrics that are formally provable and practically useful. In this work, we propose a set of metrics that can objectively evaluate and compare the hardness and similarities of access policies of single firewalls based on rules tightness, the distribution of the allowed traffic, and security requirements. In order to analyze firewall polices based on the policy semantic, we used a canonical representation of firewall rules using Binary Decision Diagrams (BDDs) regardless of the rules format and representation. The contribution of this work comes in measuring and comparing firewall security deterministically in term of security compliance and weakness in order to optimize security policy and engineering.",2011,0, 5267,Vulnerability hierarchies in access control configurations,"This paper applies methods for analyzing fault hierarchies to the analysis of relationships among vulnerabilities in misconfigured access control rule structures. Hierarchies have been discovered previously for faults in arbitrary logic formulae [11,10,9,21], such that a test for one class of fault is guaranteed to detect other fault classes subsumed by the one tested, but access control policies reveal more interesting hierarchies. These policies are normally composed of a set of rules of the form if [conditions] then [decision], where [conditions] may include one or more terms or relational expressions connected by logic operators, and [decision] is often 2-valued (grant or deny), but may be n-valued. Rule sets configured for access control policies, while complex, often have regular structures or patterns that make it possible to identify generic vulnerability hierarchies for various rule structures such that an exploit for one class of configuration error is guaranteed to succeed for others downstream in the hierarchy. A taxonomy of rule structures is introduced and detection conditions computed for nine classes of vulnerability: added term, deleted term, replaced term, stuck-at-true condition, stuck-at-false condition, negated condition, deleted rule, replaced decision, negated decision. For each configuration rule structure, detection conditions were analyzed for the existence of logical implication relations between detection conditions. It is shown that hierarchies of detection conditions exist, and that hierarchies vary among rule structures in the taxonomy. Using these results, tests may be designed to detect configuration errors, and resulting vulnerabilities, using fewer tests than would be required without knowledge of the hierarchical relationship among common errors. 
In addition to practical applications, these results may help to improve the understanding of access control policy configurations.",2011,0, 5268,Critiquing Rules and Quality Quantification of Development-Related Documents,"As the development of embedded systems grows in scale, it is becoming more important for engineers to share development documents such as requirements, design specifications and testing specifications, and to accurately circulate and understand the information necessary for development. Also, many defects that can be originated in the surface expression of the documents are reported through investigations of causes of defects in embedded systems development, In this paper, we highlight improper surface expressions of Japanese documents, and define quality criteria and critiquing rules to detect problems such as ambiguous expressions or omissions of information. We also carry out visual quality inspections and evaluate detection performance, correlations and working time. Then, we verify the validity of the critiquing rules we have defined and apply them to the document critiquing tool to evaluate the quality of the actual documents used in the development of embedded systems. And we quantify the quality of these documents by automatically detecting improper expression. We also apply supplemental critiquing rules to the document critiquing tool for use by non-native speakers of Japanese, and verify its efficacy at improving the quality of Japanese documents created by foreigners.",2011,0, 5269,A Proposal of NHPP-Based Method for Predicting Code Change in Open Source Development,"This paper proposes a novel method for predicting the amount of source code changes (changed lines of code: changed-LOC) in the open source development (OSD). While the software evolution can be observed through the public code repository in OSD, it is not easy to understand and predict the state of the whole development because of the huge amount of less-organized information.The method proposed in the paper predicts the code changes by using only data freely available from the code repository the code-change time stamp and the changed-LOC.The method consists of two steps: 1) to predict the number of occurrences of code changes by using a non-homogeneous Poisson process (NHPP)-based model, and 2) to predict the amount of code changes by using the outcome of the step-1 and the previously changed-LOC.The empirical work shows that the proposed method has an ability to predict the changed-LOC in the next 12 months with less than 10% error.",2011,0, 5270,An Empirical Study of Fault Prediction with Code Clone Metrics,"In this paper, we present a replicated study to predict fault-prone modules with code clone metrics to follow Baba's experiment. We empirically evaluated the performance of fault prediction models with clone metrics using 3 datasets from the Eclipse project and compared it to fault prediction without clone metrics. Contrary to the original Baba's experiment, we could not significantly support the effect of clone metrics, i.e., the result showed that F1-measure of fault prediction was not improved by adding clone metrics to the prediction model. To explain this result, this paper analyzed the relationship between clone metrics and fault density. 
The result suggested that clone metrics were effective in fault prediction for large modules but not for small modules.",2011,0, 5271,Quantifying the Effectiveness of Testing Efforts on Software Fault Detection with a Logit Software Reliability Growth Model,"Quantifying the effects of software testing metrics, such as the number of test runs, on the fault detection ability is quite important for designing and managing effective software testing. This paper focuses on the regression model which represents the causal relationship between the software testing metrics and the fault detection probability. In a numerical experiment, we perform the quantitative estimation of the causal relationship through the quantization of software testing metrics.",2011,0, 5272,Using Efficient Machine-Learning Models to Assess Two Important Quality Factors: Maintainability and Reusability,"Building efficient machine-learning assessment models is an important achievement of empirical software engineering research. Their integration in automated decision-making systems is one of the objectives of this work. It aims at empirically verifying the relationships between some software internal artifacts and two quality attributes: maintainability and reusability. Several algorithms, belonging to various machine-learning approaches, are selected and run on software data collected from medium size applications. Some of these approaches produce models with very high quantitative performances; others give interpretable and """"glass-box"""" models that are very complementary.",2011,0, 5273,An Exploratory Study on the Impact of Usage of Screenshot in Software Inspection Recording Activity,"This paper describes an exploratory study on the use of screenshots for recording software inspection activities such as defect reproduction and correction. Although detected defects are usually recorded in writing, using screenshots to record detected defects should decrease the percentage of irreproducible defects and the time needed to reproduce defects during the defect correction phase. An experiment was conducted to clarify the efficiency of using screenshots to record detected defects. One practitioner group and two student groups participated in the experiment. The recorder in each group used a prototype support tool for capturing screenshots during the experiment. Each group conducted two trials: one with a general spreadsheet application to support recording, the other with the prototype tool that supports recording inspection activities. After the inspection meeting, the recorder was asked to reproduce the recorded defects. The percentage of reproduced defects and the time to reproduce defects were measured. The results of the experiment show that use of screenshots increases the percentage of reproduced defects and decreases the time needed to reproduce the defects. The results also indicate that use of the recording tool affected the types of defects.",2011,0, 5274,Software Metrics Based on Coding Standards Violations,"Software metrics are a promising technique for capturing the size and quality of products and the development process in order to assess a software development. Many software metrics based on various aspects of a product and/or a process have been proposed. There is some research which discusses the relation between software metrics and faults in order to use these metrics as indicators of quality. Most of these software metrics are based on structural features of products or on process information related to explicit faults.
In this paper, we focus on latent faults detected by static analysis techniques. The coding checker is widely used to find coding standards violations which are strongly related to latent faults. In this paper, we propose new software metrics based on coding standards violations to capture latent faults in a development. We analyze two open source projects by using the proposed metrics and discuss their effectiveness.",2011,0, 5275,FMEA-Based Control Mechanism for Embedded Control Software,"Current software FMEA analysis depends on the static model of the embedded real-time system, which cannot fully assess the dynamics of control loops and changes in timing. The control block diagram is not only the static model but also the dynamic model of the system. Through the simulation of the control block diagram, the dynamics of control loops and changes in timing can result in the injection of the failure mode. For illustration, a small embedded control system is utilized. Empirical results show that more detailed information, such as the dynamics of control loops and changes in timing, will provide a more comprehensive effect analysis for software FMEA. Through FMEA and control mechanism analysis techniques which assess operation under normal conditions and simulation of dynamic timing and failure conditions, a complete validation of the safety characteristics of embedded real-time control systems can result.",2011,0, 5276,CriticalFault: Amplifying Soft Error Effect Using Vulnerability-Driven Injection,"As future microprocessors will be prone to various types of errors, researchers have looked into cross-layer hardware-software reliability solutions to reduce overheads. These mechanisms are shown to be effective when evaluated with statistical fault injection (SFI). However, under SFI, a large number of injected faults can be derated, making the evaluation less rigorous. To handle this problem, we propose a biased fault injection framework called CriticalFault that leverages vulnerability analysis to identify faults that are more likely to stress test the underlying reliability solution. Our experimental results show that the injection space is reduced by 30% and a large portion of injected faults cause software aborts and silent data corruptions. Overall, CriticalFault allows us to amplify soft error effects on the reliability mechanism under test, which can help improve current techniques or inspire other new fault-tolerant mechanisms.",2011,0, 5277,OpenMDSP: Extending OpenMP to Program Multi-Core DSP,"Multi-core Digital Signal Processors (DSP) are widely used in wireless telecommunication, core network transcoding, industrial control, and audio/video processing etc. Compared with general purpose multi-processors, multi-core DSPs normally have a more complex memory hierarchy, such as on-chip core-local memory and non-cache-coherent shared memory. As a result, it is very challenging to write efficient multi-core DSP applications. The current approach to program multi-core DSPs is based on proprietary vendor SDKs, which only provide low-level, non-portable primitives. While it is acceptable to write coarse-grained task level parallel code with these SDKs, it is very tedious and error prone to write fine-grained data parallel code with them. We believe it is desirable to have a high-level and portable parallel programming model for multi-core DSPs. In this paper, we propose OpenMDSP, an extension of OpenMP designed for multi-core DSPs.
The goal of OpenMDSP is to fill the gap between the OpenMP memory model and the memory hierarchy of multi-core DSPs. We propose three classes of directives in OpenMDSP: (1) data placement directives allow programmers to control the placement of global variables conveniently, (2) distributed array directives divide a whole array into sections and promote them into core-local memory to improve performance, and (3) stream access directives promote a big array into core-local memory section by section during a parallel loop's processing. We implement the compiler and runtime system for OpenMDSP on the Freescale MSC8156. Benchmarking results show that seven out of nine benchmarks achieve a speedup of more than 5 with 6 threads.",2011,0, 5278,An Evaluation of Vectorizing Compilers,"Most of today's processors include vector units that have been designed to speed up single-threaded programs. Although vector instructions can deliver high performance, writing vector code in assembly language or using intrinsics in high level languages is a time consuming and error-prone task. The alternative is to automate the process of vectorization by using vectorizing compilers. This paper evaluates how well compilers vectorize a synthetic benchmark consisting of 151 loops, two applications from Petascale Application Collaboration Teams (PACT), and eight applications from Media Bench II. We evaluated three compilers: GCC (version 4.7.0), ICC (version 12.0) and XLC (version 11.01). Our results show that despite all the work done in vectorization in the last 40 years, 45-71% of the loops in the synthetic benchmark and only a few loops from the real applications are vectorized by the compilers we evaluated.",2011,0, 5279,On High-Assurance Scientific Workflows,"Scientific Workflow Management Systems (S-WFMS), such as Kepler, have proven to be important tools in scientific problem solving. Interestingly, S-WFMS fault-tolerance and failure recovery is still an open topic. It often involves classic fault-tolerance mechanisms, such as alternative versions and rollback with re-runs, or reliance on the fault-tolerance capabilities provided by subcomponents and lower layers such as schedulers, Grid and cloud resources, or the underlying operating systems. When failures occur at the underlying layers, a workflow system sees this as failed steps in the process, but frequently without additional detail. This limits S-WFMS' ability to recover from failures. We describe a lightweight end-to-end S-WFMS fault-tolerance framework, developed to handle failure patterns that occur in some real-life scientific workflows. Capabilities and limitations of the framework are discussed and assessed using simulations. The results show that the solution considerably increases workflow reliability and execution time stability.",2011,0, 5280,An Availability Model of a Virtual TMR System with Applications in Cloud/Cluster Computing,"Three important factors in dependable computing are cost, error correction and high availability. In this paper we will focus on assessing a proposed model that encapsulates all three important factors and a virtual architecture that can be implemented in the IaaS layer of cloud computing. The proposed model will be assessed against a popular existing architecture (the Triple Modular Redundant System, TMR) and the availability analysis is done with Fault-Trees combined with Markov Chains.
These experiments will demonstrate that the virtualization of the TMR system using the architecture that we have proposed, will achieve almost the same level of availability/reliability and cost, along with the inherent advantages of virtual systems. Advantages include faster system restart, efficient use of resources and migration.",2011,0, 5281,Using Automated Control Charts for the Runtime Evaluation of QoS Attributes,"As modern software systems operate in a highly dynamic context, they have to adapt their behaviour in response to changes in their operational environment or/and requirements. Triggering adaptation depends on detecting quality of service (QoS) violations by comparing observed QoS values to predefined thresholds. These threshold-based adaptation approaches result in late adaptations as they wait until violations have occurred. This may lead to undesired consequences such as late response to critical events. In this paper we introduce a statistical approach CREQA - Control Charts for the Runtime Evaluation of QoS Attributes. This approach estimates at runtime capability of a system, and then it monitors and provides early detection of any changes in QoS values allowing timely intervention in order to prevent undesired consequences. We validated our approach using a series of experiments and response time datasets from real world web services.",2011,0, 5282,Software-Based Instrumentation for Localization of Faults Caused by Electrostatic Discharge,"Electrostatic discharge (ESD) is often the cause of system-level failure or malfunction of embedded systems. The underlying faults are difficult to localize, as the information gained from the hardware-based diagnostic methods typically in use lacks sufficient detail. The alternative proposed in this paper is software instrumentation that monitors key registers and flags to detect anomalies indicative of failure. In contrast to hardware-based techniques, which use invasive probes that can alter the very phenomena being studied, the proposed approach makes use of standard peripherals such as the serial or Ethernet port to monitor and record the effect of ESD. We illustrate the use of this software instrumentation technique in conjunction with a three-dimensional ESD injection system to produce a sensitivity map that visualizes the susceptibility of various segments of an embedded system to ESD.",2011,0, 5283,Cardio: Adaptive CMPs for reliability through dynamic introspective operation,"Current technology scaling enables the integration of tens of processing elements into a single chip, and future technology nodes will soon allow the integration of hundreds of cores per device. While very powerful, many experts agree that these systems will be prone to a significant number of permanent and transient faults during their lifetime. If not properly handled, effects of runtime failures can be dramatic. In this work, we propose Cardio, a distributed architecture for reliable chip multiprocessors. Cardio, a novel approach for on-chip reliability is based on hardware detectors that spot failures and on software routines that reorganize the system to work around faulty components. Compared to previous online reliability solutions, Cardio provides failure reactivity comparable to hardware-only reliable solutions while requiring a much lower area overhead. Cardio operates a distributed resource manager to collect health information about components and leverages a robust distributed control mechanism to manage system-level recovery. 
Our architecture remains operational as long as at least one general purpose processor is still functional in the chip. We evaluated our design using a custom simulator and estimate its runtime impact on the SPECMPI benchmarks to be lower than 3%. We estimate its dynamic reconfiguration time to be between 20 and 50 thousand cycles per failure.",2011,0, 5284,Full-system analysis and characterization of interactive smartphone applications,"Smartphones have recently overtaken PCs as the primary consumer computing device in terms of annual unit shipments. Given this rapid market growth, it is important that mobile system designers and computer architects analyze the characteristics of the interactive applications users have come to expect on these platforms. With the introduction of high-performance, low-power, general purpose CPUs in the latest smartphone models, users now expect PC-like performance and a rich user experience, including high-definition audio and video, high-quality multimedia, dynamic web content, responsive user interfaces, and 3D graphics. In this paper, we characterize the microarchitectural behavior of representative smartphone applications on a current-generation mobile platform to identify trends that might impact future designs. To this end, we measure a suite of widely available mobile applications for audio, video, and interactive gaming. To complete this suite we developed BBench, a new fully-automated benchmark to assess a web-browser's performance when rendering some of the most popular and complex sites on the web. We contrast these applications' characteristics with those of the SPEC CPU2006 benchmark suite. We demonstrate that real-world interactive smartphone applications differ markedly from the SPEC suite. Specifically, the instruction cache, instruction TLB, and branch predictor suffer from poor performance. We conjecture that this is due to the applications' reliance on numerous high level software abstractions (shared libraries and OS services). Similar trends have been observed for UI-intensive interactive applications on the desktop.",2011,0, 5285,Evaluating the viability of process replication reliability for exascale systems,"As high-end computing machines continue to grow in size, issues such as fault tolerance and reliability limit application scalability. Current techniques to ensure progress across faults, like checkpoint-restart, are increasingly problematic at these scales due to excessive overheads predicted to more than double an application's time to solution. Replicated computing techniques, particularly state machine replication, long used in distributed and mission critical systems, have been suggested as an alternative to checkpoint-restart. In this paper, we evaluate the viability of using state machine replication as the primary fault tolerance mechanism for upcoming exascale systems. We use a combination of modeling, empirical analysis, and simulation to study the costs and benefits of this approach in comparison to checkpoint-restart on a wide range of system parameters. These results, which cover different failure distributions, hardware mean time to failures, and I/O bandwidths, show that state machine replication is a potentially useful technique for meeting the fault tolerance demands of HPC applications on future exascale platforms.",2011,0, 5286,Large scale debugging of parallel tasks with AutomaDeD,"Developing correct HPC applications continues to be a challenge as the number of cores increases in today's largest systems.
Most existing debugging techniques perform poorly at large scales and do not automatically locate the parts of the parallel application in which the error occurs. The overhead of collecting large amounts of runtime information and an absence of scalable error detection algorithms generally cause poor scalability. In this work, we present novel, highly efficient techniques that facilitate the process of debugging large scale parallel applications. Our approach extends our previous work, AutomaDeD, in three major areas to isolate anomalous tasks in a scalable manner: (i) we efficiently compare elements of graph models (used in AutomaDeD to model parallel tasks) using pre-computed lookup-tables and by pointer comparison; (ii) we compress per-task graph models before the error detection analysis so that comparison between models involves many fewer elements; (iii) we use scalable sampling-based clustering and nearest-neighbor techniques to isolate abnormal tasks when bugs and performance anomalies are manifested. Our evaluation with fault injections shows that AutomaDeD scales well to thousands of tasks and that it can find anomalous tasks in under 5 seconds in an online manner.",2011,0, 5287,Self-Awareness in Autonomous Nano-Technology Swarm Missions,"NASA is currently exploring swarm-based technologies, targeting the development of prospective exploration missions to explore regions of space, where single large spacecraft would be impractical. Such systems are envisioned to operate autonomously and their success factor depends highly on self-awareness capabilities. This research emphasizes the development of algorithms and prototyping models for self-awareness in swarm-based space-exploration systems. This article tackles the self-initiation and self-healing properties of swarm-based space-exploration systems.",2011,0, 5288,Device Driver Generation and Checking Approach,"Optimizing time and effort in embedded systems design is essential nowadays. The increased productivity gap together with the reduced time to market makes the design of some components of the system the main design bottleneck. Taking into account the natural complexity of HdS design, a software checking technique helps find bugs. However, the increasing complexity of HdS makes the development and use of checking techniques a challenge. Reducing the time spent to build the checking environment can be a solution for this kind of problem. This can be accomplished by automating the generation of the checking environment from a device specification. The use of virtual platforms also represents an advantage since it supports starting the HdS development in an initial design phase. This paper proposes an approach for checking errors during the development of very error-prone Hardware dependent Software, that is, device drivers. The proposed checking mechanism can be generated from a device specification using a language called Temporal DevC. Taking a device description in TDevC, the proposed approach generates a driver checking mechanism based on state machines. Experiments show the efficiency and effectiveness of the proposed mechanism, enabling its use for the detection of unwanted flows in the device driver simulation as well.",2011,0, 5289,Testing embedded software by metamorphic testing: A wireless metering system case study,"In this paper, we present our experience of testing wireless embedded software.
We used a wireless metering system in operation and its software as a case study to demonstrate how a property-based testing technique, called metamorphic testing, can be used in detecting software failures of this wireless embedded system. Our study shows that a careful design of test environments and selection of system properties will enable us to trace back the cause of failures and help in fault diagnosis and debugging.",2011,0, 5290,An adaptive H.264 video protection scheme for video conferencing,"Real-time video communication such as Internet video conferencing is often afflicted by packet loss over the network. To improve the quality of video, error protection schemes have been introduced based on FMO in H.264, whose encoding efficiency is unacceptable. This paper presents a novel region of interest (ROI) protection scheme that can accurately extract the ROI area using facial recognition and greatly speed up video encoding based on feedback, using an x264 codec implementation. In this scheme, the video receiver uses a packet loss prediction model to predict whether to send feedback to the video sender, which dynamically adjusts the ROI protection scheme. Experiments prove that the quality of the ROI area can be effectively improved by the scheme, whose encoding performance increases by 50 times compared with FMO based algorithms.",2011,0, 5291,Pixel domain referenceless visual degradation detection and error concealment for mobile video,"In mobile video applications, where unreliable networks are commonplace, corrupted video packets can have a profound impact on the quality of the user experience. In this paper, we show that, in a wide range of operating conditions, selectively reusing data resulting from decodable erroneous packets leads to better results than frame copy. This selection is guided by a novel concept that combines motion estimation and a measure of blocking artifacts at block edges to predict visual degradation caused by the decoding of erroneous packets. Simulation results show that, by using the proposed solution, the H.264/AVC JM reference software decoder can select the best option between frame copy and the erroneous frame decoding in 82% of test cases. We also obtain an average gain of 1.95 dB for concealed frames (when they differ from those concealed by the JM decoder).",2011,0, 5292,Automated image quality assessment for camera-captured OCR,"Camera-captured optical character recognition (OCR) is a challenging area because of artifacts introduced during image acquisition with consumer-domain hand-held and smartphone cameras. Critical information is lost if the user does not get immediate feedback on whether the acquired image meets the quality requirements for OCR. To avoid such information loss, we propose a novel automated image quality assessment method that predicts the degree of degradation on OCR. Unlike other image quality assessment algorithms which only deal with blurring, the proposed method quantifies image quality degradation across several artifacts and accurately predicts the impact on OCR error rate.
We present evaluation results on a set of machine-printed document images which have been captured using digital cameras with different degradations.",2011,0, 5293,"The evidential independent verification of software of information and control systems, critical to safety: Functional model of scenario","The results of developing the techniques which form the scenario of the target technology, Evidential independent verification of I&C Systems Software of critical application, and the utilities supporting the scenario at the information, analytical and organizational levels are presented in this article. The result of the scenario implementation is the quantitative definition of the latent fault probability and the completeness of test coverage for critical software. This technology can be used by I&C systems developers, certification and regulation bodies to carry out independent verification (or certification) during modernization and modification of critical software directly on client objects without intruding on (interrupting) technological processes.",2011,0, 5294,Diagnosis infrastructure of software-hardware systems,"This article describes an infrastructure and technologies for diagnosis. A transactional graph model and method for diagnosis of digital systems-on-chip are developed. They are focused on considerably decreasing the fault-detection time and the memory for storing the diagnosis matrix by means of forming ternary relations in the form of test, monitor, and functional component. The following problems are solved: creation of a digital system model in the form of a transaction graph and a multitree of fault detection tables, as well as ternary matrices for activating functional components in tests, relative to the selected set of monitors; and development of a method for analyzing the activation matrix to detect faults with a given depth and synthesizing logic functions for subsequent embedded hardware fault diagnosis.",2011,0, 5295,Software testing of a simple network,"It is costly to have defective networks and nodes. There are many factors involved in the cost of defective design of networks. The size of the development team, the stage of development when the defect occurs, routing protocols and the subtlety of the defect are only a few of the possibilities. Testing software, therefore, has to be designed to detect the defect as early as possible in the design cycle. Otherwise the costs can be overwhelming. This is yet another compelling argument for QA engineers to justify up-front test costs, similar to the electronics design programs of JTAG (Joint Test Action Group for boundary scan) or BIST (Built-in Self Test) circuitry.",2011,0, 5296,A stochastic formulation of successive software releases with faults severity,"Software companies are coming up with multiple add-ons to survive in a purely competitive environment. Each succeeding upgrade offers some performance enhancement and distinguishes itself from the past release. If the size of the software system is large, the number of faults detected during the testing phase becomes large, and the number of faults, which are removed through each debugging, becomes small compared to the initial fault content at the beginning of the testing phase. In such a situation, we can model the software fault detection process as a stochastic process with continuous state space. In this paper, we propose a multi-release software reliability growth model based on Ito's type of differential equation.
The model categorizes faults into two categories, simple and hard, with respect to the time they take for isolation and removal after their observation. The model developed is validated on a real data set.",2011,0, 5297,A Connection-Based Signature Approach for Control Flow Error Detection,"Control Flow Errors (CFEs) are major impairments of software system correctness. These CFEs can be caused by operational faults with respect to the execution environment of a software system. Several techniques have been proposed to monitor the control flow using signature-based approaches. These techniques partition a software program into branch-free blocks and assign a unique signature to each block. They detect CFEs by comparing the runtime signatures of these blocks with pre-computed signatures based on the program Control Flow Graph (CFG). Unfortunately, branch-free block partitioning does not completely include all the program connections. Consequently, these techniques may fail to detect some invalid transitions due to the lack of signatures associated with those missing connections. In this paper, we propose a connection-based signature approach for CFE detection. We first describe our connection-based signature structure, in which we partition the program components into Connection Implementation Blocks (CIBs). Each CIB is associated with a Connection-based CFG (CCFG) to represent the control structure of its code segment. We present our control flow monitor structure and CFE checking algorithm using these CCFGs. The error detection approach is evaluated using the PostgreSQL open-source database. The results show that this technique is capable of detecting CFEs in different software versions with variable numbers of randomly injected faults.",2011,0, 5298,Investigation on Safety-Related Standards for Critical Systems,"In each application domain for safety-critical systems, international organizations have issued regulations concerned with the development, implementation, validation and maintenance of safety-critical systems. In particular, each of them indicates a definition of what safety means, proper qualitative and quantitative properties for evaluating the quality of the system under development, and a set of methodologies to be used for assessing the fulfilment of the mentioned properties. These standards are today an essential tool for ensuring the required safety levels in many domains that require extremely high dependability. This paper summarizes the analysis of a set of well-known safety standards in different domains of critical systems with the intent of highlighting similarities and differences among them, pointing out common areas of interest and reporting on which features the newest (and upcoming) standards are focusing.",2011,0, 5299,Assessing Measurements of QoS for Global Cloud Computing Services,"Many globally distributed cloud computing applications and services running over the Internet, between globally dispersed clients and servers, will require certain levels of Quality of Service (QoS) in order to deliver a sufficiently smooth user experience. This would be essential for real-time streaming multimedia applications like online gaming and watching movies on a pay-as-you-use basis hosted in a cloud computing environment. However, guaranteeing or even predicting QoS in global and diverse networks supporting complex hosting of application services is a very challenging issue that needs a stepwise refinement approach to be solved as the technology of cloud computing matures.
In this paper, we investigate if latency in terms of simple Ping measurements can be used as an indicator for other QoS parameters such as jitter and throughput. The experiments were carried out on a global scale, between servers placed in universities in Denmark, Poland, Brazil and Malaysia. The results show some correlation between latency and throughput, and between latency and jitter, even though the results are not completely consistent. As a side result, we were able to monitor the changes in QoS parameters during a number of 24-hour periods. This is also a first step towards defining QoS parameters to be included in Service Level Agreements for cloud computing in the foreseeable future.",2011,0, 5300,A Runtime Fault Detection Method for HPC Cluster,"As the number of nodes keeps increasing, faults have become commonplace for HPC clusters. For fast recovery from faults, a fault detection method is necessary. Based on the usage patterns of HPC clusters, an automatic runtime fault detection mechanism is proposed in this paper. First, the normal activities of nodes in the HPC cluster are modeled using runtime states by clustering analysis. Second, the fault detection process is implemented by comparing the current runtime state of nodes with the normal activity models. A fault alarm is made immediately when the current runtime state deviates from the normal activity models. In the experiments, faults are simulated by fault injection methods, and the experimental results show that the runtime fault detection method in this paper can detect faults with high accuracy.",2011,0, 5301,Analysis of rotor fault detection in inverter fed induction machines at no load by means of finite element method,"This paper analyzes a new method for detecting defective rotor bars at zero load and standstill by means of modeling using the finite element method (FEM). The detection method uses voltage pulses generated by the switching of the inverter to excite the machine and measures the corresponding reaction of the machine phase currents, which can be used to identify a modulation of the transient leakage inductance caused by asymmetries within the machine. The presented 2D finite element model and the simulation procedure are oriented towards this approach and are developed by means of the FEM software ANSYS. The analysis shows how the transient flux linkage imposed by voltage pulses is influenced by a broken bar, leading to a very distinct rotor-fixed modulation that can be clearly exploited for monitoring. Simulation results are presented to show the transient flux paths. These simulation results are supported by measurements on a specially manufactured induction machine.",2011,0, 5302,Component-wise optimization for a commercial central cooling plant,"Thermal comfort and energy savings are two main goals of heating, ventilation and air conditioning (HVAC) systems. In this paper, an optimization-simulation approach is proposed for effective energy saving potential in a commercial central cooling plant by refining the model of optimal operation for system components and deriving optimal conditions for their operation subject to technical and human comfort constraints. To investigate the potential of energy savings and air quality, a real-world commercial building, located in a hot and dry climate region, together with its central cooling plant is used for experimentation and data collection.
Both inputs and outputs of the existing central cooling plant are measured through field monitoring during one typical week in the summer. Optimization is performed by using empirically-based models of the central cooling plant components. Optimization algorithms implemented on a transient simulation software package are used to solve the energy consumption minimization problem for each considered control strategy and predict the optimized HVAC system set-points under transient load. The integrated simulation tool was validated by comparing predicted and measured power consumption of the chiller during the first day of July. Results show that between 3.2% and 11.8% power savings can be obtained by this approach while maintaining the predicted mean vote (PMV) from -0.5 to +1 for most of the summer time.",2011,0, 5303,System Failure Forewarning Based on Workload Density Cluster Analysis,"Each computer system contains design objectives for long-term usage, so the operator must conduct a continuous and accurate assessment of system performance in order to detect the potential factors that will degrade system performance. Condition indicators are the basic components of diagnosis. It is important to select feature vectors that meet the criteria in order to provide true accuracy and powerful diagnostic routines. Our goal is to indicate the actual system status according to the workload, and to use clustering techniques to analyze the workload distribution density to build diagnostic templates. Such templates can be used for system failure forewarning. In the proposed system, we present an approach based on workload density cluster analysis to automatically monitor the health of software systems and provide system failure forewarning. Our approach consists of tracking the workload density of metric clusters. We employ the statistical template model to automatically identify significant changes in cluster movement, therefore enabling robust fault detection. We observed two circumstances from the experimental results. First, under most normal statuses, the lowest accuracy value approximates our theoretical minimum threshold of 84%. Such a result implies a close correlation between our measured and real system status. Second, the command data used by the system could predict 90% of announced events, which reveals the prediction effectiveness of the proposed system. Although it is infeasible for the system to process the largest possible fault events in the deployment of resources, we could apply statistics to characterize the anomalous behaviors to understand the nature of emergencies and to test system service under such scenarios.",2011,0, 5304,Variable Precision Rough Set-Based Fault Diagnosis for Web Services,"Web services are the emergent technology for constructing more complex and flexible software systems for business applications. However, some new features of Web service-based software, such as heterogeneity and loose coupling, bring great trouble to later fault debugging and diagnosis. In this paper, a variable precision rough set-based diagnosis framework is presented. In this debugging model, SOAP message monitoring and service invocation instrumentation are used to record service interface information. Meanwhile, factors of the execution context are also viewed as conditional attributes of the knowledge representation system. The final execution result is treated as the decision attribute, and a failure ontology is utilized to classify the system's failure behaviors.
Based on this extended information system, variable precision rough set reasoning is performed to generate probability association rules, which are the clues for locating the possible faulty services. In addition, an experiment on a real-world Web services system is performed to demonstrate the feasibility and effectiveness of our proposed method.",2011,0, 5305,A hybrid method for constructing High Level Architecture of BBS user network,"It is useful for some applications to understand the High Level Architecture (HLA) of the user network of Bulletin Board Systems (BBS). In this paper, we construct the HLA of a BBS user network through hybrid static and dynamic analysis of the quantitative temporal graphs that are extracted from the BBS entries. We first detect the HLA framework through the static structural analysis of the aggregation of the temporal graphs. Then, we elaborately identify the HLA components, including communities, community cores, and hubs, through the dynamic analysis of the quantitative temporal attributes of nodes. The hybrid method guarantees the HLA quality as it removes false components from the HLA. It also keeps the computational cost at a low level. A metric is proposed to evaluate the HLA efficiency in information transmission. The experiments show that the HLA constructed by the hybrid method outperforms that constructed by the comparative method.",2011,0, 5306,Algorithm analyzer to check the efficiency of codes,Efficiency of the developed code is always an issue in software development. Software can be said to be of good quality if the measurable features of the software can be quantitatively checked for adoption of standards or adherence to certain set rules. Software metrics can therefore come into play by helping to measure certain characteristics of software. The issues and factors pertaining to the efficiency of code can be addressed by software metrics. Existing tools that are used to analyze several software metrics have come a long way in helping to assess this very important part of software development. This paper describes how software metrics can be used in analyzing the efficiency of the developed code in an early stage of development. A tool (algorithm analyzer) was developed to analyze a given code to check its efficiency level and produce efficiency reports based on the analysis. The system is able to help with code checking whilst maintaining coding standards for its users. With the reports that are generated it would be easy for users to determine the efficiency of their object-oriented code.,2011,0, 5307,A study of process improvement best practices,"Software project success depends on various factors, including project control, software standards and procedures. Software development organizations realize the importance of using best practices to improve software development practices. A growing body of literature has described process improvement best practices and standards. Formal process improvement frameworks have emerged widely to promote the use of systematic processes for software engineering. These approaches identify best practices for managing software engineering quality. They provide methods for assessing an organization's process maturity level and capability. In this article, recent process improvement best practices and standards are presented. Its objective is to analyze the existing approaches towards software process improvement initiatives.
Another objective is to determine the issues related to the adoption of process improvement and standards. The research outcome is to identify the significant process improvement issues and formulate a classification of generic steps for process improvement.",2011,0, 5308,Design of the mechanical condition monitoring system for Molded Case Circuit Breakers,"In this paper, a mechanical condition monitoring system for the Molded Case Circuit Breaker (MCCB) is designed and developed. The operation principle of the monitoring system, the hardware design and the software design are introduced in detail. The three-phase voltage, the voltage between the auxiliary normally closed contacts and the motor current are detected during the opening, closing and reclosing operations of the MCCB, wherein these operations are driven by the motor. The mechanical condition characteristic parameters, including closing time, opening time, reclosing time, three-phase asynchronous time, closing speed, opening speed, reclosing speed and the force on the handle, can be calculated by analyzing the voltage and current signals. The test results show that the system has a good performance. Moreover, the characteristic parameters of the circuit breaker obtained in the test provide test data for theoretical research on the remaining life prediction of MCCBs.",2011,0, 5309,Wavelet analysis and application on the technology detecting single phase-to-ground fault line for distribution network,"The single phase-to-ground fault is the most frequent accident in distribution networks. In order to detect the fault line, the wavelet packet decomposition method is used in this paper. The first step is to analyze the transient characteristics of the fault line and the wavelet packet decomposition of the transient ground capacitive current. Then the energy method is used to extract the feature band, so the fault line can be judged by analyzing the relationship of amplitude and polarity with singularity and modulus maxima theory. In the end, the feasibility of this method is verified with MATLAB software.",2011,0, 5310,SmartCM a smart card fault injection simulator,"Smart cards are often the target of software or hardware attacks. The most recent attack is based on fault injection, which modifies the behavior of the application. We propose an evaluation of the effect of the propagation and the generation of hostile applications inside the card. We designed several countermeasures and models of smart cards. Then we evaluate the ability of these countermeasures to detect the faults, and the latency of the detection. In a second step we evaluate the mutant with respect to security properties in order to focus only on the dangerous mutants.",2011,0, 5311,Exploiting Text-Related Features for Content-based Image Retrieval,"Distinctive visual cues are of central importance for image retrieval applications, in particular, in the context of visual location recognition. While in indoor environments typically only a few distinctive features can be found, outdoors dynamic objects and clutter significantly impair the retrieval performance. We present an approach which exploits text, a major source of information for humans during orientation and navigation, without the need for error-prone optical character recognition. To this end, characters are detected and described using robust feature descriptors like SURF.
By quantizing them into several hundred visual words, we consider the distinctive appearance of the characters rather than reducing the set of possible features to an alphabet. Writings in images are transformed into strings of visual words termed visual phrases, which provide significantly improved distinctiveness when compared to individual features. Approximate string matching is performed using N-grams, which can be efficiently combined with an inverted file structure to cope with large datasets. An experimental evaluation on three different datasets shows significant improvement of the retrieval performance while reducing the size of the database by two orders of magnitude compared to the state of the art. Its low computational complexity makes the approach particularly suited for mobile image retrieval applications.",2011,0, 5312,Efficient Clustering-based Algorithm for Predicting File Size and Structural Similarity of Transcoded JPEG Images,"The problem of adapting JPEG images to satisfy constraints such as file size and resolution arises in a number of applications, from universal media access to multimedia messaging services. Visually optimized adaptation, however, commands a non-negligible computational cost which we aim to minimize using predictors. In previous works, we presented predictors and systems to achieve low-cost near-optimal adaptation of JPEG images. In this work, we propose a new approach to predicting the file size and quality resulting from the transcoding of a JPEG image subject to changes in quality factor and resolution. We show that the new predictor significantly outperforms the previously proposed solutions in accuracy.",2011,0, 5313,Towards Energy Consumption Measurement in a Cloud Computing Wireless Testbed,"The evolution of Next Generation Networks, especially the wireless broadband access technologies such as Long Term Evolution (LTE) and Worldwide Interoperability for Microwave Access (WiMAX), has increased the number of ""all-IP"" networks across the world. The enhanced capabilities of these access networks have spearheaded the cloud computing paradigm, where end-users aim at having services accessible anytime and anywhere. The availability of services is also related to the end-user device, where one of the major constraints is the battery lifetime. Therefore, it is necessary to assess and minimize the energy consumed by end-user devices, given its significance for the user-perceived quality of cloud computing services. In this paper, an empirical methodology to measure network interface energy consumption is proposed. By employing this methodology, an experimental evaluation of energy consumption in three different cloud computing access scenarios (including WiMAX) was performed. The empirical results obtained show the impact of accurate network interface state management and application network-level design on energy consumption. Additionally, the achieved outcomes can be used in further software-based models to optimize energy consumption and increase the Quality of Experience (QoE) perceived by end-users.",2011,0, 5314,Implementation and Usability Evaluation of a Cloud Platform for Scientific Computing as a Service (SCaaS),"Scientific computing requires simulation and visualization involving large data sets among collaborating teams. Cloud platforms offer a promising solution via SCaaS.
We report on the architecture, implementation and User Experience (UX) evaluation of one such SCaaS platform implementing TOUGH2V2.0, a numerical simulator for sub-surface fluid and heat flow, offered as a service. Results from example simulations, with virtualization of workloads in a multi-tenant, Virtual Machine (VM)-based cloud platform, are presented. These include fluid production from a geothermal reservoir, diffusive and advective spreading of contaminants, radial flow from a CO2 injection well and gas diffusion of a chemical through porous media. Prepackaged VM pools deployed autonomically ensure that sessions are provisioned elastically on demand. Users can access data-intensive visualizations via a web browser. Authentication, user state and sessions are managed securely via an Access Gateway, to autonomically redirect and manage the workflows when multiple concurrent users are accessing their own sessions. Usability in the cloud and on the traditional desktop is comparatively assessed using several UX metrics. Simulated network conditions of different quality were imposed using a WAN emulator. Usability was found to be good for all the simulations under even moderately degraded network quality, as long as latency was not well above 100 ms. Hosting of a complex scientific computing application on an actual, global Enterprise cloud platform (as opposed to earlier remoting platforms) and its usability assessment, both presented for the first time, are the essential contributions of this work.",2011,0, 5315,Development of an Online Energy Auditing Software Application with Remote SQL-Database Support,"Energy efficiency is a very current topic, both locally and internationally. It follows that methodologies to improve efficiency are also gaining importance. In the context of the local utility Eskom, Demand Side Management and improved Energy Efficiency in particular have been identified as major components of the campaign to reduce the negative impacts of the current constraints in generation and transmission capacities. Manual methodologies, however, are time-consuming, error-prone and require highly skilled manpower. This paper describes the development of an energy auditing tool to assist in improving the energy auditing process, together with a methodology for calculating the usage profiles of the various loads. The theoretical results are also presented.",2011,0, 5316,Optimal Sizing of Combined Heat & Power (CHP) Generation in Urban Distribution Network (UDN),"The capacity of Combined Heat and Power (CHP) generation connected to the Urban Distribution Network (UDN) will increase significantly as a result of EU government targets and initiatives. CHP generation can have a significant impact on the power flow, voltage profile, fault current level and the power quality for customers and electricity suppliers. The connection of a CHP plant to a UDN creates a number of well-documented impacts, with voltage rise and fault current level being the dominant effects. A range of options have traditionally been used to mitigate adverse impacts, but these generally revolve around network upgrades, the cost of which may be considerable. Connection of CHP generation can fundamentally alter the operation of a UDN. Where CHP plant capacity is comparable to or larger than local demand, there are likely to be observable impacts on network power flows, voltage regulation and fault current level.
New connections of CHP schemes must be evaluated to identify and quantify any adverse impact on the security and quality of local electricity supplies. The impacts that arise from an individual CHP scheme are assessed in detail when the developer makes an application for the connection of the CHP plant. The objective of this paper is to use a static method to develop techniques that provide a means of determining the optimum capacity of a CHP plant that may be accommodated within a UDN. The main tool used in this paper is the ERAC power analysis software, incorporating load flow and fault current level analysis. These analyses are demonstrated on a 15-busbar network resembling part of a typical UDN. In order to determine the optimal placement and sizing of a CHP plant that could be connected at any particular busbar on the UDN without causing a significant adverse impact on the performance of the UDN, a multiple linear regression model is created and demonstrated using the data obtained by the analysis performed with the ERAC power analysis software.",2011,0, 5317,AudioGene: Computer-based prediction of genetic factors involved in non-syndromic hearing impairment,"AudioGene is a software system developed at the University of Iowa to classify and predict gene mutations that indicate causal or increased risk factors of disease. We focus on a concise example - the most likely genetic causes of a particular form of inherited hearing loss - ADNSHL. Whereas the cost and throughput involved in the collection of genomic data have advanced dramatically during the past decade, gathering and interpreting clinical information regarding disease diagnosis remains slow, costly and error-prone. AudioGene employs machine-learning techniques in an iterative procedure to prioritize probable genetic risk factors of disease, which are then verified with a molecular (wet lab) assay. In our current implementation, AudioGene achieves 67% first-choice accuracy (versus 23% using a majority classifier). When the top three choices are considered, accuracy increases to 83%. This has numerous implications for reducing the cost of genetic screening as well as increasing the power of novel gene discovery efforts. While AudioGene is focused on hearing loss, the design and underlying mechanisms are generalizable to many other diseases, including heart disease, cancer and mental illness.",2011,0, 5318,Automatic functionality detection in behavior-based IDS,"Detection of malicious functionalities presents an effective way to detect malware in behavior-based IDS. A technology including the utilization of Colored Petri Nets for the generalized description and consequent detection of specific malicious functionalities from system call data has been previously developed, verified and presented. A successful effort was made to neutralize possible attempts to obfuscate this approach. Nevertheless, the approach has two major drawbacks. First, target functionalities have to be initially specified by an expert, which is a time-consuming, sometimes subjective and error-prone process. Second, the identification of typical functionalities indicative of malicious programs is not generally straightforward and requires reverse engineering and careful study of many instances of malware. Our paper addresses these drawbacks, clearing the way for a full-scale practical application of this technology. We utilized graph mining and graph similarity assessment algorithms for processing system call data, resulting in the automatic extraction of functionalities from system call data.
This enabled us to identify sets of functionalities suggesting software maliciousness and construct a general obfuscation-resilient malware detector. The paper presents the results of the implementation and testing of the described technologies on a computer network testbed.",2011,0, 5319,Computational resiliency for distributed applications,"In recent years, computer network attacks have decreased the overall reliability of computer systems and undermined confidence in mission-critical software. These robustness issues are magnified in distributed applications, which provide multiple points of failure and attack. The notion of resiliency is concerned with constructing applications that are able to operate through a wide variety of failures, errors, and malicious attacks. A number of approaches have been proposed in the literature based on fault tolerance achieved through replication of resources. In general, these approaches provide graceful degradation of performance to the point of failure but do not guarantee progress in the presence of multiple cascading and recurrent failures. Our approach is to dynamically replicate message-passing processes, detect inconsistencies in their behavior, and restore the level of fault tolerance as a computation proceeds. This paper describes a novel operating system technology for resilient message-passing applications that is automated, scalable, and transparent. The technology provides mechanisms for process replication, process migration, and adaptive failure detection. To quantify the performance overhead of the technology, we benchmark a distributed application exemplar to represent a broader class of applications.",2011,0, 5320,Analysis and implementation of the virtual network system,"Messaging applications want to use different communication networks. But the unfortunate state of affairs is that applications need to use several different application programming interfaces (APIs) and to design protocols for how and when to use specific communication networks. This is troublesome and error-prone, and APIs vary a lot; applications want to use just one API but get the benefit of several communication networks. In this paper we detail, implement and test, in real field tests, a virtual network system (VNS). We argue why messaging applications should use and benefit from VNS. The VNS is a middleware solution that enables seamless usage of different networks. The Netlink Next Generation (NLNG) protocol is a practical implementation of VNS, and we elaborate on its features, design choices and problems faced. The NLNG protocol has its origins in Linux Netlink, and the VNS concept is a middleware, thus we provide a comparison between these previous works and our work. The VNS system and NLNG protocol have already been tested with several applications, e.g., mail clients, tracking and command software, and network interfaces, e.g., IP, VHF, HF, GSM SMS and TETRA SDS.",2011,0, 5321,Adaptive Failure Detection via Heartbeat under Hadoop,"Hadoop has become a popular framework for processing massive data sets in a large-scale cluster. However, it is observed that the detection of a failed worker is delayed, which may result in a significant increase in the completion time of jobs with different workloads. To cope with this, we present two mechanisms, the Adaptive interval and the Reputation-based Detector, that enable Hadoop to detect a failed worker in the shortest time.
The Adaptive interval dynamically configures the expiration time so that it adapts to the job size. The Reputation-based Detector evaluates the reputation of each worker. Once the reputation of a worker falls below a threshold, the worker is considered a failed worker. In our experiments, we demonstrate that both of these strategies achieve great improvement in the detection of failed workers. Specifically, the Adaptive interval has relatively better performance with small jobs, while the Reputation-based Detector is more suitable for large jobs.",2011,0, 5322,Software Fault Prediction Framework Based on aiNet Algorithm,"Software fault prediction techniques are helpful in developing dependable software. In this paper, we propose a novel framework that integrates the testing and prediction processes for unit testing prediction. Because highly fault-prone metric data are widely scattered and multiple centers can represent the whole dataset better, we used the artificial immune network (aiNet) algorithm to extract and simplify data from the modules that have been tested, and then generated multiple centers for each network by Hierarchical Clustering. The proposed framework acquires information in a timely manner along with the testing process and dynamically adjusts the network generated by the aiNet algorithm. Experimental results show that higher accuracy can be obtained by using the proposed framework.",2011,0, 5323,A controlled switching methodology for transformer inrush current elimination: Theory and experimental validation,"Transformers are generally energized by closing the circuit breakers at random times. Consequently, this operation generates high transient inrush currents as a result of the asymmetrical magnetic flux produced in the windings. In light of these facts, this paper presents a strategy to control the switching phenomena which occur during power transformer inrush. The general idea consists of calculating the pre-existing magnetic fluxes left on the core limbs as a function of the operating voltage previously applied to the transformer, just prior to the moment at which de-energization happened. By using these data and the equations to predict the most suitable closing moments, the effectiveness of the proposal at accomplishing the main target pointed out here is shown. Experimental investigations are carried out in order to demonstrate the application of the method and its validation. The results show the feasibility of building hardware and software structures to drastically reduce transformer inrush currents.",2011,0, 5324,Specialist tool for monitoring the measurement degradation process of induction active energy meters,"This paper presents a methodology and a specialist tool for failure probability analysis of induction-type watt-hour meters, considering the main variables related to their measurement degradation processes. The database of the metering park of a distribution company, named Elektro Electricity and Services Co., was used for determining the most relevant variables and to feed the data into the software. The modeling developed to calculate the watt-hour meters' probability of failure was implemented in a tool through a user-friendly platform, written in the Delphi language.
Among the main features of this tool are: analysis of the probability of failure by risk range; geographical localization of the meters in the metering park; and automatic sampling of induction-type watt-hour meters, based on a risk classification expert system, in order to obtain information to aid the management of these meters. The main goals of the specialist tool are following and managing the measurement degradation, maintenance and replacement processes for induction watt-hour meters.",2011,0, 5325,Efficient mode selection with extreme value detection based pre-processing algorithm for H.264/AVC fast intra mode decision,"The mode decision in the intra prediction of an H.264/AVC encoder requires complicated computations and spends much time selecting the best mode that achieves the minimum rate-distortion (RD). The complicated computations for the mode decision cause difficulty in real-time applications, especially for software-based encoders. This study creates an efficient fast algorithm, called Extreme Value Detection (EVD), to predict the best direction mode except for the DC mode for fast intra mode decision. The EVD-based edge detection predicts luma-4×4, luma-16×16, and chroma-8×8 modes effectively. In the first step, we use the pre-processing mode selection algorithm to find the primary mode which is selected for fast prediction. In the second step, the few selected high-potential candidate modes are applied to calculate the RD cost for the mode decision. This method reduces encoding time effectively while also maintaining the same video quality. Simulation results show that the proposed EVD method reduces the encoding time by 63%, and requires a bit-rate increase of about 2.6% and a peak signal-to-noise ratio (PSNR) decrease of about 0.08 dB in QCIF and CIF sequences, compared with the H.264/AVC JM 14.2 software. This method achieves less PSNR degradation and bit-rate increase than the previous methods, with greater encoding time reduction.",2011,0, 5326,Prototype design of low cost four channels digital electroencephalograph for sleep monitoring,"The electrical activity of the brain, known as the electroencephalogram (EEG) signal, is used in the diagnosis of sleep quality. Based on the EEG signal, the power of brain waves related to sleep quality can be obtained by analysis of the power spectral density. The problem in developing countries, for example Indonesia, is that EEG instruments are not widely available in every region of the country. This project designed and implemented a four-channel digital EEG, in which the hardware and software design concepts were adopted from the OpenEEG project. A battery-operated four-channel EEG amplifier with an average gain magnitude of 6100 times, a bandwidth of 0.05-60 Hz and a slope gradient of -60.00 dB/decade is developed. A digital board consisting of an AT-mega8 and a serial interface with an optocoupler is used to interface with and view the EEG signal on a notebook. The prototype has successfully detected cardiac signal patterns simultaneously with good SNR. In EEG measurement through monitoring of brain waves during sleep, the data generated by the PSD (Power Spectral Density) graph show the dominance of brain signals at 7-9 Hz (alpha) and 3-5 Hz (theta).
From several tests and measurements, this research concludes that the low-cost four-channel EEG prototype is capable of acquiring satisfactory brain wave monitoring during sleep from a healthy volunteer.",2011,0, 5327,Memory Leak Detection Based on Memory State Transition Graph,"A memory leak is a common type of defect that is hard to detect manually. Existing memory leak detection tools suffer from a lack of precise interprocedural alias and path conditions. To address this problem, we present a static interprocedural analysis algorithm, which captures memory actions and path conditions precisely, to detect memory leaks in C programs. Our algorithm uses path-sensitive symbolic execution to track the memory actions in different program paths guarded by path conditions. A novel analysis model called the Memory State Transition Graph (MSTG) is proposed to describe the tracking process and its results. An MSTG is generated from a procedure. Nodes in an MSTG contain states of memory objects which record the function behaviors precisely. Edges in an MSTG are annotated with path conditions collected by symbolic execution. The path conditions are checked for satisfiability to reduce the number of false alarms and the path explosion. In order to do interprocedural analysis, our algorithm generates a summary for each procedure from the MSTG and applies the summary at the procedure's call sites. Our implemented tool has found several memory leak bugs in some open source programs and detected more bugs than other tools in some programs from the SPEC2000 benchmarks. In some cases, our tool produces many false positives, but most of them are caused by the same code patterns, which are easy to check.",2011,0, 5328,Use Cases Modeling for Scalable Model-Checking,"Formal methods are effective techniques for automating software verification to satisfy quality and reliability. However, the application of these techniques within industrial settings remains limited due to the complexity of the produced models. Context-aware verification can circumvent this complexity by reducing the scope of the verification to some specific environmental conditions. We previously proposed a Context Description Language (CDL) to facilitate the formalization of requirements and contexts. However, the number of CDL models required to precisely formalize contexts grows rapidly with the complexity of the system, and manually writing CDL models is a difficult and error-prone task. In this paper, we propose a tool-supported framework that assists engineers in describing system contexts. We extended UML use cases with scenario descriptions and we linked a domain specification vocabulary to automatically generate CDL models. An industrial case study is presented to illustrate the effectiveness of our approach.",2011,0, 5329,RobusTest: A Framework for Automated Testing of Software Robustness,"The robustness of a software system is defined as the degree to which the system can behave ordinarily and in conformance with the requirements in extraordinary situations. By increasing the robustness, many failures which decrease the quality of the system can be avoided or masked. When it comes to specifying, testing and assessing software robustness in an efficient manner, the methods and techniques are not yet mature. This paper presents RobusTest, a framework for testing robustness properties of a system, currently focusing on timing issues. The expected robust behavior of the system is formulated as properties.
The properties are then used to automatically generate robustness test cases and assess the results. An implementation of RobusTest in Java is presented here, together with results from testing different open-source implementations of the XMPP instant messaging protocol. By executing 400 test cases that were automatically generated from properties on two such implementations, we found 11 critical failures and 15 nonconformance problems as compared to the XMPP specification.",2011,0, 5330,Evotec: Evolving the Best Testing Strategy for Contract-Equipped Programs,"Automated random testing is efficient at detecting faults but it is certainly not an optimal testing strategy for every given program. For example, an automated random testing tool ignores that some routines have stronger preconditions, use certain literal values, or are more error-prone. Taking such characteristics into account may increase testing effectiveness. In this article, we present Evotec, an enhancement of random testing which relies on genetic algorithms to evolve the best testing strategy for contract-equipped programs. The resulting strategy is optimized for detecting more faults, satisfying more routine preconditions and establishing more object states on a given set of classes to test. Our experiment tested 92 classes over 1710 hours. It shows that Evotec detected 29% more faults than random+ and 18% more faults than the precondition-satisfaction strategy.",2011,0, 5331,Towards Automatic Discovery of co-authorship Networks in the Brazilian Academic Areas,"In Brazil, the individual curricula vitae of academic researchers, which are mainly composed of professional information and scientific productions, are managed in a single software platform called Lattes. Currently, the information gathered from this platform is typically used to evaluate, analyze and document the scientific productions of Brazilian research groups. Despite the fact that the Lattes curricula contain semi-structured information, the analysis procedure for medium and large groups becomes a time-consuming and highly error-prone task. In this paper, we describe an extension of the script Lattes (an open-source knowledge extraction system from the Lattes platform) for analysing individuals' Lattes curricula and automatically discovering large-scale co-authorship networks for any academic area. Given a knowledge domain (academic area), the system automatically allows identifying the researchers associated with the academic area, extracting every list of scientific productions of the researchers, discretized by type and publication year, and, for each paper, identifying the co-authors registered in the Lattes Platform. The system also allows the generation of different types of networks which may be used to study the characteristics of academic areas at large scale. In particular, we explored the node degree and Author Rank measures for each identified researcher. Finally, we confirm through experiments that the system facilitates a simple way to generate different co-authorship networks. To the best of our knowledge, this is the first study to examine large-scale co-authorship networks for any Brazilian academic area.",2011,0, 5332,A New Approach to Evaluate Performance of Component-Based Software Architecture,"Nowadays, driven by technological developments, software systems grow in scale and complexity.
In large systems, and to overcome complexity, software architecture has been considered a notion connected with product quality and plays a crucial role in the quality of the final system. The aim of the analysis of software architecture is to recognize potential risks and investigate the qualitative needs of software design before the process of production and implementation. Achieving this goal reduces costs and improves software quality. In this paper, a new approach is presented to evaluate the performance of component-based software architecture for software systems with distributed architecture. In this approach, the system is first modeled as a Discrete Time Markov Chain, and then the required parameters are taken from it to produce a Product Form Queueing Network. Resource limitations, like restrictions on the number of threads in a particular machine, are also considered in the model. The prepared model is solved by the SHARPE software package. As a result of solving the produced model in this approach, the throughput, the average response time and the bottlenecks under different system workloads are predicted, and some suggestions are presented to improve the system performance.",2011,0, 5333,Non-intrusive reconfigurable HW/SW fault tolerance approach to detect transient faults in microprocessor systems,"This paper presents a non-intrusive hybrid fault detection approach that combines hardware and software techniques to detect transient faults in microprocessors. Such faults have a major influence on microprocessor systems, affecting both data and control flow. In order to protect the system, an application-oriented hardware module is automatically generated and reconfigured on the system during runtime. When combined with fault tolerance techniques based on software, this solution offers full system protection against transient faults. A fault injection campaign is performed using a MIPS microprocessor executing a set of applications. The HW/SW implementation in a reprogrammable platform shows minimal memory area and execution time overhead. Fault injection results show the efficiency of this method in detecting 100% of faults.",2011,0, 5334,Microprocessor soft error rate prediction based on cache memory analysis,"Static raw soft-error rates (SER) of COTS microprocessors are classically obtained with particle accelerators, but they are far larger than real application failure rates, which depend on the dynamic application behavior and on the cache protection mechanisms. In this paper, we propose a new methodology to evaluate the real cache sensitivity for a given application, and to calculate a more accurate failure rate. This methodology is based on the monitoring of cache accesses, and requires a microprocessor simulator. It is applied in this paper to the LEON3 soft-core with several benchmarks. Results are validated by fault injections on one implementation of the processor running the same programs: the proposed tool predicted all errors with only a small over-estimation.",2011,0, 5335,Automated wafer defect map generation for process yield improvement,"Spatial Signature Analysis (SSA) is used to detect recurring failure signatures in today's wafer fabrication. In order for SSA to be effective, it must correlate the signature to a wafer defect map library. However, classifying the signatures for the library is time-consuming and tedious.
The Manual Visual Inspection (MVI) of several failure bins in a wafer map for multiple lots can lead to fatigue for the operator and result in an inaccurate representation of the failure signature. Hence, an automated wafer map extraction process is proposed here to replace MVI while ensuring the accuracy of the failure signature library. A clustering tool, namely Density-Based Spatial Clustering of Applications with Noise (DBSCAN), is utilized to extract the wafer spatial signature while ignoring the outliers. The appropriate size for the clustered signature is investigated and its performance is compared to the MVI signature. The analysis shows that, for 3 selected failure modes, a 20% occurrence-rate clustered pattern provides similar performance to a 50% MVI signature. The proposed technique leads to a significant reduction in the time required for extracting current and new signatures, allowing faster yield response and improvement.",2011,0, 5336,"High level synthesis of stereo matching: Productivity, performance, and software constraints","FPGAs are an attractive platform for applications with high computation demand and low energy consumption requirements. However, design effort for FPGA implementations remains high - often an order of magnitude larger than design effort using high level languages. Instead of this time-consuming process, high level synthesis (HLS) tools generate hardware implementations from high level languages (HLL) such as C/C++/SystemC. Such tools reduce design effort: high level descriptions are more compact and less error-prone. HLS tools promise hardware development abstracted from software designer knowledge of the implementation platform. In this paper, we examine several implementations of stereo matching, an active area of computer vision research that uses techniques also common for image de-noising, image retrieval, feature matching and face recognition. We present an unbiased evaluation of the suitability of using HLS for typical stereo matching software, the usability and productivity of AutoPilot (a state-of-the-art HLS tool), and the performance of designs produced by AutoPilot. Based on our study, we provide guidelines for software design, limitations of mapping general purpose software to hardware using HLS, and future directions for HLS tool development. For the stereo matching algorithms, we demonstrate between 3.5X and 67.9X speedup over software (but less than achievable by manual RTL design) with a five-fold reduction in design effort vs. manual hardware design.",2011,0, 5337,Software-Based Detecting and Recovering from ECC-Memory Faults,"To address the problem that ECC cannot correct multi-bit errors in ECC memory, this paper proposes a memory error processing method at the software level. By revising the Linux kernel code, the method can discover the area of influence of a memory error by seeking the process information mapped to the faulty address. In this way, losses to the user due to system halts caused by memory errors can be avoided. The experimental results show that the method can repair memory errors to a certain degree and does not affect the normal operation of the system.",2011,0, 5338,Using Behavioral Profiles to Detect Software Flaws in Network Servers,"Some software faults, namely security vulnerabilities, tend to elude conventional testing methods.
Since the effects of these faults may not be immediately perceived nor have a direct impact on the server's execution (e.g., a crash), they can remain hidden even if exercised by the test cases. Our detection approach consists in inferring a behavioral profile of a network server that models its correct execution by combining information about the implemented state machine protocol and the server's internal execution. Flaws are automatically detected if the server's behavior deviates from the profile while processing the test cases. This approach was implemented in a tool, which was used to analyze several FTP vulnerabilities, showing that it can effectively find various kinds of flaws.",2011,0, 5339,The Early Identification of Detector Locations in Dependable Software,"The dependability properties of a software system are usually assessed and refined towards the end of the software development lifecycle. Problems pertaining to software dependability may necessitate costly system redesign. Hence, early insights into the potential for error propagation within a software system would be beneficial. Further, the refinement of the dependability properties of software involves the design and location of dependability components called detectors and correctors. Recently, a metric, called spatial impact, has been proposed to capture the extent of error propagation in a software system, providing insights into the location of detectors and correctors. However, the metric only provides insights towards the end of the software development life cycle. In this paper, our objective is to investigate whether spatial impact can enable the early identification of locations for detectors. To achieve this, we first hypothesise that spatial impact is correlated with module coupling, a metric that can be evaluated early in the software development life cycle, and show this relationship to hold. We then evaluate module coupling for the modules of a complex software system, identifying modules with high coupling values as potential locations for detectors. We then enhance these modules with detectors and perform fault-injection analysis to determine the suitability of these locations. The results presented demonstrate that our approach can permit the early identification of possible detector locations.",2011,0, 5340,Uncertainty Propagation through Software Dependability Models,"Stochastic models are often employed to study the dependability of critical systems and assess various hardware and software fault-tolerance techniques. These models take into account the randomness in the events of interest (aleatory uncertainty) and are generally solved at fixed parameter values. However, the parameter values themselves are determined from a finite number of observations and hence have uncertainty associated with them (epistemic uncertainty). This paper discusses methods for computing the uncertainty in output metrics of dependability models, due to epistemic uncertainties in the model input parameters. Methods for epistemic uncertainty propagation through dependability models of varying complexity are presented with illustrative examples.
The distribution, variance and expectation of model output, due to epistemic uncertainty in model input parameters are derived and analyzed to understand their limiting behavior.",2011,0, 5341,Feature Interaction Faults Revisited: An Exploratory Study,"While a large body of research is dedicated to testing for feature interactions in configurable software, there has been little work that examines what constitutes such a fault at the code level. In consequence, we do not know how prevalent real interaction faults are in practice, what a typical interaction fault looks like in code, how to seed interaction faults, or whether current interaction testing techniques are effective at finding the faults they aim to detect. We make a first step in this direction, by deriving a white box criterion for an interaction fault. Armed with this criterion, we perform an exploratory study on hundreds of faults from the field in two open source systems. We find that only three of the 28 which appear to be interaction faults are in fact due to features' interactions. We investigate the remaining 25 and find that, although they could have been detected without interaction testing, varying the system configuration amplifies the fault-finding power of a test suite, making these faults easier to expose. Thus, we characterize the benefits of interaction testing in regards to both interaction and non-interaction faults. We end with a discussion of several mutations that can be used to mimic interaction faults based on the faults we see in practice.",2011,0, 5342,Adaptive Regression Testing Strategy: An Empirical Study,"When software systems evolve, different amounts and types of code modifications can be involved in different versions. These factors can affect the costs and benefits of regression testing techniques in different ways, and thus, there may be no single regression testing technique that is the most cost-effective technique to use on every version. To date, many regression testing techniques have been proposed, but no research has been done on the problem of helping practitioners systematically choose appropriate techniques on new versions as systems evolve. To address this problem, we propose adaptive regression testing (ART) strategies that attempt to identify the regression testing techniques that will be the most cost-effective for each regression testing session considering organization's situations and testing environment. To assess our approach, we conducted an experiment focusing on test case prioritization techniques. Our results show that prioritization techniques selected by our approach can be more cost-effective than those used by the control approaches.",2011,0, 5343,Parametric Bootstrapping for Assessing Software Reliability Measures,"The bootstrapping is a statistical technique to replicate the underlying data based on the resampling, and enables us to investigate the statistical properties. It is useful to estimate standard errors and confidence intervals for complex estimators of complex parameters of the probability distribution from a small number of data. In software reliability engineering, it is common to estimate software reliability measures from the fault data (fault-detection time data) and to focus on only the point estimation. However, it is difficult in general to carry out the interval estimation or to obtain the probability distributions of the associated estimators, without applying any approximate method. 
In this paper, we assume that the software fault-detection process in the system testing is described by a non-homogeneous Poisson process, and develop a comprehensive technique to study the probability distributions on significant software reliability measures. Based on the maximum likelihood estimation, we assess the probability distributions of estimators such as the initial number of software faults remaining in the software, software intensity function, mean value function and software reliability function, via parametric bootstrapping method.",2011,0, 5344,Using Dependability Benchmarks to Support ISO/IEC SQuaRE,"The integration of Commercial-Off-The-Shelf (COTS) components in software has reduced time-to-market and production costs, but selecting the most suitable component, among those available, remains still a challenging task. This selection process, typically named benchmarking, requires evaluating the behaviour of eligible components in operation, and ranking them attending to quality characteristics. Most existing benchmarks only provide measures characterising the behaviour of software systems in absence of faults ignoring the hard impact that both accidental and malicious faults have on software quality. However, since using COTS to build a system may motivate the emergence of dependability issues due to the interaction between components, benchmarking the system in presence of faults is essential. The recent ISO/IEC 25045 standard copes with this lack by considering accidental faults when assessing the recoverability capabilities of software systems. This paper proposes a dependability benchmarking approach to determine the impact that faults (noted as disturbances in the standard) either accidental or malicious may have on the quality features exhibited by software components. As will be shown, the usefulness of the approach embraces all evaluator profiles (developers, acquirers and third-party evaluators) identified in the ISO/IEC 25000 """"SQuaRE"""" standard. The feasibility of the proposal is finally illustrated through the benchmarking of three distinct software components, which implement the OLSR protocol specification, competing for integration in a wireless mesh network.",2011,0, 5345,RAMpage: Graceful Degradation Management for Memory Errors in Commodity Linux Servers,"Memory errors are a major source of reliability problems in current computers. Undetected errors may result in program termination, or, even worse, silent data corruption. Recent studies have shown that the frequency of permanent memory errors is an order of magnitude higher than previously assumed and regularly affects everyday operation. Often, neither additional circuitry to support hardware-based error detection nor downtime for performing hardware tests can be afforded. In the case of permanent memory errors, a system faces two challenges: detecting errors as early as possible and handling them while avoiding system downtime. To increase system reliability, we have developed RAMpage, an online memory testing infrastructure for commodity x86-64-based Linux servers, which is capable of efficiently detecting memory errors and which provides graceful degradation by withdrawing affected memory pages from further use. 
We describe the design and implementation of RAMpage and present results of an extensive qualitative as well as quantitative evaluation.",2011,0, 5346,Automatic Robustness Assessment of DDS-Compliant Middleware,"The next generation of critical systems requires an efficient, scalable and robust data dissemination infrastructure. Middleware solutions compliant with the novel OMG standard, called Data Distribution Service (DDS), are being traditionally used for architecting large-scale systems, because they well meet the requirements of scalability, seamless decoupling and fault tolerance. Due to such features, industrial practitioners are enforcing the adoption of such middleware solutions also within the context of critical systems. However, these systems pose serious dependability requirements, which in turn demand DDS compliant products also to realize reliable data dissemination in different and heterogeneous contexts. Hence, assessing the supported reliability degree and proposing improvement strategies becomes crucial and requires a clear understanding of DDS compliant middleware failing behavior. This paper illustrates an innovative tool to automatically evaluate the robustness of DDS-compliant middleware based on a fault injection technique. Specifically, experiments have been conducted on an actual implementation of the DDS standard, by means of injecting a set of proper invalid inputs through its API and analyzing the achieved outcomes.",2011,0, 5347,Autonomic Resource Management Handling Delayed Configuration Effects,"Today, cloud providers offer customers access to complex applications running on virtualized hardware. Nevertheless, big virtualized data centers become stochastic environments with performance fluctuations. The growing number of cloud services makes a manual steering impossible. An automatism on the provider side is needed. In this paper, we present a software solution located in the Software as a Service layer with autonomous agents that handle user requests. The agents allocate resources and configure applications to compensate performance fluctuations. They use a combination of Support Vector Machines and Model-Predictive Control to predict and plan future configurations. This allows them to handle configuration delays for requesting new virtual machines and to guarantee time-dependent service level objectives (SLOs). We evaluated our approach on a real cloud system with a high-performance software and a three-tier e-commerce application. The experiments show that the agents accurately configure the application and plan horizontal scalings to enforce SLO fulfillments even in the presence of noise.",2011,0, 5348,Efficiently Synchronizing Virtual Machines in Cloud Computing Environments,"Infrastructure as a Service (IaaS), a form of cloud computing, is gaining attention for its ability to enable efficient server administration in dynamic workload environments. In such environments, however, updating the software stack or content files of virtual machines (VMs) is a time-consuming task, discouraging administrators from frequently enhancing their services and fixing security holes. This is because the administrator has to upload the whole new disk image to the cloud platform via the Internet, which is not yet fast enough that large amounts of data can be transferred smoothly. 
Although the administrator can apply only incremental updates directly to the running VMs, he or she has to carefully consider the type of update and perform operations on all the running VMs, such as application restarts and operating system reboots. This is a tedious and error-prone task. This paper presents a technique for synchronizing VMs in less time and with a lower administrative burden. We introduce the Virtual Disk Image Repository, which runs on the cloud platform and automatically updates the virtual disk image and the running VMs with only the incremental update information. We also show a mechanism that performs necessary operations on the running VM such as restarting server processes, based on the types of files that are updated. We implemented a prototype on Linux 2.6.31.14 and Amazon Elastic Compute Cloud. The experimental results show that our technique can synchronize VMs in an order-of-magnitude shorter time than the conventional disk-image-based VM cloning method. Although our system imposes about 30% overhead on the developer's environment, it imposes no observable overhead on public servers and correctly performs necessary operations to put updates into effect.",2011,0, 5349,"VM Leakage and Orphan Control in Open-Source Clouds","Computer systems often exhibit degraded performance due to resource leakage caused by erroneous programming or malicious attacks, and computers can even crash in extreme cases of resource exhaustion. The advent of cloud computing provides increased opportunities to amplify such vulnerabilities, thus affecting a significant number of computer users. Using simulation, we demonstrate that cloud computing systems based on open-source code could be subjected to a simple malicious attack capable of degrading availability of virtual machines (VMs). We describe how the attack leads to VM leakage, causing orphaned VMs to accumulate over time, reducing the pool of resources available to users. We identify a set of orphan control processes needed in multiple cloud components, and we illustrate how such processes detect and eliminate orphaned VMs. We show that adding orphan control allows an open-source cloud to sustain a higher level of VM availability during malicious attacks. We also report on the overhead of implementing orphan control.",2011,0, 5350,"Valuing quality of experience: A brave new era of user satisfaction and revenue possibilities","The telecommunication market today is defined by a plethora of innovative products and technologies that constantly raise the bar of technical feasibility in both hardware and software. Meanwhile users constantly demand better quality and improved attributes for all applications, becoming less and less tolerant of errors or inconsistencies. Evaluation methods that were dominant for several years in the field seem to have limited effectiveness in assessing end-user satisfaction, leading to unhappy customers and lower revenue for key market players. Ensuring Quality of Service (QoS) has proved no longer capable of increasing market share; therefore a novel evaluation method is necessary. The aim of the present paper is to present a new framework of user-oriented quality assessment that tries to measure the overall experience derived from a telecommunication product.
Provided that modern services are based on the principle of sharing an overall experience with others, it seems certain that the new method of estimating Quality of Experience (QoE) will produce much better results, needed by both providers and customers.",2011,0, 5351,"A method for copper lines classification","Recently, many end users pay attention to the quality of Internet access; in Italy, a nationwide measurement campaign, sponsored by the Italian Communication Regulatory Authority, allows users to evaluate their bandwidth using licit software. In this context, a primary end-user need is to have the possibility to measure their bandwidth and to compare it with the parameters declared by ISPs. Assuming the availability of a standard recognized methodology to measure bandwidth on the user access link, a problem with this approach arises when the measured performance is lower than the declared quality of service. When this happens, the problem could depend on several factors not directly attributable to the ISP. In this work, we propose a solution by which it is possible to characterize a physical line. The idea is to detect the situations in which performance is degraded due to an unsatisfactory physical line state. To make this detection, some real cases are considered.",2011,0, 5352,"Performance Analysis of Cloud Centers under Burst Arrivals and Total Rejection Policy","Quality of service, QoS, has a great impact on wider adoption of cloud computing. Maintaining the QoS at an acceptable level for cloud users requires an accurate and well adapted performance analysis approach. In this paper, we describe a new approximate analytical model for performance evaluation of cloud server farms under burst arrivals and solve it to obtain important performance indicators such as mean request response time, blocking probability, probability of immediate service and probability distribution of number of tasks in the system. This model allows cloud operators to tune the parameters such as the number of servers and/or burst size, on one side, and the values of blocking probability and probability that a task request will obtain immediate service, on the other.",2011,0, 5353,"Electrically detected magnetic resonance study of a near interface trap in 4H SiC MOSFETs","It is well known that 4H silicon carbide (SiC) based metal oxide silicon field effect transistors (MOSFETs) have great promise in high power and high temperature applications. The reliability and performance of these MOSFETs is currently limited by the presence of SiC/SiO2 interface and near interface traps which are poorly understood. Conventional electron paramagnetic resonance (EPR) studies of silicon samples have been utilized to argue for carbon dangling bond interface traps [1]. For several years, with several coworkers, we have explored these silicon carbide based MOSFETs with electrically detected magnetic resonance (EDMR), [2,3] establishing a connection between an isotropic EDMR spectrum with g=2.003 and deep level defects in the interface/near interface region of SiC MOSFETs. We tentatively linked the spectrum to a silicon vacancy or closely related defect. This assessment was tentative because we were not previously able to quantitatively evaluate the electron nuclear hyperfine interactions at the site.
Through multiple improvements in EDMR hardware and data acquisition software, we have achieved a very large improvement in sensitivity and resolution in EDMR, which allows us to detect side peak features in the EDMR spectra caused by electron nuclear hyperfine interactions. This improved resolution allows far more definitive conclusions to be drawn about defect structure. In this work, we provide extremely strong experimental evidence identifying the structure of that defect. The evidence comes from very high resolution and sensitivity fast passage (FP) mode [4, 5] electrically detected magnetic resonance (EDMR) or FPEDMR of the ubiquitous EDMR spectrum.",2011,0, 5354,"PV system monitoring and performance of a grid connected PV power station located in Manchester-UK","In the last two decades renewable resources have gained more attention due to continuing energy demand, along with the depletion in fossil fuel resources and their environmental effects on the planet. This paper presents a novel approach to monitoring PV power stations. The monitoring system enables early detection of system degradation by calculating the residual difference between the model-predicted and the actual measured power parameters. The model is derived using the MATLAB/SIMULINK software package and is designed with a dialog box to enable the user input of the PV system parameters. The performance of the developed monitoring system was examined and validated under different operating conditions and faults, e.g. dust, shadow and snow. Results were simulated and analyzed using the environmental parameters of irradiance and temperature. The irradiance and temperature data are gathered from a 28.8kW grid connected solar power system located on the tower block within the MMU campus in central Manchester. These real-time parameters are used as inputs of the developed PV model. Repeatability and reliability of the developed model performance were validated over a one-and-a-half-year period.",2011,0, 5355,"A mixed method study to identify factors affecting software reusability in reuse intensive development","The objectives of reusing software are to reduce the cost and amount of resources used to produce quality software that is on time. These objectives are achieved by reusing software artefacts. The reuse intensive software development approaches, such as component based software development (CBSD) and software product lines (SPL) development, make use of reusable software assets. The use of open source software (OSS) is common in the software industry, especially in CBSD. However, recent research suggests the use of OSS in SPL. In this paper the results of a mixed method study are presented. The study focuses on identifying the factors affecting reusability of software in a reuse intensive software development environment. The first part of the study is based on interviews with experts and professionals working with OSS in a reuse intensive environment. The next part describes a survey conducted to assess the importance of the factors. The procedures followed and the results obtained for both research activities are presented.",2011,0, 5356,"Parallelization of an ultrasound reconstruction algorithm for non destructive testing on multicore CPU and GPU","The CIVA software platform developed by CEA-LIST offers various simulation and data processing modules dedicated to non-destructive testing (NDT).
In particular, ultrasonic imaging and reconstruction tools are proposed, for the purpose of localizing echoes and identifying and sizing the detected defects. Because of the complexity of data processed, computation time is now a limitation for the optimal use of available information. In this article, we present performance results on parallelization of one computationally heavy algorithm on general purpose processors (GPP) and graphic processing units (GPU). The GPU implementation makes intensive use of atomic intrinsics. Compared to the initial GPP implementation, the optimized GPP implementation runs up to 116× faster and the GPU implementation up to 631×. This shows that, even with irregular workloads, combining software optimization and hardware improvements, GPUs give high performance.",2011,0, 5357,"Efficient Gender Classification Using Interlaced Derivative Pattern and Principal Component Analysis","With the wealth of image data that is now becoming increasingly accessible through the advent of the world wide web and proliferation of cheap, high quality digital cameras, it is becoming ever more desirable to be able to automatically classify gender into the appropriate category such that intelligent agents and other such intelligent software might make better informed decisions without a need for excessive human intervention. In this paper, we present a new technique which provides performance superior to existing gender classification techniques. We first detect the face portion using the Viola-Jones face detector, and then the Interlaced Derivative Pattern (IDP) extracts discriminative facial features for gender, which are passed through Principal Component Analysis (PCA) to eliminate redundant features and thus reduce dimension. Keeping in mind the strengths of different classifiers, three classifiers (K-nearest neighbor, Support Vector Machine and Fisher Discriminant Analysis) are combined, which minimizes the classification error rate. We have used the Stanford University Medical Students (SUMS) face database for our experiment. Comparing our results and performance with existing techniques, our proposed method provides a high accuracy rate and robustness to illumination change.",2011,0, 5358,"ASAP: A Self-Adaptive Prediction System for Instant Cloud Resource Demand Provisioning","The promise of cloud computing is to provide computing resources instantly whenever they are needed. The state-of-art virtual machine (VM) provisioning technology can provision a VM in tens of minutes. This latency is unacceptable for jobs that need to scale out during computation. To truly enable on-the-fly scaling, a new VM needs to be ready in seconds upon request. In this paper, we present an online temporal data mining system called ASAP, to model and predict the cloud VM demands. ASAP aims to extract high level characteristics from the VM provisioning request stream and notify the provisioning system to prepare VMs in advance. For the quantification issue, we propose Cloud Prediction Cost to encode the cost and constraints of the cloud and guide the training of prediction algorithms. Moreover, we utilize a two-level ensemble method to capture the characteristics of the high transient demand time series.
Experimental results using historical data from an IBM cloud in operation demonstrate that ASAP significantly improves the cloud service quality and provides the possibility of on-the-fly provisioning.",2011,0, 5359,"Risk management assessment using SERIM method","Software development is a complex process that involves many activities and has great uncertainty of success. It is also a type of activity that can be costly if mismanaged. Many factors can lead to success and many can also cause software project failure. Failure can actually be detected early if we adopt the concept of risk management and implement it in the software development project. SERIM is a method to measure risk in software engineering, proposed by Karolak [4]. SERIM is based on the mathematics of probability. SERIM uses some parameters which are derived from risk factors. The factors are: Organization, Estimation, Monitoring, Development Methodology, Tools, Risk Culture, Usability, Correctness, Reliability and Personnel. Each factor is then measured by question metrics, and there are 81 software metric questions for all factors. The factors are then related and mapped to SDLC phases and risk management activities to calculate the probability (P). SERIM uses 28 probability variables to assess the risk potentials. The SERIM method is then applied to assess the risk of an information system development project, TrainSys, which is developed for a training and education unit in an organization. The result is useful to determine the low probability of the TrainSys project success factor. The results also show the dominant and highest-ranked factors that need to be addressed in order to improve the quality of the process and product of software development.",2011,0, 5360,"An integrated health and contingency management case study on an autonomous ground robot","Autonomous robotic vehicles are playing an increasingly important role in support of a wide variety of present and future critical missions. Due to the absence of timely operator/pilot interaction and potential catastrophic consequence of unattended faults and failures, a real-time, onboard health and contingency management system is desired. This system would be capable of detecting and isolating faults, predicting fault progression and automatically reconfiguring the system to accommodate faults. This paper presents the implementation of an integrated health and contingency management system on an autonomous ground robot. This case study is conducted to demonstrate the feasibility and benefit of using real-time prognostics and health management (PHM) information in robot control and mission reconfiguration. Several key software modules including a HyDE-based diagnosis reasoner, particle filtering-based prognosis server and a prognostics-enhanced mission planner are presented in this paper with illustrative experimental results.",2011,0, 5361,"A data placement algorithm with binary weighted tree on PC cluster-based cloud storage system","The need for and use of scalable storage on the cloud has rapidly increased in the last few years. Organizations need large amounts of storage for their operational data and backups. To address this need, high performance storage servers for cloud computing are the ultimate solution, but they are very expensive. Therefore we propose an efficient cloud storage system using inexpensive, commodity computer nodes. These computer nodes are organized into a PC cluster as a datacenter. Data objects are distributed and replicated in a cluster of commodity nodes located in the cloud.
In the proposed cloud storage system, a data placement algorithm which provides highly available and reliable storage is proposed. The proposed algorithm applies a binary tree to search for storage nodes. It supports the weighted allocation of data objects, balancing the load on the PC cluster with minimum cost. The proposed system is implemented with HDFS and experimental results prove that the proposed algorithm can balance storage load depending on the disk space, expected availability and failure probability of each node in the PC cluster.",2011,0, 5362,"A Software-Based Self-Test methodology for on-line testing of processor caches","Nowadays, on-line testing is essential for modern high-density microprocessors to detect either latent hardware defects or new defects appearing during lifetime both in logic and memory modules. For cache arrays, the flexibility to apply online different March tests is a critical requirement. For small memory arrays that may lack programmable Memory Built-In Self-Test (MBIST) circuitry, such as L1 cache arrays, Software-Based Self-Test (SBST) can be a flexible and low-cost solution for on-line March test application. In this paper, an SBST program development methodology is proposed for online periodic testing of L1 data and instruction cache, both for tag and data arrays. The proposed SBST methodology utilizes existing special purpose instructions that modern Instruction Set Architectures (ISAs) implement to access caches for debug-diagnostic and performance purposes, termed hereafter Direct Cache Access (DCA) instructions, as well as performance monitoring mechanisms to overcome testability challenges. The methodology has been applied to 2 processor benchmarks, OpenRISC and LEON3, to demonstrate its high adaptability, and experimental comparison results against previous contributions show that the utilization of DCA instructions significantly improves test code size (83%) and test duration (72%) when applied to the same benchmark (LEON3).",2011,0, 5363,"Performance assessment of ASD team using FPL football rules as reference","Agile software development (ASD) teams are committed to frequent, regular, high-quality deliverables. An agile team is required to produce high-quality code in a short time span. Agile suggests methodologies like extreme programming and scrum to resolve the issues faced by the developers. Extreme programming is a methodology of ASD which suggests pair programming. But for a number of reasons, pairing is the most controversial and least universally-embraced agile programmer practice [1]. The reason for this is that certain tasks require a lot of deep thinking, so pairing (lack of privacy) does not work there. Certain personalities too do not work well with pairing. In scrum, the daily stand-up meeting is the method used to resolve impediments. Those impediments that are not resolved are added to the product backlog. This adds to cost. There can be online mentors (e-Mentors) to help programmers resolve their domain issues. The selection of such mentors depends on their skill set and availability [3]. In order to sustain e-Mentoring, the experts who act as mentors in the respective domain (application / technology / tools) have to be rewarded for their assists. The mentor could be within the development team or can be part of any other project team. By seeing the similarities between the sports team and the Agile team, a way of recognizing and rewarding these assists is suggested in this paper.
The set of rules used in Fantasy Premier League for performance assessment of football players is taken here as a reference for assessing the agile team.",2011,0, 5364,"Wavelet ANN based fault diagnosis in three phase induction motor","This paper proposes a protection scheme based on Wavelet Multi Resolution Analysis and Artificial Neural Networks which detects and classifies various faults like Single phasing, Under voltage, Unbalanced supply, Stator Turn fault, Stator Line to Ground fault, Stator Line to Line fault, Broken bars and Locked rotor of a three-phase induction motor. The three phase Induction Motor is represented by a universal model which is valid for a wide range of frequencies. The same has been simulated using MATLAB/Simulink software and tested for various types of motor faults. The wavelet decomposition of three-phase stator currents is carried out with Bi-Orthogonal 5.5 (Bior5.5). The maximum value of the absolute peak value of the highest level (d1) coefficients of three-phase currents is defined as the fault index, which is compared with a predefined threshold to detect the fault. The normalized fourth level approximate (a4) coefficients of these currents are fed to a Feedforward neural network to classify various faults. The normalized peak d1 coefficients of three-phase currents are fed to another Feedforward neural network to identify the faulty phase of stator internal faults. The algorithm has been tested for various incidence angles and proved to be simple, reliable and effective in detecting and classifying the various faults and also in identifying the faulty phase of the stator.",2011,0, 5365,"Computerized instrumentation Automatic measurement of contact resistance of metal to carbon relays used in railway signaling","The Contact Resistance of metal to carbon relays used in railway signaling systems is a vital quality parameter. The manual measurement process is tedious, error prone and involves a lot of time, effort and manpower. Besides, it is susceptible to manipulation and may adversely affect the functional reliability of relays due to erroneous measurements. To enhance the trustworthiness of measurement of contact resistance & to make the process faster, an automated measurement system having specially designed application software and a testing jig attachment has been developed. When the relay is fixed on the testing jig, the software scans all the relay contacts and measures the CR. The results are displayed on the computer screen and stored in a database file.",2011,0, 5366,"Approach to predict the software reliability with different methods","This particular essay expounds upon how one can foresee and predict software reliability. There are two major components that exist within a computer system: hardware and software. The reliabilities between the two are comparable because both are stochastic processes, which can be described by probability distributions. With this said, software reliability is the probability that software will function without failure in a given environment during a specified period of time. This is why software reliability is a major and key factor in software development processes and quality.
However, one can spot the difference between software reliability and hardware reliability where it concerns durability and the fact that software does not decrease in reliability over time.",2011,0, 5367,"Pair analysis of requirements in software engineering education","Requirements Analysis and Design is found to be one of the crucial subjects in Software Engineering education. Students need to have a deeper understanding before they can start to analyse and design the requirements, either using models or textual descriptions. However, the outcomes of their analysis are always vague and error-prone. We assume that this issue can be handled if pair analysis is conducted, where all students are assigned partners following the concept of pair programming. To prove this, we have conducted a small preliminary evaluation to compare the outcomes of solo work and pair analysis work for three different groups of students. The performance, efficacy and students' satisfaction and confidence level are evaluated.",2011,0, 5368,"Adopting Six Sigma approach in predicting functional defects for system testing","This research focuses on constructing a mathematical model to predict functional defects in system testing by applying the Six Sigma approach. The motivation behind this effort is to achieve zero known post-release defects in the software delivered to the end-user. Besides serving as an indicator for optimizing the testing process, predicting functional defects at the start of testing allows the testing team to put in place comprehensive test coverage, find as many defects as possible and determine when to stop testing so that all known defects are contained within the testing phase. Design for Six Sigma (DfSS) is chosen as the methodology as it emphasizes customers' requirements and systematic techniques to build the model. Historical data become the crucial element in this study. Metrics related to potential predictors and their relationships for the model are identified, focusing on metrics from phases prior to the testing phase. Repeatability and capability of testers' consistency in finding defects are analyzed. The types of data required are also identified and collected. The metrics of selected predictors, which incorporate testing and development metrics, are measured against total functional defects using multiple regression analysis. The best and most significant mathematical model generated by the regression analysis is selected as the proposed prediction model for functional defects in the system testing phase. Validation of the model is then conducted to prove its suitability for implementation. Recommendations and future research work are provided at the end of this study.",2011,0, 5369,"Efficient prediction of software fault proneness modules using support vector machines and probabilistic neural networks","A software fault is a defect that causes software failure in an executable product. Fault prediction models usually aim to predict either the probability or the density of faults that the code units contain. Many fault prediction models using software metrics have been proposed in the Software Engineering literature. This study focuses on evaluating high-performance fault predictors based on support vector machines (SVMs) and probabilistic neural networks (PNNs). Five public NASA datasets from the PROMISE repository are used to make these predictive models repeatable, refutable, and verifiable.
According to the obtained results, the probabilistic neural networks generally provide the best prediction performance for most of the datasets in terms of the accuracy rate.",2011,0, 5370,"Power cable inspections using Matlab graphical user interface aided by thermal imaging","This paper proposes an efficient method to predict and solve abnormal conditions in electrical cables, using a Matlab GUI (Graphical User Interface) with thermal imaging (an infrared (IR) camera). Traditional techniques (without an IR camera) cannot easily predict faults or give a complete diagnosis of them. Using any type of thermal camera, a technical operator (thermographer) can detect the thermal state of abnormal cable conditions and send these thermal images to a novel software program (GUI) which, using a cable database, is able to: 1) obtain the thermal profile of the system; 2) process and analyze the thermal data; and 3) apply a simulated artificial technique to determine the particular condition or fault corresponding to the thermal signature. The generated report can contain: 1) problems found in the components and the system itself; 2) suggested remedies and any necessary corrective actions to perform in a suitable time; and 3) the priority of these problems with respect to repair (maintenance) time.",2011,0, 5371,"Increasing test coverage using human-based approach of fault injection testing","The fault injection testing (FIT) approach validates a system's fault tolerance mechanism by actively injecting software faults into the targeted areas in the system in order to accelerate its failure rate. This highly complements other testing approaches such as requirements and regression testing implemented during the same testing phase. During testing, it is impossible to run all possible test scenarios. It is especially difficult to predict how the user might use the system functionality correctly as per design. Human interaction with the system may vary and can lead to functionality loopholes. It is therefore important to have a strategic testing approach for evaluating the dependability of computer systems, especially with respect to human errors. This paper proposes applying Knowledge-Based, Fault Prediction Model and Test Case Prioritization approaches that can be combined to increase test coverage. The goal of this paper is to highlight the needs and advantages of the selected approaches in performing FIT as one of the effective testing techniques in the ongoing quest for increased software quality.",2011,0, 5372,"H.264 deblocking filter enhancement","This paper proposes new software-based techniques for speeding up and reducing the complexity of the deblocking filter used in the state-of-the-art H.264 international video coding standard to improve the visual quality of the decoded video frames. The proposed techniques are classified as standard-compliant and standard-noncompliant techniques. The standard-compliant techniques optimize the standard filter through optimizing the boundary strength calculation and group filtering of macroblocks. The standard-noncompliant techniques predict the new boundary strength and edge detection conditions from previous values. Experimental results on both an embedded platform and a desktop PC show a significant performance improvement that reaches 47% for the standard-compliant techniques and 80% for the standard-noncompliant techniques.
They also demonstrate that for standard-noncompliant techniques the quality degradation computed using the Peak Signal to Noise Ratio is insignificant.",2011,0, 5373,"Geometric mean based trust management system for WSNs (GMTMS)","Wireless Sensor Network (WSN) nodes are large in number, and their deployment environment may be hazardous, unattended and/or hostile and sometimes dangerous. The traditional cryptographic and security mechanisms in WSNs cannot detect physical node capture, and due to malicious or selfish nodes even a total breakdown of the network may take place. Also, the traditional security mechanisms in WSNs require sophisticated software, hardware, large memory, high processing speed and communication bandwidth at the node. Hence, they are not sufficient for secure routing of messages from source to destination in WSNs. Alternatively, trust management schemes constitute a powerful tool for the detection of unexpected node behaviours (either faulty or malicious). In this paper, we propose a new geometric mean based trust management system by evaluating direct trust from the QoS characteristics (trust metrics) and indirect trust from recommendations by neighbour nodes, which allows only trusted nodes to participate in routing.",2011,0, 5374,"Incorporating fault tolerance in GA-based scheduling in grid environment","Grid systems differ from traditional distributed systems in terms of their large scale, heterogeneity and dynamism. These factors contribute towards a higher frequency of fault occurrences: large scale causes lower values of Mean Time To Failure (MTTF), heterogeneity results in interaction faults (protocol mismatches) between communicating dissimilar nodes, and dynamism, with dynamically varying resource availability due to resources autonomously entering and leaving the grid, affects the execution of jobs. Another factor that increases the probability of application failure is that applications running on the grid are long-running computations taking days to finish. Incorporating fault tolerance in scheduling algorithms is one of the approaches for handling faults in the grid environment. Genetic Algorithms are a popular class of meta-heuristic algorithms used for grid scheduling. These are stochastic search algorithms based on the natural process of fitness based selection and reproduction. This paper combines GA-based scheduling with fault tolerance techniques such as checkpointing (dynamic) by modifying the fitness function. Also, certain scenarios such as checkpointing without migration for resources with different downtimes and the autonomous nature of grid resource providers are considered in building fitness functions. The motivation behind the work is that scheduling-assisted fault tolerance would help in finding the appropriate schedule for the jobs which would complete in the minimum time possible even when resources are prone to failures and thus help in meeting job deadlines. Simulation results for the proposed techniques are presented with respect to makespan, flowtime and the fitness value of the resultant schedule obtained. The results show improvement in makespan and flowtime of the adaptive checkpointing approaches over the static checkpointing approach. Also, the approach which takes into consideration the last failure times of resources performs better than the approach based only on the mean failure times of resources.",2011,0, 5375,"Can Linux be Rejuvenated without Reboots?","Operating systems (OSes) are crucial for achieving high availability of computer systems.
Even if the applications running on the operating system are highly available, a bug inside the kernel may result in a failure of the entire software stack. Rejuvenating OSes is a promising approach to prevent and recover from transient errors. Unfortunately, OS rejuvenation takes a lot of time because we do not have any method other than rebooting the entire OS. In this paper we explore the possibility of rejuvenating Linux without reboots. In our previous research, we investigated the scope of error propagation in Linux. The propagation scope is process-local if the error is confined to the process context that activated it. The scope is kernel-global if the error propagates to other processes' contexts or global data structures. If most errors are process-local, we can rejuvenate the Linux kernel without rebooting the entire kernel because the kernel goes back to a consistent and clean state simply by killing and revoking the resources of the faulting process. Our conclusion is that Linux can be rejuvenated without reboots with high probability. Linux is coded in a defensive way and thus, most of the manifested errors (96%) were process-local and only one error was kernel-global.",2011,0, 5376,"Measuring the quality characteristics of assembly code on embedded platforms","The paper describes the implementation of a programming tool for measuring quality characteristics of assembly code. The aim of this paper is to prove the usability of these metrics for assessing the quality of assembly code generated by a C compiler for a DSP architecture in order to improve the compiler. The analysis of test results showed that the compiler generates good quality assembly code.",2011,0, 5377,"Graphical tool for generating linker configuration files in embedded systems","An absolute loader is frequently used in embedded software because its simplicity is well suited to systems with limited resources. An absolute loader implies that a memory map needs to be defined. Maintaining a memory map by hand is a hard and error-prone process. This paper proposes a solution by implementing a memory map graphical editor. The graphical editor is implemented using the Graphical Modeling Framework in the Eclipse IDE, using rapid model driven development.",2011,0, 5378,"A test method of interconnection online detection of NoC based on 2D Torus topology","On the basis of the study of Network on Chip (NoC) topologies, routing algorithms, data exchange and virtual channel technology, we design in this paper an online interconnection detection method for a 2D torus NoC system. This method can detect data errors during transmission and identify whether the error results from a routing switch failure or a data transmission interconnection line failure. Then we design a sub-router based on wormhole exchange using the E-cube routing algorithm, and a check module which is suitable for the original routing node functions and working features. Finally, we simulate the method with Verilog HDL and Quartus II software. The experimental results show that the method can detect data errors caused by router failure or interconnect failure and can locate the fault.",2011,0, 5379,"Software Maintenance through Supervisory Control","This work considers the case of system maintenance where systems are already deployed and for which some faults or security issues were not detected during the testing phase. We propose an approach based on control theory that allows for automatic generation of maintenance fixes.
This approach disables faulty or vulnerable system functionalities and requires to instrument the system before deployment so that it can later be monitored and interact with a supervisor at runtime. This supervisor ensures some property designed after deployment in order to avoid future executions of faulty or vulnerable system functionalities. This property corresponds to a set of safe behaviors described as a Finite State Machine. The computation of supervisors can be performed automatically, relying on a sound Supervisory Control Theory. We first introduce some basic notions of Supervisory Control theory, then we present and illustrate our approach which also relies on automatic models extraction and instrumentation.",2011,0, 5380,Toward Intelligent Software Defect Detection - Learning Software Defects by Example,"Source code level software defect detection has gone from state of the art to a software engineering best practice. Automated code analysis tools streamline many of the aspects of formal code inspections but have the drawback of being difficult to construct and either prone to false positives or severely limited in the set of defects that can be detected. Machine learning technology provides the promise of learning software defects by example, easing construction of detectors and broadening the range of defects that can be found. Pinpointing software defects with the same level of granularity as prominent source code analysis tools distinguishes this research from past efforts, which focused on analyzing software engineering metrics data with granularity limited to that of a particular function rather than a line of code.",2011,0, 5381,Fault Detection through Sequential Filtering of Novelty Patterns,"Multi-threaded applications are commonplace in today's software landscape. Pushing the boundaries of concurrency and parallelism, programmers are maximizing performance demanded by stakeholders. However, multi-threaded programs are challenging to test and debug. Prone to their own set of unique faults, such as race conditions, testers need to turn to automated validation tools for assistance. This paper's main contribution is a new algorithm called multi-stage novelty filtering (MSNF) that can aid in the discovery of software faults. MSNF stresses minimal configuration, no domain specific data preprocessing or software metrics. The MSNF approach is based on a multi-layered support vector machine scheme. After experimentation with the MSNF algorithm, we observed promising results in terms of precision. However, MSNF relies on multiple iterations (i.e., stages). Here, we propose four different strategies for estimating the number of the requested stages.",2011,0, 5382,New integrated hybrid evaporative cooling system for HVAC energy efficiency improvement,"Cooling systems in buildings are required to be more energy-efficient while maintaining the standard air quality. The aim of this paper is to explore the potential of reducing the energy consumption of a central air-conditioned building taking into account comfort conditions. For this, we propose a new hybrid evaporative cooling system for HVAC efficiency improvement. The integrated system will be modeled and analyzed to accomplish the energy conservation and thermal comfort objectives. Comparisons of the proposed hybrid evaporative cooling approach with current technologies are included to show its advantages. 
To investigate the potential of energy savings and air quality, a real-world commercial building, located in a hot and dry climate region, together with its central cooling plant, is used in the case study. The energy consumption and relevant data of the existing central cooling plant are acquired in a typical summer week. The performance with different cooling systems is simulated by using a transient simulation software package. New modules for the proposed system are developed by using collected experimental data and implemented with the transient tool. Results show that more than 52% power savings can be obtained by this system while maintaining the predicted mean vote (PMV) between -1 and +1 for most of the summer time.",2011,0, 5383,"The quality process in a professional context: Software industry case","In the current context of the software market, the stress is laid on cost, schedule and functionality. In order to ensure these criteria, the implementation of an ISO 9001:2008 quality approach is necessary. The quality of a developed product is influenced by the quality of the production process. This is important in software development as some product quality attributes are hard to assess. For these reasons, there are several standards for quality management and process management that constitute an essential pathway to improve quality. In industry, speaking about quality implies focusing on production, but in the software industry we must speak of design. The text of the ISO 9001 standard covers design but gives more importance to production. So in order to be applied to the software field, the ISO 9001 standard must be explained further. The use of standards is an important factor of economy, efficiency and quality, promoting better adaptation of products, processes and services to the purposes assigned to them, through the prevention of barriers to trade and the facilitation of international technology cooperation. So we start from a strong will to create the necessary conditions so that the quality standards in companies are oriented towards the software field. It would be interesting to develop an approach to adapt quality standards (mainly the ISO 9001:2008 standard), which are basically oriented to the industrial sector, to the software field. The ISO 9001:2008 standard is general and provides the organizational requirements needed to implement a quality management system. Our work is based on the adaptation of this standard for a quality management system related to software production. This communication includes the consideration of the requirements of the ISO 9001:2008 standard. We will provide more interpretations of these requirements, and we will study the state of the art of research that has attempted to adapt the ISO 9001 standard to the software field and worked on updating the guidelines of the ISO 90003 standard in relation to the requirements of the ISO 9001:2008 standard. In addition, we will search for the potential results of adapting the ISO 9001:2008 requirements to the software field in order to consider these results as a starting step for our work.",2011,0, 5384,"Autonomic Computing: Applications of Self-Healing Systems","Self-management systems are the main objective of Autonomic Computing (AC), and they are needed to increase a running system's reliability, stability, and performance.
This field needs to investigate some issues related to complex systems, such as system self-awareness, determining when and where an error state occurs, knowledge for system stabilization, problem analysis, and healing plans with different solutions for adaptation without the need for human intervention. This paper focuses on self-healing, which is the most important component of Autonomic Computing. Self-healing is a technique that aims to detect, analyze, and repair existing faults within the system. All of these phases are accomplished in a real-time system. In this approach, the system is capable of performing a reconfiguration action in order to recover from a permanent fault. Moreover, a self-healing system should have the ability to modify its own behavior in response to changes within the environment. A recursive neural network has been proposed and used to solve the main challenges of self-healing, such as monitoring, interpretation, resolution, and adaptation.",2011,0, 5385,"An Automated Detection Method of Solder Joint Defects Using 3D Computed Tomography for IC Package Inspection","Recent electronic parts continue to decrease in size, so they are more likely to exhibit defects. Lately, the computed tomography scanning technique has been introduced to provide useful tools for the internal inspection of electronic packages. In this paper, we present a novel method for detecting solder joint defects in 3D packaging devices. Our method is composed of three steps. First, mis-alignment during the CT scan process is corrected. Second, open solder joints, missing solder joints, and solder bridges are detected using a blob labeling procedure. Finally, the head-in-pillow defect is inspected by principal curvature analysis. The experimental results demonstrated that our method accurately detected solder joint defects within less than one second. Our method can be successfully applied to inline manufacturing, which requires rapid inspection of whole chips.",2011,0, 5386,"A synopsis of self-healing functions in wireless networks","In the early 20th century, as technology evolved, the performance of systems suffered from the problems of complexity, increasing cost of maintenance, and software or hardware failures caused by unpredictable behavior and poor manageability. This prompted researchers to discover new designs and techniques that enable systems to operate autonomously. In 2001, IBM introduced self-managing capabilities (self-organizing, self-healing, self-optimization and self-protection) with autonomous behavior. In this survey, the main concern is self-healing autonomic computing. Self-healing is an autonomic computing capability that detects and diagnoses errors without the need for human intervention. A number of concepts, techniques and functions have been developed in different application areas of self-healing. This survey gives an overview of some approaches and solutions from past and current research in self-healing, classified into operating systems, routing, security and web services. These proposed approaches and solutions were developed to solve the problems that arise in manual intervention systems.
To achieve perfect self-healing behaviors remains an open and significant challenge, one that can be accomplished only through a combination of process changes, new technologies and architectures, and open industry standards.",2011,0, 5387,Improving path selection by handling loops in automatic test data generation,"Generating path oriented test data is one of the most powerful methods in generating appropriate test data which selects all complete paths in Control Flow Graph (CFG) and generates appropriate data to traverse the selected paths. In path selecting phase, different paths could be selected according to loops iteration that most of them are infeasible. Because the number of loops iteration is detected dynamically through the program execution in most cases. In earlier techniques, researchers either refused to handle loops or dealt with them by simplifying; thus, no effective solutions have been represented up to now. In paths with loops, proposed algorithm firstly attempts to determine the exact number of loops iteration. Then if the iterations remain unknown, this number will be decided by the tester. This technique is executed based on symbolic evaluation and loop information. Finally, selected paths can all be traversed; moreover, with reducing the number of infeasible paths, the time of generating test data will be reduced remarkably.",2011,0, 5388,Full 4π emission data collection and reconstruction for small animal PET imaging,"Most of the current animal PET detector systems are cylindrical or similar multi-sided polygonal geometries with limited axial field of view. The object is placed near the center of the detector ring during the emission data collection. The signal that can be detected has a limited angular range; the paraxial signal cannot be detected. Moreover, the sensitivity for the objects positioned at different locations inside the FOV is different. The central part of the FOV has higher sensitivity than that near the end along the axis. The lack of paraxial detectability which means non-uniform sampling, along with the non-uniform sensitivity of a PET system, will affect the uniformity of the overall image quality. Also, currently widely used PET data processing and reconstruction algorithms are sinogram based, which usually uses different sizes for the voxel in axial and transaxial directions. Because of these non-uniformities, the resolution and quality of current PET image are anisotropic. In order to achieve isotropic results, the emissions from the object must be collected in full 4π space and reconstructed accordingly. Here we propose a full 4π emission data collection method, which involves the rotation of the object during the emission collection in a plane parallel to the detector ring axis. The full 4π emission data are then reconstructed and processed to generate the 3D image set with cubic voxel, uniform resolution and signal-to-noise ratio in all directions and locations. Both Monte Carlo simulations and experiments are carried out with simulated and real mouse phantoms. Emission data in both 4π and conventional modes are collected and then processed. Point source is used in the experiment as the fiducial mark. Our results show that with the full 4π collection method, the image qualities are substantially improved in several aspects such as the axial distortions and the uniformity of the SNR.
Moreover, the axial strip-like artifacts in the conventional mode are canceled in the full 4π mode, therefore a less smooth window is needed during the reconstruction, which leads to higher resolution.",2011,0, 5389,Evaluation of the SensL SPMMatrix for use as a detector for PET and gamma camera applications,"The SPMMatrix from SensL (SensL, Cork, Ireland) is a large area photodetector consisting of a 4 × 4 array of SensL SPMArray4 detectors, each a 4 × 4 array of silicon photomultiplier (SiPM) pixels, giving a total of 256 SiPM pixels. In addition, the device has 32 amplifiers and analog-to-digital conversion (ADC) channels and a FPGA-based data acquisition board. The anodes of the SiPMs are chained together according to an array/pixel wiring scheme developed by SensL to reduce the number of readout electronics channels to 32. In this work we conducted a preliminary evaluation of the SPMMatrix device to assess its suitability for PET and gamma camera applications. One commercially manufactured 4 × 4 array of 3.17 × 3.17 × 10 mm3 LYSO crystals was coupled to one of the 4 × 4 SiPM pixel arrays, effectively giving a one-to-one coupling of the scintillator crystal and SiPM pixels. A custom data acquisition program that allowed acquisition of all 32 ADC channels was used to acquire data from the device. A 68Ge source was used for all testing. High quality flood images were obtained from the SPMMatrix device, with all crystals being well resolved. Two methods were investigated for determining the energy resolution. The better method, using the hardware sum of the pixels in one SPMArray4 detector, gave an energy resolution of 17%. The resolution degraded to 21% when the energy value was calculated by a software sum of the pixel values. This decrease in energy resolution is likely due to the contribution of dark noise from the pixels in the other arrays and due to the array/pixel multiplexing strategy. In comparison, the same LYSO array tested with a single SPMArray4 detector and NIM electronics gave an energy resolution of 14.6%.",2011,0, 5390,Tomographic performance characteristics of the IQSPECT system,"The IQSPECT system was introduced by Siemens in 2010 to significantly improve the efficiency of myocardial perfusion imaging (MPI) using conventional, large field-of-view (FOV) SPECT and SPECT/CT systems. With IQSPECT, it is possible to perform MPI scans in one-fourth the time or using one-fourth the administered dose as compared to a standard protocol using parallel-hole collimators. This improvement is achieved by means of a proprietary multifocal collimator that rotates around the patient in a cardio-centric orbit resulting in a four-fold magnification of the heart while keeping the entire torso in the FOV. The data are reconstructed using an advanced reconstruction algorithm that incorporates measured values for gantry deflections, collimator-hole angles, and system point response function. This article explores the boundary conditions of IQSPECT imaging, as measured using the Data Spectrum cardiac torso phantom with the cardiac insert. Impact on reconstructed image quality was evaluated for variations in positioning of the myocardium relative to the sweet spot, scan-arc limitations, and for low-dose imaging protocols. Reconstructed image quality was assessed visually using the INVIA 4DMSPECT and quantitatively using Siemens internal IQ assessment software.
The results indicated that the IQSPECT system is capable of tolerating possible mispositioning of the myocardium relative to the sweet spot by the operator, and that no artifacts are introduced by the limited angle coverage. We also found from the study of multiple low dose protocols that the dwell time will need to be adjusted in order to acquire data with sufficient signal-to-noise ratio for good reconstructed image quality.",2011,0, 5391,Process automation of metal to Carbon relays: On Line measurement of electrical parameters,"The manufacturing process of Metal to Carbon relays used in railway signaling systems for configuring various circuits of signals / points / track circuits etc. consists of seven phases from raw material to finished goods. To ensure in-process quality, the physical, electrical and various other parameters are measured manually with non-automated equipment, after each stage. Manual measurements are tedious, error prone and involve lot of time, effort and manpower. Besides, they are susceptible to manipulation and may lead to inferior quality products being passed, either due to deliberation or due to malefic intentions. Due to erroneous measurement of electrical parameters, the functional reliability of relays is adversely affected. To enhance the trustworthiness of measurement of electrical parameters & to make the process faster, an automated measurement system having proprietary application software and a testing jig attachment has been developed. When the relay was fixed on the testing jig, the software scanned all the relay contacts and measured all the electrical parameters viz. operating voltage / current, contact resistance, release voltage / current, coil resistance etc. The result was stored in a database file and ported on an internet website. Thus, the test results of individual relays were available on-line, with date & time tags and could be easily monitored.",2011,0, 5392,Digital anthropomorphic phantoms of non-rigid human respiratory and voluntary body motions: A tool-set for investigating motion correction in 3D reconstruction,"Patient respiratory and body motions occurring during emission tomography create artifacts in the images, which can mislead diagnosis. For example, in myocardial-perfusion imaging these artifacts can be mistaken for perfusion defects. Various software and hardware approaches have been developed to detect and compensate for motion. A practical way to test these methods is to simulate realistic motion with digital anthropomorphic phantoms. However, simulated motions often do not correspond to real patient motions. In this study, we are creating XCAT phantoms based on real body and respiratory motion data acquired from MR scans of volunteers. We are exploring different MRI acquisition methods to allow respiratory amplitude-binned modeling of both inspiration and expiration, which portrays both non-rigid motion and motion hysteresis during breathing. Simultaneous to MRI, the positions of reflective markers placed on the body-surface are tracked in 3D via stereo optical imaging. This enables correlation of the internal organ motion (e.g. heart, liver etc.) in our models with external marker-motion as would be observed during clinical imaging. 
Our digital anthropomorphic phantoms can serve as a realistic dataset with known truth for investigating motion correction in 3D iterative reconstructions.",2011,0, 5393,Functionality test of a readout circuit for a 1 mm3 resolution clinical PET system,"We are developing a 1 mm3 resolution Positron Emission Tomography (PET) camera dedicated to breast imaging, which collects high energy photons emitted from radioactively labeled agents injected in the patients to detect molecular signatures of breast cancer. The camera consists of 8 × 8 arrays of 1 × 1 × 1 mm3 lutetium yttrium oxyorthosilicate (LYSO) crystals coupled to position sensitive avalanche photo-diodes (PSAPDs). The camera is built out of 2 panels each having 9 cartridges. A cartridge houses 8 layers of 16 dual LYSO-PSAPD modules and features 1024 readout channels. Amplification, shaping and triggering is done using the 36 channel RENA-3TM chip. 32 of these chips are needed per cartridge, or 576 for the entire camera. The RENA-3TM needs functionality and performance validation before assembly in the camera. A Chip Tester board was built to ensure RENA functionality and quality, featuring a 20,000-cycle chip holder and the ability to charge inject each channel individually. LabVIEW software was written to collect data on all RENA-3TM channels automatically. Charge was injected using an arbitrary waveform generator. Software was written to validate gain, linearity, cross talk, and timing noise characteristics. Gain is tested using a linear fit; typical values of gain are 13 and -16 for positive and negative injected charges respectively. Timing analysis was based on analyzing the phase shift between U and V timing channels. A phase shift of 90° ± 5° and timing noise of 2 ns were considered acceptable.",2011,0, 5394,Direct 3D PET image reconstruction into MR image space,"A method which includes both the motion correction and image registration transformation parameters from PET image space to MR image space within the system matrix of the MLEM algorithm is presented. This approach can be of particular significance in the fields of neuroscience and psychiatry, whereby PET is used to investigate differences in activation patterns between groups of participants (such as healthy controls and patients). This requires all images to be registered in a common spatial atlas. Currently, image registration is performed post-reconstruction. This introduces interpolation effects in the final image and causes image resolution degradation. Furthermore, motion correction introduces a further level of interpolation and possible resolution degradation. To include the transformation parameters (both for motion correction and registration) within the iterative PET reconstruction framework (through iterative use of actual software packages routinely applied after reconstruction) should reduce these interpolation effects and thus improve image resolution. Furthermore, it opens the possibility of direct reconstruction of the PET data into standardized stereotaxic atlases, e.g. ICBM152. To validate the proposed method, this work investigates registration, using 2D and 3D simulations based on the HRRT scanner geometry, between different image spaces using rigid body transformation parameters calculated using the mutual information similarity criterion. The quality of reconstruction was assessed using bias-variance and mean absolute error analyses to quantify differences with current post-reconstruction registration methods.
We demonstrate a reduction in bias and in mean absolute error in reconstructed mean ROI activity when using the proposed method.",2011,0, 5395,A Network Status Evaluation Method under Worm Propagation,"The measurement of worm propagation impact on network status remained an elusive goal. This paper analyzes the worm characteristics and network traffic and service, introduces evaluation metrics and presents a new method to assess the network situation under worm propagation. The applicability of this method is verified by simulated experiments with the network simulation tool LSNEMUlab test bed.",2011,0, 5396,The Testing and Diagnostic System on AVR of the Movable Electricity Generating Set,"AVR (Automatic Voltage Regulator) is the most important module, which can control the output voltage of generating sets, guarantee the voltage stability, improve power quality and decide the performance of electricity generating sets, so this paper introduces the design method of the AVR detecting and diagnostic system, which is based on the fault database. The paper introduces the platform of the testing and diagnostic system on AVR from the hardware and software design, composed of the industrial computer (main controller), the programmed power supply, the data acquisition unit, and the software programmed by LabWindows/CVI. The FTA (Fault Tree Analysis) method is applied to establish the fault database and analyse faults of the AVR. Through testing a certain type of AVR, the method proposed in the paper is proved to be feasible and versatile, and is satisfied with the detection and diagnosis for AVR eventually.",2011,0, 5397,Detection of power quality disturbances in presence of DFIG wind farm using wavelet transform based energy function,"Wavelet transform based energy function approach for detection of some power quality (PQ) disturbances such as voltage sag, voltage flicker, voltage swell, harmonics, inter harmonics in grid connected wind power system is proposed in this paper. The current signal is processed through Wavelet transform for PQ events detection. Initially, the current is retrieved at a sampling frequency of 20 kHz and DWT is used to decompose the signals of PQ events and to extract its useful information. In the case study, the power quality disturbances are created in the grid, and proposed algorithm detects the power quality disturbances effectively within one and a half cycles for a 60 Hz system. Thus, a new diagnostic method based on the grid modulating signals pre-processed by Discrete Wavelet Transform (DWT) is proposed to detect grid power quality disturbances. The system is simulated using MATLAB software and simulation results demonstrate the effectiveness of the proposed approach under time-varying conditions.",2011,0, 5398,Design and FPGA implementation of digital noise generator based on superposition of Gaussian process,"Currently in the design of digital communication system, in order to detect the communication quality, plenty of tests need to be done in a noisy environment. The common method is adding analog noise to the transmitted data on the radio. Adding digital noise has always been a difficulty. A digital noise generator based on superposition of Gaussian process is presented in this paper.
And hardware program simulation is carried out using Quartus II combined with Modelsim software, even the performance of the digital noise generator is tested on FPGA, simultaneously, compared with the single Gaussian noise.",2011,0, 5399,Impacts of automatic loop restoration schemes on service reliability,"Automatic loop restoration schemes are employed in electric power distribution systems to perform fault detection, isolation, and service restoration activities sequentially and automatically, so as to significantly reduce customer interruption time. This paper aims to quantitatively assess the impacts of employing an automatic loop restoration scheme, with and without a communication link, on the major attributes of the service reliability. In addition, the effect of the operational failure of communication facilities is taken into account. A typical Finnish urban distribution network is utilized in this paper for the quantitative reliability assessment studies. A powerful software package referred to as """"Smart Grid Simulator"""" is used for directing the reliability studies.",2011,0, 5400,Increasing security of supply by the use of a Local Power Controller during large system disturbances,"This paper describes intelligent ways in which distributed generation and local loads can be controlled during large system disturbances, using Local Power Controllers. When distributed generation is available, and a system disturbance is detected early enough, the generation can be dispatched, and its output power can be matched as closely as possible to local microgrid demand levels. Priority-based load shedding can be implemented to aid this process. In this state, the local microgrid supports the wider network by relieving the wider network of the micro-grid load. Should grid performance degrade further, the local microgrid can separate itself from the network and maintain power to the most important local loads, re-synchronising to the grid only after more normal performance is regained. Such an intelligent system would be a suitable for hospitals, data centres, or any other industrial facility where there are critical loads. The paper demonstrates the actions of such Local Power Controllers using laboratory experiments at the 10kVA scale.",2011,0, 5401,Simple scoring system for ECG quality assessment on Android platform,"Work presented in this paper was undertaken in response to the PhysioNet/CinC Challenge 2011: Improving the quality of ECGs collected using mobile phones. For the purpose of this challenge we have developed an algorithm that uses five simple rules, detecting the most common distortions of the ECG signal in the out of hospital environment. Using five if-then rules arranges for easy implementation and reasonably swift code on the mobile device. Our results on test set B were well-outside the top ten algorithms (Best score: 0.932; Our score: 0.828). Nevertheless our algorithm placed second among those providing open-source code for evaluation on the data set C, where neither data nor scores were released to the participants before the end of the challenge. The difference in the scores of the top two algorithms was minimal (Best score: 0.873; Our score: 0.872). 
As a consequence, relative success of simple algorithm on undisclosed set C raises questions about the over-fitting of more sophisticated algorithms - question that is hovering above many recently published results of automated methods for medical applications.",2011,0, 5402,Mind-mapping: An effective technique to facilitate requirements engineering in agile software development,"Merging agile with more traditional approaches in software development is a challenging task, especially when requirements are concerned: the main temptation is to let two opposite schools of thought become rigid in their own assumptions, without trying to recognize which advantages could come from either side. Mind mapping seems to provide a suitable solution for both parties: those who develop within an agile method and those who advocate proper requirements engineering practice. In this paper, mind mapping has been discussed as a suitable technique to elicit and represent requirements within the SCRUM model: specifically, we have focused on whether and how mind maps could lead to the development of a suitable product backlog, which in SCRUM plays the role of an initial requirements specification document. In order to experimentally assess how effectively practitioners could rely on a product backlog for their first development sprint, we have identified the adoption of mind maps as the independent variable and the quality of the backlog as the dependent variable, the latter being measured against the """"function points"""" metric. Our hypothesis (i.e., mind maps are effective in increasing the quality of product backlogs) has been tested within an existing SCRUM project (the development of a digital library by an academic institution), and several promising data have been obtained and further discussed.",2011,0, 5403,Translating unknown words using WordNet and IPA-based-transliteration,"Due to small available English-Bangla parallel corpus, Example-Based Machine Translation (EBMT) system has high probability of handling unknown words. To improve translation quality for Bangla language, we propose a novel approach for EBMT using WordNet and International-Phonetic-Alphabet(IPA)-based transliteration. Proposed system first tries to find semantically related English words from WordNet for the unknown word. From these related words, we choose the semantically closest related word whose Bangla translation exists in English-Bangla dictionary. If no Bangla translation exists, the system uses IPA-based-transliteration. For proper nouns, the system uses Akkhor transliteration mechanism. We implemented the proposed approach in EBMT, which improved the quality of good translation by 16 points.",2011,0, 5404,Towards a performance estimate in semi-structured processes,"Semi-structured processes are business workflows, where the execution of the workflow is not completely controlled by a workflow engine, i.e., an implementation of a formal workflow model. Examples are workflows where actors potentially have interaction with customers reporting the result of the interaction in a process aware information system. Building a performance model for resource management in these processes is difficult since the information required for a performance model is only partially recorded. In this paper we propose a systematic approach for the creation of an event log that is suitable for available process mining tools. This event log is created by an incremental cleansing of data. 
The proposed approach is evaluated in a case study where the quality of the derived event log is assessed by domain experts.",2011,0, 5405,Design pattern prediction techniques: A comparative analysis,"There are many design patterns available in literature to predict refactoring. However literature gives a comprehensive study to evaluate and compare various design patterns so that quality professionals may select an appropriate design pattern. To find a technique which performs better in general is an undesirable problem because behavior of a design pattern also depends on many other features like pre-deployment of design pattern, structural, behavioral etc. We have conducted an empirical survey of various design patterns in terms of various evaluation software metrics. In this paper we have presented comparison of few design patterns on metrics basis.",2011,0, 5406,A new custom designed cleft lip and palate implant based on MARP,"Something about 1 million skeletal defects are reported each year, which are in need of bone-grafting to be cured. Note that population of the world is getting older and so the probability of bone fractures is increasing, while dealing with this would have lots of social and economical effects. Bone grafts can be both autologous and autogenous or as an alternative, be made of nonorganic materials such as metals, polymers, ceramics or composite materials. Maxillofacial problems can cause a variety of malfunctions and abnormalities in the body and to handle these defects, two solutions exist, including surgical approach and non-surgical approach. In the first method, one should wait until the end of bone growth ages, and for the second, due to the need of wearing prosthesis, modifications of the prosthesis is essential as child grows. In this research a patient with cleft lip and palate was chosen and an appropriate implant was designed for him. An AFM analysis was performed to see the stress distribution pattern. Final results show that using this method can help both patients and surgeons by improving implant-tissue contact.",2011,0, 5407,Semantic Process Management Environment,"As the knowledge-based society has been constructed, the size of work process grows bigger and the amount of the information that has to be analyzed increases. So the necessity of the process management and improvement has been required highly. This study suggests the process management method to support a company's survival strategy to get the competitive power in difficult situation to predict future business environment. The suggested process management method applies ontology for formalizing and sharing the several generalized process management concept. In ontology, several techniques from Six Sigma and PSP are defined for process definition, execution and measurement. With ontology, we provide formal knowledge base for both process management environment and human stakeholders. Also, we can easily improve our environment by extending our process ontology to adapt new management methods.",2011,0, 5408,An open-source application to model and solve dynamic fault tree of real industrial systems,"In recent years, a new generation of modeling tools for the risk assessment have been developed. The concept of """"dynamic"""" was exported also in the field of reliability and techniques like dynamic fault tree, dynamic reliability block diagrams, boolean logic driven Markov processes, etc., have become of use.
But, despite the promises of researchers and the efforts of end-users, the dynamic paradox hangs: risk assessment procedures are not as straight as they were with the traditional static methods and, what is worse, it is difficult to assess the reliability of these results. Far from deny the importance of the scientific achievement, we have tested and cursed some of these dynamic tools realizing that none of them was appropriate to solve a real case. In this context, we decided to develop a new DFT reliability solver, based on the Monte Carlo simulative approach. The tool is greatly powerful because it is written with Matlab code, hence is open-source and can be extended. In this first version, we have implemented the most used dynamic gates (PAND, SEQ, FDEP and SPARE), the existence of repeated events and the possibility to simulate different cumulative distribution function of failure (Weibull, negative exponential CDF and constant). The tool is provided with a snappy graphic user interface written in Java, which allows an easy but efficient modeling of any fault tree schema. The tool has been tested with many literature cases of study and results encourage other developments.",2011,0, 5409,Impact of SIPS performance on power systems integrity,"An increasing number of utilities are using System Integrity Protection Schemes (SIPS) to minimize the probability of large disturbances and to enhance power system reliability. This trend leads to the use of an increased number of SIPS resulting in additional risks to system security. This paper proposes a procedure based on Markov Modeling for assessing the risk of a SIPS failure or misoperation. The proposed method takes into consideration failures in the three stages of SIPS operation: arming, activation and implementation. This method is illustrated using an example of a Generation Rejection Scheme (GRS) for preventing cascading outages that may lead to load shedding. In addition, system operators tend to have the SIPS always armed to prevent a failure to operate when required. However, this can result in increased probability of SIPS misoperation (operation when not needed). Therefore, the risk introduced to the system by having the SIPS always armed and ready to initiate actions is examined and compared with the risk of automatic or manual arming of SIPS only when required. Sensitivity analysis is also performed to determine how different factors can affect the ability of the SIPS to operate in a dependable and secure manner.",2011,0, 5410,Design of adaptive line protection under smart grid,"Smart grid will bring new opportunities to development of relay protection, new sensor technology is used in smart grid. Simplified algorithm to protect data, reducing data processing time. With the State Grid Corporation of China launched the construction of smart grid, smart grid caused by the characteristics of network reconfiguration, distributed power access technologies such as micro-network operation, to put forward new demands on relay protection, based on local measurement information and a small amount of regional information makes conventional protection face greater difficulties to solve these problems; the same time, research and application on new technologies (such as new sensor technology, clock synchronization and data synchronization technology, computer technology, optical fiber communication technology, etc.) provided a broad space for development of relay protection. 
Adaptive protection means that the protection must adapt to changing system conditions, the computer relay protection must have a hierarchical configuration of communication lines to exchange information with computer network of other devices. For now, fiber optic communication lines are best medium in large amounts of information transmission and conversion in adaptive protection. Adaptive protection is a protection theory, according to this theory, which allows adjustment of the various protection functions, making them more adapted to practical power system operation. The key idea is to make certain changes to the protection system to respond due to load changes, such as power failures caused by switching operations or changes in the power system. The relevant literature gives the following definition of adaptive protection: Adaptive protection is a basic principle of protection, this principle means the relay can automatically adjust its various protection functions, or change to be more suitable for given power system conditions. In addition to conventional protection, it must also have clear adaptive function modules, and only in this case can it be called adaptive protection. For the general protection adaptive capacity and detect some aspects of complex fault there are some limitations, hardware circuit of microprocessor line protection device is discussed in this thesis, both hardware and algorithm considers the anti-jamming methods. On the software side, the use of a relatively new method of frequency measurement, dynamically tracking changes of frequency, real-time adjustment of sampling frequency for sampling; and using the new algorithm, improving data accuracy and simplifying the hardware circuit. The adaptive principle is applied to microprocessor line protection, adaptive instantaneous overcurrent protection, overcurrent protection principles, etc, to meet the requirements of rapid change operation mode to improve the performance of line protection. Relay protection needs to adapt to frequent changes in the power system operating mode, correctly remove various failure and equipment, and adaptive relay protection maintains a system of standard features in case of parameter changes. The simulation results show that it is an effective adaptive method.",2011,0, 5411,A method for online analyzing excitation systems performance based on PMU measurements,"A method based on synchronized phasor measurement technology for analyzing and evaluating the dynamic regulating performance of excitation system using dynamic electrical data acquired by phasor measurement unit (PMU) is proposed. Combined with an engineering processing of corresponding excitation system performance parameters, the method realizes the calculation and analysis of the main excitation system performance indexes through detecting and extracting the course of a generator disturbance and its excitation system response. Meanwhile, its whole software system functions are designed and realized based on the system structure of a WAMS.
It is concluded through the method introduction and practical project applications that compared with conventional analysis methods, this method has the advantages of online analysis, offline research, simpleness and practicality, convenient use, and its computed results can be treated as an important reference for the evaluation of dynamic regulating performances of a excitation system.",2011,0, 5412,Comprehension oriented software fault location,"Software errors can potentially lead to disastrous consequences. Unfortunately, debugging software errors can be difficult and time-consuming. A comprehension oriented software fault location approach (COFL) is proposed in this paper to provide automated assistance in bug location. It not only locates program predicates predicting bugs, but also provides high efficiency demand-driven data flow and control flow analysis to help developers understand the causes and contexts of bugs.",2011,0, 5413,A finite queuing model with generalized modified Weibull testing effort for software reliability,"This study incorporates a generalized modified Weibull (GMW) testing effort function (TEF) into failure detection process (FDP) and fault correction process (FCP). Although some researchers have been devoted to model these two processes, the influence of the amount of resources on lag of these two processes is not discussed. The amount of resources consumed can be depicted as testing effort function (TEF), and can largely influence failure detection speed and the time to correct a detected failure. Thus, in this paper, we will integrate a TEF into FDP and FCP. Further, we show that a GMW TEF can be expressed as a TEF curve, and present a finite server queuing (FSQ) model which permits a joint study of FDP and FCP two processes. An actual software failure data set is analyzed to illustrate the effectiveness of proposed model. Experimental results show that the proposed model has a fairly accurate predication capability.",2011,0, 5414,A study on cooling efficiency improvement of thin film transistor Liquid Crystal Display (TFT-LCD) modules,"In recent years, LCD (Liquid Crystal Display) TVs are taking the place of CRT (Cathode Ray Tube) TVs very fast by bringing new display technologies into use. LCD module technology is divided into two main groups; the first one is CCFL (Cold Cathode Fluorescent Lamp) display which was the first type used in LCD TV, the other one is the LED (Light Emitting Diode) module which is the newest display technology comes to make slim TV design. There is a thermal challenge making slim TV design. The purpose of this paper is to investigate the thermal analysis and modeling of a 32"""" TFT-LCD LED module, The performance of LCD TV is strongly dependant on thermal effects such as temperature and its distribution on LCD displays The illumination of the display was insured by 180 light emitting diodes (LEDs) located at the top and bottom edges of the modules. Hence, in order to insure good image quality in display and long service life, an adequate thermal management is necessary. For this purpose, a commercially available computational fluid dynamics (CFD) simulation software FloEFD was used to predict the temperature distribution. This thermal prediction by computational method was validated by an experimental thermal analysis by attaching 10 thermocouples on the back cover of the modules and measuring the temperatures. 
Also, thermal camera images of the display by FLIR Thermacam SC 2000 test device were also analyzed.",2011,0, 5415,UIO-based diagnosis of aircraft engine control systems using scilab,"Fault diagnosis is of significant importance to the robustness of aeroengine control systems. This paper makes use of full-order unknown input observers (UIOs) to facilitate the diagnosis of sensor/actuator faults in engine control systems. The built-in ui-observer function in Scilab, however, can not give satisfying performance, in terms of observer realization. Hence we rewrite this UIO program in standard Scilab scripts and decouple the effect of unknown disturbances upon state estimation to improve the sensitivity to engine faults. An evaluation platform is created on the basis of the Xcos tool in a Simulink-like manner. All the above work is accomplished in the Scilab environment. Experimental results on an aircraft turbofan engine demonstrate that the suggested UIO diagnostic method has good anti-disturbance ability and can effectively detect and isolate sensor/actuator faults under various fault conditions.",2011,0, 5416,Assessing integrated measurement and evaluation strategies: A case study,"This paper presents a case study aimed at understanding and comparing integrated strategies for software measurement and evaluation, considering a strategy as a resource from the assessed entity standpoint. The evaluation focus is on the quality of the capabilities of a measurement and evaluation strategy taking into account three key aspects: i) the conceptual framework, centered on a terminological base, ii) the explicit specification of the process, and iii) the methodological/technological support. We consider a strategy is integrated if to great extent these three capabilities are met simultaneously. In the illustrated case study two strategies i.e. GQM+Strategies (Goal-Question-Metric), and GOCAME (Goal-Oriented Context-Aware Measurement and Evaluation) are evaluated. The given results allowed us the understanding of strengths and weaknesses for both strategies, and planning improvement actions as well.",2011,0, 5417,Power system on-line risk assessment and decision support based on weighting fault possibility model,"The outdoor component weighting fault possibility model is established based on operational state of components, utility theory and probability theory. The model has considered the dispatchers' operation experience and sensitivity to the weather. Using this model, the risk indices of power system can be simulated and calculated. The online risk analysis software of power supply (RAPS) for regional grid has been developed by using AC-DC hybrid algorithm and introducing some advanced technology, including parallel computing, dynamic node ordering optimization, matrix inverse optimization, etc. The system can supply dispatchers with real-time decision information, and provide decision support for dispatchers. By applying this software in a regional grid, the validity and practicality of this model have been proved.",2011,0, 5418,Research on the relationship between curvature radius of deflection basin and stress state on bottom of semi-rigid type base,"In China'current specification for the design of asphalt pavement, tensile stress of asphalt layer bottom is one of design indexes. However, the design index can not be detected and verified in practical engineering application. 
Research and analysis of deflection bowl in this paper show that there is a relationship between tensile stress of each layer bottom and deflections in different positions. The dynamic analysis model of rigid pavement under falling weight deflectometer load was established by utilizing ANSYS for researching the relationship between tensile stress of semi-rigid layer bottom and the curvature radius of deflection basin. This paper tries to seek a testing method to characterize pavement design indexes. It would be helpful to establish a relationship between theoretical calculation and the actual test of engineering. It also contribute to evaluate the performance of pavement and riding quality.",2011,0, 5419,An ultrasonic testing method of multi-layer adhesive joint based on wavelet analysis,"In order to detect the debond defect of multilayer metal-rubber adhesive joint, the mature ultrasonic detecting technique was used. By rebuilding an existent ultrasonic detecting instrument and utilizing a high speed data acquisition card and LabVIEW developing environment, an ultrasonic signal acquisition and analysis platform was successfully built. Echo signal acquisition was programmed. The signal was decomposed and reconstructed by utilizing db7 wavelet in Matlab script node of LabVIEW. A contrast experiment was carried out on two specimens which have debond defect. The result shows that the reconstructed signals are remarkably different between the well-bonded areas and the debonded areas, which indicates that the wavelet analysis method is effective in detecting the debond defect under the first rubber layer.",2011,0, 5420,Sampling + DMR: Practical and low-overhead permanent fault detection,"With technology scaling, manufacture-time and in-field permanent faults are becoming a fundamental problem. Multi-core architectures with spares can tolerate them by detecting and isolating faulty cores, but the required fault detection coverage becomes effectively 100% as the number of permanent faults increases. Dual-modular redundancy(DMR) can provide 100% coverage without assuming device-level fault models, but its overhead is excessive. In this paper, we explore a simple and low-overhead mechanism we call Sampling-DMR: run in DMR mode for a small percentage (1% of the time for example) of each periodic execution window (5 million cycles for example). Although Sampling-DMR can leave some errors undetected, we argue the permanent fault coverage is 100% because it can detect all faults eventually. SamplingDMR thus introduces a system paradigm of restricting all permanent faults' effects to small finite windows of error occurrence. We prove an ultimate upper bound exists on total missed errors and develop a probabilistic model to analyze the distribution of the number of undetected errors and detection latency. The model is validated using full gate-level fault injection experiments for an actual processor running full application software. Sampling-DMR outperforms conventional techniques in terms of fault coverage, sustains similar detection latency guarantees, and limits energy and performance overheads to less than 2%.",2011,0, 5421,Monitoring high performance data streams in vertical markets: Theory and applications in public safety and healthcare,"Over the last several years, monitoring high performance data stream sources has become very important in various vertical markets. 
For example, in the public safety sector, monitoring and automatically identifying individuals suspected of terrorist or criminal activity without physically interacting with them has become a crucial security function. In the healthcare industry, noninvasive mechanical home ventilation monitoring has allowed patients with chronic respiratory failure to be moved from the hospital to a home setting without jeopardizing quality of life. In order to improve the efficiency of large data stream processing in such applications, we contend that data stream management systems (DSMS) should be introduced into the monitoring infrastructure. We also argue that monitoring tasks should be performed by executing data stream queries defined in Continuous Query Language (CQL), which we have extended with: 1) new operators that allow creation of a sophisticated event-based alerting system through the definition of threshold schemes and threshold activity scheduling, and 2) multimedia support, which allows manipulation of continuous multimedia data streams using a similarity-based join operator which permits correlation of data arriving in multimedia streams with static content stored in a conventional multimedia database. We developed a prototype in order to assess these proposed concepts and verified the effectiveness of our framework in a lab environment.",2011,0, 5422,Distortion estimation for reference frame modification methods,"Due to the transmission of encoded video over error prone channels, using error resilient techniques at the encoder has become an essential issue. These techniques try to decrease the impact of transmission errors by using different approaches such as inserting Intra MacroBlocks (MBs), changing the prediction structure, or considering the channel state in selecting the best MB modes. In this work, we make use of the channel aware mode decision scheme used in the Loss Aware Rate Distortion Optimization (LARDO) method while simultaneously using the prediction structure of the Improved Generalized Source Channel Prediction (IGSCP) technique. In order to combine these two schemes, we estimate the end-to-end distortion for the IGSCP prediction structure in the H.264/AVC encoder. Simulation results, using the JSVM software, demonstrate the effectiveness of our technique for different sequences.",2011,0, 5423,Architecture for Embedding Audiovisual Feature Extraction Tools in Archives,"Soon, it will no longer be sufficient for only archivists to annotate audiovisual material. Not only is the number of archivists limited, but the time they spend on annotating one item is insufficient to create time-based and detailed descriptions about the content to make fully optimized video search possible. Furthermore, as a result of file-based production methods, we observe an accelerated increase in newly created audiovisual material that must be described. Fortunately, high-quality feature extraction (FE) tools are increasingly being developed by research institutes. These tools examine the audiovisual essence and return particular information about the analyzed video, audio, or both streams. For example, the tools can automatically detect shot boundaries, detect and recognize faces and objects, and segment audio streams. As a result, they quickly and cheaply generate metadata that can be used for indexing and searching. 
In addition, they relieve archivists of the need to perform tedious, repetitive, but necessary low-added value tasks, such as identifying within an audio stream the speech and the music segments. Although most tools are not yet commercially offered, these solutions are expected to become available soon for broadcasters and media companies alike. This paper describes a solution for integrating such FE tools within the annotation workflow of a media company. This solution, in the form of an architecture and workflow, is scalable, extensible, and loosely coupled and has clear and easy-to-implement interfaces. As such, our architecture allows additional tools to be plugged in irrespective of the software and hardware used by the media company. By integrating FE tools within the workflow of the annotating audiovisual essence, more and better metadata can be created, allowing other tools to improve indexing, search, and retrieval of media material within audiovisual archives.",2011,0,3965 5424,Practical development of an Eclipse-based software fault prediction tool using Naive Bayes algorithm,"Despite the amount of effort software engineers have been putting into developing fault prediction models, software fault prediction still poses great challenges. This research using machine learning and statistical techniques has been ongoing for 15years, and yet we still have not had a breakthrough. Unfortunately, none of these prediction models have achieved widespread applicability in the software industry due to a lack of software tools to automate this prediction process. Historical project data, including software faults and a robust software fault prediction tool, can enable quality managers to focus on fault-prone modules. Thus, they can improve the testing process. We developed an Eclipse-based software fault prediction tool for Java programs to simplify the fault prediction process. We also integrated a machine learning algorithm called Naive Bayes into the plug-in because of its proven high-performance for this problem. This article presents a practical view to software fault prediction problem, and it shows how we managed to combine software metrics with software fault data to apply Naive Bayes technique inside an open source platform.",2011,1, 5425,Comparing Boosting and Bagging Techniques With Noisy and Imbalanced Data,"This paper compares the performance of several boosting and bagging techniques in the context of learning from imbalanced and noisy binary-class data. Noise and class imbalance are two well-established data characteristics encountered in a wide range of data mining and machine learning initiatives. The learning algorithms studied in this paper, which include SMOTEBoost, RUSBoost, Exactly Balanced Bagging, and Roughly Balanced Bagging, combine boosting or bagging with data sampling to make them more effective when data are imbalanced. These techniques are evaluated in a comprehensive suite of experiments, for which nearly four million classification models were trained. All classifiers are assessed using seven different performance metrics, providing a complete perspective on the performance of these techniques, and results are tested for statistical significance via analysis-of-variance modeling. 
The experiments show that the bagging techniques generally outperform boosting, and hence in noisy data environments, bagging is the preferred method for handling class imbalance.",2011,1, 5426,An industrial case study of classifier ensembles for locating software defects,"As the application layer in embedded systems dominates over the hardware, ensuring software quality becomes a real challenge. Software testing is the most time-consuming and costly project phase, specifically in the embedded software domain. Misclassifying a safe code as defective increases the cost of projects, and hence leads to low margins. In this research, we present a defect prediction model based on an ensemble of classifiers. We have collaborated with an industrial partner from the embedded systems domain. We use our generic defect prediction models with data coming from embedded projects. The embedded systems domain is similar to mission critical software so that the goal is to catch as many defects as possible. Therefore, the expectation from a predictor is to get very high probability of detection (pd). On the other hand, most embedded systems in practice are commercial products, and companies would like to lower their costs to remain competitive in their market by keeping their false alarm (pf) rates as low as possible and improving their precision rates. In our experiments, we used data collected from our industry partners as well as publicly available data. Our results reveal that ensemble of classifiers significantly decreases pf down to 15% while increasing precision by 43% and hence, keeping balance rates at 74%. The cost-benefit analysis of the proposed model shows that it is enough to inspect 23% of the code on local datasets to detect around 70% of defects.",2011,1, 5427,An ant colony optimization algorithm to improve software quality prediction models: Case of class stability,"Context: Assessing software quality at the early stages of the design and development process is very difficult since most of the software quality characteristics are not directly measurable. Nonetheless, they can be derived from other measurable attributes. For this purpose, software quality prediction models have been extensively used. However, building accurate prediction models is hard due to the lack of data in the domain of software engineering. As a result, the prediction models built on one data set show a significant deterioration of their accuracy when they are used to classify new, unseen data. Objective: The objective of this paper is to present an approach that optimizes the accuracy of software quality predictive models when used to classify new data. Method: This paper presents an adaptive approach that takes already built predictive models and adapts them (one at a time) to new data. We use an ant colony optimization algorithm in the adaptation process. The approach is validated on stability of classes in object-oriented software systems and can easily be used for any other software quality characteristic. It can also be easily extended to work with software quality predictive problems involving more than two classification labels. Results: Results show that our approach out-performs the machine learning algorithm C4.5 as well as random guessing. It also preserves the expressiveness of the models which provide not only the classification label but also guidelines to attain it.
Conclusion: Our approach is an adaptive one that can be seen as taking predictive models that have already been built from common domain data and adapting them to context-specific data. This is suitable for the domain of software quality since the data is very scarce and hence predictive models built from one data set are hard to generalize and reuse on new data.",2011,1, 5428,Effective and Efficient Memory Protection Using Dynamic Tainting,"Programs written in languages allowing direct access to memory through pointers often contain memory-related faults, which cause nondeterministic failures and security vulnerabilities. We present a new dynamic tainting technique to detect illegal memory accesses. When memory is allocated, at runtime, we taint both the memory and the corresponding pointer using the same taint mark. Taint marks are then propagated and checked every time a memory address m is accessed through a pointer p; if the associated taint marks differ, an illegal access is reported. To allow always-on checking using a low overhead, hardware-assisted implementation, we make several key technical decisions. We use a configurable, low number of reusable taint marks instead of a unique mark for each allocated area of memory, reducing the performance overhead without losing the ability to target most memory-related faults. We also define the technique at the binary level, which helps handle applications using third-party libraries whose source code is unavailable. We created a software-only prototype of our technique and simulated a hardware-assisted implementation. Our results show that 1) it identifies a large class of memory-related faults, even when using only two unique taint marks, and 2) a hardware-assisted implementation can achieve performance overheads in single-digit percentages.",2012,0, 5429,Evaluation and Measurement of Software Process Improvement - A Systematic Literature Review,"BACKGROUND-Software Process Improvement (SPI) is a systematic approach to increase the efficiency and effectiveness of a software development organization and to enhance software products. OBJECTIVE-This paper aims to identify and characterize evaluation strategies and measurements used to assess the impact of different SPI initiatives. METHOD-The systematic literature review includes 148 papers published between 1991 and 2008. The selected papers were classified according to SPI initiative, applied evaluation strategies, and measurement perspectives. Potential confounding factors interfering with the evaluation of the improvement effort were assessed. RESULTS-Seven distinct evaluation strategies were identified, wherein the most common one, Pre-Post Comparison, was applied in 49 percent of the inspected papers. Quality was the most measured attribute (62 percent), followed by Cost (41 percent), and Schedule (18 percent). Looking at measurement perspectives, Project represents the majority with 66 percent. CONCLUSION-The evaluation validity of SPI initiatives is challenged by the scarce consideration of potential confounding factors, particularly given that Pre-Post Comparison was identified as the most common evaluation strategy, and the inaccurate descriptions of the evaluation context.
Measurements to assess the short and mid-term impact of SPI initiatives prevail, whereas long-term measurements in terms of customer satisfaction and return on investment tend to be less used.",2012,0, 5430,Invariant-Based Automatic Testing of Modern Web Applications,"Ajax-based Web 2.0 applications rely on stateful asynchronous client/server communication, and client-side runtime manipulation of the DOM tree. This not only makes them fundamentally different from traditional web applications, but also more error-prone and harder to test. We propose a method for testing Ajax applications automatically, based on a crawler to infer a state-flow graph for all (client-side) user interface states. We identify Ajax-specific faults that can occur in such states (related to, e.g., DOM validity, error messages, discoverability, back-button compatibility) as well as DOM-tree invariants that can serve as oracles to detect such faults. Our approach, called Atusa, is implemented in a tool offering generic invariant checking components, a plugin-mechanism to add application-specific state validators, and generation of a test suite covering the paths obtained during crawling. We describe three case studies, consisting of six subjects, evaluating the type of invariants that can be obtained for Ajax applications as well as the fault revealing capabilities, scalability, required manual effort, and level of automation of our testing approach.",2012,0, 5431,A Fiber-Optic Multisensor System for Predischarges Detection on Electrical Equipment,"An innovative detection prototype, developed to improve the reliability of distribution networks is described. It is based on a multisensing approach including three different types of fiber-optic sensors. These sensors are based on different detection principles to measure, respectively, light ignition, sound pressure, and ozone changes produced by predischarge phenomena on medium voltage (MV) electrical equipments. A multifunctional software interface was developed to manage simultaneous acquisition and processing of all signal outputs. Preliminary tests were performed inside a MV switchboard inducing defects to simulate predischarge phenomena. A first analysis of simultaneous responses of the three sensors confirmed the feasibility of this combined approach as a potential diagnostic tool to assess the condition of MV electrical components.",2012,0, 5432,Automating Data Analysis and Acquisition Setup in a Silicon Debug Environment,"With the growing size of modern designs and more strict time-to-market constraints, design errors can unavoidably escape pre-silicon verification and reside in silicon prototypes. Due to those errors and faults in the fabrication process, silicon debug has become a necessary step in the digital integrated circuit design flow. Embedded hardware blocks, such as scan chains and trace buffers, provide a means to acquire data of internal signals in real time for debugging. However, the amount of the data is limited compared to pre-silicon debugging. This paper presents an automated software solution to analyze this sparse data to detect suspects of the failure in both the spatial and temporal domain. It also introduces a technique to automate the configuration process for trace-buffer-based hardware in order to acquire helpful information for debugging the failure. The technique takes the hardware constraints into account and identifies alternatives for signals not part of the traceable set so that their values can be restored by implications. 
The experiments demonstrate the effectiveness of the proposed software solution in terms of run-time and resolution.",2012,0, 5433,Towards Better Fault Localization: A Crosstab-Based Statistical Approach,"It is becoming prohibitively expensive and time consuming, as well as tedious and error-prone, to perform debugging manually. Among the debugging activities, fault localization has been one of the most expensive, and therefore, a large number of fault-localization techniques have been proposed over the recent years. This paper presents a crosstab-based statistical technique that makes use of the coverage information of each executable statement and the execution result (success or failure) with respect to each test case to localize faults in an effective and efficient manner. A crosstab is constructed for each executable statement, and a statistic is computed to determine the suspiciousness of the corresponding statement. Statements with a higher suspiciousness are more likely to contain bugs and should be examined before those with a lower suspiciousness. Case studies are performed on both small- (the Siemens and Unix suites) and large-sized programs (space, grep, gzip, and make), and results suggest that the crosstab-based technique (CBT) is more effective (in terms of a smaller percentage of executable statements that have to be examined until the first statement containing the fault is reached) than other techniques, such as Tarantula. Further studies using the Siemens suite reveal that the proposed technique is also more effective at locating faults than other statistically oriented techniques, such as SOBER and Liblit05. Additional experiments evaluate the CBT from other perspectives, such as its efficiency in terms of time taken, its applicability to object-oriented languages (on a very large Java program: Ant), and its sensitivity to test suite size, and demonstrate its superior performance.",2012,0, 5434,Adaptive Estimation-Based Leakage Detection for a Wind Turbine Hydraulic Pitching System,"Operation and maintenance (OM) cost has contributed a major share in the cost of energy for wind power generation. Condition monitoring can help reduce the OM cost of wind turbine. Among the wind turbine components, the fault diagnosis of the hydraulic pitching system is investigated in this study. The hydraulic pitching system is critical for energy capture, load reduction, and aerodynamic braking. The fault detection of internal and external leakages in the hydraulic pitching system is studied in this paper. Based on the dynamic model of the hydraulic pitching system, an adaptive parameter estimation algorithm has been developed in order to identify the internal and external leakages under the time-varying load on the pitch axis. This scheme can detect and isolate individual faults in spite of their strong coupling in the hydraulic model. A scale-down setup has been developed as the hydraulic pitch emulator, with which the proposed method is verified through experiments. The pitching-axis load input is obtained from simulation of a 1.5-MW variable-speed-variable-pitch turbine model under turbulent wind profiles on the FAST (fatigue, aerodynamics, structural, and tower) software developed by the National Renewable Energy Laboratory. 
With the experimental data, the leakage and leakage coefficients can be predicted via the proposed method with good performance.",2012,0, 5435,Apply Quantitative Management Now,"The Assessment Approach for Quantitative Process Management (A2QPM) helps identify software process measures for quantitative analysis even when organizations lack formal systems for process measurement. A2QPM is the first approach to quantitative management that offers software organizations a well-defined, detailed guideline for assessing their software processes and applying beneficial quantitative techniques to improve them. All the A2QPM applications we've described resulted in quantitative analysis implementations. Although the organizations had institutionalized neither a measurement process nor the processes that were subject to assessment, A2QPM nevertheless enabled quantitative improvement of the issues under study, such as process performance and product quality. Although we didn't intend the approach to be a shortcut for high ML appraisals, it has also helped organizations on their way to high maturity.",2012,0, 5436,Software Fault Prediction Using Quad Tree-Based K-Means Clustering Algorithm,"Unsupervised techniques like clustering may be used for fault prediction in software modules, more so in those cases where fault labels are not available. In this paper a Quad Tree-based K-Means algorithm has been applied for predicting faults in program modules. The aims of this paper are twofold. First, Quad Trees are applied for finding the initial cluster centers to be input to the K-Means Algorithm. An input threshold parameter governs the number of initial cluster centers and by varying it the user can generate desired initial cluster centers. The concept of clustering gain has been used to determine the quality of clusters for evaluation of the Quad Tree-based initialization algorithm as compared to other initialization techniques. The clusters obtained by Quad Tree-based algorithm were found to have maximum gain values. Second, the Quad Tree-based algorithm is applied for predicting faults in program modules. The overall error rates of this prediction approach are compared to other existing algorithms and are found to be better in most of the cases.",2012,1, 5437,Monetary Cost-Aware Checkpointing and Migration on Amazon Cloud Spot Instances,"Recently introduced spot instances in the Amazon Elastic Compute Cloud (EC2) offer low resource costs in exchange for reduced reliability; these instances can be revoked abruptly due to price and demand fluctuations. Mechanisms and tools that deal with the cost-reliability tradeoffs under this schema are of great value for users seeking to lessen their costs while maintaining high reliability. We study how mechanisms, namely, checkpointing and migration, can be used to minimize the cost and volatility of resource provisioning. Based on the real price history of EC2 spot instances, we compare several adaptive checkpointing schemes in terms of monetary costs and improvement of job completion times. We evaluate schemes that apply predictive methods for spot prices. Furthermore, we also study how work migration can improve task completion in the midst of failures while maintaining low monetary costs. 
Trace-based simulations show that our schemes can reduce significantly both monetary costs and task completion times of computation on spot instance.",2012,0, 5438,"Reasoning about the Reliability of Diverse Two-Channel Systems in Which One Channel Is """"Possibly Perfect""""","This paper refines and extends an earlier one by the first author [1]. It considers the problem of reasoning about the reliability of fault-tolerant systems with two channels (i.e., components) of which one, A, because it is conventionally engineered and presumed to contain faults, supports only a claim of reliability, while the other, B, by virtue of extreme simplicity and extensive analysis, supports a plausible claim of perfection. We begin with the case where either channel can bring the system to a safe state. The reasoning about system probability of failure on demand (pfd) is divided into two steps. The first concerns aleatory uncertainty about 1) whether channel A will fail on a randomly selected demand and 2) whether channel B is imperfect. It is shown that, conditional upon knowing pA (the probability that A fails on a randomly selected demand) and pB (the probability that channel B is imperfect), a conservative bound on the probability that the system fails on a randomly selected demand is simply pA X pB. That is, there is conditional independence between the events A fails and B is imperfect. The second step of the reasoning involves epistemic uncertainty, represented by assessors' beliefs about the distribution of (pA, pB), and it is here that dependence may arise. However, we show that under quite plausible assumptions, a conservative bound on system pfd can be constructed from point estimates for just three parameters. We discuss the feasibility of establishing credible estimates for these parameters. We extend our analysis from faults of omission to those of commission, and then combine these to yield an analysis for monitored architectures of a kind proposed for aircraft.",2012,0, 5439,EasyPDP: An Efficient Parallel Dynamic Programming Runtime System for Computational Biology,"Dynamic programming (DP) is a popular and efficient technique in many scientific applications such as computational biology. Nevertheless, its performance is limited due to the burgeoning volume of scientific data, and parallelism is necessary and crucial to keep the computation time at acceptable levels. The intrinsically strong data dependency of dynamic programming makes it difficult and error-prone for the programmer to write a correct and efficient parallel program. Therefore, this paper builds a runtime system named EasyPDP aiming at parallelizing dynamic programming algorithms on multicore and multiprocessor platforms. Under the concept of software reusability and complexity reduction of parallel programming, a DAG Data Driven Model is proposed, which supports those applications with a strong data interdependence relationship. Based on the model, EasyPDP runtime system is designed and implemented. It automatically handles thread creation, dynamic data task allocation and scheduling, data partitioning, and fault tolerance. Five frequently used DAG patterns from biological dynamic programming algorithms have been put into the DAG pattern library of EasyPDP, so that the programmer can choose to use any of them according to his/her specific application. Besides, an ideal computing distribution model is proposed to discuss the optimal values for the performance tuning arguments of EasyPDP. 
We evaluate the performance potential and fault tolerance feature of EasyPDP in multicore system. We also compare EasyPDP with other methods such as Block-Cycle Wavefront (BCW). The experimental results illustrate that EasyPDP system is fine and provides an efficient infrastructure for dynamic programming algorithms.",2012,0, 5440,Meeting Soft Deadlines in Scientific Workflows Using Resubmission Impact,"We propose a new heuristic called Resubmission Impact to support fault tolerant execution of scientific workflows in heterogeneous parallel and distributed computing environments. In contrast to related approaches, our method can be effectively used on new or unfamiliar environments, even in the absence of historical executions or failure trace models. On top of this method, we propose a dynamic enactment and rescheduling heuristic able to execute workflows with a high degree of fault tolerance, while taking into account soft deadlines. Simulated experiments of three real-world workflows in the Austrian Grid demonstrate that our method significantly reduces the resource waste compared to conservative task replication and resubmission techniques, while having a comparable makespan and only a slight decrease in the success probability. On the other hand, the dynamic enactment method manages to successfully meet soft deadlines in faulty environments in the absence of historical failure trace information or models.",2012,0, 5441,Formal Analysis of the Probability of Interaction Fault Detection Using Random Testing,"Modern systems are becoming highly configurable to satisfy the varying needs of customers and users. Software product lines are hence becoming a common trend in software development to reduce cost by enabling systematic, large-scale reuse. However, high levels of configurability entail new challenges. Some faults might be revealed only if a particular combination of features is selected in the delivered products. But testing all combinations is usually not feasible in practice, due to their extremely large numbers. Combinatorial testing is a technique to generate smaller test suites for which all combinations of t features are guaranteed to be tested. In this paper, we present several theorems describing the probability of random testing to detect interaction faults and compare the results to combinatorial testing when there are no constraints among the features that can be part of a product. For example, random testing becomes even more effective as the number of features increases and converges toward equal effectiveness with combinatorial testing. Given that combinatorial testing entails significant computational overhead in the presence of hundreds or thousands of features, the results suggest that there are realistic scenarios in which random testing may outperform combinatorial testing in large systems. Furthermore, in common situations where test budgets are constrained and unlike combinatorial testing, random testing can still provide minimum guarantees on the probability of fault detection at any interaction level. However, when constraints are present among features, then random testing can fare arbitrarily worse than combinatorial testing. 
As a result, in order to have a practical impact, future research should focus on better understanding the decision process to choose between random testing and combinatorial testing, and improve combinatorial testing in the presence of feature constraints.",2012,0, 5442,Structural Complexity and Programmer Team Strategy: An Experimental Test,"This study develops and empirically tests the idea that the impact of structural complexity on perfective maintenance of object-oriented software is significantly determined by the team strategy of programmers (independent or collaborative). We analyzed two key dimensions of software structure, coupling and cohesion, with respect to the maintenance effort and the perceived ease-of-maintenance by pairs of programmers. Hypotheses based on the distributed cognition and task interdependence theoretical frameworks were tested using data collected from a controlled lab experiment employing professional programmers. The results show a significant interaction effect between coupling, cohesion, and programmer team strategy on both maintenance effort and perceived ease-of-maintenance. Highly cohesive and low-coupled programs required lower maintenance effort and were perceived to be easier to maintain than the low-cohesive programs and high-coupled programs. Further, our results would predict that managers who strategically allocate maintenance tasks to either independent or collaborative programming teams depending on the structural complexity of software could lower their team's maintenance effort by as much as 70 percent over managers who use simple uniform resource allocation policies. These results highlight the importance of achieving congruence between team strategies employed by collaborating programmers and the structural complexity of software.",2012,0, 5443,Clone Management for Evolving Software,"Recent research results suggest a need for code clone management. In this paper, we introduce JSync, a novel clone management tool. JSync provides two main functions to support developers in being aware of the clone relation among code fragments as software systems evolve and in making consistent changes as they create or modify cloned code. JSync represents source code and clones as (sub)trees in Abstract Syntax Trees, measures code similarity based on structural characteristic vectors, and describes code changes as tree editing scripts. The key techniques of JSync include the algorithms to compute tree editing scripts, to detect and update code clones and their groups, to analyze the changes of cloned code to validate their consistency, and to recommend relevant clone synchronization and merging. Our empirical study on several real-world systems shows that JSync is efficient and accurate in clone detection and updating, and provides the correct detection of the defects resulting from inconsistent changes to clones and the correct recommendations for change propagation across cloned code.",2012,0, 5444,Mutation-Driven Generation of Unit Tests and Oracles,"To assess the quality of test suites, mutation analysis seeds artificial defects (mutations) into programs; a nondetected mutation indicates a weakness in the test suite. We present an automated approach to generate unit tests that detect these mutations for object-oriented classes. This has two advantages: First, the resulting test suite is optimized toward finding defects modeled by mutation operators rather than covering code. 
Second, the state change caused by mutations induces oracles that precisely detect the mutants. Evaluated on 10 open source libraries, our test prototype generates test suites that find significantly more seeded defects than the original manually written test suites.",2012,0, 5445,Statistical Reliability Estimation of Microprocessor-Based Systems,"What is the probability that the execution state of a given microprocessor running a given application is correct, in a certain working environment with a given soft-error rate? Trying to answer this question using fault injection can be very expensive and time consuming. This paper proposes the baseline for a new methodology, based on microprocessor error probability profiling, that aims at estimating fault injection results without the need of a typical fault injection setup. The proposed methodology is based on two main ideas: a one-time fault-injection analysis of the microprocessor architecture to characterize the probability of successful execution of each of its instructions in presence of a soft-error, and a static and very fast analysis of the control and data flow of the target software application to compute its probability of success. The presented work goes beyond the dependability evaluation problem; it also has the potential to become the backbone for new tools able to help engineers to choose the best hardware and software architecture to structurally maximize the probability of a correct execution of the target software.",2012,0, 5446,Formal Specification-Based Inspection for Verification of Programs,"Software inspection is a static analysis technique that is widely used for defect detection, but which suffers from a lack of rigor. In this paper, we address this problem by taking advantage of formal specification and analysis to support a systematic and rigorous inspection method. The aim of the method is to use inspection to determine whether every functional scenario defined in the specification is implemented correctly by a set of program paths and whether every program path of the program contributes to the implementation of some functional scenario in the specification. The method is comprised of five steps: deriving functional scenarios from the specification, deriving paths from the program, linking scenarios to paths, analyzing paths against the corresponding scenarios, and producing an inspection report, and allows for a systematic and automatic generation of a checklist for inspection. We present an example to show how the method can be used, and describe an experiment to evaluate its performance by comparing it to perspective-based reading (PBR). The result shows that our method may be more effective in detecting function-related defects than PBR but slightly less effective in detecting implementation-related defects. We also describe a prototype tool to demonstrate the supportability of the method, and draw some conclusions about our work.",2012,0, 5447,Visual Readability Analysis: How to Make Your Writings Easier to Read,"We present a tool that is specifically designed to support a writer in revising a draft version of a document. In addition to showing which paragraphs and sentences are difficult to read and understand, we assist the reader in understanding why this is the case. This requires features that are expressive predictors of readability, and are also semantically understandable. 
In the first part of the paper, we, therefore, discuss a semiautomatic feature selection approach that is used to choose appropriate measures from a collection of 141 candidate readability features. In the second part, we present the visual analysis tool VisRA, which allows the user to analyze the feature values across the text and within single sentences. Users can choose between different visual representations accounting for differences in the size of the documents and the availability of information about the physical and logical layout of the documents. We put special emphasis on providing as much transparency as possible to ensure that the user can purposefully improve the readability of a sentence. Several case studies are presented that show the wide range of applicability of our tool. Furthermore, an in-depth evaluation assesses the quality of the measure and investigates how well users do in revising a text with the help of the tool.",2012,0, 5448,Data Acquisition of a Tensile Test Stand for Cryogenic Environment,Superconducting magnets and components are exposed to mechanical forces during cool down or current operation. The mechanical strength of used materials has to fulfill the specified requirements. A tensile test in cryogenic environment is one option in material testing to assess usability of materials. The PHOENIX facility at Karlsruhe Institute of Technology-Institute for Technical Physics is designated to analyse specimen on tensile load under cryogenic conditions. PHOENIX was adapted for economic operation. A total number of ten samples can be tested one after the other during one cool down cycle. PHOENIX is subject to a quality management system and it is planned to be accredited according to ISO 17025 standard in the near future. Focus of work will be the qualification of steel samples for quality assurance of cryogenic magnet components of the poloidal field and toroidal field coils in the framework of the ITER project. Recent instrumentation and software provide a standard degree of automation for the measurement tasks to be performed during a tensile test. The paper describes instrumentation equipment and implemented software features of the measurement and control system.,2012,0, 5449,Location of DC Line Faults in Conventional HVDC Systems With Segments of Cables and Overhead Lines Using Terminal Measurements,"This paper presents a novel algorithm to determine the location of dc line faults in an HVDC system with a mixed transmission media consisting of overhead lines and cables, using only the measurements taken at the rectifier and inverter ends of the composite transmission line. The algorithm relies on the traveling-wave principle, and requires the fault-generated surge arrival times at two ends of the dc line as inputs. With accurate surge arrival times obtained from time-synchronized measurements, the proposed algorithm can accurately predict the faulty segment as well as the exact fault location. Continuous wavelet transform coefficients of the input signal are used to determine the precise time of arrival of traveling waves at the dc line terminals. Two possible input signals-the dc voltage measured at the converter terminal and the current through the surge capacitors connected at the dc line end-are examined and both signals are found to be equally effective for detecting the traveling-wave arrival times. 
Performance of the proposed fault-location scheme is analyzed through detailed simulations carried out using the electromagnetic transient simulation software PSCAD. The impact of measurement noise on the fault-location accuracy is also studied in this paper.",2012,0, 5450,Analyzing Massive Machine Maintenance Data in a Computing Cloud,"We present a novel framework, CloudView, for storage, processing and analysis of massive machine maintenance data, collected from a large number of sensors embedded in industrial machines, in a cloud computing environment. This paper describes the architecture, design, and implementation of CloudView, and how the proposed framework leverages the parallel computing capability of a computing cloud based on a large-scale distributed batch processing infrastructure that is built of commodity hardware. A case-based reasoning (CBR) approach is adopted for machine fault prediction, where the past cases of failure from a large number of machines are collected in a cloud. A case-base of past cases of failure is created using the global information obtained from a large number of machines. CloudView facilitates organization of sensor data and creation of case-base with global information. Case-base creation jobs are formulated using the MapReduce parallel data processing model. CloudView captures the failure cases across a large number of machines and shares the failure information with a number of local nodes in the form of case-base updates that occur in a time scale of every few hours. At local nodes, the real-time sensor data from a group of machines in the same facility/plant is continuously matched to the cases from the case-base for predicting the incipient faults-this local processing takes a much shorter time of a few seconds. The case-base is updated regularly (in the time scale of a few hours) on the cloud to include new cases of failure, and these case-base updates are pushed from CloudView to the local nodes. Experimental measurements show that fault predictions can be done in real-time (on a timescale of seconds) at the local nodes and massive machine data analysis for case-base creation and updating can be done on a timescale of minutes in the cloud. Our approach, in addition to being the first reported use of the cloud architecture for maintenance data storage, processing and analysis, also evaluates several possible cloud-based architectures that leverage the advantages of the parallel computing capabilities of the cloud to make local decisions with global information efficiently, while avoiding potential data bottlenecks that can occur in getting the maintenance data in and out of the cloud.",2012,0, 5451,Robust White Matter Lesion Segmentation in FLAIR MRI,"This paper discusses a white matter lesion (WML) segmentation scheme for fluid attenuation inversion recovery (FLAIR) MRI. The method computes the volume of lesions with subvoxel precision by accounting for the partial volume averaging (PVA) artifact. As WMLs are related to stroke and carotid disease, accurate volume measurements are most important. Manual volume computation is laborious, subjective, time consuming, and error prone. Automated methods are a nice alternative since they quantify WML volumes in an objective, efficient, and reliable manner. PVA is initially modeled with a localized edge strength measure since PVA resides in the boundaries between tissues. This map is computed in 3-D and is transformed to a global representation to increase robustness to noise. 
Significant edges correspond to PVA voxels, which are used to find the PVA fraction (amount of each tissue present in mixture voxels). Results on simulated and real FLAIR images show high WML segmentation performance compared to ground truth (98.9% and 83% overlap, respectively), which outperforms other methods. Lesion load studies are included that automatically analyze WML volumes for each brain hemisphere separately. This technique does not require any distributional assumptions/parameters or training samples and is applied on a single MR modality, which is a major advantage compared to the traditional methods.",2012,0, 5452,Virtual Appliance Size Optimization with Active Fault Injection,"Virtual appliances store the required information to instantiate a functional Virtual Machine (VM) on Infrastructure as a Service (IaaS) cloud systems. Large appliance size obstructs IaaS systems to deliver dynamic and scalable infrastructures according to their promise. To overcome this issue, this paper offers a novel technique for virtual appliance developers to publish appliances for the dynamic environments of IaaS systems. Our solution achieves faster virtual machine instantiation by reducing the appliance size while maintaining its key functionality. The new virtual appliance optimization algorithm identifies the removable parts of the appliance. Then, it applies active fault injection to remove the identified parts. Afterward, our solution assesses the functionality of the reduced virtual appliance by applying the appliance-developer-provided validation algorithms. We also introduce a technique to parallelize the fault injection and validation phases of the algorithm. Finally, the prototype implementation of the algorithm is discussed to demonstrate the efficiency of the proposed algorithm through the optimization of two well-known virtual appliances. Results show that the algorithm significantly decreased virtual machine instantiation time and increased dynamism in IaaS systems.",2012,0, 5453,Altered Fingerprints: Analysis and Detection,"The widespread deployment of Automated Fingerprint Identification Systems (AFIS) in law enforcement and border control applications has heightened the need for ensuring that these systems are not compromised. While several issues related to fingerprint system security have been investigated, including the use of fake fingerprints for masquerading identity, the problem of fingerprint alteration or obfuscation has received very little attention. Fingerprint obfuscation refers to the deliberate alteration of the fingerprint pattern by an individual for the purpose of masking his identity. Several cases of fingerprint obfuscation have been reported in the press. Fingerprint image quality assessment software (e.g., NFIQ) cannot always detect altered fingerprints since the implicit image quality due to alteration may not change significantly. 
The main contributions of this paper are: 1) compiling case studies of incidents where individuals were found to have altered their fingerprints for circumventing AFIS, 2) investigating the impact of fingerprint alteration on the accuracy of a commercial fingerprint matcher, 3) classifying the alterations into three major categories and suggesting possible countermeasures, 4) developing a technique to automatically detect altered fingerprints based on analyzing orientation field and minutiae distribution, and 5) evaluating the proposed technique and the NFIQ algorithm on a large database of altered fingerprints provided by a law enforcement agency. Experimental results show the feasibility of the proposed approach in detecting altered fingerprints and highlight the need to further pursue this problem.",2012,0, 5454,Joint H.264/scalable video coding-multiple input multiple output rate control for wireless video applications,"Integrating H.264 scalable video coding (SVC) technology with a multiple input multiple output (MIMO) wireless system can significantly enhance the overall performance of high-quality real-time wireless video transmissions. However, the state-of-the-art techniques in these two areas are largely developed independently. In this research, with the objective to deliver the optimal visual quality and accurate rate regulation for wireless video applications, the authors propose a novel joint H.264/SVC-MIMO rate control (RC) algorithm for video compression and transmission over MIMO systems. The authors first present a systematic architecture for H.264/SVC compression and transmission over MIMO systems. Then, based on MIMO channel properties, the authors use a packet-level two-state Markov model to estimate MIMO channel states and predict the number of retransmitted bits in the presence of automatic repeat request. Finally, an efficient joint rate controller is proposed to regulate the output bit rate of each layer according to the available channel throughput and buffer fullness. The authors' extensive simulation results demonstrate that their algorithm outperforms JVT-W043 RC algorithm, adopted in the H.264/SVC reference software, by providing more accurate output bit rate, reducing buffer overflow, lessening frame skipping, and finally improving the overall coding quality.",2012,0, 5455,A Collaboration Maturity Model: Development and Exploratory Application,This paper presents a Collaboration Maturity Model (Col-MM) to assess an organization's team collaboration quality. The Col-MM is intended to be sufficiently generic to be applied to any type of collaboration and useable by practitioners for conducting self-assessments. The Col-MM was developed during a series of Focus Group meetings with professional collaboration experts. The model was piloted and subsequently applied in the automotive industry. This paper reports on the development and first field application of the Col-MM. The paper further serves as a starting point for future research in this area.,2012,0, 5456,Short-Circuit Fault Protection Strategy for High-Power Three-Phase Three-Wire Inverter,"This paper proposes a four-stage fault protection scheme against the short-circuit fault for the high-power three-phase three-wire combined inverter to achieve high reliability. The short-circuit fault on the load side is the focus of this paper, and the short-circuit fault of switching devices is not involved. Based on the synchronous rotating frame, the inverter is controlled as a voltage source in the normal state. 
When a short-circuit fault (line-to-line fault or balanced three-phase fault) occurs, the hardware-circuit-based hysteresis current control strategy can effectively limit the output currents and protect the switching devices from damage. In the meantime, the software controller detects the fault and switches to the current controlled mode. Under the current controlled state, the inverter behaves as a current source until the short-circuit fault is cleared by the circuit breaker. After clearing the fault, the output voltage recovers quickly from the current controlled state. Therefore, the selective protection is realized and the critical loads can be continuously supplied by the inverter. The operational principle, design consideration, and implementation are discussed in this paper. The simulation and experimental results are provided to verify the validity of theoretical analysis.",2012,0, 5457,Predicting expert developers for newly reported bugs using frequent terms similarities of bug attributes,"A software bug repository not only contains the data about software bugs, but also contains the information about the contribution of developers, quality engineers (testers), managers and other team members. It contains the information about the efforts of team members involved in resolving the software bugs. This information can be analyzed to identify some useful knowledge patterns. One such pattern is identifying the developers, who can help in resolving the newly reported software bugs. In this paper a new algorithm is proposed to discover experts for resolving the newly assigned software bugs. The purpose of proposed algorithm is two fold. First is to identify the appropriate developers for newly reported bugs. And second is to find the expertise for newly reported bugs that can help other developers to fix these bugs if required. All the important information in software bug reports is of textual data types like bug summary, description etc. The algorithm is designed using the analysis of this textual information. Frequent terms are generated from this textual information and then term similarity is used to identify appropriate experts (developers) for the newly reported software bug.",2012,0, 5458,Techniques and Tools for Parallelizing Software,"With the emergence of multicore and manycore processors, engineers must design and develop software in drastically new ways to benefit from the computational power of all cores. However, developing parallel software is much harder than sequential software because parallelism can't be abstracted away easily. Authors Hans Vandierendonck and Tom Mens provide an overview of technologies and tools to support developers in this complex and error-prone task.",2012,0, 5459,Evaluating Stratification Alternatives to Improve Software Defect Prediction,"Numerous studies have applied machine learning to the software defect prediction problem, i.e. predicting which modules will experience a failure during operation based on software metrics. However, skewness in defect-prediction datasets can mean that the resulting classifiers often predict the faulty (minority) class less accurately. This problem is well known in machine learning, and is often referred to as learning from imbalanced datasets. One common approach for mitigating skewness is to use stratification to homogenize class distributions; however, it is unclear what stratification techniques are most effective, both generally and specifically in software defect prediction. 
In this article, we investigate two major stratification alternatives (under-, and over-sampling) for software defect prediction using Analysis of Variance. Our analysis covers several modern software defect prediction datasets using a factorial design. We find that the main effect of under-sampling is significant at α = 0.05, as is the interaction between under- and over-sampling. However, the main effect of over-sampling is not significant.",2012,1, 5460,A Dependent Model for Fault Tolerant Software Systems During Debugging,"This paper proposes a special redundant model for describing the s-dependency of multi-version programming software during testing and debugging. N-version programming (NVP) is one of the most important software fault tolerance techniques. Many papers have studied the issue of fault correlation among versions. However, only a few of them consider this issue during the testing and debugging part of the software development life cycle. During testing and debugging, faults may not be successfully removed. Imperfect debugging may result in unsuccessful removal, and the introduction of new faults. Different from existing NVP models, the model proposed in this paper allows an assessment of s-dependency when correlated failures may not necessarily occur at the same execution time point. The model focuses on 2 VP systems. It is developed to be a bivariate counting process by assuming positive s-dependency among versions. Considering imperfect debugging, this bivariate process characterizes dynamic changes of fault contents for each version during testing and debugging. The system reliability, expected number of faults, probability of perfect debugging, and parameter estimation of model parameters are presented. An application example is given to illustrate the proposed model. The paper provides an alternative approach for evaluating the reliability of 2 VP software systems when there is positive s-dependency between versions.",2012,0, 5461,Low-cost control flow error protection by exploiting available redundancies in the pipeline,"Due to device miniaturization and reducing supply voltage, embedded systems are becoming more susceptible to transient faults. Specifically, faults in control flow can change the execution sequence, which might be catastrophic for safety critical applications. Many techniques are devised using software, hardware or software-hardware co-design for control flow error checking. Software techniques suffer from a significant amount of code size overhead, and hence, negative impact on performance and energy consumption. On the other hand, hardware-based techniques have a significant amount of hardware and area cost. In this research we exploit the available redundancies in the pipeline. The branch target buffer stores target addresses of taken branches, and ALU generates target addresses using the low-order branch displacement bits of branch instructions. To exploit these redundancies in the pipeline, we propose a control flow error checking (CFEC) scheme. It can detect control flow errors and recover from them with negligible energy and performance overhead.",2012,0, 5462,Analysis of Clustering Techniques for Software Quality Prediction,"Clustering is the unsupervised classification of patterns into groups. 
A clustering algorithm partitions a data set into several groups such that similarity within a group is larger than among groups. The clustering problem has been addressed in many contexts and by researchers in many disciplines, this reflects its broad appeal and usefulness as one of the steps in exploratory data analysis. There is need to develop some methods to build the software fault prediction model based on unsupervised learning which can help to predict the fault-proneness of program modules when fault labels for modules are not present. One of the such method is use of clustering techniques. This paper presents a case study of different clustering techniques and analyzes their performance.",2012,1, 5463,A Clustered Approach to Analyze the Software Quality Using Software Defects,As the software development begins there also exists the probability of occurrence of some defect in the software system. Software defects plays important role to take the decision about when the testing will be stopped. Software defects are one of the major factors that can decide the time of software delivery. Not only has the number of defects also the type of defect as well the criticality of a software defect affected the software quality. Software cannot be presented with software defects. All the Software Quality estimation approaches like CMM etc. follow the software defects as a parameter to estimate the software quality. The proposed work is also in the same direction. We are trying to categorize the software defects using some clustering approach and then the software defects will be measured in each clustered separately. The proposed system will analyze the software defect respective the software criticality and its integration with software module.,2012,0, 5464,Flexible Discrete Software Reliability Growth Model for Distributed Environment Incorporating Two Types of Imperfect Debugging,"In literature we have several software reliability growth models developed to monitor the reliability growth during the testing phase of the software development. These models typically use the calendar / execution time and hence are known as continuous time SRGM. However, very little seems to have been done in the literature to develop discrete SRGM. Discrete SRGM uses test cases in computer test runs as a unit of testing. Debugging process is usually imperfect because during testing all software faults are not completely removed as they are difficult to locate or new faults might be introduced. In real software development environment, the number of failures observed need not be same as the number of errors removed. If the number of failures observed is more than the number of faults removed then we have the case of imperfect debugging. Due to the complexity of the software system and the incomplete understanding of the software requirements, specifications and structure, the testing team may not be able to remove the fault perfectly on detection of the failure and the original fault may remain or get replaced by another fault. In this paper, we discuss a discrete software reliability growth model for distributed system considering imperfect debugging that faults are not always corrected/removed when they are detected and fault generation. The proposed model assumes that the software system consists of a finite number of reused and newly developed sub-systems. 
The reused sub-systems do not involve the effect of severity of the faults on the software reliability growth phenomenon because they stabilize over a period of time i.e. the growth is uniform whereas, the newly developed subsystem does involve. For newly developed component, it is assumed that removal process follows logistic growth curve due to the fact that learning of removal team grows as testing progresses. The fault removal phenomena for reused and newly developed sub-systems have been modeled separately and are summed to obtain the total fault removal phenomenon of the software system. The model has been validated on two software data sets and it is shown that the proposed model fares comparatively better than the existing one.",2012,0, 5465,Assessing HPC Failure Detectors for MPI Jobs,"Reliability is one of the challenges faced by exascale computing. Components are poised to fail during large-scale executions given current mean time between failure (MTBF) projections. To cope with failures, resilience methods have been proposed as explicit or transparent techniques. For the latter techniques, this paper studies the challenge of fault detection. This work contributes a study on generic fault detection capabilities at the MPI level and beyond. The objective is to assess different detectors, which ultimately may or may not be implemented within the application's runtime layer. A first approach utilizes a periodic liveness check while a second method promotes sporadic checks upon communication activities. The contributions of this paper are two-fold: (a) We provide generic interposing of MPI applications for fault detection. (b) We experimentally compare periodic and sporadic methods for liveness checking. We show that the sporadic approach, even though it imposes lower bandwidth requirements and utilizes lower frequency checking, results in equal or worse application performance than a periodic liveness test for larger number of nodes. We further show that performing liveness checks in separation from MPI applications results in lower overhead than inter-positioning, as demonstrated by our prototypes. Hence, we promote separate periodic fault detection as the superior approach for fault detection.",2012,0, 5466,Generative Inspection: An Intelligent Model to Detect and Remove Software Defects,"Software inspection covers the defects related to software tests Incompetence. The proposed model of this research performs defect removal actions as an important duty of inspection, as well as, using the capabilities of collaborative and knowledge base systems. The process improvement is continuously in progress by creating swap iteration in inspection model kernel. Making and modifying some rules related to defects, adds intelligence and learning features to the model. In order to validate the model, it is implemented in a real software inspection project. The varieties of detected and removed defects show the potential performance of the model and make the process reliable.",2012,0, 5467,ERSA: Error Resilient System Architecture for Probabilistic Applications,"There is a growing concern about the increasing vulnerability of future computing systems to errors in the underlying hardware. Traditional redundancy techniques are expensive for designing energy-efficient systems that are resilient to high error rates. 
We present Error Resilient System Architecture (ERSA), a robust system architecture which targets emerging killer applications such as recognition, mining, and synthesis (RMS) with inherent error resilience, and ensures high degrees of resilience at low cost. Using the concept of configurable reliability, ERSA may also be adapted for general-purpose applications that are less resilient to errors (but at higher costs). While resilience of RMS applications to errors in low-order bits of data is well-known, execution of such applications on error-prone hardware significantly degrades output quality (due to high-order bit errors and crashes). ERSA achieves high error resilience to high-order bit errors and control flow errors (in addition to low-order bit errors) using a judicious combination of the following key ideas: 1) asymmetric reliability in many-core architectures; 2) error-resilient algorithms at the core of probabilistic applications; and 3) intelligent software optimizations. Error injection experiments on a multicore ERSA hardware prototype demonstrate that, even at very high error rates of 20 errors/flip-flop/10^8 cycles (equivalent to 25000 errors/core/s), ERSA maintains 90% or better accuracy of output results, together with minimal impact on execution time, for probabilistic applications such as K-Means clustering, LDPC decoding, and Bayesian network inference. In addition, we demonstrate the effectiveness of ERSA in tolerating high rates of static memory errors that are characteristic of emerging challenges related to SRAM Vccmin problems and erratic bit errors.",2012,0,4065 5468,Crosstalk Issues of Vicinity Magnets Studied With a Novel Rotating-Coil System,"Taiwan Photon Source (TPS) is a low-emittance synchrotron radiation factory. The lattice magnets are located compactly within the limited space in the storage ring. The mutual proximity of the magnets induces field crosstalk and influences the dynamic aperture of the electron beam. The field becomes distorted, induced by the crosstalk within not only the iron yoke but also the edges of the magnets. Precise measurements with rotating-coil systems were conducted to characterize the integral field quality of the magnets; a new method to detect the center of magnets is presented. Simulation with TOSCA software was undertaken for comparison with these experimental results. We report the field distortions induced by crosstalk on the basis of our measurements and simulations. A misalignment effect between the quadrupole and sextupole magnets is discussed.",2012,0, 5469,Risk-informed Preventive Maintenance optimization,"The risk management group at the South Texas Project Electric Generating Station (STPEGS) has successfully developed a Preventive Maintenance (PM) optimization application based on a new mathematical model developed in collaboration with the University of Texas at Austin. This model uses historical maintenance data from the STPEGS work management database. Robust statistical analysis, coupled with an efficient algorithm generates an optimal PM schedule, based on a Non-Homogenous Poisson Process (NHPP) with a power law failure rate function. In addition, the risk associated with significant plant events triggered by a component failure is appropriately captured in the Corrective Maintenance (CM) cost estimates. The probabilities of such events are modeled via fault tree analysis, and consequences are expressed as monetary costs. 
The net cost of CM is then modified by a weighted sum of the probability of each event multiplied by its monetary cost. The ratio of risk-adjusted CM cost to PM cost is used with the failure rate parameters to calculate the optimum PM frequency that minimizes combined CM and PM costs. The software can evaluate individual components or entire systems of components. Several low-risk ranked systems have been evaluated. In this paper we present the results of these evaluations.",2012,0, 5470,Comparison modeling of system reliability for future NASA projects,"A National Aeronautics and Space Administration (NASA) supported Reliability, Maintainability, and Availability (RMA) analysis team developed a RMA analysis methodology that uses cut set and importance measure analyses to compare model proposed avionics computing architectures. In this paper we will present an effective and efficient application of the RMA analysis methodology for importance measures that includes Reliability Block Diagram (RBD) Analysis, Comparison modeling, Cut Set Analysis, and Importance Measure Analysis. In addition, we will also demonstrate that integrating RMA early in the system design process is a key and fundamental decision metric that supports design selection. The RMA analysis methodology presented in this paper and applied to the avionics architectures enhances the usual way of predicting the need for redundancy based on failure rates or subject matter expert opinion. Typically, RBDs and minimal cut sets along with the Fussell-Vesely (FV) method are used to calculate importance measures for each functional element in the architecture [1]. This paper presents an application of the FV importance measures and presents it as a methodology for using importance measures in success space to compare architectures. These importance measures are used to identify which functional element is most likely to cause a system failure, thus, quickly identifying the path to increase the overall system reliability by either procuring more reliable functional elements or adding redundancy [2]. This methodology that used RBD analysis, cut set analysis, and the FV importance measures allowed the avionics design team to better understand and compare the vulnerabilities in each scenario of the architectures. It also enabled the design team to address the deficiencies in the design architectures more efficiently, while balancing the need to design for optimum weight and space allocations.",2012,0, 5471,Reliability analysis of substation automation system functions,"This paper presents a case study applying a framework developed for the analysis of substation automation system function reliability. The analysis framework is based on Probabilistic Relational Models (PRMs) and includes the analysis of both primary equipment and the supporting information and communication (ICT) systems. Furthermore, the reliability analysis also considers the logical structure and its relation to the physical infrastructure. The system components that are composing the physical infrastructure are set with failure probabilities and depending of the logical structure the reliability of the studied functionality is evaluated. Software failures are also accounted for in the analysis. 
According to the failure logs, software accounts for approximately 35% of the causes of failures related to modern control and protection relays. The framework, including failure probabilities, is applied to a system for voltage control that consists of a voltage transformer with an on-load tap changer and a control system for controlling the tap. The result shows a 96% probability of successful operation over a period of one year for the automatic voltage control. A concluding remark is that when analyzing substation automation system business functions it is important to reduce the modeling effort. The expressiveness of the presented modeling framework has proven somewhat cumbersome when modeling a single business function with a small number of components. Instead, the analysis framework's full usefulness may be expected to arise when a larger number of business functions are evaluated for a system with a high degree of dependency between the components in the physical infrastructure. The identification of accurate failure rates is also a limiting factor for the analysis and is something that is interesting for further work.",2012,0, 5472,On ESL verification of memory consistency for system-on-chip multiprocessing,"Chip multiprocessing is key to mobile and high-end embedded computing. It requires sophisticated multilevel hierarchies where private and shared caches coexist. It relies on hardware support to implicitly manage relaxed program order and write atomicity so as to provide well-defined shared-memory semantics (captured by the axioms of a memory consistency model) at the hardware-software interface. This paper addresses the problem of checking if an executable representation of the memory system complies with a specified consistency model. Conventional verification techniques encode the axioms as edges of a single directed graph, infer extra edges from memory traces, and indicate an error when a cycle is detected. Unlike them, we propose a novel technique that decomposes the verification problem into multiple instances of an extended bipartite graph matching problem. Since the decomposition was judiciously designed to induce independent instances, the target problem can be solved by a parallel verification algorithm. Our technique, which is proven to be complete for several memory consistency models, outperformed a conventional checker for a suite of 2400 randomly-generated use cases. On average, it found a higher percentage of faults (90%) as compared to that checker (69%) and did it, on average, 272 times faster.",2012,0, 5473,A new SBST algorithm for testing the register file of VLIW processors,"Feature size reduction drastically influences permanent fault occurrence in nanometer technology devices. Among the various test techniques, Software-Based Self-Test (SBST) approaches have been demonstrated to be an effective solution for detecting logic defects, although achieving complete fault coverage is a challenging issue due to the functional-based nature of this methodology. When VLIW processors are considered, standard processor-oriented SBST approaches prove deficient since they are not able to cope with most of the failures affecting the multiple parallel domains of a VLIW. In this paper we present a novel SBST algorithm specifically oriented to test the register files of VLIW processors. In particular, our algorithm addresses the cross-bar switch architecture of the VLIW register file by completely covering the intrinsic faults generated between the multiple computational domains. 
Fault simulation campaigns comparing previously developed methods with our solution demonstrate its effectiveness. The results show that the developed algorithm achieves 97.12% fault coverage, roughly twice that achieved by previously developed SBST algorithms. Further advantages of our solution are the limited overhead in terms of execution cycles and memory occupation.",2012,0, 5474,"CrashTest'ing SWAT: Accurate, gate-level evaluation of symptom-based resiliency solutions","Current technology scaling is leading to increasingly fragile components, making hardware reliability a primary design consideration. Recently, researchers have proposed low-cost reliability solutions that detect hardware faults through software-level symptom monitoring. SWAT (SoftWare Anomaly Treatment), one such solution, demonstrated with microarchitecture-level simulations that symptom-based solutions can provide high fault coverage and a low Silent Data Corruption (SDC) rate. However, more accurate evaluations are needed to validate such solutions for hardware faults in real-world processor designs. In this paper, we evaluate SWAT's symptom-based detectors on gate-level faults using an FPGA-based, full-system prototype. With this platform, we performed a gate-level accurate fault injection campaign of 51,630 fault injections in the OpenSPARC T1 core logic across five SPECInt 2000 benchmarks. With an overall SDC rate of 0.79%, our results are comparable to previous microarchitecture-level evaluations of SWAT, demonstrating the effectiveness of symptom-based software detectors for permanent faults in real-world designs.",2012,0, 5475,Flexible and Smart Online Monitoring and Fault Diagnosis System for Rotating Machinery,"Monitoring the vibration signals of rotating machinery and, further, assessing the safety of equipment play a significant role in ensuring the security of equipment and in saving maintenance costs. This paper integrates the idea of ""configuration"" from industrial control software and develops a ""flexible"" network-based online monitoring and fault diagnosis system. The network topology, configuration module, database, data acquisition workstation and monitoring components are presented. With the smart data acquisition strategy and strong adaptive monitoring tools, the system can be applied to various kinds of rotating machinery, and a practical application of the system is introduced.",2012,0, 5476,Impact Analysis Using Static Execute After in WebKit,"Insufficient propagation of changes causes the majority of regression errors in heavily evolving software systems. Impact analysis of a particular change can help identify those parts of the system that also need to be investigated and potentially propagate the change. A static code analysis technique called Static Execute After can be used to automatically infer such impact sets. The method is safe and comparable in precision to more detailed analyses. At the same time it is significantly more efficient, hence we could apply it to different large industrial systems, including the open source WebKit project. We overview the benefits of the method, its existing implementations, and present our experiences in adapting the method to such a complex project. Finally, using this particular analysis on the WebKit project, we verify whether, by applying the method, we can actually predict the required change propagation and hence reduce regression errors. 
We report on the properties of the resulting impact sets computed for the change history, and their relationship to the actual fixes required. We looked at actual defects provided by the regression test suite along with their fixes taken from the version control repository, and compared these fixes to the predicted impact sets computed at the changes that caused the failing tests. The results show that the method is applicable for the analysis of the system, and that the impact sets can predict the required changes in a fair number of cases, but that there are still open issues for the improvement of the method.",2012,0, 5477,On the Comparison of User Space and Kernel Space Traces in Identification of Software Anomalies,"Corrective software maintenance consumes 30-60% of the time spent on software maintenance activities. Automated failure reporting has been introduced to facilitate developers in debugging failures during corrective maintenance. However, reports of software with large user bases overwhelm developers in identification of the origins of faults, and in many cases it is not known whether reports of failures contain information about faults. Prior techniques employ different classification or anomaly detection algorithms on user space traces (e.g., function calls) or kernel space traces (e.g., system calls) to detect anomalies in software behaviour. Each algorithm and type of tracing (user space or kernel space) has its advantages and disadvantages. For example, user space tracing is useful in detailed analysis of anomalous (faulty) behaviour of a program, whereas kernel space tracing is useful in identifying system intrusions, program intrusions, or malicious programs even if the source program code is different. If one type of tracing or algorithm is infeasible to implement, then it is important to know whether we can substitute another type of tracing and algorithm. In this paper, we compare user space and kernel space tracing by employing different types of classification algorithms on the traces of various programs. Our results show that kernel space tracing can be used to identify software anomalies with better accuracy than user space tracing. In fact, the majority of software anomalies (approximately 90%) in a software application can be best identified by using a classification algorithm on kernel space traces.",2012,0, 5478,Software Evolution Prediction Using Seasonal Time Analysis: A Comparative Study,"Prediction models of software change requests are useful for supporting rational and timely resource allocation to the evolution process. In this paper we use a time series forecasting model to predict software maintenance and evolution requests in an open source software project (Eclipse), as an example of projects with seasonal release cycles. We build an ARIMA model based on data collected from Eclipse's change request tracking system since the project's start. A change request may refer to defects found in the software, but also to suggested improvements in the system under scrutiny. Our model includes the identification of seasonal patterns and tendencies, and is validated through the forecast of the change requests' evolution for the next 12 months. The usage of seasonal information significantly improves the estimation ability of this model, when compared to other ARIMA models found in the literature, and does so for a much longer estimation period. 
Being able to accurately forecast the change requests' evolution over a fairly long time period is an important ability for enabling adequate process control in maintenance activities, and facilitates effort estimation and timely resource allocation. The approach presented in this paper is suitable for projects with a relatively long history, as the model building process relies on historic data.",2012,0, 5479,Filtering Bug Reports for Fix-Time Analysis,"Several studies have experimented with data mining algorithms to predict the fix-time of reported bugs. Unfortunately, the fix-times as reported in typical open-source cases are heavily skewed, with a significant number of reports registering fix-times of less than a few minutes. Consequently, we propose to include an additional filtering step to improve the quality of the underlying data in order to gain better results. Using a small-scale replication of a previously published bug fix-time prediction experiment, we show that the additional filtering of reported bugs indeed improves the outcome of the results.",2012,0, 5480,Feature Identification from the Source Code of Product Variants,"In order to migrate software products which are deemed similar into a product line, it is essential to identify the common features and the variations between the product variants. This can however be tedious and error-prone, as it may involve browsing complex software and a lot of more or less similar variants. Fortunately, if artefacts of the product variants (source code files and/or models) are available, feature identification can be at least partially automated. In this paper, we thus propose a three-step approach to feature identification from source code, of which the first two steps are automated.",2012,0, 5481,Pragmatic design quality assessment,"Summary form only given. Assessing and improving quality is paramount in every engineering discipline. Software engineering, however, is not considered a classical engineering activity for several reasons, such as intrinsic complexity and lack of rigor. In general, if a software system is delivering the expected functionality, only in a few cases do people see the need to analyze the internals. This tutorial aims to offer a pragmatic approach to analyzing the quality of software systems. On the one hand, it will offer a brief theoretical background on detecting quality problems by using and combining metrics, and by providing visual evidence of the state of affairs in the system. On the other hand, as analyzing real systems requires adequate tool support, the tutorial will offer an overview of the problems that occur in using such tools and provide a practical demonstration of using state-of-the-art tools on a real case study.",2012,0, 5482,Software Quality Model and Framework with Applications in Industrial Context,"Software Quality Assurance involves all stages of the software life cycle, including development, operation and evolution. Low-level measurements (product and process metrics) are used to predict and control higher-level quality attributes. There exists a large body of proposed metrics, but their interpretation and the way of connecting them to actual quality management goals is still a challenge. In this work, we present our approach for modelling, collecting, storing and evaluating such software measurements, which can deal with all types of metrics collected at any stage of the life cycle. 
The approach is based on the Goal Question Metric paradigm, and its novelty lies in a unified representation of the metrics and the questions that evaluate them. It allows the definition of various complex questions involving different types of metrics, while the supporting framework enables the automatic collection of the metrics and the calculation of the answers to the questions. We demonstrate the applicability of the approach in three industrial case studies: two instances at local software companies with different quality assurance goals, and an application to a large open source system with a question related to testing and complexity, which demonstrates the complex use of different metrics to achieve a higher-level quality goal.",2012,0, 5483,Optical Fiber Bus Protection Network to Multiplex Sensors: Experimental Validation of Self-Diagnosis,"The experimental demonstration of a resilient wavelength division multiplexed fiber bus network to interconnect sensors is reported. The network recovers operation after failures and it performs self-diagnosis, the identification of the failed constituent(s) from the patterns of surviving end-to-end connections at its operating wavelengths. We provide clear evidence for the channel arrivals predicted by theory. In doing so, we explore the potential for spurious signals caused by reflections from broken fiber ends. Appropriate precautionary measures, especially the imposition of electronic thresholds at the receivers, can greatly reduce the scope for false diagnoses. Software to predict the failure site within the network from the arriving channels at the receivers is also reported. We describe how to coordinate self-diagnosis with protection switching so as to reduce the momentary service interruption.",2012,0, 5484,Current injection disturbance based voltage drift control for anti-islanding of photovoltaic inverter system,"Islanding detection is necessary because islanding causes power quality issues, equipment damage and personnel hazards. This paper proposes a new active anti-islanding scheme for an inverter-based photovoltaic system connected to the grid. This method is based on injecting a current disturbance at the PV inverter output and observing the behavior of the voltage at the point of common coupling (PCC) in the absence of the grid, which depends upon the load connected to the PV inverter in an island condition. The proposed control scheme is based on the Synchronous Reference Frame, or dq frame. The voltage drift scheme, using a control gain and the current command reference as positive feedback, is utilized to drift the PCC voltage beyond the threshold limits to detect an island within 2 seconds as prescribed by IEEE 1547. The test system configuration and parameters for the anti-islanding study are prepared based on IEEE 929 standards. The proposed control scheme is implemented on a constant-power-controlled inverter system with reliable islanding detection. The effectiveness of the proposed anti-islanding scheme is validated by extensive simulation in the MATLAB platform.",2012,0, 5485,Induction machine fault diagnosis using microcontroller and Real Time Digital Simulation unit,"As an approach to diagnosing the various types of faults that commonly occur in induction machines, this paper describes a monitoring and analysis system. The induction machine model and its various types of faults are simulated using a Real Time Digital Simulation (RTDS) unit. 
The signal corresponding to the simulation can be taken out of the RTDS unit, which is interfaced with a microcontroller for its acquisition in a PC. The PC-based software can store it, and the fault detection algorithm (sequence-component based) runs over it to detect and diagnose the fault. Encouraging results are obtained.",2012,0, 5486,An Effective Solution to Task Scheduling and Memory Partitioning for Multiprocessor System-on-Chip,"The growing trend in current complex embedded systems is to deploy a multiprocessor system-on-chip (MPSoC). An MPSoC consists of multiple heterogeneous processing elements, a memory hierarchy, and input/output components which are linked together by an on-chip interconnect structure. Such an architecture provides the flexibility to meet the performance requirements of multimedia applications while respecting the constraints on memory, cost, size, time, and power. Many embedded systems employ software-managed memories known as scratch-pad memories (SPM). Unlike caches, SPMs are software-controlled and hence the execution time of applications on such systems can be accurately predicted. Scheduling the tasks of an embedded application on the processors and partitioning the available SPM budget among these processors are two critical issues in such systems. Often, these are considered separately; such a decoupled approach may miss better quality schedules. In this paper, we present an integrated approach to task scheduling and SPM partitioning to further reduce the execution time of embedded applications. Results on several real-life benchmarks show the significant improvement from our proposed technique.",2012,0, 5487,International Space Station power system requirements models and simulation,"The International Space Station (ISS) Payload Engineering Integration (PEI) organization adopted advanced computation and simulation technology to develop integrated electrical system models based on the test data of the various sub-units to address specific power system design requirements. This system model was used to assess the power system requirements for assuring: (1) Compatibility of loads with delivered power, (2) Compatibility of loads with protective devices, (3) Stability of the integrated system, and (4) Fault tolerance of the EPS and other loads. PEI utilizes EMA Design Automation PSPICE software for modeling and simulating the steady-state voltage, voltage transients, reverse current, surge current, source and load impedance, large signal stability, and fault characteristics of the integrated electrical systems based on the various sub-unit test data. PSPICE provides dynamic system modeling, simulation and data analysis for large-scale system integration. Modeling is valuable at the initial design stage since it enables experimentation, exploration and development without expensive and time-consuming modifications. However, with the complexity of the system interactions among all sub-units provided by various developers and suppliers, it is difficult to model an integrated system or verify that a system model meets all the requirements of its design specifications. In addition, the changes to system requirements demand frequent redesigns and reimplementation of many systems and sub-unit components. 
The benefits provided by modeling from conventional test data are: (1) Relatively low cost, (2) Identification of potential system integration problems early in the program, (3) Extrapolation of test data to verify system performance over the entire operating envelope, and (4) Flexibility in developing system integration models. The modeling of an integrated system based on system and sub-unit test data enables organizations to predict and improve system performance and to conduct efficient trade studies of system architecture. This comprehensive model can then be used directly in standard downstream processes such as rapid prototyping and risk mitigation in the product life cycle. The detailed modeling from conventional data will be discussed in the presentation.",2012,0, 5488,Wind shear detection for small and improvised airfields,"The goal of this project is to produce an inexpensive yet highly reliable system for detecting wind shear at small and improvised airfields. While the largest commercial airports do have wind shear detection systems and they have proven to be life savers, the overwhelming majority of places where aircraft land have no such protection. The system described here is self-organizing, redundant and highly fault tolerant. It has no single point of failure and can continue operation even after a significant number of node failures. It uses inexpensive, off-the-shelf hardware and can quickly be deployed in rough conditions by minimally skilled personnel, providing pilots with potentially life-saving real time data. It is eminently suitable for use on improvised airfields for military and disaster response purposes. It can also be easily reprogrammed to detect wake turbulence instead of or in addition to low-level wind shear. The working prototype has only five nodes and is too small to protect a real airfield, but tests show that the architecture is scalable to over 100 nodes without modification, which is enough for an airfield of significant size. Using nothing but off the shelf components with novel software, the prototypes were ready for initial field trials in less than six months from initial concept.",2012,0, 5489,Detection Technology of Train Signal Based on Loop Current Acquisition System,"An ARM processor and external hardware circuitry are adopted to assemble an embedded acquisition system for train signals. This system is complete and reliable. It can accurately measure the signal loop current RMS, which is sent to the monitoring equipment as required. The computing speed is fast, and the sampling accuracy and speed are much improved; sampling can be carried out several times in a half cycle. A series of measures, such as hardware filtering, transformer isolation and software filtering, is applied to process the input signal so that external interference is avoided. The system works well in predicting the reliability of signal faults.",2012,0, 5490,Single-path and multi-path label switched path allocation algorithms with quality-of-service constraints: performance analysis and implementation in NS2,"The choice of the path computation algorithm is a key factor in designing efficient traffic engineering strategies in multi-protocol label switching networks, and different approaches have been proposed in the literature. The effectiveness of a path computation algorithm should be evaluated against its ability to optimise the utilisation of network resources as well as to satisfy both current and future label switched path allocation requests. 
Although powerful and flexible simulation tools might be useful to assist a network manager in the selection of proper algorithms, state-of-the-art simulators and network planning tools do not currently offer suitable support. This study deals with the design and performance evaluation of multi-constraint path computation algorithms. To this aim, ad hoc software modules have been developed and integrated within the MTENS simulator. New single-path and multi-path computation algorithms have been proposed and compared in terms of number of accepted requests, success probability, network resources utilisation and execution time. Finally, some guidelines and recommendations for the selection of path computation algorithms have also been provided.",2012,0, 5491,Combined profiling: A methodology to capture varied program behavior across multiple inputs,"This paper introduces combined profiling (CP): a new practical methodology to produce statistically sound combined profiles from multiple runs of a program. Combining profiles is often necessary to properly characterize the behavior of a program to support Feedback-Directed Optimization (FDO). CP models program behaviors over multiple runs by estimating their empirical distributions, providing the inferential power of probability distributions to code transformations. These distributions are built from traditional single-run point profiles; no new profiling infrastructure is required. The small fixed size of this data representation keeps profile sizes, and the computational costs of profile queries, independent of the number of profiles combined. However, when using even a single program run, a CP maintains the information available in the point profile, allowing CP to be used as a drop-in replacement for existing techniques. The quality of the information generated by the CP methodology is evaluated in LLVM using SPEC CPU 2006 benchmarks.",2012,0, 5492,An Efficient Run Time Control Flow Errors Detection by DCT Technique,"The DCT is usually used in image processing, but in this paper we use it to detect run-time control flow errors. Using the branch instructions, a program is first divided into several data computing blocks (DCBs); each DCB can then be recognized as an image. To get the signatures of each DCB, we then use the one-dimensional discrete cosine transform (1-D DCT) to compute each DCB and generate a 5-bit relay DCT signature (R-DCT-S) and a 32-bit final DCT signature (F-DCT-S). These generated signatures are embedded into the instruction memory and used to do the run-time error checking. For the watchdog, the extra hardware should not reduce processor performance, increase the fault detection latency, or increase the memory overhead needed to store the signatures. To limit processor performance degradation, the whole-block error checking is done after the branch instruction, the fault detection latency is improved by checking intermediate errors at the R-type instructions, and the memory overhead is reduced by storing the R-DCT-S in the unused sections of the R-type instructions. The experimental results show that the proposed watchdog achieves very high error detection coverage and short error detection latency in detecting either single or multiple faults.",2012,0, 5493,"The 18 mm Laboratory: Teaching MEMS Development With the SUMMiT Foundry Process","This paper describes the goals, pedagogical system, and educational outcomes of a three-semester curriculum in microelectromechanical systems (MEMS). 
The sequence takes engineering students with no formal MEMS training and gives them the skills to participate in cutting-edge MEMS research and development. The evolution of the curriculum from in-house fabrication facilities to an industry-standard foundry process affords an opportunity to examine the pedagogical benefits of the latter approach. Outcomes that are assessed include the number of students taking the classes, the quality of work produced by students, and the research that has emanated from class projects. Three key elements of the curriculum are identified: 1) extensive use of virtual design and process simulation software tools; 2) fabrication of student-designed devices for physical characterization and testing; and 3) integration of a student design competition. This work strongly leveraged the university outreach activities of Sandia National Laboratories (SNL) and the SNL SUMMiT MEMS design and fabrication system. SNL provides state-of-the-art design tools and device fabrication and hosts a yearly nationwide student design competition. Student MEMS designs developed using computer-aided design (CAD) and finite element analysis (FEA) software are fabricated at SNL and returned on 18-mm die modules for characterization and testing. One such module may contain a dozen innovative student projects. Important outcomes include an increase in enrollment in the introductory MEMS class, external research funding and archival journal publications arising from student designs, and consistently high finishes in the SNL competition. Since the SNL offerings are available to any US college or university, this curriculum is transportable in its current form.",2012,0, 5494,Utilizing a Smart Grid Monitoring System to Improve Voltage Quality of Customers,"The implementation of smart grids will fundamentally change the approach of assessing and mitigating system voltage deficiencies on an electric distribution system. Many distribution companies have historically identified customer level voltage deficiencies utilizing a reactive approach that relies upon customer complaints. The monitoring capabilities of a smart grid will allow utilities to proactively identify events that exceed the voltage threshold limitations set forth by ANSI Std. C84.1 before they become a concern to end-users. This proactive approach can reduce customer complaints and possibly operational costs. This paper describes an approach for determining voltage threshold limits as a function of duration for proactive voltage investigations utilizing smart grid monitoring equipment. The described approach was applied to a smart grid located in Boulder, Colorado. This paper also describes the results of this two-year study.",2012,0, 5495,Fault detection of system level SoC model,"Effective process models and methods for diagnosing functional failures in software and/or hardware are offered. Register or matrix (tabular) data structures, oriented toward parallel execution of logic operations, are used for detecting the faulty components.",2012,0, 5496,The evaluation of 3D traverses of three different distance lengths toward the quality of the network for Deformation Survey,"A 3D traverse network survey is one of the methods used by surveyors and engineers as a control network for deformation monitoring, since not every position surrounding a building or high-rise building is suitable for GPS observation, yet such positions are unavoidably required for monitoring of the building. 
Generally, such deformation surveys are able to detect relative and even absolute movements for monitoring of a pump base, storage tanks, retaining walls, slope areas and general ground subsidence. In this investigation, the observation errors need to be evaluated, because the lengths of the lines observed, the equipment used or the condition of the study area could introduce errors. Therefore, a 3D traverse network is observed using a high-accuracy Topcon Total Station (ES 105), with 1-second angular and 2-mm distance accuracy, to evaluate the impact of the distance of each traverse line on the quality of the closed 3D traverse. This 3D traversing is expected to generate a statistical summary test after proper processing in the Microsurvey StarNet software system. In the StarNet system, the quality of the 3D traverse is evaluated based on the chi-square level achieved after the data have been processed in the system. The result of the Chi-Square Test (Total Error Factor) from the adjustment must lie within the Upper and Lower Bounds for the data being processed to pass the adjustment. From this adjustment result the output is produced and the quality can be evaluated.",2012,0, 5497,Using Parameterized Attributes to Improve Testing Capabilities with Domain-Specific Modeling Languages,"Domain-specific modeling languages (DSMLs) show promise in improving model-based testing and experimentation (T&E) capabilities for software systems. This is because their intuitive graphical languages reduce the complexities associated with error-prone, tedious, and time-consuming tasks. Despite the benefits of using DSMLs to facilitate model-based T&E, it is hard for testers to capture many variations of similar tests without manually duplicating modeling effort. This paper therefore presents a method called parameterized attributes that is used to capture points-of-variation in models. It also shows how parameterized attributes are realized in an open-source tool named the Generic Modeling Environment (GME) Template Engine. Finally, this paper quantitatively evaluates applying parameterized attributes to T&E of a representative distributed software system. Experience and results show that parameterized attributes can reduce modeling effort after an initial model (or design) is constructed.",2012,0, 5498,UFlood: High-throughput flooding over wireless mesh networks,"This paper proposes UFlood, a flooding protocol for wireless mesh networks. UFlood targets situations such as software updates where all nodes need to receive the same large file of data, and where limited radio range requires forwarding. UFlood's goals are high throughput and low airtime, defined respectively as rate of completion of a flood to the slowest receiving node and total time spent transmitting. The key to achieving these goals is good choice of sender for each transmission opportunity. The best choice evolves as a flood proceeds in ways that are difficult to predict. UFlood's core new idea is a distributed heuristic to dynamically choose the senders likely to lead to all nodes receiving the flooded data in the least time. The mechanism takes into account which data nearby receivers already have as well as internode channel quality. The mechanism includes a novel bit-rate selection algorithm that trades off the speed of high bit-rates against the larger number of nodes likely to receive low bitrates. 
Unusually, UFlood uses both random network coding to increase the usefulness of each transmission and detailed feedback about what data each receiver already has; the feedback is critical in deciding which node's coded transmission will have the most benefit to receivers. The required feedback is potentially voluminous, but UFlood includes novel techniques to reduce its cost. The paper presents an evaluation on a 25-node 802.11 test-bed. UFlood achieves 150% higher throughput than MORE, a high-throughput flooding protocol, using 65% less airtime. UFlood uses 54% less airtime than MNP, an existing efficient protocol, and achieves 300% higher throughput.",2012,0, 5499,An Optimized Compilation of UML State Machines,"Due to the definition of fUML (Foundational Subset for Executable UML Models) along with its action language Alf (Action Language for fUML), UML (Unified Modeling Language) allows the production of executable models on which early verification and validation activities can be conducted. Despite this effort of standardization and the widespread use of UML in industry, developers still hand-tune the code generated from models to correct, enhance or optimize it. This results in a gap between the model and the generated code. Manual code tuning, apart from being error-prone, can invalidate all the analysis and validation already done on the model. To avoid the drawbacks of hand tuning the code, and since UML is becoming an executable language, we propose a new Model Based Development (MBD) approach that skips the code generation step by compiling UML models directly. The biggest challenge for this approach - tackled in this paper - is to propose a model compiler that is more efficient than a code compiler for UML models. Our model compiler performs optimizations that code compilers are unable to perform, resulting in more compact assembly code.",2012,0, 5500,A Novel Self-Adaptive Fault-Tolerant Mechanism and Its Application for a Dynamic Pervasive Computing Environment,"In pervasive computing systems, the increasing dynamism and complexity of software and hardware resources and the frequent interaction among functional components make fault-tolerant design very challenging. In this paper, we propose a novel self-adaptive fault-tolerant mechanism for a dynamic pervasive computing environment such as a mobile ad hoc network. In our approach, the self-adaptive fault-tolerant mechanism is dynamically built according to the various types of detected faults, based on continuous monitoring and analysis of the component states. We put forward the architecture of the fault-tolerant system and a policy-based fault-tolerant scheme, which adopts a three-dimensional array of core features to capture spatial and temporal variability together with Event-Condition-Action rules. The mechanism has been designed and implemented as self-adaptive fault-tolerant middleware, called SAFTM for short, on a preliminary prototype for a dynamic pervasive computing environment such as a mobile ad hoc network. We have performed experiments to evaluate the efficiency of the fault-tolerant mechanism. The results of the experiments show that the performance of the self-adaptive fault-tolerant mechanism is realistic.",2012,0, 5501,Behavioral patterns in voltage transformer for ferroresonance detection,"Ferroresonance can severely affect voltage transformers, causing quality and security problems. The possibility of a ferroresonance phenomenon appearing is mainly based on an existing series-connected capacitance and a nonlinear inductance. 
However, the factors that may influence it are not limited to these considerations, but also include several constructive, design, operation and protection parameters. This paper analyses the process of obtaining the ferroresonant behaviour of a medium voltage (MV) phase-to-phase voltage transformer under different operating conditions. The study is developed by software simulation in order to characterize several ferroresonant behavioral patterns of the voltage transformer. This characterization may be essential for future methodologies to detect and suppress this phenomenon, which is frequently regarded as unpredictable or random.",2012,0, 5502,Decision fusion software system for turbine engine fault diagnostics,"The sophistication and complexity of current turbine engines have mandated the need for advanced fault diagnostics for monitoring the health condition of turbine engines. A critical component of these advanced diagnostic systems is the decision fusion software. The purpose of the decision fusion software system is to increase diagnostic reliability and accuracy, and to improve the safety of engine operation. It also helps decrease diagnostic false alarms and hence saves maintenance time. This paper focuses on the development and implementation of a decision-fusion software system for enhancing the diagnosis of turbine engines. The paper describes how a fuzzy logic system is used to predict and diagnose turbine engine health conditions at different levels based on the health parameters, i.e., efficiency and flow. In this paper, the decision fusion software system was broken down into two subsystems, namely the Decision Making Subsystem (DMS) and the Decision Fusion Subsystem (DFS). The goal of the DMS is to predict the health condition of the engine components, while the objective of the DFS is to assess the overall health condition of the engine based on information provided by the DMS. The test results of the developed fusion software system are promising in providing reliable diagnostics for turbine engines, subsequently reducing maintenance cost. All the system development steps and testing results on the commercial-grade turbine engine model C-MAPSS will be presented in this paper.",2012,0, 5503,On Resource Overbooking in an Unmanned Aerial Vehicle,"Large variations in the execution times of algorithms characterize many cyber-physical systems (CPS). For example, variations arise in the case of visual object-tracking tasks, whose execution times depend on the contents of the current field of view of the camera. In this paper, we study such a scenario in a small Unmanned Aerial Vehicle (UAV) system with a camera that must detect objects in a variety of conditions ranging from the simple to the complex. Given resource, weight and size constraints, such cyber-physical systems do not have the resources to satisfy the hard-real-time requirements of safe flight along with the need to process highly variable workloads at the highest quality and resolution levels. Hence, tradeoffs have to be made in real-time across multiple levels of criticality of running tasks and their operating points. Specifically, the utility derived from tracking an increasing number of objects may saturate when the mission software can no longer perform the required processing on each individual object. In this paper, we evaluate a new approach called ZS-QRAM (Zero-Slack QoS-based Resource Allocation Model) that maximizes the UAV system utility by explicitly taking into account the diminishing returns on tracking an increasing number of objects. 
We perform a detailed evaluation of our approach on our UAV system to clearly demonstrate its benefits.",2012,0, 5504,WatchMyPhone Providing developer support for shared user interface objects in collaborative mobile applications,"Developing collaborative mobile applications is a tedious and error-prone task, as collaboration functionality often has to be developed from scratch. To ease this process, we propose WatchMyPhone, a developer toolkit focused on the creation of collaborative applications on mobile devices, especially the sharing of user interface components such as text views in a multi-user setting. We provide an implementation of the toolkit as well as a demo application for shared text editing based on Android. WatchMyPhone is integrated in the Mobilis open source framework, thus adding another facet to enable fast and efficient development of mobile social apps.",2012,0, 5505,Development of an inductive concentration measurement sensor of nano sized zero valent iron,"The injection of colloidal nano sized zero valent iron (nZVI) into a contaminated aquifer is a promising new in-situ groundwater remediation technique. An inductive sensor is presented to directly detect and measure the concentration of nZVI in the subsurface. The method is based on the inductive measurement of magnetic material properties of nZVI within an alternating magnetic field. The change of magnetic flux density generated by one coil is determined by measuring an induced voltage in a second coil. Numerical simulations with the finite element software COMSOL Multiphysics were performed to optimize the sensor design. Furthermore, these components were used to analyze the possible measuring range, taking into account the accuracy of the measuring device to be used. Since the susceptibility of nZVI in the aquifer is very small, it is necessary to use a background measurement to improve the sensitivity of the measurement system. Finally, the measuring concept was experimentally verified.",2012,0, 5506,Assessment of free basic electricity and use of pre-paid meters in South Africa,"In 2000, the African National Congress (ANC), through its election manifesto, made promises to provide free basic services to all poor South Africans. This was later quantified as 6 000 litres of water and 50 kWh of free basic electricity (FBE) monthly per household. Regarding the issuance of FBE, qualifying residents were registered and had to agree to a pre-paid meter being installed. It is argued that the quantity of free basic electricity provided to poor households is inadequate to meet basic needs and improvement of the quality of life. Conversely, there has been resistance to installation and use of pre-paid electricity meters, especially in townships around Johannesburg. Although prepayment systems have been proposed as innovative solutions to the problem of non-payment and affordability in utility services, the use of such mechanisms is still controversial. This paper reviews and assesses free basic electricity and the use of pre-paid electricity meters in South Africa. It also contributes to the on-going debate on FBE and prepayment systems. Recommendations are given on creating viable and stable institutions to curb uncertainties in the provision of electricity services, and methods for identifying changes in aggregate welfare resulting from the adoption of pre-paid electricity meters. 
Information from this article can be useful for policy-making purposes in other developing countries facing resistance in the marketing, dissemination and installation of pre-paid meters.",2012,0, 5507,Overbooking-Based Resource Allocation in Virtualized Data Center,"Efficient resource management in the virtualized data center is always a practical concern and has attracted significant attention. In particular, an economic allocation mechanism is desired to maximize the revenue for commercial cloud providers. This paper uses overbooking from Revenue Management to avoid resource over-provisioning according to runtime demand. We propose an economic model to control the overbooking policy while providing users with a probability-based performance guarantee using risk estimation. To cooperate with the overbooking policy, we optimize the VM placement with a traffic-aware strategy to satisfy applications' QoS requirements. We design the GreedySelePod algorithm to achieve traffic localization in order to reduce network bandwidth consumption, especially the network bottleneck bandwidth, so as to accept more requests and increase revenue in the future. The simulation results show that our approach can greatly improve the request acceptance rate and increase the revenue by up to 87% with acceptable resource conflicts.",2012,0, 5508,Fingerprint enhancement using contextual iterative filtering,"The performance of Automatic Fingerprint Identification Systems (AFIS) relies on the quality of the input fingerprints, so the enhancement of noisy images is a critical step. We propose a new fingerprint enhancement algorithm that selectively applies contextual filtering starting from automatically-detected high-quality regions and then iteratively expands toward low-quality ones. The proposed algorithm does not require any prior information like local orientations or frequencies. Experimental results over both real (FVC2004 and FVC2006) and synthetic (generated by the SFinGe software) fingerprints demonstrate the effectiveness of the proposed method.",2012,0, 5509,@tComment: Testing Javadoc Comments to Detect Comment-Code Inconsistencies,"Code comments are important artifacts in software. Javadoc comments are widely used in Java for API specifications. API developers write Javadoc comments, and API users read these comments to understand the API, e.g., reading a Javadoc comment for a method instead of reading the method body. An inconsistency between the Javadoc comment and body for a method indicates either a fault in the body or, effectively, a fault in the comment that can mislead the method callers to introduce faults in their code. We present a novel approach, called @TCOMMENT, for testing Javadoc comments, specifically method properties about null values and related exceptions. Our approach consists of two components. The first component takes as input source files for a Java project and automatically analyzes the English text in Javadoc comments to infer a set of likely properties for a method in the files. The second component generates random tests for these methods, checks the inferred properties, and reports inconsistencies. We evaluated @TCOMMENT on seven open-source projects and found 29 inconsistencies between Javadoc comments and method bodies. 
We reported 16 of these inconsistencies, and 5 have already been confirmed and fixed by the developers.",2012,0, 5510,Behaviourally Adequate Software Testing,"Identifying a finite test set that adequately captures the essential behaviour of a program such that all faults are identified is a well-established problem. Traditional adequacy metrics can be impractical, and may be misleading even if they are satisfied. One intuitive notion of adequacy, which has been discussed in theoretical terms over the past three decades, is the idea of behavioural coverage: if it is possible to infer an accurate model of a system from its test executions, then the test set must be adequate. Despite its intuitive basis, it has remained almost entirely in the theoretical domain because inferred models have been expected to be exact (generally an infeasible task), and have not allowed for any pragmatic interim measures of adequacy to guide test set generation. In this work we present a new test generation technique that is founded on behavioural adequacy, which combines a model evaluation framework from the domain of statistical learning theory with search-based white-box test generation strategies. Experiments with our BESTEST prototype indicate that such test sets not only come with a statistically valid measurement of adequacy, but also detect significantly more defects.",2012,0, 5511,A Scalable Distributed Concolic Testing Approach: An Empirical Evaluation,"Although testing is a standard method for improving the quality of software, conventional testing methods often fail to detect faults. Concolic testing attempts to remedy this by automatically generating test cases to explore execution paths in a program under test, helping testers achieve greater coverage of program behavior in a more automated fashion. Concolic testing, however, consumes a significant amount of computing time to explore execution paths, which is an obstacle toward its practical application. To address this limitation, we have developed a scalable distributed concolic testing framework that utilizes large numbers of computing nodes to generate test cases in a scalable manner. In this paper, we present the results of an empirical study that shows that the proposed framework can achieve a several orders-of-magnitude increase in test case generation speed compared to the original concolic approach, and also demonstrates clear potential for scalability.",2012,0, 5512,Dynamic Backward Slicing of Model Transformations,"Model transformations are a frequently used means for automating software development in various domains to improve quality and reduce production costs. Debugging of model transformations often necessitates identifying parts of the transformation program and the transformed models which have causal dependence on a selected statement. In traditional programming environments, program slicing techniques are widely used to calculate control and data dependencies between the statements of the program. Here, we introduce program slicing for model transformations, where the main challenge is to simultaneously assess data and control dependencies over the transformation program and the underlying models of the transformation. In this paper, we present a dynamic backward slicing approach for both model transformation programs and their transformed models based on automatically generated execution trace models of transformations. 
We evaluate our approach using different transformation case studies.",2012,0, 5513,AutoFLox: An Automatic Fault Localizer for Client-Side JavaScript,"JavaScript is a scripting language that plays a prominent role in modern web applications today. It is dynamic, loosely typed, and asynchronous. In addition, it is extensively used to interact with the DOM at runtime. All these characteristics make JavaScript code error-prone and challenging to debug. JavaScript fault localization is currently a tedious and mainly manual task. Despite these challenges, the problem has received very limited attention from the research community. We propose an automated technique to localize JavaScript faults based on dynamic analysis of the web application, tracing, and backward slicing of JavaScript code. Our fault localization approach is implemented in an open source tool called AutoFLox. The results of our empirical evaluation indicate that (1) DOM-related errors are prominent in web applications, i.e., they form at least 79% of reported JavaScript bugs, (2) our approach is capable of automatically localizing DOM-related JavaScript errors with a high degree of accuracy (over 90%) and no false positives, and (3) our approach is capable of isolating JavaScript errors in a production web application, viz., Tumbler.",2012,0, 5514,Tester Feedback Driven Fault Localization,"Coincidentally correct test cases are those that execute faulty statements but do not cause failures. Such test cases reduce the effectiveness of spectrum-based fault localization techniques, such as Ochiai, because the correlation of failure with the execution of a faulty statement is lowered. Thus, coincidentally correct test cases need to be predicted and removed from the test suite used for fault localization. Techniques for predicting coincidentally correct test cases can produce false positives, such as when one predicts a fixed percentage that is higher than the actual percentage of coincidentally correct test cases. False positives may cause non-faulty statements to be assigned higher suspiciousness scores than the faulty statements. We propose an approach that iteratively predicts and removes coincidentally correct test cases. In each iteration, we present the tester with the set of statements that share the highest Ochiai suspiciousness score. If the tester reports that these statements are not faulty, we use that feedback to determine a number that is guaranteed to be less than or equal to the actual number of coincidentally correct test cases. We predict and remove that number of coincidentally correct test cases, recalculate the suspiciousness scores of the remaining statements, and repeat the process. We evaluated our approach with the Siemens benchmark suite and the Unix utilities grep and gzip. Our approach outperformed an existing approach that predicts a fixed percentage of test cases as coincidentally correct. The results with Ochiai were mixed. In some cases, our approach outperformed Ochiai by up to 67%. In others, Ochiai was more effective.",2012,0, 5515,A Unified Approach for Localizing Non-deadlock Concurrency Bugs,"This paper presents UNICORN, a new automated dynamic pattern-detection-based technique that finds and ranks problematic memory access patterns for non-deadlock concurrency bugs. UNICORN monitors pairs of memory accesses, combines the pairs into problematic patterns, and ranks the patterns by their suspiciousness scores. 
UNICORN detects significant classes of bug types, including order violations and both single-variable and multi-variable atomicity violations, which have been shown to be the most important classes of non-deadlock concurrency bugs. The paper also describes implementations of UNICORN in Java and C++, along with empirical evaluation using these implementations. The evaluation shows that UNICORN can effectively compute and rank the patterns that represent concurrency bugs, and perform computation and ranking with reasonable efficiency.",2012,0, 5516,Test Adequacy Evaluation for the User-database Interaction: A Specification-Based Approach,"Testing a database application is a challenging process where both the database and the user interaction have to be considered in the design of test cases. This paper describes a specification-based approach to guide the design of test inputs (both the test database and the user inputs) for a database application and to automatically evaluate the test adequacy. First, the system specification of the application is modelled: (1) the structure of the database and the user interface are represented in a single model, called Integrated Data Model (IDM), (2) the functional requirements are expressed as a set of business rules, written in terms of the IDM. Then, a MCDC-based criterion is applied over the business rules to automatically derive the situations of interest to be tested (test requirements), which guide the design of the test inputs. Finally, the adequacy of these test inputs is automatically evaluated to determine whether the test requirements are covered. The approach has been applied to the TPC-C benchmark. The results show that it allows designing test cases that are able to detect interesting faults which were located in the procedural code of the implementation.",2012,0, 5517,CrossCheck: Combining Crawling and Differencing to Better Detect Cross-browser Incompatibilities in Web Applications,"One of the consequences of the continuous and rapid evolution of web technologies is the amount of inconsistencies between web browsers implementations. Such inconsistencies can result in cross-browser incompatibilities (XBIs)-situations in which the same web application can behave differently when run on different browsers. In some cases, XBIs consist of tolerable cosmetic differences. In other cases, however, they may completely prevent users from accessing part of a web application's functionality. Despite the prevalence of XBIs, there are hardly any tools that can help web developers detect and correct such issues. In fact, most existing approaches against XBIs involve a considerable amount of manual effort and are consequently extremely time consuming and error prone. In recent work, we have presented two complementary approaches, WEBDIFF and CROSST, for automatically detecting and reporting XBIs. In this paper, we present CROSSCHECK, a more powerful and comprehensive technique and tool for XBI detection that combines and adapts these two approaches in a way that leverages their respective strengths. The paper also presents an empirical evaluation of CROSSCHECK on a set of real-world web applications. 
The results of our experiments show that CROSSCHECK is both effective and efficient in detecting XBIs, and that it can outperform existing techniques.",2012,0, 5518,An Empirical Study of Pre-release Software Faults in an Industrial Product Line,"There is a lack of published studies providing empirical support for the assumption at the heart of product line development, namely, that through structured reuse later products will be less fault-prone. This paper presents results from an empirical study of pre-release fault and change proneness from four products in an industrial software product line. The objectives of the study are (1) to determine the association between various software metrics, as well as their correlation with the number of faults at the component level, (2) to characterize the fault and change proneness at various degrees of reuse, and (3) to determine how existing products in the software product line affect the quality of subsequently developed products and our ability to make predictions. The research results confirm, in a software product line setting, the findings of others that faults are more highly correlated to change metrics than to static code metrics. Further, the results show that variation components unique to individual products have the highest fault density and are the most prone to change. The longitudinal aspect of our research indicates that new products in this software product line benefit from the development and testing of previous products. For this case study, the number of faults in variation components of new products is predicted accurately using a linear model built on data from the previous products.",2012,0, 5519,Challenges for Addressing Quality Factors in Model Transformation,"Designing a high quality model transformation is critical, because it is the pivotal mechanism in many mission applications for evolving the intellectual design described by models. This paper proposes solution ideas to assist modelers in developing high quality transformation models. We propose to initiate a design pattern movement in the context of model transformation. The resulting catalog of patterns shall satisfy quality attributes identified beforehand. Verification and validation of these patterns allow us to assess whether the cataloged design patterns are sound and complete with respect to the quality criteria. This will lead to techniques and tools that can detect bad designs and propose alternatives based on well-thought design patterns during the development or maintenance of model transformation.",2012,0, 5520,Identifying Failure-Inducing Combinations in a Combinatorial Test Set,"A t-way combinatorial test set is designed to detect failures that are triggered by combinations involving no more than t parameters. Assume that we have executed a t-way test set and some tests have failed. A natural question to ask is: What combinations have caused these failures? Identifying such combinations can facilitate the debugging effort, e.g., by reducing the scope of the code that needs to be inspected. In this paper, we present an approach to identifying failure-inducing combinations, i.e., combinations that have caused some tests to fail. Given a t-way test set, our approach first identifies and ranks a set of suspicious combinations, which are candidates that are likely to be failure-inducing combinations. Next, it generates a set of new tests, which can be executed to refine the ranking of suspicious combinations in the next iteration. 
This process can be repeated until a stopping condition is satisfied. We conducted an experiment in which our approach was applied to several benchmark programs. The experimental results show that our approach can effectively and efficiently identify failure-inducing combinations in these programs.",2012,0, 5521,Industrial Application of Concolic Testing on Embedded Software: Case Studies,"Current industrial testing practices often build test cases in a manual manner, which is slow and ineffective. To alleviate this problem, concolic testing generates test cases that can achieve high coverage in an automated fashion. However, due to a large number of possible execution paths, concolic testing might not detect bugs even after spending a significant amount of time. Thus, it is necessary to check if concolic testing can detect bugs in embedded software in a practical manner through case studies. This paper describes case studies of applying the concolic testing tool CREST to embedded applications. Through this project, we have detected new faults in the Samsung Linux Platform (SLP) file manager, the Samsung security library, and BusyBox ls.",2012,0, 5522,Securing Opensource Code via Static Analysis,"Static code analysis (SCA) is the analysis of computer programs that is performed without actually executing the programs, usually by using an automated tool. SCA has become an integral part of the software development life cycle and one of the first steps to detect and eliminate programming errors early in the software development stage. Although SCA tools are routinely used in proprietary software development environments to ensure software quality, application of such tools to the vast expanse of open source code presents a forbidding albeit interesting challenge, especially when open source code finds its way into commercial software. Although there have been recent efforts in this direction, in this paper, we address this challenge to some extent by applying static analysis to a popular open source project, i.e., the Linux kernel, discuss the results of our analysis and, based on our analysis, we propose an alternate workflow that can be adopted while incorporating open source software in a commercial software development process. Further, we discuss the benefits and the challenges faced while adopting the proposed alternate workflow.",2012,0, 5523,A Smart Structured Test Automation Language (SSTAL),"In model-based testing, abstract tests are designed in terms of a model, for example, a path in a graph, and concrete tests are expressed in terms of the implementation of the model. Two problems exist while converting abstract tests to concrete tests: the """"mapping"""" problem and the test oracle problem. Abstract tests cannot be applied directly to the actual program and they must first be mapped to concrete tests. Testers currently solve this mapping problem by hand. If one basic action is used multiple times in different abstract tests, testers must write a lot of redundant code for the same action. This process is time-consuming, labor-intensive, and error-prone. For the """"mapping"""" problem, this research will design a structured test automation language to partially automate the mapping from abstract tests to concrete tests. First, programmers or testers use the automation language to create mappings from each basic identifiable part (e.g. an action in a finite-state machine) of the model to the corresponding executable programming code.
Once the mappings are generated, concrete tests can be generated automatically from the abstract tests. This structured test automation language will be used to improve the efficiency of generating concrete tests from a variety of abstract tests and reduce the potential errors. The test oracle problem refers to how to determine whether a test has passed or failed. While writing executable code, it is difficult to decide which parts of the program state should be compared by the concrete tests. A guideline will be established to evaluate states of the program effectively and efficiently.",2012,0, 5524,Mitigating the Effect of Coincidental Correctness in Spectrum Based Fault Localization,"Coincidentally correct test cases are those that execute faulty statements but do not cause failures. Such test cases reduce the effectiveness of spectrum-based fault localization techniques, such as Ochiai. These techniques calculate a suspiciousness score for each statement. The suspiciousness score estimates the likelihood that the program will fail if the statement is executed. The presence of coincidentally correct test cases reduces the suspiciousness score of the faulty statement, thereby reducing the effectiveness of fault localization. We present two approaches that predict coincidentally correct test cases and use the predictions to improve the effectiveness of spectrum based fault localization. In the first approach, we assign weights to passing test cases such that the test cases that are likely to be coincidentally correct obtain low weights. Then we use the weights to calculate suspiciousness scores. In the second approach, we iteratively predict and remove coincidentally correct test cases, and calculate the suspiciousness scores with the reduced test suite. In this dissertation, we investigate the cost and effectiveness of our approach to predicting coincidentally correct test cases and utilizing the predictions. We report the results of our preliminary evaluation of effectiveness and outline our research plan.",2012,0, 5525,Web Mutation Testing,"Web application software uses new technologies that have novel methods for integration and state maintenance that amount to new control flow mechanisms and new variable scoping. Although powerful, these bring in new problems that current testing techniques do not adequately test for. Testing individual web software components in isolation cannot detect interaction faults, which occur in communication among web software components. Improperly implementing and testing the communications among web software components is a major source of faults. As mutation analysis has been shown to be effective in testing traditional software, the proposed project will investigate the usefulness of applying mutation testing to web applications. In a preliminary study, several new web mutation operators were defined specifically for web interaction faults. These operators were implemented in a prototype tool for a feasibility study. The resulting paper appeared in Mutation 2010 and the experimental results evince that mutation analysis can potentially help create tests that are effective at finding web application faults. To improve web fault coverage, the initial set of web mutation operators will be extended and evaluated. Additional web mutation operators will be defined.
I intend to validate the proposed technique, web mutation testing, by comparing with other existing approaches used for web application testing.",2012,0, 5526,Experimental Comparison of Test Case Generation Methods for Finite State Machines,"Testing from finite state machines has been widely investigated due to its well-founded and sound theory as well as its practical application in different areas, e.g., Web-based systems and protocol testing. There has been a recurrent interest in developing methods capable of generating test suites that detect all faults in a given fault domain. However, the proposal of new methods motivates the comparison with traditional methods. In this context, we conducted a set of experiments that compares W, HSI, H, SPY, and P methods. The results have shown that H, SPY, and P methods produce smaller test suites than traditional methods (W, HSI). Although the P method presented the shortest test suite in most cases, its reduction is smaller compared with H and SPY. We have also observed that the reduction ratio in partial machines is smaller than that in complete machines.",2012,0, 5527,Overcoming Web Server Benchmarking Challenges in the Multi-core Era,"Web-based services are used by many organizations to support their customers and employees. An important consideration in developing such services is ensuring the Quality of Service (QoS) that users experience is acceptable. Recent years have seen a shift toward deploying Web services on multi-core hardware. Leveraging the performance benefits of multi-core hardware is a non-trivial task. In particular, systematic Web server benchmarking techniques are needed so organizations can verify their ability to meet customer QoS objectives while effectively utilizing such hardware. However, our recent experiences suggest that the multi-core era imposes significant challenges to Web server benchmarking. In particular, due to limitations of current hardware monitoring tools, we found that a large number of experiments are needed to detect complex bottlenecks that can arise in a multi-core system due to contention for shared resources such as cache hierarchy, memory controllers and processor inter-connects. Furthermore, multiple load generator instances are needed to adequately stress multi-core hardware. This leads to practical challenges in validating and managing the test results. This paper describes the automation strategies we employed to overcome these challenges. We make our test harness available for other researchers and practitioners working on similar studies.",2012,0, 5528,Test Case Prioritization Due to Database Changes in Web Applications,"A regression test case prioritization (TCP) technique reorders test cases for regression testing to achieve early fault detection. Most TCP techniques have been developed for regression testing of source code in an application. Most web applications rely on a database server for serving client requests. Any changes in the database result in erroneous client interactions and may bring down the entire web application. However, most prioritization techniques are unsuitable for prioritizing test suites for early detection of changes in databases. There are very few proposals in the literature for prioritization of test cases that can detect faults in the database early. 
We propose a new automated TCP technique for web applications that automatically identifies the database changes, prioritizes test cases related to database changes and executes them in priority order to detect faults early.",2012,0, 5529,Towards Symbolic Model-Based Mutation Testing: Pitfalls in Expressing Semantics as Constraints,"Model-based mutation testing uses altered models to generate test cases that are able to detect whether a certain fault has been implemented in the system under test. For this purpose, we need to check for conformance between the original and the mutated model. We have developed an approach for conformance checking of action systems using constraints. Action systems are well-suited to specify reactive systems and may involve non-determinism. Expressing their semantics as constraints for the purpose of conformance checking is not totally straight forward. This paper presents some pitfalls that hinder the way to a sound encoding of semantics into constraint satisfaction problems and gives solutions for each problem.",2012,0, 5530,Automatic XACML Requests Generation for Policy Testing,"Access control policies are usually specified by the XACML language. However, policy definition could be an error prone process, because of the many constraints and rules that have to be specified. In order to increase the confidence on defined XACML policies, an accurate testing activity could be a valid solution. The typical policy testing is performed by deriving specific test cases, i.e. XACML requests, that are executed by means of a PDP implementation, so to evidence possible security lacks or problems. Thus the fault detection effectiveness of derived test suite is a fundamental property. To evaluate the performance of the applied test strategy and consequently of the test suite, a commonly adopted methodology is using mutation testing. In this paper, we propose two different methodologies for deriving XACML requests, that are defined independently from the policy under test. The proposals exploit the values of the XACML policy for better customizing the generated requests and providing a more effective test suite. The proposed methodologies have been compared in terms of their fault detection effectiveness by the application of mutation testing on a set of real policies.",2012,0, 5531,Adding Criteria-Based Tests to Test Driven Development,"Test driven development (TDD) is the practice of writing unit tests before writing the source. TDD practitioners typically start with example-based unit tests to verify an understanding of the software's intended functionality and to drive software design decisions. Hence, the typical role of test cases in TDD leans more towards specifying and documenting expected behavior, and less towards detecting faults. Conversely, traditional criteria-based test coverage ignores functionality in favor of tests that thoroughly exercise the software. This paper examines whether it is possible to combine both approaches. Specifically, can additional criteria based tests improve the quality of TDD test suites without disrupting the TDD development process? This paper presents the results of an observational study that generated additional criteria-based tests as part of a TDD exercise. The criterion was mutation analysis and the additional tests were designed to kill mutants not killed by the TDD tests. The additional unit tests found several software faults and other deficiencies in the software. 
Subsequent interviews with the programmers indicated that they welcomed the additional tests, and that the additional tests did not inhibit their productivity.",2012,0, 5532,Languages and Their Importance in Quality Software,"Computer software succeeds - when it meets the needs of the people who use it, when it performs flawlessly over a long period of time, when it is easy to modify and even easier to use - it can and does change things for the better. But when software fails - when its users are dissatisfied, when it is error prone, when it is difficult to change and even harder to use - bad things can and do happen. Generally, software quality has been defined by various characteristics of the software product. This paper analyzes various languages based on these characteristics and shows their impact on software quality.",2012,0, 5533,A multi-frame and multi-slice H.264 parallel video encoding approach with simultaneous encoding of prediction frames,"This paper describes a novel multi-frame and multi-slice parallel video encoding approach with simultaneous encoding of predicted frames. The approach, when applied to H.264 encoding, leads to speedups comparable to those obtained by state-of-the-art approaches, but without the disadvantage of requiring bidirectional frames. The new approach uses a number of slices equal to or greater than the number of cores used and supports three motion estimation modes. Their combination leads to various tradeoffs between speedup and visual quality loss. For an H.264 baseline profile encoder based on Intel IPP code samples running on a two quad core Xeon system (8 cores in total), our experiments show an average speedup of 7.20, with an average quality loss of 0.22 dB (compared to a non-parallelized version) for the most efficient motion estimation mode, and an average speedup of 7.95, with a quality loss of 1.85 dB for the faster motion estimation mode.",2012,0, 5534,Apply embedded openflow MPLS technology on wireless Openflow OpenRoads,"Openflow is one of the most important and popular next generation internet structures and technologies. Its idea of software-defined networking separates the controller from the data layer. The controller uses the Openflow protocol to communicate with and control the Openflow switch to generate flow tables, thus achieving centralized control of the whole network. OpenRoads is a framework for the wireless Openflow environment. It successfully applies the SDN concept to the wireless mobile environment. As a proposal to improve the performance and quality of OpenRoads, a new embedded Openflow-MPLS (EOF-MPLS) is presented. An important key point is that both traditional MPLS and Openflow support the separation between the controller and the data layer, which lays a solid foundation for the new EOF-MPLS. EOF-MPLS offers forwarding equivalence classes (FEC) and the Label Distribution Protocol (LDP) to improve the forwarding efficiency of the nodes. It couples tightly with the Openflow structure and supports QoS and traffic engineering. Finally, an entropy measure is proposed to assess the communication effectiveness of EOF-MPLS on OpenRoads.",2012,0, 5535,Formal specification of humanitarian disaster management processes,"Disaster situations are dynamic and demand a change in response, and information is received in a fragmented manner during the early stages of the event.
Therefore, a Rapid Assessment and Intervention Team (RAIT Team) is established to respond quickly to the event, by performing an initial assessment that helps to understand the nature and scope of the incident and to determine the required assistance. The RAIT Team can be considered as a Reactive Collaborative Network, having to coordinate numerous actors and two or more reactive parallel sub-processes. Therefore, formal specification and validation methods and tools are needed while specifying and verifying RAIT processes to detect and correct deficiencies and faults. In this perspective, the present paper proposes to use the Decisional Reactive Agents (DRA) based approach for the formal modeling and checking of RAIT system processes, so that they can be directly implemented in software environments with the maximum of logical correctness. The proposed approach is illustrated by formally checking temporal constraints of a transversal RAIT process acting in case of a medical or humanitarian emergency.",2012,0, 5536,TIP-EXE: A Software Tool for Studying the Use and Understanding of Procedural Documents,"Research problem: When dealing with procedural documents, individuals sometimes encounter comprehension problems due to poor information design. Researchers studying the use and understanding of procedural documents, as well as technical writers charged with the design of these documents, or usability specialists evaluating their quality, would all benefit from tools allowing them to collect real-time data concerning user behavior in user-centered studies. With this in mind, the generic software Technical Instructions Processing-Evaluations and eXperiments Editor (TIP-EXE) was designed to facilitate the carrying out of such studies. Research questions: Does document design, and specifically the matching or mismatching of the terms employed in a user manual and on the corresponding device, affect the cognitive processes involved in the comprehension of procedural instructions? Can we use a software tool like TIP-EXE to assess the impact of document design on the use and understanding of a procedural document? Literature review: A review of the methods employed to study either the use of procedural documents or their cognitive processing, and to evaluate the quality of these documents, revealed the lack of tools for collecting relevant data. Methodology: TIP-EXE software was used to set up and run a laboratory experiment designed to collect data concerning the effect of document design on the performance of a task. The experiment was conducted with 36 participants carrying out tasks involving the programming of a digital timer under one of three conditions: matching instructions, mismatching instructions, mismatching instructions + picture. Based on a click-and-read method for blurred text, TIP-EXE was used to collect data on the time the users spent reading the instructions, as well as the time spent handling the timer. Results and discussion: Results show that matching instructions (when the terms employed in the user manual match the terms on the device) enhance user performance. This instructional format results in less time spent consulting the instructions and handling the device, as well as fewer errors.
This research shows that TIP-EXE software can be used to study the way in which operating instructions are read, and the time spent consulting specific information contained therein, thereby revealing the effects of document design on user behavior.",2012,0, 5537,"Indoor propagation model in 2.4 GHz with QoS parameters estimation in VoIP calls, considering different types of walls and floors",This paper presents an empirical propagation model for indoor environments with different types of walls and floors that predicts not only the power level but also QoS parameters to ensure quality in VoIP calls.,2012,0, 5538,"Improving public auditability, data possession in data storage security for cloud computing","Cloud computing is an Internet based technology where users can subscribe to high quality services from data and software that reside solely in remote servers. This provides many benefits for the users to create and store data in the remote servers, thereby utilizing fewer resources in the client system. However, management of the data and software may not be fully trustworthy, which poses many security challenges. One of the security issues is data storage security, where frequent integrity checking of remotely stored data is carried out. The RSA based storage security (RSASS) method uses public auditing of the remote data by improving existing RSA based signature generation. This public key cryptography technique is widely used for providing strong security. Using this RSASS method, data storage correctness is assured and identification of a misbehaving server with high probability is achieved. This method also supports dynamic operations on the data and tries to reduce the server computation time. The preliminary results show that the proposed RSASS scheme outperforms the existing methods with improved security in data storage.",2012,0, 5539,A framework to select an approach for Web services and SOA development,"Service-Orientation Architecture (SOA) is an architectural style for software, where the main components are loosely coupled, interoperable, distributed pieces of logic provided as Web services (WSs). Service-Orientation (SO) is a paradigm that provides WSs with a set of design principles in order to conform to SOA. Service-Oriented Software Engineering (SOSE) is concerned with processes and tools to build and provide software systems as compositions of WSs with respect to SOA. This work proposes to assess different types of approaches within a framework that considers critical perspectives such as: (i) Building blocks used to specify functional requirements with respect to business/IT alignment, (ii) SOA principles and drivers, (iii) SO design paradigm, (iv) solution lifecycle, modeling, views, and CASE tools, and (v) inspection of solution quality attributes. The framework is meant to answer the question of to what extent a solution provided by any service-oriented development method would conform to SO, in order to be used in: (a) comparing the approaches themselves, and (b) highlighting issues that need further research.",2012,0, 5540,WESPACT: Detection of web spamdexing with decision trees in GA perspective,"Internet today is huge, dynamic, self-organized, and strongly interlinked. Web spam can significantly worsen the quality of search engine results.
The motivation of the paper is based on the logical perspective of approaching the web spam problem as a cancer of the internet, and the solution could be derived by formulating algorithms based on a genetic algorithm (GA) over content and link attributes. The web mining tools GATree [15] and PermutMatrix [14] have been used to simulate the experiments. Java is used to develop a program that analyzes and reports spamdexing instances. This paper proposes an algorithm, WESPACT, to detect web spam. This algorithm performs well, as shown through experiments.",2012,0, 5541,A Tool for Teaching Risk,"Students tend to think optimistically about the software they construct. They believe the software will be defect free, and underestimate apparent risks to the development process. In the Software Enterprise, a 4-course upper division project sequence, student team failures to predict and prevent these risks lead to various problems like schedule delays, frustration, and dissatisfaction from external customer sponsors. The Enterprise uses the IBM Rational Jazz platform, but it does not have a native risk management capability. Instead, project teams were recording risks associated with their projects on paper reports. To facilitate maintaining and managing the risks associated with their projects, we developed a risk management component in the Jazz environment. This component complements Jazz by providing features of the risk management process like risk control and monitoring. The risk management component was used and evaluated by student capstone project teams.",2012,0, 5542,When the Software Goes Beyond its Requirements -- A Software Security Perspective,"Evidence from current events has shown that, in addition to virus and hacker attacks, many software systems have been embedded with """"agents"""" that pose security threats such as allowing someone to """"invade"""" computers with such software installed. This will eventually grow into a more serious problem when Cluster and Cloud Computing becomes popular. As this is an area that few have been exploring, we discuss in this paper the issue of software security breaches resulting from embedded sleeping agents. We also investigate some patterns of embedded sleeping agents utilized in the software industry. In addition, we review these patterns and propose a security model that identifies different scenarios. This security model will provide a foundation for further study on how to detect and prevent such patterns from becoming security breaches.",2012,0, 5543,A Novel Flit Serialization Strategy to Utilize Partially Faulty Links in Networks-on-Chip,"Aggressive MOS transistor size scaling substantially increases the probability of faults in NoC links due to manufacturing defects, process variations, and chip wear-out effects. Strategies have been proposed to tolerate faulty wires by replacing them with spare ones or by partially using the defective links. However, these strategies either suffer from high area and power overheads, or significantly increase the average network latency. In this paper, we propose a novel flit serialization method, which divides the links and flits into several sections, and serializes flit sections of adjacent flits to transmit them on all available fault-free link sections to avoid the complete waste of defective links' bandwidth.
Experimental results indicate that our method reduces the latency overhead significantly and enables graceful performance degradation, when compared with related partially faulty link usage proposals, and saves area and power overheads by up to 29% and 43.1%, respectively, when compared with spare wire replacement methods.",2012,0, 5544,Quality model based on ISO/IEC 9126 for internal quality of MATLAB/Simulink/Stateflow models,"In a model-based approach, models are considered as the prime artefacts for the software specification, design and implementation. Quality assurance for program code has been discussed a lot; however, equivalent methods for model quality assessment remain rare. Assessing quality is of particular importance for technical models (e.g. MATLAB/Simulink/Stateflow models), since they are often used for production code generation. Our main contribution is a quality model based on ISO/IEC 9126, which defines the internal model quality as well as measures for the assessments. Our quality model shall not only show improvement potential in a model, but also provide evidence about the quality evolution of a model.",2012,0, 5545,Fault detection and diagnosis of voltage source inverter using the 3D current trajectory mass center,"This paper investigates the use of the current trajectory mass center in a three dimensional referential. The proposed approach uses the inverter output currents. These currents are used to obtain a typical pattern in a three dimensional referential. According to the fault type, different patterns are obtained. In this way, with the proposed approach it is possible to detect and identify the faulty power switch. In order to automatically identify the different patterns, an algorithm is used to obtain the mass center of the 3D current trajectory. This results in a fast and reliable fault detection method. The applicability of the proposed technique is confirmed through several simulation and experimental results.",2012,0, 5546,Application of multi-fuzzy system for condition monitoring of liquid filling machines,"In this paper a novel approach is implemented for the investigation of failures in Stork bottle filling machines. A fuzzy-based system is used to detect the abnormalities present in the machine by using time and frequency domain statistical features. Statistical analysis of vibration data determined the gearbox failure, which correlated with the engineer's findings. The method used has shown promising results for predicting failures in such low-speed rotary machines. It has been concluded that statistics-based analysis of the vibration signal is suitable for predicting machine faults at low rotating speeds. This paper presents a system, implemented on the industrial process machine, which has successfully predicted the faults in the gearbox before the catastrophic failure.",2012,0, 5547,Voltage Unbalance Emission Assessment in Radial Power Systems,"Voltage unbalance (VU) emission assessment is an integral part in the VU-management process where loads are allocated a portion of the unbalance absorption capacity of the power system. The International Electrotechnical Commission Report IEC/TR 61000-3-13:2008 prescribes a VU emission allocation methodology establishing the fact that the VU can arise at the point of common connection (PCC) due to upstream network unbalance and load unbalance.
Although this is the case for emission allocation, approaches for post connection emission assessment do not exist except for cases where the load is the only contributor to the VU at the PCC. Such assessment methods require separation of the post connection VU emission level into its constituent parts. In developing suitable methodologies for this purpose, the pre and postconnection data requirements need to be given due consideration to ensure that such data can be easily established. This paper presents systematic, theoretical bases which can be used to assess the individual VU emission contributions made by the upstream source, asymmetrical line, and the load for a radial power system. The methodology covers different load configurations including induction motors. Assessments obtained by employing the theoretical bases on the study system were verified by using unbalanced load-flow analysis in MATLAB and using DIgSILENT PowerFactory software.",2012,0,6716 5548,Security Assessment of Code Refactoring Rules,"Refactoring is a common approach to producing better quality software. Its impact on many software quality properties, including reusability, maintainability and performance, has been studied and measured extensively. However, its impact on the information security of programs has received relatively little attention. In this work, we assess the impact of a number of the most common code-level refactoring rules on data security, using security metrics that are capable of measuring security from the point view of potential information flow. The metrics are calculated for a given Java program using a static analysis tool we have developed to automatically analyse compiled Java bytecode. We ran our Java code analyser on various programs which were refactored according to each rule. New values of the metrics for the refactored programs then confirmed that the code changes had a measurable effect on information security.",2012,0, 5549,Achieving High Reliability on Linux for K2 System,"Driver faults are the main reasons of causing failure in operating system. In order to address this issue and improve the kernel reliability, this paper presents an intelligent kernel-mode driver enhancement mechanism - Style Box which can limit the driver's rights to access kernel by a private page table and a call control list. This method captures a variety of type errors, synchronization errors and behavior errors of the driver, and intelligently predicts and rapidly recovers driver errors. Experimental results show that Style Box can effectively detect and deal with driver errors, and obviously improve the reliability of the operating system.",2012,0, 5550,Revenue-maximizing server selection and admission control for IPTV content servers using available bandwidth estimates,"We present a server selection and admission control algorithm for IPTV networks that uses available bandwidth estimation to assess bandwidth available on the path from an end-user point of attachment to one or more IPTV content servers and that employs a revenue maximising admission decision process that prioritizes requests for high revenue content item types over requests for lower revenue item types. The algorithm operates by estimating expected request arrival rates for different content item types based on past arrival rates and, based on these and available bandwidth estimates decides whether to accept a new request and, when accepting requests, which of the available content servers to use. 
Results of a simulation study show that the algorithm succeeds in 1) maintaining acceptable packet delays for accepted flows in the presence of fluctuating background traffic on network paths and 2) when available bandwidth is limited, prioritizing requests for higher revenue content types.",2012,0, 5551,Evaluating compressive sampling strategies for performance monitoring of data centers,"Performance monitoring of data centers provides vital information for dynamic resource provisioning, fault diagnosis, and capacity planning decisions. However, the very act of monitoring a system interferes with its performance, and if the information is transmitted to a monitoring station for analysis and logging, this consumes network bandwidth and disk space. This paper proposes a low-cost monitoring solution using compressive sampling - a technique that allows certain classes of signals to be recovered from the original measurements using far fewer samples than traditional approaches - and evaluates its ability to measure typical signals generated in a data-center setting using a testbed comprising the Trade6 enterprise application. The results open up the possibility of using low-cost compressive sampling techniques to detect performance bottlenecks and anomalies that manifest themselves as abrupt changes exceeding operator-defined threshold values in the underlying signals.",2012,0, 5552,Extensive DBA-CAC mechanism for maximizing efficiency in 3GPP: LTE networks,"In today's fast-moving world of mobile devices there is always a growing demand for high-rate services. So a call has to continue with the same data rates during a handoff. This paper deals with a novel approach, DBA-CAC, to reduce the call dropping probability while ensuring QoS demands are met in LTE wireless networks. The reduction is based on an Adaptive Call Admission Control (Ad-CAC) scheme which gives priority to handoff calls over new calls. The Dynamic Bandwidth Adaptation (DBA) approach is used to maximize the overall system utilization while keeping the blocking rates low. The DBA algorithm is used in two phases: when a call arrives and when a call ends. The DBA approach helps in predicting the user behavior and allocating the resources in advance, hence utilizing the resources more efficiently. This approach also maintains low new-call blocking rates.",2012,0, 5553,Software defect prediction using Two level data pre-processing,"Defect prediction can be useful to streamline testing efforts and reduce the development cost of software. Predicting defects is usually done by using certain data mining and machine learning techniques. A prediction model is said to be effective if it is able to classify defective and non-defective modules accurately. In this paper we investigate the effect of data pre-processing on the performance of four different K-NN classifiers and compare the results with a random forest classifier. The method used for pre-processing includes attribute selection and instance filtering. We observed that Two-level data pre-processing enhances defect prediction results. We also report how these two filters influence the performance independently.
The observed performance improvement can be attributed to the removal of irrelevant attributes by dimension (attribute) reduction and of the class imbalance problem by resampling, together leading to the improved performance capabilities of the classifiers.",2012,0, 5554,Analytical model for channel allocation scheme in macro/femto-cell based BWA networks,"The femtocellular technology is observed to be quite promising for mobile operators as it improves their network coverage and capacity at the outskirts of the macro cell. In this paper, we have developed an analytical model for channel allocation in macro/femto-cell based BWA (Broadband Wireless Access) networks using a Continuous Time Markov Chain (CTMC). The focus of the work is to analyze various QoS parameters like connection blocking probability, system capacity enhancement and channel utilization of the network for performance evaluation. We have considered a hierarchical WiMAX BWA network consisting of a single macro BS along with 'm' femto BSs, and a total number of 'ch' orthogonal channels are assumed to be available in the network. The macro BS will receive the channel requests from the users either directly or via a femto BS. Four types of services, i.e. UGS, rtPS, nrtPS and BE, request for a channel to be admitted. A Pareto distribution is considered for the arrival process of the newly originated service type. The hierarchical WiMAX network is analytically modeled in the form of a '6+m' dimensional Markov Chain based on the number of admitted services of each type under the macro BS and 'm' femto BSs. Extensive analysis has been performed to evaluate the effectiveness and efficiency of the hierarchical WiMAX networks along with the concept of channel reuse. As per the analysis, the connection blocking probability is observed to fall drastically from about 0.8 to about 0.02 for up to 10 femto cells in the network with channel reuse. On the other hand, the system capacity and channel utilization are observed to improve acutely with the introduction of femto cells with channel reuse. Enhancement in the system capacity is observed to be up to 120% and channel utilization increases from 95% to 220% with the introduction of up to 10 femto cells in the network with channel reuse. Thus, our developed analytical model exhibits that the system performance is enhanced with the introduction of the femto cells and the concept of channel reuse in the hierarchical WiMAX networks. The contribution of the work lies within the scope of developing an analytical model for channel allocation using CTMC to evaluate the performance of the hierarchical WiMAX networks. However, this analytical model would be equally applicable to any femto-cellular based BWA networks.",2012,0, 5555,Adaptive Quality of Service in ad hoc wireless networks,"In high criticality crisis scenarios, such as disaster management, ad hoc wireless networks are quickly assembled in the field to support decision makers through situational awareness using messaging-, voice-, and video-based applications. These applications cannot afford the luxury of stalling or failing due to overwhelming bandwidth demand on these networks as this could contribute to overall mission failure. This paper describes an approach for satisfying application-specific Quality of Service (QoS) expectations operating on ad hoc wireless networks where available bandwidth fluctuates.
The proposed algorithm, D-Q-RAM (Distributed QoS Resource Allocation Model) incorporates a distributed optimization heuristic that results in near optimal adaptation without the need to know, estimate, or predict available bandwidth at any moment in time.",2012,0, 5556,A Mirrored Data Structures Approach to Diverse Partial Memory Replication,"Software memory errors are a growing threat to software dependability. In previous work, we proposed an approach for detecting memory errors, called Diverse Partial Memory Replication (DPMR), that utilized automated program diversity and memory replication. The original design aimed to maximize coverage by making the pointers stored in different memory replicas comparable. In this paper, we propose and evaluate an alternative design called Mirrored Data Structures (MDS), which sacrifices pointer comparability to gain three primary benefits. 1) MDS significantly increases DPMR's applicability by eliminating all DPMR restrictions on memory allocation, pointer arithmetic, and pointer-to-pointer casts. 2) For programs that store many pointers to memory, MDS reduces DPMR's overhead, as is demonstrated in experimental results. 3) MDS significantly reduces DPMR's memory footprint.",2012,0, 5557,The Provenance of WINE,"The results of cyber security experiments are often impossible to reproduce, owing to the lack of adequate descriptions of the data collection and experimental processes. Such provenance information is difficult to record consistently when collecting data from distributed sensors and when sharing raw data among research groups with variable standards for documenting the steps that produce the final experimental result. In the WINE benchmark, which provides field data for cyber security experiments, we aim to make the experimental process self-documenting. The data collected includes provenance information -- such as when, where and how an attack was first observed or detected -- and allows researchers to gauge information quality. Experiments are conducted on a common test bed, which provides tools for recording each procedural step. The ability to understand the provenance of research results enables rigorous cyber security experiments, conducted at scale.",2012,0, 5558,Experimental Analysis of Binary-Level Software Fault Injection in Complex Software,"The injection of software faults (i.e., bugs) by mutating the binary executable code of a program enables the experimental dependability evaluation of systems for which the source code is not available. This approach requires that programming constructs used in the source code should be identified by looking only at the binary code, since the injection is performed at this level. Unfortunately, it is a difficult task to inject faults in the binary code that correctly emulate software defects in the source code. The accuracy of binary-level software fault injection techniques is therefore a major concern for their adoption in real-world scenarios. In this work, we propose a method for assessing the accuracy of binary-level fault injection, and provide an extensive experimental evaluation of a binary-level technique, G-SWFIT, in order to assess its limitations in a real-world complex software system. We injected more than 12 thousand binary-level faults in the OS and application code of the system, and we compared them with faults injected in the source code by using the same fault types of G-SWFIT. 
The method was effective at highlighting the pitfalls that can occur in the implementation of G-SWFIT. Our analysis shows that G-SWFIT can achieve an improved degree of accuracy if these pitfalls are avoided.",2012,0, 5559,Changeloads for Resilience Benchmarking of Self-Adaptive Systems: A Risk-Based Approach,"Benchmarking self-adaptive software systems calls for a new model that takes into account a distinctive characteristic of such systems: alterations over time (i.e., self-achieved modifications or adjustments triggered by changes in the external or internal contexts of the system). Changes are thus a fundamental component of a resilience benchmark, raising an intrinsic research problem: how to identify and select the most realistic and relevant (sequences of) changes to be included in the benchmarking procedure. The problem is that defining a representative change load would require access to a large amount of field data, which is not available for most systems. In this paper we propose an approach based on risk analysis to tackle this key issue, debating its effectiveness and usability with a simple case study. The procedure, that combines field data with expert knowledge and experimental data, allows moving from the identification of the generic goals of systems in the benchmarking domain to the identification of the most relevant change scenarios (based on probability and impact) that may prevent those systems from achieving their goals.",2012,0, 5560,Incipient fault detection of industrial pilot plant machinery via acoustic emission,Numerous condition monitoring techniques and identification algorithms for detection and diagnosis of faults in industrial plants have been proposed for the past few years. Motors are one of the common used elements in almost all plant machinery. They cause the machine failure upon getting faulty. Therefore advance and effective condition monitoring techniques are required to monitor and detect the motor problems at incipient stages. This avoids catastrophic machine failure and costly unplanned shutdown. In this paper the acoustic emission (AE) monitoring system is established. It discusses a method based on time and frequency domain analysis of AE signals acquired from motors used in chemical process pilot plant. A real time measurement system is developed. It utilizes MatLAB to process and analyze the data to provide valuable information regarding the process being monitored.,2012,0, 5561,MATLAB based defect detection and classification of printed circuit board,"A variety of ways has been established to detect defects found on printed circuit boards (PCB). In previous studies, defects are categories into seven groups with a minimum of one defect and up to a maximum of 4 defects in each group. Using Matlab image processing tools this research separates two of the existing groups containing two defects each into four new groups containing one defect each by processing synthetic images of bare through-hole single layer PCBs.",2012,0, 5562,Software quality in use characteristic mining from customer reviews,"Reviews from customers who have experience with the software product are an important information decision making for software product acquisition. They usually appear on ecommerce websites or any online download market. If some products have a large number of reviews, customer may not have time to read all of them. Therefore, we need to extract software information characteristic from reviews in order to provide product review representation. 
Customer can further use it to compare one software product attributes and other products' attributes. Software product quality from user point of view may be used to characterize each software product. ISO 9126 is widely used among software engineer to assess software quality in use. It covers software quality model and contains the quality model characteristic from user perspective: effectiveness, productivity, safety and satisfaction. We propose a methodology for software product reviews mining based on software quality ontology constructed from ISO 9126 and a rule-based classification to finally produce software quality in use scores for software product Representation. The quality in use score for each software characteristic can be used to preliminary determine the quality of the software.",2012,0, 5563,An adaptive and Efficient Data Delivery Scheme for DFT-MSNs (delay and disruption tolerant Mobile Sensor Networks),"Delay and disruption tolerant networking (DTN) architecture has been developed for networks operating under extreme conditions. DTN architecture is built using a set of new protocol family including Bundle Protocol and Licklider Transmission Protocol which relax almost all the assumptions for the existence of a network, like presence of end-to-end connectivity among communicating nodes, low propagation delays etc and allow the communication to take place. Routing is one of the biggest challenges in DTNs. A number of routing schemes have been proposed but all such schemes are either history based and deterministic or application specific and thus do not offer a stable solution for the routing problem for DTNs. Direct transmission scheme involves single hop forwarding which results in poor delivery efficiency. Epidemic routing is a flooding based approach resulting in high delivery efficiency at the cost of high network congestion which eventually reduces the network performance. Probabilistic routing is a moderately efficient scheme in terms of delivery efficiency and usage of network resources. In this paper, we propose an adaptive and efficient Bundle Fault Tolerance based Probabilistic Routing (BFPR) scheme, which offers significantly improved performance over existing DTN routing schemes. Simulations and results analysis have been carried out using MATLAB.",2012,0, 5564,Event driven test case selection for regression testing web applications,"Traditional testing techniques are not apt for the multifaceted web-based applications, since they miss the additional features of web applications such as their multi-tier nature, hyperlink-based structure, and event-driven feature. As software systems evolve, errors sometimes sneak in; software that has been tested on certain inputs may fail to work on those same inputs in the future. Regression testing aims to detect these errors by comparing present behavior with past behavior. Although regression testing has been widely used to gain confidence in the reliability of software by providing information about the quality of an application, it has suffered limited use in this domain due to the frequent nature of updates to websites and the difficulty of automatically comparing test case output. In this paper we propose a new paradigm that exploits regression testing to be used by web applications. 
This event-driven technique is based on the creation of an event-dependency graph of the original and modified web application, then converting the original and modified web application graphs into event test trees, followed by the comparison of both trees to identify affected and potentially affected nodes, which enables selection of test cases for regression testing web applications and finally reduces the test set size. We apply this technique to a case study to demonstrate the usefulness of the proposed paradigm.",2012,0, 5565,Probabilistic duration of power estimation for Nickel-metal-hydride (NiMH) battery under constant load using Kalman filter on chip,"For a battery-powered safety-critical system, the safe duration of power for executing a specific task is extremely important. It is necessary to avoid unacceptable consequences due to unwanted battery power failure. An early stage estimation of this duration reduces the overall risk through optimization of current consumption by switching off noncritical load ahead of delivery of power to a critical load. In order to address this issue, an online battery state of charge estimator on chip is conceived and implemented using a Kalman filter. The Kalman filter estimates the true values of measurements by predicting a value, considering the estimated uncertainty of the predicted value, and then computing a weighted average of the predicted value and the measured value. The basic idea is that more accurate state prediction is possible when the state predicted value is fused with the sensor prediction under any uncertain disturbance. The state estimator is developed in the form of an algorithm and stored in a single chip microcontroller. It is finally used to generate an early stage warning signal against battery failure. The paper presents a methodology for creating an energy-aware system that would avoid sudden system failure due to a power outage. The authors used a generalized state space model of the battery to estimate the effect of unobserved battery parameters for duration estimation. An experiment was conducted in this regard by discharging the battery under constant load. Subsequently the internal parameters of the battery were calculated. The model was simulated in MATLAB/Simulink R2008a software and its efficiency was tested. The program for prediction was finally emulated in a microcontroller and produced satisfactory results.",2012,0, 5566,A new algorithm for voltage sag detection,"Voltage sag is a common power system disturbance, usually associated with power system faults. Therefore the effective detection of a voltage sag event is an important issue for voltage sag analysis and mitigation. There are several detection methods for voltage sags such as RMS voltage detection, peak voltage detection, and Fourier transform methods. The problem with these methods is that they use a windowing technique and can therefore be too slow when applied to detect voltage sags for mitigation, since they use historical values, not instantaneous values, which may lead to long detection times when a voltage sag has occurred. This paper presents a new algorithm for voltage sag detection. The algorithm can extract a single non-stationary sinusoidal signal out of a given multi-component input signal. The algorithm is capable of estimating the amplitude, phase and frequency of an input signal in real-time.
It is compared to other methods of sag detection.",2012,0, 5567,Parallelization and performance optimization on face detection algorithm with OpenCL: A case study,"Face detection applications have real-time requirements by nature. Although the Viola-Jones algorithm can handle the task elegantly, today's ever larger high-quality images and videos still bring new challenges for real-time performance. It is a good idea to parallelize the Viola-Jones algorithm with OpenCL to achieve high performance across both AMD and NVidia GPU platforms without devising new algorithms. This paper presents the bottleneck of this application and discusses how to optimize face detection step by step from a very naive implementation. Tricks and methods such as hiding CPU execution time, subtle usage of local memory as a high-speed scratchpad and manual cache, and variable granularity were used to improve the performance. These techniques result in a speedup of 413 times, varying with the image size. Furthermore, these ideas may shed some light on how to parallelize applications efficiently with OpenCL. Taking face detection as an example, this paper also summarizes some universal advice on how to optimize OpenCL programs, trying to help other applications do better on GPUs.",2012,0, 5568,An Autonomous Reliability-Aware Negotiation Strategy for Cloud Computing Environments,"The Cloud computing paradigm allows subscription-based access to computing and storage services over the Internet. Since, with advances in Cloud technology, operations such as discovery, scaling, and monitoring are accomplished automatically, negotiation between Cloud service requesters and providers can be a bottleneck if it is carried out by humans. Therefore, our objective is to offer a state-of-the-art solution to automate the negotiation process in Cloud environments. In previous works in the SLA negotiation area, requesters trust whatever QoS criteria values providers offer in the process of negotiation. However, the proposed negotiation strategy for requesters in this work is capable of assessing the reliability of offers received from Cloud providers. In addition, our proposed negotiation strategy for Cloud providers considers the utilization of resources when it generates new offers during negotiation and concedes more on the price of less utilized resources. The experimental results show that our strategy helps Cloud providers to increase their profits when they are participating in parallel negotiation with multiple requesters.",2012,0, 5569,Self-Healing of Operational Workflow Incidents on Distributed Computing Infrastructures,"Distributed computing infrastructures are commonly used through scientific gateways, but operating these gateways requires significant human intervention to handle operational incidents. This paper presents a self-healing process that quantifies incident degrees of workflow activities from metrics measuring long-tail effect, application efficiency, data transfer issues, and site-specific problems. These metrics are simple enough to be computed online and they make few assumptions about the application or resource characteristics. Incidents are classified into levels and associated with sets of healing actions that are selected based on association rules modeling correlations between incident levels. The healing process is parametrized on real application traces acquired in production on the European Grid Infrastructure.
Implementation and experimental results obtained in the Virtual Imaging Platform show that the proposed method speeds up execution by up to a factor of 4 and properly detects unrecoverable errors.",2012,0, 5570,Scalable Join Queries in Cloud Data Stores,"Cloud data stores provide scalability and high availability properties for Web applications, but do not support complex queries such as joins. Web application developers must therefore design their programs according to the peculiarities of NoSQL data stores rather than established software engineering practice. This results in complex and error-prone code, especially with respect to subtle issues such as data consistency under concurrent read/write queries. We present join query support in CloudTPS, a middleware layer which stands between a Web application and its data store. The system enforces strong data consistency and scales linearly under a demanding workload composed of join queries and read-write transactions. In large-scale deployments, CloudTPS outperforms replicated PostgreSQL by up to three times.",2012,0, 5571,Automated Tagging for the Retrieval of Software Resources in Grid and Cloud Infrastructures,"A key challenge for Grid and Cloud infrastructures is to make their services easily accessible and attractive to end-users. In this paper we introduce tagging capabilities to the Minersoft system, a powerful tool for software search and discovery, in order to help end-users locate application software suitable to their needs. Minersoft is now able to predict and automatically assign tags to software resources it indexes. In order to achieve this, we model the problem of tag prediction as a multi-label classification problem. Using data extracted from production-quality Grid and Cloud computing infrastructures, we evaluate a large number of multi-label classifiers and discuss which one, and with what settings, is the most appropriate for use in this particular problem.",2012,0, 5572,Data Outsourcing Simplified: Generating Data Connectors from Confidentiality and Access Policies,"For cloud-based outsourcing of confidential data, various techniques based on cryptography or data-fragmentation have been proposed, each with its own tradeoff between confidentiality, performance, and the set of supported queries. However, it is complex and error-prone to select appropriate techniques for individual scenarios manually. In this paper, we present a policy-based approach consisting of a domain specific language and a policy-transformator to automatically generate scenario-specific software adapters called mediators that set up data outsourcing and govern data access. Mediators combine state-of-the-art confidentiality techniques to ensure a user-specified level of confidentiality while still offering efficient data access. Thus, our approach simplifies data outsourcing by decoupling policy decisions from their technical implementation and realizes appropriate tradeoffs between confidentiality and efficiency.",2012,0, 5573,Time-Domain Analysis of Differential Power Signal to Detect Magnetizing Inrush in Power Transformers,"In this paper, a novel power-based algorithm to discriminate between switching and internal fault conditions in power transformers is proposed and evaluated. First, the differential power signal is scrutinized and its intrinsic features during inrush conditions are introduced. Afterwards, a combined time-domain-based waveshape classification technique is proposed.
This technique exploits the suggested features and provides two discriminative indices. Based on the values of these indices, inrush power signals are identified after only half a cycle. This method is founded upon some inherent low-frequency features of power waveforms and is independent of the magnitude of differential power. The approach is also unaffected by power system parameters, operating conditions, noise and transformer magnetizing curves. The simplicity of the suggested features and equations helps make the proposed method a practical solution for the inrush problem. Extensive simulations carried out in PSCAD/EMTDC software validate the merit of this technique for various conditions, such as current-transformer saturation. Furthermore, real-time testing of the proposed method using real fault and inrush signals confirms the possibility of implementing this algorithm for industrial applications.",2012,0, 5574,An adaptive self-test routine for in-field diagnosis of permanent faults in simple RISC cores,"The localization of permanent faults in a processor is a precondition for applying (self-)repair functions to that processor core. This paper presents a software-based self-test technique that can be used in the field for test and fault localization, thereby providing a high diagnostic resolution. It is shown how the self-test routine is adapted in the field to already detected faults in the processor, such that these faults do not affect the test and diagnostic capability of the self-test routine. By this it becomes reasonable to localize multiple permanent faults in the processor. The proposed self-test is software-based, but it requires a few modifications of the processor. The feasibility of the technique is presented by an example; limitations are discussed, too.",2012,0, 5575,Study the impact of improving source code on software metrics,"The process of improving the quality of software products is a continuous process where software developers learn from their previous experience and from previous software releases to improve future products or releases. In this paper, we evaluate the ability of the source code analysis process and tools to predict possible defects, errors or problems in software products. More specifically, we evaluate the effect of improving the code according to recommendations from source code tools on software metrics. Several open source software projects are selected for the case study. The output of applying source code analysis tools on those projects results in several types of warnings. After performing manual correction of those warnings, we compare the metrics of the evaluated projects before and after applying the corrections. Results showed that the size and structural complexity in most cases increase. On the other hand, some of the complexities related to coupling and maintainability decrease.",2012,0, 5576,Design of cyber-physical interface for automated vital signs reading in electronic medical records systems,The focus of this project is to study the design of a cyber-physical interface for automated vital sign readings in Electronic Medical Record Systems. This is presented as a solution for a need in actual EMR systems where the reading of vital signs is done manually which is error-prone and time-consuming.
The domain application knowledge and prototype used for the development of this paper are made possible through the collaboration of the Alliance of Chicago Community Health Center LLC.",2012,0, 5577,Towards single-chip diversity TMR for automotive applications,"The continuous requirement to provide safe, low-cost, compact systems makes applications such as automotive more prone to increasing types of faults. This may result in increased system failure rates if not addressed correctly. While some of the faults are not permanent in nature, they can lead to malfunctioning in complex circuits and/or software systems. Moreover, automotive applications have recently adopted the ISO26262 standard to define functional safety. One of the recommended schemes to tolerate faults is Triple Modular Redundancy (TMR). However, traditional TMR designs typically consume too much space, power, and money, all of which are undesirable for automotive applications. In addition, common mode faults have always been a concern in TMR, and their effects would increase in compact systems. Errors such as noise and offset that impact a TMR sensor input can potentially cause common mode failures that lead to an entire system failure. In this paper, we introduce a new architecture and implementation for diverse TMR in a speed measurement system that would serve automotive cost and safety demands. Diversity TMR is achieved on a single chip by designing functionally identical circuits, each in a different design domain, to reduce the potential of common mode failures. Three versions of a speed sensing application are implemented on a mixed-signal Programmable System on Chip (PSoC) from Cypress Semiconductors. We introduce errors that impact speed sensor signals, as defined by the ISO26262 standard, to evaluate DTMR. Our testing shows how DTMR can be effective against different types of errors that impact speed sensor signals.",2012,0, 5578,Voltage sag calculation based on Monte Carlo technique,"Fault occurrence in a power system affects not only the reliability of the system but also its stability. In many cases, fault occurrence in a power system leads to voltage depressions - dips or sags - that damage power system components. In this paper, using a stochastic technique in an appropriately modeled power system simulated in PSCAD/EMTDC software, the maximum voltage sag is calculated based on the Monte Carlo technique. The results show that the presented technique has an excellent ability to detect and estimate the amplitude of voltage sags under different conditions.",2012,0, 5579,High speed adaptive auto reclosing of 765 kV transmission lines,"This paper proposes a new adaptive auto reclosing scheme for extra high voltage transmission lines. The performance of an auto reclosing scheme depends not only on power system behavior and transmission line characteristics but also on the secondary arc current. In this paper, first, an appropriate model of the arc in an extra high voltage overhead transmission line is investigated; after that, using symmetrical components, a novel criterion based on apparent power is presented in order to detect the exact time of arc extinguishing. In fact this criterion has the ability to distinguish between transient faults and permanent faults by detecting the exact time of arc extinguishing. Also, a generalized scheme is presented in order to achieve high speed auto reclosing in 765 kV transmission lines. This method is very simple and only uses two filters, so it has low hardware requirements.
Therefore, application of this method is highly recommended for extra high voltage transmission lines. The simulation studies in this paper are performed with PSCAD/EMTDC software and the results show satisfactory performance of the proposed method.",2012,0, 5580,A new fuzzy fault locator for series compensated transmission lines,"Series capacitors (SCs) are installed on long transmission lines to reduce the inductive reactance of the lines. SCs and their associated over-voltage protection devices (typically metal oxide varistors, and/or air gaps) create several problems for distance protection relays and fault locators, including voltage and/or current inversion, sub-harmonic oscillations, transients caused by the air-gap flashover and sudden changes in the operating reach. In this paper, using fuzzy logic, a simple and accurate fault location algorithm is presented for series compensated transmission lines. First, the fault region is determined by a fuzzy identifier. Then the distance to the fault is calculated from the fault loop quantities, similarly to classic fault locators, but in the case of voltages the compensation for the voltage drop across the bank (or banks) of series capacitors is additionally performed. The power system is simulated on a PC using PSCAD software to provide fault data. The fuzzy fault region identifier and fault locator are designed and implemented using MATLAB software. The operating behaviour of the fault locator was assessed using a 400 kV, 400 km double-end-fed simulated transmission line with three-phase faults at various locations on the line. It relies totally on locally derived information.",2012,0, 5581,A new approach to high impedance fault location in three-phase underground distribution system using combination of fuzzy logic & wavelet analysis,"This paper presents the results of an investigation into a new fault classification and location technique, simulated in the EMTP software. The simulated data is then analyzed using an advanced signal processing technique based on wavelet analysis to extract useful information from the signals, and this is then applied to the fuzzy logic system (FLS) to detect the type and location of ground high impedance faults in a practical underground radial distribution system. The paper concludes by comprehensively evaluating the performance of the technique developed in the case of ground high impedance faults. The results indicate that the fault location technique has an acceptable accuracy under a whole variety of different systems and fault conditions.",2012,0, 5582,The detection of VFC and STATCOM faults in Doubly Fed Induction Generator,"Wind power has the most rapid growth in comparison with other renewable energies. During the last decade wind energy has become an important part of electricity production throughout the world. As the amount of wind energy increases, the reliability of wind turbines becomes crucial. Low reliability would result in an unstable energy source with poor economic performance. Monitoring the condition of vital components is a key element to keep reliability high. Condition monitoring of Doubly Fed Induction Generators (DFIG) is growing in importance for wind turbines. This paper investigates the effect of VFC (Variable Frequency Converter) and STATCOM faults on the operation of a wind turbine equipped with a DFIG for the purpose of condition monitoring of the wind turbine. Consequently, a method is proposed to detect these faults by means of harmonic component analysis of the DFIG rotor current.
The simulation has been done with PSCAD/EMTDC software.",2012,0, 5583,A reputation model based on hierarchical bayesian estimation for Web services,"The motivation for Web services comes from their interoperability, so that a large number of Web services can interact with each other and constitute an open network, the Web service network. The success of Web service selection relies not only on the advertised QoS capability, but to a large degree on the trustworthiness of the QoS. How to evaluate the trustworthiness of a service's QoS information, however, is a challenge in the Web service network. A reputation system, a mechanism which assesses future QoS performance from the past behavior of a service, is one of the promising approaches to help users make optimal decisions. In this paper, we present a hybrid framework of a reputation model for Web services. Based on this hybrid architecture, clients build their specific social communities, by which they obtain a service's prior reputation. At the same time, the central reputation system fuses the rating data from clients by Bayesian estimation. The experimental results illustrate that our approach is more efficient and accurate in several aspects, especially when dealing with strategic services.",2012,0, 5584,Sequence-based interaction testing implementation using Bees Algorithm,"T-way strategies are used to generate test data to detect faults due to interaction. In the literature, many t-way strategies have been developed by researchers over the past 10 years. However, most of the strategies assume sequence-less parameter interaction. In the real world, there are many systems that consider the sequence of the input parameters in order to produce correct output. These interactions of the sequence of inputs need to be tested to avoid faults due to sequence interaction. In this paper we present a sequence-based interaction testing strategy (termed a sequence covering array) using the Bees Algorithm. We discuss the implementation, present the results, and compare them with an existing sequence covering array algorithm.",2012,0, 5585,A proposal for enhancing user-developer communication in large IT projects,"A review of the literature showed that the probability of system success, i.e. user acceptance, system quality and system usage, can be increased by user-developer communication. So far most research on user participation focuses either on early or on late development phases. Especially large IT projects require increased participation, due to their high complexity. We believe that the step in software development when user requirements are translated (and thus interpreted) by developers into a technical specification (i.e. system requirements, architecture and models) is a critical one for user participation. In this step a lot of implicit decisions are taken, some of which should be communicated to the end users. Therefore, we want to create a method that enhances communication between users and developers during that step. We identified trigger points (i.e. changes to initial user requirements), and the granularity level on which to communicate with the end users. Also, representations of changes and adequate means of communication are discussed.",2012,0, 5586,The new DC system insulation monitoring device based on phase differences of magnetic modulation,"Given the existing issues with current DC system insulation monitoring devices, this paper develops a monitoring device with more comprehensive functions. The device adopts a detection principle based on the phase difference of magnetic modulation.
Through access to the unbalanced grounding relay, it can address issues that conventional devices are unable to detect, such as simultaneous grounding of the DC system or equivalent insulation deterioration; with this method the device can correctly identify a variety of ground failures, and it can also predict insulation decline. Because the 500kV transfer substation exhibits serious electromagnetic interference, which can make the data of the insulation monitoring device inaccurate and its operation unstable, we use a variety of anti-jamming measures in both hardware and software. This paper describes the overall hardware composition of the device and gives a detailed description of the key technologies. Field experimental results show that the device can monitor the insulation resistance from time to time and can accurately find the grounding branches; it is also not affected by the disturbed capacitance. Currently, the device has been put into use and is working well.",2012,0, 5587,"A cross layer, adaptive data aggregation algorithm utilizing spatial and temporal correlation for fault tolerant Wireless Sensor Networks","Wireless Sensor Networks (WSNs) are ad hoc networks formed by tiny, low powered, and low cost devices. WSNs take advantage of the distributed sensing capability of the sensor nodes such that several sensors can be used collaboratively to detect events or perform monitoring of specific environmental attributes. Since sensor nodes are often exposed to harsh environmental elements, and normally operate in an unsupervised fashion over long periods of time, within their MTBF, some of them are subject to partial failure in the form of A/D readings that are permanently off the correct levels. Additionally, due to glitches in timing and in hardware or software, even healthy sensor nodes can occasionally report readings that are outside of the expected range. In this paper we present a novel approach that combines spatial and temporal correlation of the data collected by neighboring sensors to combat both error modes described above. We combine a weighted averaging algorithm across multiple sensors with LMS adaptive filtering of individual sensor data in order to improve the fault tolerance of WSNs. We present performance gains achieved by combining these methods and analyze the computational and memory costs of these algorithms.",2012,0, 5588,A phase space method to assess and improve autocorrelation and RFM autocorrelation performances of chaotic sequences,"Some chaotic sequences have poor autocorrelation performance or modulated autocorrelation performance, but previously the underlying rule was not known and their performance could not be improved effectively. Using recently presented Autocorrelation and modulated Autocorrelation theorems based on a Phase Space method, we can find and mend the structural defects of chaotic sequences and improve their autocorrelation or modulated autocorrelation performance. Using the well known Bernoulli and Skew Tent sequences as examples, we have assessed and improved their autocorrelation and RFM autocorrelation performances through the phase space method to validate that the method is simple yet effective.",2012,0, 5589,The scheme design of distributed systems service fault management based on active probing,"Service fault management in distributed computer systems and networks is a difficult task that requires highly efficient inference from massive data. In this paper, we propose a corresponding solution.
Firstly, the challenges of distributed systems service fault management are analyzed, and a multilayer model is recommended. Then, a dependency matrix representing the causal relationship between faults and probes is defined and the framework of fault management is built. After that, a service fault management scheme using active probing is proposed. This scheme is composed of two phases: fault detection and fault localization. In the first phase, we propose a probe selection algorithm, which selects a minimal set of probes while retaining a high probability of fault detection. In the second phase, we propose a fault localization probe selection algorithm, which selects probes to obtain more system information based on the symptoms observed in the previous phase. Finally, an example demonstrates the validity and efficiency of our scheme.",2012,0, 5590,Why do software packages conflict?,"Determining whether two or more packages cannot be installed together is an important issue in the quality assurance process of package-based distributions. Unfortunately, the sheer number of different configurations to test makes this task particularly challenging, and hundreds of such incompatibilities go undetected by the normal testing and distribution process until they are later reported by a user as bugs that we call conflict defects. We performed an extensive case study of conflict defects extracted from the bug tracking systems of Debian and Red Hat. According to our results, conflict defects can be grouped into five main categories. We show that with more detailed package meta-data, about 30% of all conflict defects could be prevented relatively easily, while another 30% could be found by targeted testing of packages that share common resources or characteristics. These results allow us to make precise suggestions on how to prevent and detect conflict defects in the future.",2012,0, 5591,Do faster releases improve software quality? An empirical case study of Mozilla Firefox,"Nowadays, many software companies are shifting from the traditional 18-month release cycle to shorter release cycles. For example, Google Chrome and Mozilla Firefox release new versions every 6 weeks. These shorter release cycles reduce the users' waiting time for a new release and offer better marketing opportunities to companies, but it is unclear if the quality of the software product improves as well, since shorter release cycles result in shorter testing periods. In this paper, we empirically study the development process of Mozilla Firefox in 2010 and 2011, a period during which the project transitioned to a shorter release cycle. We compare crash rates, median uptime, and the proportion of post-release bugs of the versions that had a shorter release cycle with those having a traditional release cycle, to assess the relation between release cycle length and the software quality observed by the end user. We found that (1) with shorter release cycles, users do not experience significantly more post-release bugs and (2) bugs are fixed faster, yet (3) users experience these bugs earlier during software execution (the program crashes earlier).",2012,0, 5592,Explaining software defects using topic models,"Researchers have proposed various metrics based on measurable aspects of the source code entities (e.g., methods, classes, files, or modules) and the social structure of a software project in an effort to explain the relationships between software development and software defects.
However, these metrics largely ignore the actual functionality, i.e., the conceptual concerns, of a software system, which are the main technical concepts that reflect the business logic or domain of the system. For instance, while lines of code may be a good general measure for defects, a large entity responsible for simple I/O tasks is likely to have fewer defects than a small entity responsible for complicated compiler implementation details. In this paper, we study the effect of conceptual concerns on code quality. We use a statistical topic modeling technique to approximate software concerns as topics; we then propose various metrics on these topics to help explain the defect-proneness (i.e., quality) of the entities. Paramount to our proposed metrics is that they take into account the defect history of each topic. Case studies on multiple versions of Mozilla Firefox, Eclipse, and Mylyn show that (i) some topics are much more defect-prone than others, (ii) defect-prone topics tend to remain so over time, and (iii) defect-prone topics provide additional explanatory power for code quality over existing structural and historical metrics.",2012,0, 5593,Co-evolution of logical couplings and commits for defect estimation,"Logical couplings between files in the commit history of a software repository are instances of files being changed together. The evolution of couplings over commits' history has been used for the localization and prediction of software defects in software reliability. Couplings have been represented in class graphs and change histories on the class-level have been used to identify defective modules. Our new approach inverts this perspective and constructs graphs of ordered commits coupled by common changed classes. These graphs, thus, represent the co-evolution of commits, structured by the change patterns among classes. We believe that co-evolutionary graphs are a promising new instrument for detecting defective software structures. As a first result, we have been able to correlate the history of logical couplings to the history of defects for every commit in the graph and to identify sub-structures of bug-fixing commits over sub-structures of normal commits.",2012,0, 5594,Can we predict types of code changes? An empirical analysis,"There exist many approaches that help in pointing developers to the change-prone parts of a software system. Although beneficial, they mostly fall short in providing details of these changes. Fine-grained source code changes (SCC) capture such detailed code changes and their semantics on the statement level. These SCC can be condition changes, interface modifications, inserts or deletions of methods and attributes, or other kinds of statement changes. In this paper, we explore prediction models for whether a source file will be affected by a certain type of SCC. These predictions are computed on the static source code dependency graph and use social network centrality measures and object-oriented metrics. For that, we use change data of the Eclipse platform and the Azureus 3 project. The results show that Neural Network models can predict categories of SCC types. Furthermore, our models can output a list of the potentially change-prone files ranked according to their change-proneness, overall and per change type category.",2012,0, 5595,Who? Where? What? Examining distributed development in two large open source projects,"To date, a large body of knowledge has been built up around understanding open source software development. 
However, there is limited research on examining levels of geographic and organizational distribution within open source software projects, despite many studies examining these same aspects in commercial contexts. We set out to fill this gap in OSS knowledge by manually collecting data for two large, mature, successful projects in an effort to assess how distributed they are, both geographically and organizationally. Both Firefox and Eclipse have been the subject of many studies and are ubiquitous in the areas of software development and internet usage respectively. We identified the top contributors that made 95% of the changes over multiple major releases of Firefox and Eclipse and determined their geographic locations and organizational affiliations. We examine the distribution in each project's constituent subsystems and report the relationship of pre- and post-release defects with distribution levels.",2012,0, 5596,Developing an h-index for OSS developers,"The public data available in Open Source Software (OSS) repositories has been used for many practical reasons: detecting community structures; identifying key roles among developers; understanding software quality; predicting the emergence of bugs in large OSS systems, and so on; but also to formulate and validate new metrics and proof-of-concepts on general, non-OSS specific, software engineering aspects. One of the results that has not emerged yet from the analysis of OSS repositories is how to help the career advancement of developers: given the available data on products and processes used in OSS development, it should be possible to produce measurements to identify and describe a developer, which could be used externally as a measure of recognition and experience. This paper builds on top of the h-index, used in academic contexts to determine the recognition of a researcher among her peers. By creating similar indices for OSS (or any) developers, this work could help define a baseline for measuring and comparing the contributions of OSS developers in an objective, open and reproducible way.",2012,0, 5597,Incorporating version histories in Information Retrieval based bug localization,"Fast and accurate localization of software defects continues to be a difficult problem since defects can emanate from a large variety of sources and can often be intricate in nature. In this paper, we show how version histories of a software project can be used to estimate a prior probability distribution for defect proneness associated with the files in a given version of the project. Subsequently, these priors are used in an IR (Information Retrieval) framework to determine the posterior probability of a file being the cause of a bug. We first present two models to estimate the priors, one from the defect histories and the other from the modification histories, with both types of histories as stored in the versioning tools. Referring to these as the base models, we then extend them by incorporating a temporal decay into the estimation of the priors. We show that by just including the base models, the mean average precision (MAP) for bug localization improves by as much as 30%. And when we also factor in the time decay in the estimates of the priors, the improvements in MAP can be as large as 80%.",2012,0, 5598,"Think locally, act globally: Improving defect and effort prediction models","Much research energy in software engineering is focused on the creation of effort and defect prediction models.
Such models are important means for practitioners to judge their current project situation, optimize the allocation of their resources, and make informed future decisions. However, software engineering data contains a large amount of variability. Recent research demonstrates that such variability leads to poor fits of machine learning models to the underlying data, and suggests splitting datasets into more fine-grained subsets with similar properties. In this paper, we present a comparison of three different approaches for creating statistical regression models to model and predict software defects and development effort. Global models are trained on the whole dataset. In contrast, local models are trained on subsets of the dataset. Last, we build a global model that takes into account local characteristics of the data. We evaluate the performance of these three approaches in a case study on two defect and two effort datasets. We find that for both types of data, local models show a significantly increased fit to the data compared to global models. The substantial improvements in both relative and absolute prediction errors demonstrate that this increased goodness of fit is valuable in practice. Finally, our experiments suggest that trends obtained from global models are too general for practical recommendations. At the same time, local models provide a multitude of trends which are only valid for specific subsets of the data. Instead, we advocate the use of trends obtained from global models that take into account local characteristics, as they combine the best of both worlds.",2012,0, 5599,Characterizing verification of bug fixes in two open source IDEs,"Data from bug repositories have been used to enable inquiries about software product and process quality. Unfortunately, such repositories often contain inaccurate, inconsistent, or missing data, which can lead to misleading results. In this paper, we investigate how well data from bug repositories support the discovery of details about the software verification process in two open source projects, Eclipse and NetBeans. We have been able to identify quality assurance teams in NetBeans and to detect a well-defined verification phase in Eclipse. A major challenge, however, was to identify the verification techniques used in the projects. Moreover, we found cases in which a large batch of bug fixes is simultaneously reported to be verified, although no software verification was actually done. Such mass verifications, if not acknowledged, threaten analyses that rely on information about software verification reported in bug repositories. Therefore, we recommend that the exploratory analyses presented in this paper precede inferences based on reported verifications.",2012,0, 5600,Mining usage data and development artifacts,"Software repository mining techniques generally focus on analyzing, unifying, and querying different kinds of development artifacts, such as source code, version control meta-data, defect tracking data, and electronic communication. In this work, we demonstrate how adding real-world usage data enables addressing broader questions of how software systems are actually used in practice, and by inference how development characteristics ultimately affect deployment, adoption, and usage.
In particular, we explore how usage data that has been extracted from web server logs can be unified with product release history to study questions that concern both users' detailed dynamic behaviour as well as broad adoption trends across different deployment environments. To validate our approach, we performed a study of two open source web browsers: Firefox and Chrome. We found that while Chrome is being adopted at a consistent rate across platforms, Linux users have an order of magnitude higher rate of Firefox adoption. Also, Firefox adoption has been concentrated mainly in North America, while Chrome users appear to be more evenly distributed across the globe. Finally, we detected no evidence of age-specific differences in navigation behaviour among Chrome and Firefox users; however, we hypothesize that younger users are more likely to have more up-to-date versions than more mature users.",2012,0, 5601,Performance analysis of hybrid robust automatic speech recognition system,"In this paper, we evaluate the performance of several objective measures in terms of predicting the quality of a noisy input speech signal through the hybrid method using Voice Activity Detection (VAD) and a Speech Enhancement Algorithm (SEA). Demand for speech recognition technology is expected to rise dramatically over the next few years as people use their mobile phones and voice recognition systems everywhere. This paper explains the implementation process, which includes a speech-to-text system using isolated word recognition with a vocabulary of ten words (digits 0 to 9). In the training period, the uttered digits are recorded using 8-bit Pulse Code Modulation (PCM) with a sampling rate of 8 kHz and saved as a wave format file using sound recorder software. For a given word in the vocabulary, the system builds a Hidden Markov Model (HMM) and trains the model during the training phase. The training steps, from VAD and Speech Enhancement to HMM model building, are performed using PC-based Matlab programs.",2012,0, 5602,Evaluation of resilience in self-adaptive systems using probabilistic model-checking,"The provision of assurances for self-adaptive systems presents its challenges since uncertainties associated with their operating environment often hamper the provision of absolute guarantees that system properties can be satisfied. In this paper, we define an approach for the verification of self-adaptive systems that relies on stimulation and probabilistic model-checking to provide levels of confidence regarding service delivery. In particular, we focus on resilience properties that enable us to assess whether the system is able to maintain trustworthy service delivery in spite of changes in its environment. The feasibility of our proposed approach for the provision of assurances is evaluated in the context of the Znn.com case study.",2012,0, 5603,A taxonomy and survey of self-protecting software systems,"Self-protecting software systems are a class of autonomic systems capable of detecting and mitigating security threats at runtime. They are growing in importance, as the stovepipe static methods of securing software systems have proven inadequate for the challenges posed by modern software systems. While existing research has made significant progress towards autonomic and adaptive security, gaps and challenges remain. In this paper, we report on an extensive study and analysis of the literature in this area.
The crux of our contribution is a comprehensive taxonomy to classify and characterize research efforts in this arena. We also describe our experiences with applying the taxonomy to numerous existing approaches. This has shed light on several challenging issues and resulted in interesting observations that could guide future research.",2012,0, 5604,Fault detection and isolation from uninterpreted data in robotic sensorimotor cascades,"One of the challenges in designing the next generation of robots operating in non-engineered environments is that there seems to be an infinite number of causes that make the sensor data unreliable or actuators ineffective. In this paper, we discuss which faults it is possible to detect with zero modeling effort: we start from uninterpreted streams of observations and commands, and without prior knowledge of a model of the world. We show that in sensorimotor cascades it is possible to define static faults independently of a nominal model. We define an information-theoretic usefulness of a sensor reading and we show that it captures several kinds of sensorimotor faults frequently encountered in practice. We particularize these ideas to models proposed in previous work as suitable candidates for describing generic sensorimotor cascades. We show several examples with camera and range-finder data, and we discuss a possible way to integrate these techniques in an existing robot software architecture.",2012,0, 5605,Semi-automatic establishment and maintenance of valid traceability in automotive development processes,"The functionality realized by software in modern cars is increasing and as a result the development artifacts of automotive systems are getting more complex. The existence of traceability along these artifacts is essential, since it allows monitoring the product development from the initial requirements to the final code. However, traceability is established and maintained mostly manually, which is time-consuming and error-prone. A further crucial problem is the assurance of the validity of the trace links, that is, that the linked elements are indeed related to each other. In this paper we present a semiautomatic approach to create, check, and update trace links between artifacts along an automotive development process.",2012,0, 5606,Research challenges on adaptive software and services in the future internet: towards an S-Cube research roadmap,This paper introduces research challenges on future service-oriented systems and software services. Those research challenges have been identified in a coordinated effort by researchers under the umbrella of the EU FP7 Network of Excellence S-Cube. We relate this effort to previous and related research roadmap activities and discuss the approach and results on identifying and assessing those challenges.,2012,0, 5607,Verification and testing at run-time for online quality prediction,"This paper summarizes two techniques for online failure prediction that allow anticipating the need for adaptation of service-oriented systems: (1) SPADE, employing run-time verification to predict failures of service compositions. (2) PROSA, building on online testing to predict failures of individual services.",2012,0, 5608,Dependability-driven runtime management of service oriented architectures,"Software systems are becoming more and more complex due to the integration of large scale distributed entities and the continuous evolution of these new infrastructures.
All these systems are progressively integrated in our daily environment and their increasing importance has raised a dependability issue. While service-oriented architecture provides a good level of abstraction to deal with the complexity and heterogeneity of these new infrastructures, current approaches are limited in their ability to monitor and ensure system dependability. In this paper, we propose a framework for the autonomic management of service-oriented applications based on a dependability objective. Our framework proposes a novel approach which leverages peer-to-peer evaluation of service providers to assess the system dependability. Based on this evaluation, we propose various strategies to dynamically adapt the system to maintain its dependability level at the desired objective.",2012,0, 5609,Building software process lines with CASPER,"Software product quality and project productivity require defining suitable software process models. The best process depends on the circumstances where it is applied. Typically, a process engineer tailors a specific process for each project or each project type from an organizational software process model. Frequently, tailoring is performed in an informal and reactive fashion, which is expensive, unrepeatable and error prone. Trying to deal with this challenge, we have built CASPER, a meta-process for defining adaptable software process models. This paper presents CASPER, illustrating it using the ISPW-6 process. The CASPER meta-process allows producing project-specific processes in a planned way using four software process principles and a set of process practices that enable a feasible production strategy. Based on its application to a canonical case, this paper concludes that CASPER enables a practical technique for tailoring a software process model.",2012,0, 5610,Towards patterns for MDE-related processes to detect and handle changeability risks,"One of the multiple technical factors which affect the changeability of software is model-driven engineering (MDE), where often several models and a multitude of manual as well as automated development activities have to be mastered to derive the final software product. The ability to change software at only reasonable cost, however, is of utmost importance for the iterative and incremental development of software as well as agile development in general. Thus, the effective applicability of agile processes is influenced by the MDE activities used. However, there is currently no approach available to systematically detect and handle such risks to changeability that result from the embedded MDE activities. In this paper we extend our previously introduced process modeling approach with a notion of process patterns to capture typical situations that can be associated with risk or benefit with respect to changeability. In addition, four candidates for the envisioned process patterns are presented in detail in the paper. Further, we developed strategies to handle changeability risks associated with these process patterns.",2012,0, 5611,Investigating the impact of code smells debt on quality code evaluation,"Different forms of technical debt exist that have to be carefully managed. In this paper we focus our attention on design debt, represented by code smells. We consider three smells that we detect in open source systems of different domains. Our principal aim is to give advice on which design debt has to be paid first, according to the three smells we have analyzed.
Moreover, we discuss whether the detection of these smells could be tailored to the specific application domain of a system.",2012,0, 5612,A rigorous approach to availability modeling,"Modeling and analyzing the dependability of software systems is a key activity in the development of embedded systems. An important factor of dependability is availability. Current modeling methods that support availability modeling are not based on a rigorous modeling theory. Therefore, when the behavior of the system influences the availability, as is the case for fault-tolerant systems, the resulting analysis is imprecise or relies on external information. Based on a probabilistic extension of the Focus theory, we present a modeling technique that allows specifying availability with a clear semantics. This semantics is a transformation of the original behavior to one that includes failures. Our approach enables modeling and verifying availability properties in the same way as system behavior.",2012,0, 5613,Creating visual Domain-Specific Modeling Languages from end-user demonstration,"Domain-Specific Modeling Languages (DSMLs) have received recent interest due to their conciseness and rich expressiveness for modeling a specific domain. However, DSML adoption has several challenges because development of a new DSML requires both domain knowledge and language development expertise (e.g., defining abstract/concrete syntax and specifying semantics). Abstract syntax is generally defined in the form of a metamodel, with semantics associated with the metamodel. Thus, designing a metamodel is a core DSML development activity. Furthermore, DSMLs are often developed incrementally by iterating across complex language development tasks. An iterative and incremental approach is often preferred because the approach encourages end-user involvement to assist with verifying the DSML's correctness and providing feedback on new requirements. However, if there is no tool support, iterative and incremental DSML development can be mundane and error-prone work. To resolve issues related to DSML development, we introduce a new approach to create DSMLs from a set of domain model examples provided by an end-user. The approach focuses on (1) the identification of concrete syntax, (2) inducing abstract syntax in the form of a metamodel, and (3) inferring static semantics from a set of domain model examples. In order to generate a DSML from user-supplied examples, our approach uses graph theory and metamodel design patterns.",2012,0, 5614,How helpful are automated debugging tools?,"The field of automated debugging, which is concerned with the automation of identifying and correcting a failure's root cause, has made tremendous advancements in the past. However, some of the reported progress may be due to unrealistic assumptions that underlie the evaluation of automated debugging tools. These unrealistic assumptions concern the work process of developers and their ability to detect faulty code without explanatory context, as well as the size and arrangement of fixes. Instead of trying to locate the fault, we propose to help the developer understand it, thus enabling her to decide which fix she deems most appropriate. This would entail the need to employ a completely different evaluation scheme that is based on feedback from actual users of the tools in realistic usage scenarios.
With this paper we propose the details for a first such user study.",2012,0, 5615,Revisiting bug triage and resolution practices,"Bug triaging is an error-prone, tedious and time-consuming task. However, little qualitative research has been done on the actual use of bug tracking systems, bug triage, and resolution processes. We are planning to conduct a qualitative study to understand the dynamics of the bug triage and fixing process, as well as bug reassignments and reopens. We will study interviews conducted with Mozilla Core and Firefox developers to get insights into the primary obstacles developers face during the bug fixing process. Is the triage process flawed? Does bug review slow things down? Does approval take too long? We will also categorize the main reasons for bug reassignments and reopens. We will then combine the results with a quantitative study of Firefox bug reports, focusing on factors related to bug report edits and the number of people involved in handling the bug.",2012,0, 5616,"An experimental study of a design-driven, tool-based development approach","Design-driven software development approaches have long been praised for their many benefits for the development process and the resulting software system. This paper discusses a step towards assessing these benefits by proposing an experimental study that involves a design-driven, tool-based development approach. This study raises various questions including whether a design-driven approach improves software quality and whether the tool-based approach improves productivity. In examining these questions, we explore specific issues such as the approaches that should be involved in the comparison, the metrics that should be used, and the experimental framework that is required.",2012,0, 5617,Making exceptions on exception handling,"The exception-handling mechanism has been widely adopted to deal with exception conditions that may arise during program executions. To produce high-quality programs, developers are expected to handle these exception conditions and take necessary recovery or resource-releasing actions. Failing to handle these exception conditions can lead to not only performance degradation, but also critical issues. Developers can write formal specifications to capture expected exception-handling behavior, and then apply tools to automatically analyze program code for detecting specification violations. However, in practice, developers rarely write formal specifications. To address this issue, mining techniques have been used to mine common exception-handling behavior out of program code. In this paper, we discuss challenges and achievements in precisely specifying and mining formal exception-handling specifications, as tackled by our previous work. Our key insight is that expected exception-handling behavior may be conditional or may need to accommodate exceptional cases.",2012,0, 5618,Towards a formal model to reason about context-aware exception handling,"Context-awareness is a central aspect in the design of pervasive systems, characterizing their ability to adapt their structure and behavior. Context-aware exception handling (CAEH) is an existing approach employed to design exception handling in pervasive systems. In this approach, the context is used to define, detect, propagate, and handle exceptions. CAEH is a complex and error-prone activity, needing designers' insights and domain expertise to identify and characterize contextual exceptions.
However, despite the existence of formal methods to analyze the adaptive behavior of pervasive systems, such methods lack specific support to specify the CAEH behavior. In this paper, we propose a formal model to reason about the CAEH behavior. It comprises an extension of the Kripke Structure to model the context evolution of a pervasive system and a transformation function that derives the CAEH control flow from that proposed structure.",2012,0, 5619,A generic approach for deploying and upgrading mutable software components,"Deploying and upgrading software systems is typically a laborious, error-prone and tedious task. To deal with the complexity of a software deployment process and to make this process more reliable, we have developed Nix, a purely functional package manager, as well as an extension called Disnix, capable of deploying service-oriented systems in a network of machines. Nix and its applications only support deployment of immutable components, which never change after they have been built. However, not all components of a software system are immutable, such as databases. These components must be deployed by other means, which makes deployment and upgrades of such systems difficult, especially in large networks. In this paper, we analyse the properties of mutable components and we propose Dysnomia, a deployment extension for mutable components.",2012,0, 5620,Welcome to 3rd International Workshop on Emerging Trends in Software Metrics (WETSoM 2012),"Welcome to WETSoM 2012, the 3rd International Workshop on Emerging Trends in Software Metrics. Since its start, WETSoM has attracted a blend of academic and industrial researchers, creating a stimulating atmosphere to discuss the progress of software metrics. A key motivation for this workshop is to help overcome the low impact that software metrics has on current software development. This is pursued by critically examining the evidence for the effectiveness of existing metrics and identifying new directions for metrics. Evidence for existing metrics includes how the metrics have been used in practice and studies showing their effectiveness. Identifying new directions includes the use of new theories, such as complex network theory, on which to base metrics. We are pleased that this year WETSoM features 12 technical papers and an exciting keynote on mining developers' communication to assess software quality by Massimiliano di Penta. The program of WETSoM 2012 is the result of hard work by many dedicated people; we especially thank the authors of submitted papers and the members of the program committee. Above all, the greatest richness of this workshop is its participants, who shape the discussion and point it in new directions for software metrics research and practice. We hope you will have a great time and an unforgettable experience at WETSoM 2012.",2012,0, 5621,"Mining developers' communication to assess software quality: Promises, challenges, perils","Summary form only given. In recent decades, the power consumption of Systems on Chip (SoC) has become more dominant, and Through-Silicon Via (TSV) technology has emerged as a promising solution to enhance system integration at lower cost and reduced footprint. Powerful microprocessors and immense memory capability integrated in standard 2D ICs have enabled improved IC performance by shrinking IC dimensions.
Our research evaluates the impact of Through-Silicon Via (TSV) on 3D chip performance as well as power consumption and investigates to understand the optimum TSV dimension (i.e., diameter, height, etc...) for 3D IC fabrication. The key idea is using the physical and electrical modeling of TSV which considers the coupling effects as well as TSV-to-bulk silicon parameters in 3D circuitry. In addition, by combining the conventional metrics for planar IC technology with TSV modeling, several methodologies are developed to evaluate the 3D chip's behavior with respect to interconnect and repeaters. For example, by exploiting 101-stage Ring Oscillator and 100-inverter chain into 3D IC, it can be said that the through silicon via brings substantial benefits on local interconnect layers by improving overall transmission speed and reducing power consumption. The results in our research show that by adopting TSV infusion we can both reduce the power dissipation of interconnect and improve overall performance up to 35% in 4-die stacking case. Like all ICs, the TSV based 3D stacked IC need to be analyzed for manufacturing process variation. Hence, we investigate the variation of TSV dimension and then propose the optimal shape of TSV for the best performance of 3D systems integration. From simultaneous Monte Carlo simulations of TSV height and diameter, we can conclude that for given specific pitch in 3D IC technology, TSV with a small diameter is best fo",2012,0, 5622,Fourth international workshop on Software Engineering in Health Care (SEHC 2012),"Healthcare Informatics is one of the fastest growing economic sectors in the world today. With the anticipated future advances and investments in this field, it is expected that Healthcare Informatics will become one of the dominant economic factors in the 21st century. In addition to economic importance, this field has the potential to make substantial contributions to the comfort and longevity of every human being on the face of the earth. Software, and thus Software Engineering, has an important role to play in all of this. Medical devices, electronic medical records, robotic-driven surgery are just some examples where software plays a critical role. In addition, medical processes are known to be error prone and prime targets for process improvement technology. Moreover, there are important questions about software quality, user interfaces, systems interoperability, process automation, regulatory regimes, and many other concerns familiar to software engineering practitioners and researchers.",2012,0, 5623,Security testing of web applications: A research plan,"Cross-site scripting (XSS) vulnerabilities are specific flaws related to web applications, in which missing input validation can be exploited by attackers to inject malicious code into the application under attack. To guarantee high quality of web applications in terms of security, we propose a structured approach, inspired by software testing. In this paper we present our research plan and ongoing work to use security testing to address problems of potentially attackable code. Static analysis is used to reveal candidate vulnerabilities as a set of execution conditions that could lead to an attack. We then resort to automatic test case generation to obtain those input values that make the application execution satisfy such conditions. 
Eventually, we propose a security oracle to assess whether such test cases are instances of successful attacks.",2012,0, 5624,ConcernReCS: Finding code smells in software aspectization,"Refactoring object-oriented (OO) code to aspects is an error-prone task. To support this task, this paper presents ConcernReCS, an Eclipse plug-in to help developers to avoid recurring mistakes during software aspectization. Based on a map of concerns, ConcernReCS automatically finds and reports error-prone scenarios in OO source code; i.e., before the concerns have been refactored to aspects.",2012,0, 5625,Modeling Cloud performance with Kriging,"Cloud infrastructures allow service providers to implement elastic applications. These can be scaled at runtime to dynamically adjust their resources allocation to maintain consistent quality of service in response to changing working conditions, like flash crowds or periodic peaks. Providers need models to predict the system performances of different resource allocations to fully exploit dynamic application scaling. Traditional performance models such as linear models and queueing networks might be simplistic for real Cloud applications; moreover, they are not robust to change. We propose a performance modeling approach that is practical for highly variable elastic applications in the Cloud and automatically adapts to changing working conditions. We show the effectiveness of the proposed approach for the synthesis of a self-adaptive controller.",2012,0, 5626,Engineering and verifying requirements for programmable self-assembling nanomachines,"We propose an extension of van Lamsweerde's goal-oriented requirements engineering to the domain of programmable DNA nanotechnology. This is a domain in which individual devices (agents) are at most a few dozen nanometers in diameter. These devices are programmed to assemble themselves from molecular components and perform their assigned tasks. The devices carry out their tasks in the probabilistic world of chemical kinetics, so they are individually error-prone. However, the number of devices deployed is roughly on the order of a nanomole (a 6 followed by fourteen 0s), and some goals are achieved when enough of these agents achieve their assigned subgoals. We show that it is useful in this setting to augment the AND/OR goal diagrams to allow goal refinements that are mediated by threshold functions, rather than ANDs or ORs. We illustrate this method by engineering requirements for a system of molecular detectors (DNA origami pliers that capture target molecules) invented by Kuzuya, Sakai, Yamazaki, Xu, and Komiyama (2011). We model this system in the Prism probabilistic symbolic model checker, and we use Prism to verify that requirements are satisfied, provided that the ratio of target molecules to detectors is neither too high nor too low. This gives prima facie evidence that software engineering methods can be used to make DNA nanotechnology more productive, predictable and safe.",2012,0, 5627,BRACE: An assertion framework for debugging cyber-physical systems,"Developing cyber-physical systems (CPS) is challenging because correctness depends on both logical and physical states, which are collectively difficult to observe. The developer often need to repeatedly rerun the system while observing its behavior and tweak the hardware and software until it meets minimum requirements. This process is tedious, error-prone, and lacks rigor. 
To address this, we propose BRACE, a framework that simplifies the process by enabling developers to correlate cyber (i.e., logical) and physical properties of the system via assertions. This paper presents our initial investigation into the requirements and semantics of such assertions, which we call CPS assertions. We discuss our experience implementing and using the framework with a mobile robot, and highlight key future research challenges.",2012,0, 5628,Mining input sanitization patterns for predicting SQL injection and cross site scripting vulnerabilities,"Static code attributes such as lines of code and cyclomatic complexity have been shown to be useful indicators of defects in software modules. As web applications adopt input sanitization routines to prevent web security risks, static code attributes that represent the characteristics of these routines may be useful for predicting web application vulnerabilities. In this paper, we classify various input sanitization methods into different types and propose a set of static code attributes that represent these types. Then we use data mining methods to predict SQL injection and cross site scripting vulnerabilities in web applications. Preliminary experiments show that our proposed attributes are important indicators of such vulnerabilities.",2012,0, 5629,Release engineering practices and pitfalls,"The release and deployment phase of the software development process is often overlooked as part of broader software engineering research. In this paper, we discuss early results from a set of multiple semi-structured interviews with practicing release engineers. Subjects for the interviews are drawn from a number of different commercial software development organizations, and our interviews focus on why release process faults and failures occur, how organizations recover from them, and how they can be predicted, avoided or prevented in the future. Along the way, the interviews provide insight into the state of release engineering today, and interesting relationships between software architecture and release processes.",2012,0, 5630,Extending static analysis by mining project-specific rules,"Commercial static program analysis tools can be used to detect many defects that are common across applications. However, such tools currently have limited ability to reveal defects that are specific to individual projects, unless specialized checkers are devised and implemented by tool users. Developers do not typically exploit this capability. By contrast, defect mining tools developed by researchers can discover project-specific defects, but they require specialized expertise to employ and they may not be robust enough for general use. We present a hybrid approach in which a sophisticated dependence-based rule mining tool is used to discover project-specific programming rules, which are then transformed automatically into checkers that a commercial static analysis tool can run against a code base to reveal defects. We also present the results of an empirical study in which this approach was applied successfully to two large industrial code bases.
Finally, we analyze the potential implications of this approach for software development practice.",2012,0, 5631,A tactic-centric approach for automating traceability of quality concerns,"The software architectures of business, mission, or safety critical systems must be carefully designed to balance an exacting set of quality concerns describing characteristics such as security, reliability, and performance. Unfortunately, software architectures tend to degrade over time as maintainers modify the system without understanding the underlying architectural decisions. Although this problem can be mitigated by manually tracing architectural decisions into the code, the cost and effort required to do this can be prohibitively expensive. In this paper we therefore present a novel approach for automating the construction of traceability links for architectural tactics. Our approach utilizes machine learning methods and lightweight structural analysis to detect tactic-related classes. The detected tactic-related classes are then mapped to a Tactic Traceability Information Model. We train our trace algorithm using code extracted from fifteen performance-centric and safety-critical open source software systems and then evaluate it against the Apache Hadoop framework. Our results show that automatically generated traceability links can support software maintenance activities while helping to preserve architectural qualities.",2012,0, 5632,Amplifying tests to validate exception handling code,"Validating code handling exceptional behavior is difficult, particularly when dealing with external resources that may be noisy and unreliable, as it requires: 1) the systematic exploration of the space of exceptions that may be thrown by the external resources, and 2) the setup of the context to trigger specific patterns of exceptions. In this work we present an approach that addresses those difficulties by performing an exhaustive amplification of the space of exceptional behavior associated with an external resource that is exercised by a test suite. Each amplification attempts to expose a program exception handling construct to new behavior by mocking an external resource so that it returns normally or throws an exception following a predefined pattern. Our assessment of the approach indicates that it can be fully automated, is powerful enough to detect 65% of the faults reported in the bug reports of this kind, and is precise enough that 77% of the detected anomalies correspond to faults fixed by the developers.",2012,0, 5633,Detecting and visualizing inter-worksheet smells in spreadsheets,"Spreadsheets are often used in business, for simple tasks, as well as for mission critical tasks such as finance or forecasting. Similar to software, some spreadsheets are of better quality than others, for instance with respect to usability, maintainability or reliability. In contrast with software however, spreadsheets are rarely checked, tested or certified. In this paper, we aim at developing an approach for detecting smells that indicate weak points in a spreadsheet's design. To that end we first study code smells and transform these code smells to their spreadsheet counterparts. We then present an approach to detect the smells, and to communicate located smells to spreadsheet users with data flow diagrams. To evaluate our approach, we analyzed occurrences of these smells in the Euses corpus. Furthermore we conducted ten case studies in an industrial setting.
The results of the evaluation indicate that smells can indeed reveal weaknesses in a spreadsheet's design, and that data flow diagrams are an appropriate way to show those weaknesses.",2012,0, 5634,Graph-based analysis and prediction for software evolution,"We exploit recent advances in analysis of graph topology to better understand software evolution, and to construct predictors that facilitate software development and maintenance. Managing an evolving, collaborative software system is a complex and expensive process, which still cannot ensure software reliability. Emerging techniques in graph mining have revolutionized the modeling of many complex systems and processes. We show how we can use a graph-based characterization of a software system to capture its evolution and facilitate development, by helping us estimate bug severity, prioritize refactoring efforts, and predict defect-prone releases. Our work consists of three main thrusts. First, we construct graphs that capture software structure at two different levels: (a) the product, i.e., source code and module level, and (b) the process, i.e., developer collaboration level. We identify a set of graph metrics that capture interesting properties of these graphs. Second, we study the evolution of eleven open source programs, including Firefox, Eclipse, MySQL, over the lifespan of the programs, typically a decade or more. Third, we show how our graph metrics can be used to construct predictors for bug severity, high-maintenance software parts, and failure-prone releases. Our work strongly suggests that using graph topology analysis concepts can open many actionable avenues in software engineering research and practice.",2012,0, 5635,A history-based matching approach to identification of framework evolution,"In practice, it is common that a framework and its client programs evolve simultaneously. Thus, developers of client programs may need to migrate their programs to the new release of the framework when the framework evolves. As framework developers can hardly always guarantee backward compatibility during the evolution of a framework, migration of its client program is often time-consuming and error-prone. To facilitate this migration, researchers have proposed two categories of approaches to identification of framework evolution: operation-based approaches and matching-based approaches. To overcome the main limitations of the two categories of approaches, we propose a novel approach named HiMa, which is based on matching each pair of consecutive revisions recorded in the evolution history of the framework and aggregating revision-level rules to obtain framework-evolution rules. We implemented our HiMa approach as an Eclipse plug-in targeting at frameworks written in Java using SVN as the version-control system. We further performed an experimental study on HiMa together with a state-of-art approach named AURA using six tasks based on three subject Java frameworks. Our experimental results demonstrate that HiMa achieves higher precision and higher recall than AURA in most circumstances and is never inferior to AURA in terms of precision and recall in any circumstances, although HiMa is computationally more costly than AURA.",2012,0, 5636,Improving early detection of software merge conflicts,"Merge conflicts cause software defects which if detected late may require expensive resolution. 
This is especially true when developers work too long without integrating concurrent changes, which in practice is common as integration generally occurs at check-in. Awareness of others' activities was proposed to help developers detect conflicts earlier. However, it requires developers to detect conflicts by themselves and may overload them with notifications, thus making detection harder. This paper presents a novel solution that continuously merges uncommitted and committed changes to create a background system that is analyzed, compiled, and tested to precisely and accurately detect conflicts on behalf of developers, before check-in. An empirical study confirms that our solution avoids overloading developers and improves early detection of conflicts over existing approaches. Similarly to what happened with continuous compilation, this introduces the case for continuous merging inside the IDE.",2012,0, 5637,Reconciling manual and automatic refactoring,"Although useful and widely available, refactoring tools are underused. One cause of this underuse is that a developer sometimes fails to recognize that she is going to refactor before she begins manually refactoring. To address this issue, we conducted a formative study of developers' manual refactoring process, suggesting that developers' reliance on chasing error messages when manually refactoring is an error-prone manual refactoring strategy. Additionally, our study distilled a set of manual refactoring workflow patterns. Using these patterns, we designed a novel refactoring tool called BeneFactor. BeneFactor detects a developer's manual refactoring, reminds her that automatic refactoring is available, and can complete her refactoring automatically. By alleviating the burden of recognizing manual refactoring, BeneFactor is designed to help solve the refactoring tool underuse problem.",2012,0, 5638,Characterizing logging practices in open-source software,"Software logging is a conventional programming practice. While its efficacy is often important for users and developers to understand what has happened in the production run, software logging is often done in an arbitrary manner. So far, there has been little study of logging practices in real world software. This paper makes the first attempt (to the best of our knowledge) to provide a quantitative characteristic study of the current log messages within four pieces of large open-source software. First, we quantitatively show that software logging is pervasive. By examining developers' own modifications to the logging code in the revision history, we find that they often do not make the log messages right in their first attempts, and thus need to spend a significant amount of effort to modify the log messages as after-thoughts. Our study further provides several interesting findings on where developers spend most of their effort in modifying the log messages, which can give insights for programmers, tool developers, and language and compiler designers to improve the current logging practice. To demonstrate the benefit of our study, we built a simple checker based on one of our findings and effectively detected 138 pieces of new problematic logging code from the studied software (24 of them are already confirmed and fixed by developers).",2012,0, 5639,EVOSS: A tool for managing the evolution of free and open source software systems,"Software systems increasingly need to deal with continuous evolution.
In this paper we present the EVOSS tool that has been defined to support the upgrade of free and open source software systems. EVOSS is composed of a simulator and of a fault detector component. The simulator is able to predict failures before they can affect the real system. The fault detector component has been defined to discover inconsistencies in the system configuration model. EVOSS improves the state of the art of current tools, which are able to predict a very limited set of upgrade faults, while they leave a wide range of faults unpredicted.",2012,0, 5640,Fault detection of a series compensated line during the damping process of Inter-area mode of oscillation,"The presence of supplementary controllers for thyristor controlled series compensators (TCSC), to damp the Inter-area power system oscillations, makes the resultant transmission line (T.L) reactance vary during the damping process. This variation causes an undesirable effect on the distance relay DR, as its setting depends on the physical impedance of the T.L. In this paper, a novel algorithm is designed for the DR to allow it to update its impedance setting smoothly based on the measured level of series compensation. This adaptive setting allows the DR to detect the correct zone for the low current faults. PSCAD software is used to simulate the tuneable zones setting of the DR during the damping process.",2012,0, 5641,Optimal protection devices allocation and coordination in MV distribution networks,"Historically, the distribution network has been planned in order to operate in radial configuration. Protective relays are adopted to detect system abnormalities and to execute appropriate commands to isolate swiftly only the faulty component from the healthy system. Nowadays, the regulation schemes implemented by Regulators require that Distribution Companies reduce number and duration of supply interruptions by adopting new strategies in order to identify and isolate faults along distribution feeders. Two optimization problems arise: finding the optimal location of circuit breakers and the coordination among the overcurrent relay characteristics. In the paper, a novel algorithm to solve the overcurrent relay allocation and coordination problem in distribution networks based on genetic algorithm is proposed. Examples derived by a representative distribution network are presented.",2012,0, 5642,Mining object-oriented design models for detecting identical design structures,"The object-oriented design is the most popular design methodology of the last twenty-five years. Several design patterns and principles are defined to improve the design quality of object-oriented software systems. In addition, designers can use unique design motifs which are particular for the specific application domain. Another common habit is cloning and modifying some parts of the software while creating new modules. Therefore, object-oriented programs can include many identical design structures. This work proposes a sub-graph mining based approach to detect identical design structures in object-oriented systems. By identifying and analyzing these structures, we can obtain useful information about the design, such as commonly-used design patterns, most frequent design defects, domain-specific patterns, and design clones, which may help developers to improve their knowledge about the software architecture.
Furthermore, problematic parts of frequent identical design structures are the appropriate refactoring opportunities because they affect multiple areas of the architecture. Experiments with several open-source projects show that we can successfully find many identical design structures in each project. We observe that usually most of the identical structures are an implementation of common design patterns; however we also detect various anti-patterns, domain-specific patterns, and design-level clones.",2012,0, 5643,"We have all of the clones, now what? Toward integrating clone analysis into software quality assessment","Cloning might seem to be an unconventional way of designing and developing software, yet it is very widely practised in industrial development. The cloning research community has made substantial progress on modeling, detecting, and analyzing software clones. Although there is continuing discussion on the real role of clones on software quality, our community may agree on the need for advancing clone management techniques. Current clone management techniques concentrate on providing editing tools that allow developers to easily inspect clone instances, track their evolution, and check change consistency. In this position paper, we argue that better clone management can be achieved by responding to the fundamental needs of industry practitioners. And the possible research directions include a software problem-oriented taxonomy of clones, and a better structured clone detection report. We believe this line of research should inspire new techniques, and reach a much wider range of professionals from both the research and industry community.",2012,0, 5644,Assessing risk in Grids at resource level considering Grid resources as repairable using two state Semi Markov model,"Service Level Agreements in Grids improve upon the Best Effort Approach which provides no guarantees for provision of any Quality of Service (QoS) between the End User and the Resource Provider. Risk Assessment in Grids improves upon SLA by provision of Risk information to resource provider. Most of the previous studies of Risk Assessment in Grids work at node level. As a node failure can be a failure of any component such as Disk, CPU, Memory, Software, etcetera, the risk assessment at component level in Grids was introduced. In this work, we propose a Risk Assessment Model at component level while considering Grid resources as repairable. This work can be differentiated from the other works by the fact that the past efforts in Risk Assessment in Grids consider Grid Resources as replaceable rather than repairable. This Semi Markov model relies on the distribution fitting for both time to Failure and Time to Repair, extracted from the Grid Failure data during the data analysis section. By using Grid Failure data, the utilization of this Grid model is demonstrated by providing (Probability of Failure) PoF and (Probability of Repair) PoR values for different components. The experimental results indicate that the PoF and PoR behave very differently, with the latter showing considerable times required for repair as compared to expectance of a failure.
The risk information provided by this Risk Assessment Model will help Resource provider to use the Grid Resources efficiently and achieve effective scheduling.",2012,0, 5645,Automated prediction of defect severity based on codifying design knowledge using ontologies,"Assessing severity of software defects is essential for prioritizing fixing activities as well as for assessing whether the quality level of a software system is good enough for release. In filling out defect reports, developers routinely fill out default values for the severity levels. The purpose of this research is to automate the prediction of defect severity. Our aim is to research how this severity prediction can be achieved through reasoning about the requirements and the design of a system using ontologies. In this paper we outline our approach based on an industrial case study.",2012,0, 5646,Predicting mutation score using source code and test suite metrics,"Mutation testing has traditionally been used to evaluate the effectiveness of test suites and provide confidence in the testing process. Mutation testing involves the creation of many versions of a program each with a single syntactic fault. A test suite is evaluated against these program versions (mutants) in order to determine the percentage of mutants a test suite is able to identify (mutation score). A major drawback of mutation testing is that even a small program may yield thousands of mutants and can potentially make the process cost prohibitive. To improve the performance and reduce the cost of mutation testing, we propose a machine learning approach to predict mutation score based on a combination of source code and test suite metrics.",2012,0, 5647,A Heuristic Model-Based Test Prioritization Method for Regression Testing,"Due to the resource and time constraints for re-executing large test suites in regression testing, developers are interested in detecting faults in the system as early as possible. Test case prioritization seeks to order test cases in such a way that early fault detection is maximized. In this paper, we present a model-based heuristic method to prioritize test cases for regression testing, which takes into account two types of information collected during execution of the modified model on the test suite. The experiment shows that our algorithm has better effectiveness of early fault detection.",2012,0, 5648,Simulation of temperature effects on GaAs MESFET based on physical model,"The Prognostics and Health Management (PHM) system for radar equipment is paid more and more attention by researchers in recent years. Owing to its high reliability and performance, the Active Phased Array Radar (APAR) has been playing an increasingly important role in the modern radar field which is composed of thousands of solid-state Transmit/Receive (T/R) modules. As the power source of the T/R module, gallium arsenide metal-semiconductor field effect transistor (GaAs MESFET) has been widely used due to its higher electron mobility, operating frequency, power-added efficiency and lower noise figures than silicon MOSFET. However, the performance of GaAs MESFET is influenced by its operating temperature significantly. In order to achieve effective fault injection for the PHM system, it's necessary to get temperature effects on GaAs MESFET. A simplified GaAs MESFET equivalent circuit model based on specific physical properties is proposed and realized on the EDA software. It can help the optimizing of device's structure and materials' parameters. 
What is more, it realizes performance simulation under varied temperatures; thus, the degradation of the GaAs MESFET's output parameters can be predicted by monitoring its temperature.",2012,0, 5649,Simulation of stress-magnetization coupling about magnetic memory effect of gear defects,"As an important component in the transmission system, the gear endures complex stress during the meshing process. The tiny defects of the gear are likely to occur after some amount of load cycles. This will lead to a huge economic loss if the tiny defects are not detected in time. The metal magnetic memory (MMM) technique can effectively find the early defects of ferromagnetic material and it has attracted great attention. However, the mechanism of metal magnetic memory on ferromagnetic materials under loading and geomagnetic field has not been thoroughly addressed, and the studies on gear defects are rarely reported. This paper adopts the finite element analysis (FEA) software ANSYS to construct a defective gear model and the air around it. The contact-stress and magnetic flux leakage distribution is given through the stress-magnetization coupling model. The simulation results are in accord with the common MMM principle, which are helpful to the understanding of the stress-magnetization coupling mechanism and provide the basis for the further study of early gear defect detection based on the MMM technique.",2012,0, 5650,Critical review of system failure behavior analysis method,"Failure behavior is the state change process of a product or part of a product relative to its environment and performance over time, and it can be detected from the outside. According to the level of analysis, failure behavior analysis methods can be divided into element failure behavior analysis methods and system failure behavior analysis methods. The former reveals the various failure mechanisms under the separate or coupled action of internal and external causes using the coupling analysis method; the latter focuses on the product failure performance law under the effect of various failure mechanisms by means of failure propagation analysis or state analysis. This critical review from two aspects of element and system investigates and summarizes the current research status of failure behavior analysis method. The result shows that the coupling analysis method is mature at present, and there is plenty of supporting software for computer-aided analysis. The failure propagation analysis method consists of the graph theory based method, the Petri Net method and the complex network method. But the above-mentioned methods are unilateral and isolated from each other. The system failure behavior analysis method needs to synthetically use the current methods - coupling analysis, failure propagation analysis and state analysis - so as to form an analysis methodology, which needs to clarify the input and output of each method and improve the interface between application software. The comprehensive methodology that the failure behavior analysis follows will provide support to product reliability analysis and design improvement.",2012,0, 5651,Test case prioritization incorporating ordered sequence of program elements,"Test suites often grow very large over many releases, such that it is impractical to re-execute all test cases within limited resources. Test case prioritization, which rearranges test cases, is a key technique to improve regression testing. Code coverage information has been widely used in test case prioritization.
However, other important information, such as the ordered sequence of program elements measured by execution frequencies, was ignored by previous studies. It raises a risk to lose detections of difficult-to-find bugs. Therefore, this paper improves the similarity-based test case prioritization using the ordered sequence of program elements measured by execution counts. The empirical results show that our new technique can increase the rate of fault detection more significantly than the coverage-based ART technique. Moreover, our technique can detect bugs in loops more quickly and be more cost-benefits than the traditional ones.",2012,0, 5652,Grammar based oracle for security testing of web applications,"The goal of security testing is to detect those defects that could be exploited to conduct attacks. Existing works, however, address security testing mostly from the point of view of automatic generation of test cases. Less attention is paid to the problem of developing and integrating with a security oracle. In this paper we address the problem of the security oracle, in particular for Cross-Site Scripting vulnerabilities. We rely on existing test cases to collect HTML pages in safe conditions, i.e. when no attack is run. Pages are then used to construct the safe model of the application under analysis, a model that describes the structure of an application response page for safe input values. The oracle eventually detects a successful attack when a test makes the application display a web page that is not compliant with the safe model.",2012,0, 5653,Automated test-case generation by cloning,"Test cases are often similar. A preliminary study of eight open-source projects found that on average at least 8% of all test cases are clones; the maximum found was 42 %. The clones are not identical with their originals - identifiers of classes, methods, attributes and sometimes even order of statements and assertions differ. But the test cases reuse testing logic and are needed for testing. They serve a purpose and cannot be eliminated. We present an approach that generates useful test clones automatically, thereby eliminating some of the grunt work of testing. An important advantage over existing automated test case generators is that the clones include the test oracle. Hence, a human decision maker is often not needed to determine whether the output of a test is correct. The approach hinges on pairs of classes that provide analogous functionality, i.e., functions that are tested with the same logic. TestCloner transcribes tests involving analogous functions from one class to the other. Programmers merely need to indicate which methods are analogs. Automatic detection of analogs is currently under investigation. Preliminary results indicate a significant reduction in the number of boilerplate tests that need to be written by hand. The transcribed tests do detect defects and can provide hints about missing functionality.",2012,0, 5654,BlackHorse: Creating smart test cases from brittle recorded tests,"Testing software with a GUI is difficult. Manual testing is costly and error-prone, but recorded test cases frequently break due to changes in the GUI. Test cases intended to test business logic must therefore be converted to a less brittle form to lengthen their useful lifespan. In this paper, we describe BlackHorse, an approach to doing this that converts a recorded test case to Java code that bypasses the GUI. The approach was implemented within the testing environment of Research In Motion. 
We describe the design of the toolset and discuss lessons learned during the course of the project.",2012,0, 5655,Testing of PolPA authorization systems,"The implementation of an authorization system is a difficult and error-prone activity that requires a careful verification and testing process. In this paper, we focus on testing the implementation of the PolPA authorization system and in particular its Policy Decision Point (PDP), used to define whether an access should be allowed or not. Thus exploiting the PolPA policy specification, we present a fault model and a test strategy able to highlight the problems, vulnerabilities and faults that could occur during the PDP implementation, and a testing framework for the automatic generation of a test suite that covers the fault model. Preliminary results of the test framework application to a realistic case study are presented.",2012,0, 5656,How much information could be revealed by analyzing data from pressure sensors attached to shoe insole?,"Data collected from pressure sensors attached to shoe insole is a rich source of information. (1) We can detect faults in walking and balancing problems for old people. (2) The pressure sensor data can be used to design personalized foot orthoses. (3) We can calculate the calorie burnt, even when walking and jogging are mixed, and the road slope changes. (4) We can use the data to train sprinters or tennis players. (5) We can even use the data for person identification. In addition, it can be used for alarms for situations arising from mis-handling of machines, like accelerator pedal in a car. We attached very thin pressure sensors on top of a shoe insole and collected data. A few important and readily detectable features from the time series data collected by those sensors are extracted and used for person identification, or to classify whether a person is walking or jogging. Nearly 100% classification accuracy was achieved. Thus, the target to classify whether the person is walking or jogging or climbing up or down the stairs is possible. This success also encouraged us to investigate whether it is possible to find the body weight and the step-length from this data. Once that is possible, the system can accurately deliver the calorie burnt at the end of the day. We will further explore the possibility, using sophisticated feature extraction and classification techniques, to detect faults in walking and predict the probability of fall for elderly people, or people with problem in balancing due to various diseases or caused by accidents.",2012,0, 5657,EMFtoCSP: A tool for the lightweight verification of EMF models,"The increasing popularity of MDE results in the creation of larger models and model transformations, hence converting the specification of MDE artefacts in an error-prone task. Therefore, mechanisms to ensure quality and absence of errors in models are needed to assure the reliability of the MDE-based development process. Formal methods have proven their worth in the verification of software and hardware systems. However, the adoption of formal methods as a valid alternative to ensure model correctness is compromised for the inner complexity of the problem. To circumvent this complexity, it is common to impose limitations such as reducing the type of constructs that can appear in the model, or turning the verification process from automatic into user assisted. 
Since we consider these limitations to be counterproductive for the adoption of formal methods, in this paper we present EMFtoCSP, a new tool for the fully automatic, decidable and expressive verification of EMF models that uses constraint logic programming as the underlying formalism.",2012,0, 5658,Evaluation of testability enhancement using software prototype,"The functional delay-fault models, which are based on the input stimuli and correspondent responses at the outputs, cover transition faults at the gate level quite well. This statement forms the basis for the analysis and comparison of different methods of design for testability (DFT) using software prototype model of the circuit and to select the most appropriate one before the structural synthesis of the circuit. Along with known DFT methods (enhanced scan, launch-on-shift scan and launch-on-capture scan), the authors introduce the method, which is based on the addition of new connections to the circuit in the non-scan testing mode. In order to assess the DFT methods, the functional test is generated for the analysed circuit and the functional delay-fault coverage for this test is evaluated. Each of the considered DFT methods has its own advantages and disadvantages, since they have different delay fault coverage, and require different hardware for their implementation. These differences depend on the function of circuit. The experimental results are provided for the ITC'99 benchmark circuits. The obtained results proved the applicability of the proposed method.",2012,0, 5659,A new tracking method of symmetrical fault during Power Swing Based on S-Transform,"The current distance relay is equipped with a Power Swing Blocking (PSB) scheme. However, this blocking scheme proves to be vulnerable to distance relay operation as it could block the trip signals if a symmetrical fault occurs during power swing. Hence, it is important to develop the proper fault detection scheme during power swing to avoid such undesirable circumstances. This paper presents a new detection technique to detect symmetrical fault during power swing by using S-Transform analysis based on the current, voltage, and three-phase active and reactive power signals waveform. To evaluate the effectiveness of the proposed technique, testing has been conducted on the IEEE 9 bus test system. Simulation results show that the proposed technique can reliably detect symmetrical fault occurring during power swing.",2012,0, 5660,Impact of a 200 MW concentrated receiver solar power plant on steady-state and transient performances of Oman transmission system,"The paper presents steady-state and transient studies to assess the impact of a 200 MW Central Receiver Solar Power Plant (CRSPP) connection on the Main Interconnected Transmission System (MITS) of Oman. The CRSPP consists mainly of a central solar receiver, power tower, thousands of heliostats, molten salt storage tanks, heat exchangers, steam generator, steam turbine, synchronous generator, and step-up transformer. Two proposed locations are considered to connect the CRSPP plant to MITS: Manah 132 kV and Adam 132/33 kV grid stations. The 2015 transmission grid model has been updated to include the simulation of the proposed 200 MW CRSPP using the DIgSILENT PowerFactory professional software. The studies include load flow analysis and short-circuit level calculations in addition to transient responses to three-phase fault and complete CRSPP outage.
The results have shown that the connection of the proposed CRSPP plant to the MITS is acceptable.",2012,0, 5661,Towards spatial fault resilience in array processors,"Computing with large die-size graphical processors (that need huge arrays of identical structures) in the late CMOS era is abounding with challenges due to spatial non-idealities arising from chip-to-chip and within-chip variation of MOSFET threshold voltage. In this paper, we propose a machine learning based software-framework for in-situ prediction and correction of computation corrupted due to threshold voltage variation of transistors. Based on semi-supervised training imparted to a fully connected cascade feed-forward neural network (FCCFF-NN), the NN makes an accurate prediction of the underlying hardware, creating a spatial map of faulty processing elements (PE). The faulty elements identified by the NN are avoided in future computing. Further, any transient faults occurring over and above these spatial faults are tracked, and corrected if the number of PEs involved in a particle strike is above a preset threshold. For the purposes of experimental validation, we consider a 256 256 array of PE. Each PE is comprised of a multiply-accumulate (MAC) block with three 8 bit registers (two for inputs and one for storing the computed result). One thousand instances of this processor array are created and PEs in each instance are randomly perturbed with threshold voltage variation. Common image processing operations such as low pass filtering and edge enhancement are performed on each of these 1000 instances. A fraction of these images (about 10%) is used to train the NN for spatial non-idealities. Based on this training, the NN is able to accurately predict the spatial extremities in 95% of all the remaining 90% of the cases. The proposed NN based error tolerance results in superior quality images whose degradation is no longer visually perceptible.",2012,0, 5662,Book of abstracts,"Summary form only given. Programming assignments (PAs) are very important to many computer science courses. Traditionally, the grading of a programming assignment is based mainly on the correctness of the code. However, from the view point of software engineering education, such a grading does not encourage students to develop code that is easy to read and maintain. Thus, the authors created a grading policy that considers not only the correctness but also the quality of the code, expecting students to follow the most important discipline the source code should be written in a way that is readable and maintainable. Instead of using pure subjective code-quality ratings, bad smells are used to assess the code quality of PAs. When a PA is graded by the teaching assistant, a list of bad smells is identified and given to the student so that the student can use refactoring methods to improve the code.",2012,0, 5663,On-line software-based self-test of the Address Calculation Unit in RISC processors,"Software-based Self-Test (SBST) can be used during the mission phase of microprocessor-based systems to periodically assess the hardware integrity. However, several constraints are imposed to this approach, due to the coexistence of test programs with the mission application. This paper proposes a method for the generation of SBST programs to test on-line the Address Calculation Unit of embedded RISC processors, which is one of the most heavily impacted by the online constraints. 
The proposed strategy achieves high stuck-at fault coverage on both a MIPS-like processor and an industrial 32-bit pipelined processor; these two case studies show the effectiveness of the technique and the low effort required.",2012,0, 5664,Test tool qualification through fault injection,"According to ISO 26262, a recent automotive functional safety standard, verification tools shall undergo qualification, e.g. to ensure that they do not fail to detect faults that can lead to violation of functional safety requirements. We present a semi-automatic qualification method involving a monitor and fault injection that reduce cost in the qualification process. We experiment on a verification tool implemented in LabVIEW.",2012,0, 5665,Annotation support for generic patches,"In large projects, parallelization of existing programs or refactoring of source code is time-consuming as well as error-prone and would benefit from tool support. However, existing automatic transformation systems are not extensively used because they either require tedious definitions of source code transformations or they lack general adaptability. In our approach, a programmer changes code inside a project, resulting in before and after source code versions. The difference (the generated transformation) is stored in a database. When presented with some arbitrary code, our tool mines the database to determine which of the generalized transformations possibly apply. Our system is different from a pure compiler based (semantics preserving) approach as we only suggest code modifications. Our contribution is a set of generalizing annotations that we have found by analyzing recurring patterns in open source projects. We show the usability of our system and the annotations by finding matches and applying generated transformations in real-world applications.",2012,0, 5666,The analysis of complex networks research based on scientific knowledge mapping,"In this paper, research papers on complex networks published between 2000 and 2011 are indexed based on the SCI database. From citation analysis, content analysis and statistical analysis, the knowledge mappings of major research countries and top-quality institutions are drawn using Pajek software. Through the knowledge mapping, the research frontier and scientific cooperation are detected. We also discuss the hot topics of complex network research and forecast the future directions in different countries. At the same time, some advice was given to Chinese researchers.",2012,0, 5667,Data mining T-RFLP profiles from urban water system sampling using self-organizing maps,"Descriptions of urban water system microbiological properties can range from single parameters such as microbial biomass to multiparameter qualitative and quantitative data that describes biochemical profiles, measurements of enzyme activities, and molecular analyses of microbial communities. Whilst most of the hydraulic and physico-chemical variables are quite well understood, measures of microbiological processes have so far been more difficult to use as part of decision support tools. The methods commonly used to assess the microbial quality of water and wastewater are mainly culture-dependent methods, which underestimate the actual microbial diversity within the system. To circumvent this limitation, DNA-based molecular techniques are now being used to analyze environmental samples.
In the past few decades, technological innovations have led to the development of a new biological research paradigm, one that is data intensive and computer-driven. A range of data driven tools have been applied for exploring the interrelationships between various types of variables. A number of studies have used Artificial Neural Networks (ANNs) to probe such complex data sets. This paper demonstrates how Kohonen self-organizing maps (SOM) can be used for data mining of microbiological data sources from urban water systems. Genetic signatures acquired by terminal restriction fragment length polymorphisms (TRFLP) were obtained from samples and then post processed by the T-Align software tool before being reduced in dimensionality with Principal Component Analysis (PCA). These datasets were then analyzed by SOM networks and additional characteristics were used in the map labeling. Initial results show that the visual output of the SOM analysis provides a rapid and intuitive means of exploring hypotheses for increased understanding and interpretation of microbial ecology.",2012,0, 5668,A Fault Tolerant Approach to Detect Transient Faults in Microprocessors Based on a Non-Intrusive Reconfigurable Hardware,"This paper presents a non-intrusive hybrid fault detection approach that combines hardware and software techniques to detect transient faults in microprocessors. Such faults have a major influence in microprocessor-based systems, affecting both data and control flow. In order to protect the system, an application-oriented hardware module is automatically generated and reconfigured on the system during runtime. When combined with fault tolerance techniques based on software, this solution offers full system protection against transient faults. A fault injection campaign is performed using a MIPS microprocessor executing a set of applications. HW/SW implementation in a reprogrammable platform shows smaller memory area and execution time overhead when compared to related works. Fault injection results show the efficiency of this method by detecting 100% of faults.",2012,0, 5669,Traveling-Wave-Based Line Fault Location in Star-Connected Multiterminal HVDC Systems,"This paper presents a novel algorithm to determine the location of dc line faults in an HVDC system with multiple terminals connected to a common point, using only the measurements taken at the converter stations. The algorithm relies on the traveling-wave principle, and requires the fault-generated surge arrival times at the converter terminals. With accurate surge arrival times obtained from time-synchronized measurements, the proposed algorithm can accurately predict the faulty segment as well as the exact fault location. Continuous wavelet transform coefficients of the input signal are used to determine the precise time of arrival of traveling waves at the dc line terminals. Performance of the proposed fault-location scheme is analyzed through detailed simulations carried out using the electromagnetic transient simulation software PSCAD. The algorithm does not use reflected waves for its calculations and therefore it is more robust compared to fault location algorithms previously proposed for teed transmission lines. 
Furthermore, the algorithm can be generalized to handle any number of line segments connected to the star point.",2012,0,6714 5670,Comparing maintainability evolution of object-oriented and aspect-oriented software product lines,"Software Product Line aims at improving productivity and decreasing realization times by gathering the analysis, design and implementation activities of a family of systems. Evaluating the quality attributes for SPL architectures is crucial, especially architecture maintainability, as SPLs are expected to have a longer lifetime span. Aspect-orientation offers a way of modularization by separating crosscutting concerns from non-crosscutting ones. Aspect-oriented programming is assumed to endorse better modularity and changeability of product lines than traditional variability mechanisms. In this paper, we show that change propagation probability (CP) is helpful and effective in assessing the design quality of software architectures. We propose to use the CP to assess the evolution of the architecture of software product lines through different releases. We use CP to investigate whether aspect oriented SPL has better maintainability evolution than object-oriented SPL.",2012,0, 5671,Harmonic study in electromagnetic voltage transformers,"Ferroresonance is a complex electrical phenomenon which may cause overvoltages and overcurrents in electrical power systems, disturbing the system reliability and continuous safe operation. The ability to predict or confirm ferroresonance depends primarily on the accuracy of the voltage transformer model used in the computer simulation. In this study, at first an overview of the subject in the literature is provided. Then, occurrence of ferroresonance and the resulting harmonics in a voltage transformer are simulated. The effect of iron core saturation characteristic on the occurrence of harmonic modes has been studied. The system under study has a voltage transformer rated 100VA, 275 kV. The nonlinear magnetization curve index of the autotransformer is chosen with q=7. The linear core loss of the transformer core is modeled with linear resistance. Harmonic modes in the proposed power system are analyzed using MATLAB software. The results show that harmonics are produced in the considered substation and have a great effect on voltage transformer failure.",2012,0, 5672,On prediction of defect rates,"The prediction of software reliability can determine the current reliability of a product, using statistical techniques based on the failure data, obtained during testing or system usability. The major problem in predicting software reliability is their high complexity, as well as the excessive limitations of existing models. Therefore, choosing the appropriate reliability prediction model is essential. The purpose of this article is to study the defect prediction models, using the Rayleigh function. This function forecasts the defect discovery rate as a function of time through the software development process. Starting from an existing static prediction model, we have introduced a new approach, using the time-scale. Also, we have considered two methods for computing the model parameters, and have compared them to the real data set. The results have validated this model as a good estimation of the defect rate behavior.
We will also suggest possible directions for extending the paper in the future.",2012,0, 5673,Curve fitting method for modeling and analysis of photovoltaic cells characteristics,This paper deals with the mathematical model to assess the performances of photovoltaic (PV) cells. The PV system characteristics are modeled and analyzed by using the curve fitting method referred to the different connections of PV cells and different solar irradiance. The results are compared with those resulting from measured data in a real case. Specific LabVIEWTM and MatlabTM software applications are implemented to prove the theoretical methods.,2012,0, 5674,Virtual instrumentation in power engineering,"Focusing on the investigation of virtual instrumentation systems, this paper explores certain software platforms to develop interface for virtual instruments, then assesses and puts side by side their performance. There are introduced controls generating methods, based on ActiveX/beans, for virtual instruments development. Authors attempts were converged in achieving the overall integration between virtual instrumentation components and third-party applications supporting COM technology.",2012,0, 5675,An initial study on ideal GUI test case replayability,"In this paper we investigate the effect of long-term GUI changes occurring during application development on the reusability of existing GUI test cases. We conduct an empirical evaluation on two complex, open-source GUI-driven applications for which we generate test cases of various lengths. We then assess the replayability of generated test cases using simulation on newer versions of the target applications and partition them according to the type of repairing change required for their reuse.",2012,0, 5676,Video based control of a 6 degrees-of-freedom serial manipulator,This paper presents a video based control system for a Fanuc M-6iB/2HS articulated robot. The system uses a CMUCam3 video camera connected to PC. The industrial robot is controlled via the TCP/IP protocol using a custom simulation software created in previous researches for the industrial robot. The simulation software implements additional classes designed to control and monitor the video camera. The paper presents in detail the calibration and testing stages. The system is capable to detect cylindrical objects of any stored color and is able to determine their position in the working space of the robot.,2012,0, 5677,"Simulations for wind, solar and pumping facilities and hybrid systems design with KOGAION","KOGAION is a computer modeling application that simplifies the task of designing distributed renewable energy generation systems - both on and off-grid. KOGAION's optimization and sensitivity analysis algorithms allows to evaluate the economic and technical feasibility of a photovoltaic, wind and hydro turbines technology options and to account for variations in technology costs and energy resource availability. It's adaptable to a wide variety of projects like wind, photovoltaic and hybrid energy systems, both for large on grid projects or small hybrid isolated projects (for a village, community-scale power systems or individual systems). KOGAION can model both the technical and economic factors involved in the project. For larger systems, it can provide an important overview that compares the cost and feasibility of different configurations. It is accessible to broad spectrum of users, from financial decision makers to engineers and others. 
More than other software applications, Kogaion supports all types of wind descriptions: by Weibull coefficients, probability of density function (pdf), time series (TS). Time references simulation is crucial for modeling variable resources, such as wind power. KOGAION's sensitivity analysis helps to determine the potential impact of uncertain factors wind speed on a given system, over time. It doesn't cover bio fuel, hydrogen, fuel cells.",2012,0, 5678,FEM simulation for Lamb wave evaluate the defects of plates,"The experimental method is hard to predict and explain the received Lamb signal of plates due to its dispersive nature. The method to evaluate the depth and defects in plates is investigated theoretically using the finite element method(FEM) in this paper. At first, the theories of Lamb waves in a plate was introduced. Then, the commercial FEM software, ABAQUS EXPLICIT, model lamb wave in plates. The monitor points of healthy plate and defective plate were predicted. The result shows that FEM simulation can effective evaluate the notch by delay time between reflection signal and first echo signal.",2012,0, 5679,Specifying Compiler Strategies for FPGA-based Systems,"The development of applications for high-performance Field Programmable Gate Array (FPGA) based embedded systems is a long and error-prone process. Typically, developers need to be deeply involved in all the stages of the translation and optimization of an application described in a high-level programming language to a lower-level design description to ensure the solution meets the required functionality and performance. This paper describes the use of a novel aspect-oriented hardware/software design approach for FPGA-based embedded platforms. The design-flow uses LARA, a domain-specific aspect-oriented programming language designed to capture high-level specifications of compilation and mapping strategies, including sequences of data/computation transformations and optimizations. With LARA, developers are able to guide a design-flow to partition and map an application between hardware and software components. We illustrate the use of LARA on two complex real-life applications using high-level compilation and synthesis strategies for achieving complete hardware/software implementations with speedups of 2.5 and 6.8 over software-only implementations. By allowing developers to maintain a single application source code, this approach promotes developer productivity as well as code and performance portability.",2012,0, 5680,A unifying framework for the definition of syntactic measures over conceptual schema diagrams,"There are many approaches that propose the use of measures for assessing the quality of conceptual schemas. Many of these measures focus purely on the syntactic aspects of the conceptual schema diagrams, e.g. their size, their shape, etc. Similarities among different measures may be found both at the intra-model level (i.e., several measures over the same type of diagram are defined following the same layout) and at the intermodel level (i.e., measures over different types of diagrams are similar considering an appropriate metaschema correspondence). In this paper we analyse these similarities for a particular family of diagrams used in conceptual modelling, those that can be ultimately seen as a combination of nodes and edges of different types. 
We propose a unifying measuring framework for this family to facilitate the measure definition process and illustrate its application on a particular type, namely business process diagrams.",2012,0, 5681,Program complexity metrics and programmer opinions,"Various program complexity measures have been proposed to assess maintainability. Only relatively few empirical studies have been conducted to back up these assessments through empirical evidence. Researchers have mostly conducted controlled experiments or correlated metrics with indirect maintainability indicators such as defects or change frequency. This paper uses a different approach. We investigate whether metrics agree with complexity as perceived by programmers. We show that, first, programmers' opinions are quite similar and, second, only few metrics and in only few cases reproduce complexity rankings similar to human raters. Data-flow metrics seem to better match the viewpoint of programmers than control-flow metrics, but even they are only loosely correlated. Moreover we show that a foolish metric has similar or sometimes even better correlation than other evaluated metrics, which raises the question how meaningful the other metrics really are. In addition to these results, we introduce an approach and associated statistical measures for such multi-rater investigations. Our approach can be used as a model for similar studies.",2012,0, 5682,A semantic relatedness approach for traceability link recovery,"Human analysts working with automated tracing tools need to directly vet candidate traceability links in order to determine the true traceability information. Currently, human intervention happens at the end of the traceability process, after candidate traceability links have already been generated. This often leads to a decline in the results' accuracy. In this paper, we propose an approach, based on semantic relatedness (SR), which brings human judgment to an earlier stage of the tracing process by integrating it into the underlying retrieval mechanism. SR tries to mimic human mental model of relevance by considering a broad range of semantic relations, hence producing more semantically meaningful results. We evaluated our approach using three datasets from different application domains, and assessed the tracing results via six different performance measures concerning both result quality and browsability. The empirical evaluation results show that our SR approach achieves a significantly better performance in recovering true links than a standard Vector Space Model (VSM) in all datasets. Our approach also achieves a significantly better precision than Latent Semantic Indexing (LSI) in two of our datasets.",2012,0, 5683,Toward an effective automated tracing process,"The research on automated tracing has noticeably advanced in the past few years. Various methodologies and tools have been proposed in the literature to provide automatic support for establishing and maintaining traceability information in software systems. This movement is motivated by the increasing attention traceability has been receiving as a de jure standard in software quality assurance. Following that effort, in this research proposal we describe several research directions related to enhancing the effectiveness of automated tracing tools and techniques. Our main research objective is to advance the state of the art in this filed. 
We present our suggested contributions through a set of incremental enhancements over the conventional automated tracing process, and briefly describe a set of strategies for assessing these contributions' impact on the process.",2012,0, 5684,Non-blocking N-version programming for message passing systems,"N-version programming (NVP) employs masking redundancy: N equivalent modules (called versions) are implemented independently and run concurrently. The results of their execution are adjudicated by a special component that defines the correct majority result and eliminates the results of the versions in which design faults have been triggered. The disadvantage of using these schemes is that they need to block the receiver process until each received message is confirmed by the other version, which results in high time overhead. In the case of variant response latencies, consisting of processing time and message transmission delay, these techniques would not be efficient. In this paper a new non-blocking NVP approach based on capturing the causality between requests and responses is proposed. This approach does not need to block the versions to confirm the output. The simulation results show that for acceptable values of the probability of failure per demand (pfd) and of simultaneous active requests, our approach has lower execution time.",2012,0, 5685,Multivariate logistic regression prediction of fault-proneness in software modules,"This paper explores additional features, provided by stepwise logistic regression, which could further improve the performance of a fault prediction model. Three different models have been used to predict fault-proneness in the NASA PROMISE data set and have been compared in terms of accuracy, sensitivity and false alarm rate: one with forward stepwise logistic regression, one with backward stepwise logistic regression and one without stepwise selection in logistic regression. Despite an obvious trade-off between sensitivity and false alarm rate, we can conclude that backward stepwise regression gave the best model.",2012,0, 5686,On software design for stochastic processors,"Much recent research [8, 6, 7] suggests significant power and energy benefits of relaxing correctness constraints in future processors. Such processors with relaxed constraints have often been referred to as stochastic processors [10, 15, 11]. In this paper we present three approaches for building applications for such processors. The first approach relies on relaxing the correctness of the application based upon an analysis of application characteristics. The second approach relies upon detecting and then correcting faults within the application as they arise. The third approach transforms applications into more error tolerant forms. In this paper, we show how these techniques that enhance or exploit the error tolerance of applications can yield significant power and energy benefits when computed on stochastic processors.",2012,0, 5687,An useful system based data mining of the ore mixing in sintering process,"The article introduces a system for ore mixing in the sintering process based on data mining and neural networks. A clustering method is applied to the saved history data of the sintering process to classify the data into three categories. A neural network model is then established for each cluster. Each new sample is first assigned to the corresponding cluster by Euclidean distance, and the quality index of the new sample is then predicted by that cluster's neural network model.
The method is more effective and accurate, and it can extend to other production of the enterprise.",2012,0, 5688,Design of the remote monitoring system for mine hoists,"In this paper we present a remote monitoring system for mine hoist based on computer technology, network technology and monitoring technology. The functional and structural design of the system is addressed. The system consists of data acquisition module, signal transmission module and software working on workstation and network server. An algorithm that can detect multiple faults simultaneously is proposed in detail. The software of the system is designed in a browser/server (Browse/Server, referred to as the B/S) architecture frame based on the Microsoft Visual Basic.NET platform. Such architecture for software is often used in industrial applications and it becomes the mainstream of remote monitoring and control architecture model. The validity and superiority of the proposed system are supported by field test.",2012,0, 5689,Design and Implementation of a Comprehensive Information System for Detection of Urban Active Faults,"This paper presents a practical comprehensive information system for detecting urban active faults using WebGIS, data fusion, the Spatial Database Engine (SDE), network and RAID5, and other technologies. This system is designed to provide data and a platform for further quantitative analysis of active faults, to protect against and mitigate earthquake disasters, and to serve as a scientific reference for future urban planning, hazard prevention, and emergency response in Hangzhou, China. The system network adopts ordinary three-tier architecture, a Web server, WebGIS application server and database server, to maximize the performance of the overall architecture. The system software adopts a three-layer structure that consists of both C/S and B/S: C/S for LAN and B/S for WAN. The system comprises five models: system management, database management, image and data services, information distribution, and results management. The system employs ArcGIS GeoDataBase and ArcSDE to effectively organize, store and manage four-category formats and 16 sub-databases. Technical issues in system development, such as multi-source heterogeneous data fusion, massive data manipulation, and system security, are analyzed, their solutions and system realization are provided. This system has successfully been used in the last two years in urban planning, hazard prevention, and emergency response.",2012,0, 5690,User-Centered Evaluation of Recommender Systems with Comparison between Short and Long Profile,"The growth of the social web poses new challenges and opportunities for recommender systems. The goal of Recommender Systems (RSs) is to filter information from a large data set and to recommend to users only the items that are most likely to interest and/or appeal to them. The quality of a RS is typically defined in terms of different attributes, the principal ones being relevance, novelty, serendipity and global satisfaction. Most existing works evaluate quality of Recommender Systems in terms of statistical factors that are algorithmically measured. This paper aims to explore whether (i) algorithmic measures of RS quality are in accordance with user-based measure and (ii) the user perceived quality of a RS is affected by the number of movies rated by the user. 
For this purpose we designed, developed and tested a web recommender system, TheBestMovie4You (http://www.moviers.it), which allows us to collect questionnaires about the quality of recommendations. We made a questionnaire and gave it to 240 subjects and we wanted to have as wide a set of users as possible using social web. In a experiment we asked the users to choose five movies (short profile), in a second to choose ten (long profile). Our results show that statistical properties fail in fully describing the quality of algorithms, because with user-centered metrics we can outline an algorithm's features that otherwise could not be detected. The comparison between the two phases highlighted a difference only in three cases out of twenty.",2012,0, 5691,A Framework for User Feedback Based Cloud Service Monitoring,"The increasing popularity of the cloud computing paradigm and the emerging concept of federated cloud computing have motivated research efforts towards intelligent cloud service selection aimed at developing techniques for enabling the cloud users to gain maximum benefit from cloud computing by selecting services which provide optimal performance at lowest possible cost. Given the intricate and heterogeneous nature of current clouds, the cloud service selection process is, in effect, a multi criteria optimization or decision-making problem. The possible criteria for this process are related to both functional and nonfunctional attributes of cloud services. In this context, the two major issues are: (1) choice of a criteria-set and (2) mechanisms for the assessment of cloud services against each criterion for thorough continuous cloud service monitoring. In this paper, we focus on the issue of cloud service monitoring wherein the existing monitoring and assessment mechanisms are entirely dependent on various benchmark tests which, however, are unable to accurately determine or reliably predict the performance of actual cloud applications under a real workload. We discuss the recent research aimed at achieving this objective and propose a novel user-feedback-based approach which can monitor cloud performance more reliably and accurately as compared with the existing mechanisms.",2012,0, 5692,Protein Structure Alignment: Is There Room for Improvement?,"Recent years have seen rapid development of methods for approximate and optimal solutions to the protein structure alignment problem. Albeit slow, these methods can be extremely useful in assessing the accuracy of more efficient, heuristic algorithms. We utilize a recently developed approximation algorithm for protein structure matching to demonstrate that a deep search of the protein superposition space leads to increased alignment accuracy with respect to many well established measures of alignment quality. The results of our study suggest that a large and important part of the protein superposition space remains unexplored by current techniques for protein structure alignment.",2012,0, 5693,Assessing the Impact of E-Government on Users: A Case Study of Indonesia,"This paper aims to identify problems with the online information services for assessing impacts of the e-Government website. Some governments claimed that the phenomenal developing of e-Government system in some countries was driven mainly by the demand in the society especially of the phenomenal growth of computer and mobile phone users. However, this claim requires further validation with socio-economic and web usability approach to make clear what are actual factors? 
To achieve this goal, it is necessary to analyze several factors within a broader socio-economic and usability framework, to find out the most dominant factors that influenced the phenomenal growth of e-Government systems and their users. Based on a quantitative analysis approach, this study analyzes users' preferences toward the e-Government system, and the factors influencing users of e-Government. This paper discusses the development of e-Government in Indonesia, and a case study to evaluate information quality, usability and impacts of the e-Government of Indonesia's central government institutions. The paper also describes the design of the evaluation sheet, and the data collection and analysis strategies of a case study evaluating Indonesian central government e-Government portals.",2012,0, 5694,Voice Quality Assessment and Visualization,"Commercial voice analysis systems are expensive and complicated. This equipment is only available at medical centers and needs to be operated by well trained physicians. Thus the treatment costs of speech disorders are usually high. To improve this situation, we develop a software system dedicated to the assessment of human voice qualities. Our system is implemented on an ordinary personal computer. High precision electronic devices are not required for acquiring and analyzing voices. Therefore, the installation costs are low. Our system relies on four voice parameters to evaluate the qualities of voice signals. These parameters are fundamental frequency, jitter, shimmer, and harmonic-to-noise ratio. These measurements are widely used in otolaryngology to assess the pitch, variation of frequency, perturbation of amplitude, and harmony of the human voice. Our system extracts this information from voice data and displays it using graphical media. Consequently, the qualities of voice are more comprehensible. Users with little training and background knowledge can operate this system in their living rooms to assess their voices.",2012,0, 5695,Fault-Tolerant Distributed Knowledge Discovery Services for Grids,"Fault tolerance is an important issue in service oriented architectures like Grid and Cloud systems, where many and heterogeneous machines are used. In this paper we present a flexible failure handling framework which extends a service-oriented architecture for Distributed Data Mining previously proposed, addressing the requirements for handling fault tolerance in service-oriented Grids. In particular, two different solutions are described and experimentally evaluated on a real Grid setting, aimed at assessing their effectiveness and performance.",2012,0, 5696,Analysis of Gromacs MPI Using the Opportunistic Cloud Infrastructure UnaCloud,"This paper shows and analyzes the execution of a molecular dynamics application that uses the Message Passing Interface (MPI) mechanism-Gromacs MPI-over a cloud infrastructure (UnaCloud) supported by desktop computers. The main objective is to find a solution to support Gromacs-MPI on UnaCloud. This coupling was carried out in order to predict and to redefine the Helicobacter pylori Cag A protein 3D structure. Although the structure of eight indigenous sequences was obtained, resource discovery and failure recovery on the opportunistic infrastructure were handled manually, restricting the application scope of the solution.
To eliminate these restrictions, a mechanism to automate the process execution on UnaCloud was identified and proposed.",2012,0, 5697,Evaluating fault tolerance in security requirements of web services,"It is impossible to identify all of the internal and external security faults (vulnerabilities and threats) during the security analysis of web services. Hence, complete fault prevention would be impossible and consequently a security failure may occur within the system. To avoid security failures, we need to provide a measurable level of fault tolerance in the security requirements of target web service. Although there are some approaches toward assessing the security of web services but still there is no well-defined evaluation model for security improvement specifically during the requirement engineering phase. This paper introduces a measurement model for evaluating the degree of fault tolerance (FTMM) in security requirements of web services by explicitly factoring the mitigation techniques into the evaluation process which eventually contributes to required level of fault tolerance in security requirements. Our approach evaluates overall tolerance of the target service in the presence of the security faults through evaluating the computational security requirement model (SRM) of the service. We measure fault tolerance of the target web service by taking into consideration the cost, technical ability, impact and flexibility of the security goals established to mitigate the security faults.",2012,0, 5698,Security metrics to improve misuse case model,"Assessing security at an early stage of the web application development life cycle helps to design a secure system that can withstand malicious attacks. Measuring security at the requirement stage of the system development life cycle assists in mitigating vulnerabilities and increasing the security of the developed system, which reduces cost and rework. In this paper, we present a security metrics model based on the Goal Question Metric approach, focusing on the design of the misuse case model. The security metrics model assists in examining the misuse case model to discover and fix defects and vulnerabilities before moving to the next stages of system development. The presented security metrics are based on the OWASP top 10-2010, in addition to misuse case modelling antipattern.",2012,0, 5699,Component-based software system reliability allocation and assessment based on ANP and petri,"There are various dependencies in the component-based software systems. ANP can describe the complex structure of system using network representation for system structure. Therefore, it can assign reliability target to each component by comparing the relative importance to the users and considering constraints cost. However, ANP method can only estimate software system reliability statically and indirectly in the design stage. During the service process of software, how to assess the software system reliability, and further to validate the rationality of software reliability allocation in the design stage are dynamic problems, which require a dynamic method. In this paper, firstly, the dynamics model for state changes of software system reliability over time is developed using Petri network, then the software system reliability can be assessed dynamically combined with the dependency graph of components. 
Finally, the allocation and assessment for component-based software system reliability by ANP combining Petri network is presented and discussed by one example.",2012,0, 5700,A Physical Model based research for fault diagnosis of gear crack,"Because of the advantage of gear, gearbox was used widely. But its fault also brought many losses of the production and society. It was necessary to research and analysis on the dynamical behavior of the gear system. One kind of gear fault was numerically simulated so that the influence of gear fault to change of vibration state was studied theoretically, and the symptom generated by faults was found. A finite element model with crack in gear roots was established with mixed meshing of singularity and isoparametric elements. The automatic analysis program of crack propagation was developed based on the ABAQUS software and linear elasticity fracture mechanics. The crack propagation of involutes' gear was simulated. The structural experiments of fault crack with different types and sizes were done, the vibration signals of the fault gear body and system were tested, and the dynamical characteristics of the structure and system were gained. The results were compared to that of simulation and theory, the validity of the theoretic and simulative was testified and the reliability was proved. There was important guidance for predicting the residual life of the gear with crack.",2012,0, 5701,Design and development of the reliability prediction software for smart meters,"By analyzing the characteristics and specifying the reliability prediction process of smart meters, the reliability prediction software for smart meters is designed and developed in this paper. Three methods to predict the reliability of smart meters based on Telcordia SR-332, SN29500 and the combination of Telcordia SR-332 and SN29500 are provided in this software which can quickly calculate the reliability indicators including failure rate and MTTF on the component level, unit level and system level. The software consists of the data interface module, database management module, reliability prediction module, results output module and help module. The application of this software will help in shortening the development cycle and reducing the research and development cost of smart meters and will become a useful tool in achieving the reliability growth of smart meters.",2012,0, 5702,Reliability study of the digital rocket-ground signal detection and analysis system,"The digital rocket-ground signal detection and analysis system is mainly used to remote and local real-time detect and acquire the signals from the key equipments of the control system of the carrier rockets. As a supporting product of the launch vehicles, this system has strict reliability requirements for the hardware and software. With the design of a multi-channel, integrated, real-time detection scheme, the methods of how to improve the hardware and software reliability of the system was discussed. 
The reliability of the related hardware was predicted using the stress analysis method at the component level, unit circuit level, function module level and system level, and the results show that the reliability of the system is high and can meet the system requirements.",2012,0, 5703,Design of the electric properties test system of capacitance proximity fuze for HE,"In order to scientifically estimate the quality variation law of the stockpile capacitance proximity fuze for HE and provide a technical instrument for the army to monitor the quality of the capacitance proximity fuze for HE, this article designs an electric properties test system for the capacitance proximity fuze for HE. This test system is designed based on virtual instrument technology and uses a varactor network as its core. The article first analyses the basic principles by which the capacitance fuze detects targets. Then the article points out that the essence of actualizing the normal function of the fuze in laboratory conditions is to simulate the target function process on the fuze. The purpose of doing this is to make the detector of the test fuze generate a variable demodulation signal just as if the test fuze encountered a real target. Based on the above principle, this article puts forward the overall design scheme of the electric properties test system of the capacitance proximity fuze for HE and designs in detail the virtual instrument test platform, the signal conditioning circuit, the target effect simulation platform and the power supply. Taking the varactor network as the core, using the shielding box and the test interface equipment, the target effect simulation platform is constructed. On the basis of the virtual instrument technique, the test platform of the virtual instrument is constructed. According to the modular design principle, the test software is developed on the LabVIEW 8.5 platform. This article mainly designs modules for simulation bias voltage generation, target signature signal fitting, data output and acquisition, data analysis and display, data storage, and data retrieval and display. Lastly, the engineering prototype of the test system is manufactured. In order to validate the function of the test system, we randomly selected 5 capacitance proximity fuzes for HE as the test objects and performed the functionality experiment. The test results indicate that the design scheme is reasonable, the technique is feasible, and the test system can satisfy the needs of electric properties testing of the capacitance proximity fuze for HE.",2012,0, 5704,Link-level reliability control for wireless electrocardiogram monitoring in indoor hospital,"Reliability is an essential quality of safety-critical wireless systems for medical applications. However, wireless links are typically prone to bursts of errors, with characteristics which vary over time. We propose a wireless system suitable for realtime remote patient monitoring in which the necessary reliability is achieved by an efficient error control in the link layer. We have paired an example electrocardiography application to this wireless system. We also developed a tool chain to determine the reliability, in terms of the packet-delivery ratio, for various combinations of system parameters.
A realistic case study, based on data from the MIT-BIT arrhythmia database, shows that the proposed wireless system can achieve an appropriate level of reliability for electrocardiogram monitoring if link-level error control is correctly implemented.",2012,0, 5705,On extended Similarity Scoring and Bit-vector Algorithms for design smell detection,"The occurrence of design smells or anti-patterns in software models complicate development process and reduce the software quality. The contribution proposes an extension to Similarity Scoring Algorithm and Bit-vector Algorithm, originally used for design patterns detection. This paper summarizes both original approaches, important differences between design patterns and anti-patterns structures, modifications and extensions of algorithms and their application to detect selected design smells.",2012,0, 5706,Evaluation of FDTD modelling as a tool for predicting the response of UHF partial discharge sensors,"Ultra high frequency (UHF) partial discharge sensors are important tools for condition monitoring and fault location in high voltage equipment. There are many designs of UHF sensors which can detect electromagnetic waves that radiate from partial discharge sources. The general types of UHF PD sensors are disc, monopole, probe, spiral, and conical types with each type of sensor having different characteristics and applications. Computational modelling of UHF PD sensors using Finite-difference time-domain (FDTD) simulation can simplify the process of sensor design and optimisation, reducing the development cost for repeated testing (in order to select the best materials and designs for the sensors), and giving greater insight into how the mechanical design and mounting will influence frequency response. This paper reports on an investigation into the application of FDTD methods in modelling and calibrating UHF PD sensors. This paper focuses on the disc-type sensor where the sensor has been modelled in software and the predicted responses are compared with experimental measurements. Results indicate that the FDTD method can accurately predict the output voltages and frequency responses of disc-type sensors. FDTD simulation can reduce reliance upon costly experimental sensor prototypes, leading to quicker assessment of design concepts, improved capabilities and reduced development costs.",2012,0, 5707,Microprocessor Soft Error Rate Prediction Based on Cache Memory Analysis,"Static raw soft-error rates (SER) of COTS microprocessors are classically obtained with particle accelerators, but they are far larger than real application failure rates that depend on the dynamic application behavior and on the cache protection mechanisms. In this paper, we propose a new methodology to evaluate the real cache sensitivity for a given application, and to calculate a more accurate failure rate. This methodology is based on the monitoring of cache accesses, and requires a microprocessor simulator. It is applied in this paper to the LEON3 soft-core with several benchmarks. Results are validated by fault injections on one implementation of the processor running the same programs: the proposed tool predicted all errors with only a small over-estimation.",2012,0,5334 5708,Expertus: A Generator Approach to Automate Performance Testing in IaaS Clouds,"Cloud computing is an emerging technology paradigm that revolutionizes the computing landscape by providing on-demand delivery of software, platform, and infrastructure over the Internet. 
Yet, architecting, deploying, and configuring enterprise applications to run well on modern clouds remains a challenge due to associated complexities and non-trivial implications. The natural and presumably unbiased approach to these questions is thorough testing before moving applications to production settings. However, thorough testing of enterprise applications on modern clouds is cumbersome and error-prone due to a large number of relevant scenarios and difficulties in testing process. We address some of these challenges through Expertus---a flexible code generation framework for automated performance testing of distributed applications in Infrastructure as a Service (IaaS) clouds. Expertus uses a multi-pass compiler approach and leverages template-driven code generation to modularly incorporate different software applications on IaaS clouds. Expertus automatically handles complex configuration dependencies of software applications and significantly reduces human errors associated with manual approaches for software configuration and testing. To date, Expertus has been used to study three distributed applications on five IaaS clouds with over 10,000 different hardware, software, and virtualization configurations. The flexibility and extensibility of Expertus and our own experience on using it shows that new clouds, applications, and software packages can easily be incorporated.",2012,0, 5709,Formalizing the Cloud through Enterprise Topology Graphs,"Enterprises often have no integrated and comprehensive view of their enterprise topology describing their entire IT infrastructure, software, on-premise and off-premise services, processes, and their interrelations. Especially due to acquisitions, mergers, reorganizations, and outsourcing there is no clear 'big picture' of the enterprise topology. Through this lack, management of applications becomes harder and duplication of components and information systems increases. Furthermore, the lack of insight makes changes in the enterprise topology like consolidation, migration, or outsourcing more complex and error prone which leads to high operational cost. In this paper we propose Enterprise Topology Graphs (ETG) as formal model to describe an enterprise topology. Based on established graph theory ETG bring formalization and provability to the cloud. They enable the application of proven graph algorithms to solve enterprise topology research problems in general and cloud research problems in particular. For example, we present a search algorithm which locates segments in large and possibly distributed enterprise topologies using structural queries. To illustrate the power of the ETG approach we show how it can be applied for IT consolidation to reduce operational costs, increase flexibility by simplifying changes in the enterprise topology, and improve the environmental impact of the enterprise IT.",2012,0, 5710,An anti-islanding for multiple photovoltaic inverters using harmonic current injections,"Islanding phenomenon is one of the major concerns in the distributed generator (DG) systems. Islanding occurs when one or more DGs supplied local loads without connecting to the grid utility. This unintentional condition must be detected with the minimum time possible to prevent the hazardous effects to a person worked in the system or equipment connected to the network. The two main methods of anti-islanding are passive and active methods. The passive methods are excellent in the speed of detection and power quality. 
However, the passive methods have a relatively large non-detection zone (NDZ). On the other hand, the active methods have a relatively small NDZ and still provide a good speed of detection. This paper proposes an anti-islanding technique for multiple grid-connected inverters in a photovoltaic (PV) system based on an active method. By injecting harmonic currents with the same or different harmonic orders and monitoring the change at the point of common coupling (PCC), the islanding condition can be diagnosed. Due to its efficiency in evaluating specific frequency components, the Goertzel algorithm is employed in this paper to identify frequency components. Simulation results using PSIM software confirm the effectiveness of the technique in accordance with the requirements of the interconnection standards, IEEE Std. 1547 and IEEE Std. 929-2000.",2012,0, 5711,Analysis of islanding detection methods for grid-connected distributed generation in provincial electricity authority distribution systems,"This paper presents a comparison study of two major distributed generation islanding detection methods, including rate of change of frequency (ROCOF) and over/under frequency (OUF) techniques, which could be used for Provincial Electricity Authority (PEA) distribution systems. The analysis is based on the detection time in different scenarios and false operations. Two practical distribution systems, including 133-bus and 54-bus systems from PEA, are used as case studies and DIgSILENT PowerFactory software is used as the simulation tool. Simulation and test results show that the ROCOF technique can detect islanding phenomena faster than the OUF technique when the system has only one distributed generator. However, the ROCOF technique is more sensitive to false operation than the OUF technique.",2012,0, 5712,Design of fault detection system for high voltage cable,"This paper proposes the design of an algorithm to detect faults in underground cables for blocking auto-reclosing in distribution feeders of the Metropolitan Electricity Authority (MEA). The algorithm consists of two major steps. First, an overcurrent detection algorithm is applied to compare the fault current with the current setting. Next, impedance detection is used to identify the fault location based on a comparison between the measured and the setting impedance values. Finally, these results (current & impedance) are used to make a decision on auto-recloser blocking. This paper also shows the results from software simulation and hardware simulation.",2012,0, 5713,Bridge deck survey with high resolution Ground Penetrating Radar,"Ground Penetrating Radar applications for structure surveying started to grow in the 1980s; amongst these, initial civil engineering applications included condition assessment of highway pavements and their foundations, with applications to structural concrete focusing on inspection of bridge decks. There are many factors that can cause or contribute to the damage of the top layer of concrete in bridge decks including the corrosion of steel rebar, freeze and thaw cycles, traffic loading, initial damage resulting from poor design and/or construction, and inadequate maintenance. When applied to the analysis of bridge decks, GPR can be successfully used for detecting internal corrosion of steel reinforcement within the concrete deck, which can be an indicator of poor quality overlay bonding or delamination at the rebar level.
Therefore, this equipment has the ability to gain information about the condition of bridge decks in a more rapid and less costly fashion than coring and will perhaps yield a more reliable assessment than current geotechnical procedures. However, this application requires suitably designed equipment; for instance, optimization of antenna orientation to take advantage of signal polarization is an important feature for successfully locating reinforcing bars in a time-depth slice. Novel equipment has recently been developed to enable the nondestructive analysis of bridge decks; the IDS RIS Hi-Bright runs two arrays of high frequency sensors featuring a rapid, but very dense data collection, thus dramatically increasing the resolution of the GPR survey. Antenna dipoles in these arrays are deployed to collect two data sets with orthogonal antenna orientations, one with the electric field parallel to the scanning direction (VV), the other perpendicular to it (HH); in this way, the equipment is capable of collecting 16 profiles, 10 cm spaced in a single swath, thus collecting an incredible amount of information. Dedicated data analysis software provides- a 2-D tomography of the underground layers and a 3-D view of the surveyed volume. Main output include the determination of pavement and concrete thickness, the detection of moist areas as well as concrete damage and the location of rebars and ducts within the concrete slab.",2012,0, 5714,A study of the interference stripe phenomenon caused by electromagnetic wave propagating in multi-layered medium,"The interference stripe appears in the penetrating scan experiments of multi-layered medium detection, the experiments are implemented by the single-frequency plane array or the moved single transmitting/receiving (T/R) antenna. Based on the medium boundary conditions of Maxwell equations from the classic electromagnetic theory, the interference stripe phenomenon is studied in the paper. And the commercial simulation software CST Microwave Studio is used to construct the simulation models. Under the near-field condition, the propagation and reflection of electromagnetic wave in the monolayer medium and the multi-layered media are simulated. Two situations, the electromagnetic wave being perpendicular and oblique to the dielectric plate, are specially carried out to compare. The echo waves received by the antenna changed position are analyzed. It can be found that there is no interference stripe when the incident wave is perpendicular to the dielectric plate. However, the interference stripe phenomenon will arise when there is a certain angle between the incident wave and the normal of the dielectric plate. This can be proved by the scan experiments results. The existence of the interference stripe will cause serious disturbance to the target echo, because the former is greater than the latter. So it will affect the performance of target detection and identification. From the point of view of the electromagnetic wave incidence and reflection theorem, the existence reason of this phenomenon is discussed. The relationship between the incident angle and interference stripe is illuminated. The study of this phenomenon will be helpful for the signal processing of the plane array and the moved single T/R antenna scanning system, such as the interference stripe phenomenon reducing and the target imaging quality improving. 
It can also be helpful for solving some problems encountered in the highways detecting, archeology, geology survey, etc.",2012,0, 5715,Pipe condition assessments using Pipe Penetrating Radar,"This paper describes the development of Pipe Penetrating Radar (PPR), the underground in-pipe application of GPR, a non-destructive testing method that can detect defects and cavities within and outside mainline diameter (>;18 in/450mm) non-metallic (concrete, PVC, HDPE, etc.) underground pipes. The method uses two or more high frequency GPR antennae carried by a robot into underground pipes. The radar data are transmitted to the surface via fibre optic cable and are recorded together with the output from CCTV (and optionally sonar and laser). Proprietary software analyzes the data and pinpoints defects or cavities within and outside the pipe. Thus the testing can identify existing pipe and pipe bedding symptoms that can be addressed to prevent catastrophic failure due to sinkhole development and can provide useful information about the remaining service life of the pipe, enabling accurate predictability of needed intervention or the timing of replacement. This reliable non-destructive testing method significantly impacts subsurface infrastructure condition based asset management by supplying previously unattainable measurable conditions.",2012,0, 5716,An evolutionary algorithm for optimization of affine transformation parameters for dose matrix warping in patient-specific quality assurance of radiotherapy dose distributions,"Patient-specific quality assurance for sophisticated treatment delivery modalities in radiation oncology, such as intensity-modulated radiation therapy (IMRT), necessitates the measurement of the delivered dose distribution, and the subsequent comparison of the measured dose distribution with that calculated by the treatment planning software. The degree to which the calculated dose distribution is reproduced upon delivery is an indication that the treatment received by the patient is acceptable. A new method for the comparison between the planned and delivered dose distributions is introduced; it assesses the agreement between planar two-dimensional dose distributions. The method uses an evolutionary algorithm to optimize the parameters of the affine transformation, consisting of translation, rotation, and dilation (or erosion), that will warp the measured dose distribution to the planned dose distribution. The deviation of the composite transformation matrix from the identity matrix is an indication of an average geometrical error in the delivered dose distribution, and represents a systematic error in dose delivery. The residual errors in radiation dose at specific points are local errors in dose delivery. The method was applied to IMRT fields consisting of horizontal intensity bands. Analysis shows the method to be a promising tool for patient-specific quality assurance.",2012,0, 5717,Dual scheme based mathematical modeling of Magnetically Controlled Shunt Reactors 6500 kV,"Topicality of mathematic modeling of Magnetically Controlled Shunt Reactors (MCSR) as a network element using common existing software elements is shown in the article. Principles of the MCSR dual scheme creation as a thyristor controlled reactor or a static var compensator (in case of modeling of Source of Reactive Power, including a MCSR and a capacitor bank) are explained. Principles of development the mathematic model of the MCSR dual scheme are demonstrated. 
Example of development and application of MCSR dual mathematical model for calculating steady-state modes of high-voltage electric grid is shown. The potential of the MCSR dual mathematic model application for development of the algorithm of MCSR (SRP) automatic control at the point of connection to electrical grid is assessed.",2012,0, 5718,Development of the algorithm and software MVES-TV 2012 for assessment of touch voltage in MV networks with compensated neutral earthing,"The paper presents an algorithm for assessment of touch voltage in medium voltage networks with compensated neutral earthing. Proposed algorithm takes into account power line type (overhead line, underground cable line), earth fault value, number of substations, etc. Based on this algorithm, is developed a new software MVES-TV 2012 (Medium Voltage Electrical System Touch Voltage) by the authors of this paper. Calculation with this software ensures the safety of human life in any switchgear place to which persons have legitimate accesses. This software can be useful for distribution network project and exploiting engineers when trying to assess touch voltage in medium voltage networks with compensated neutral earthing.",2012,0, 5719,Trustworthy Web Service Selection Using Probabilistic Models,"Software architectures of large-scale systems are perceptibly shifting towards employing open and distributed computing. Service Oriented Computing (SOC) is a typical example of such environment in which the quality of interactions amongst software agents is a critical concern. Agent-based web services in open and distributed architectures need to interact with each other to achieve their goals and fulfill complex user requests. Two common tasks are influenced by the quality of interactions among web services: the selection and composition. Thus, to ensure the maximum gain in both tasks, it is essential for each agent-based web service to maintain a model of its environment. This model then provides a means for a web service to predict the quality of future interactions with its peers. In this paper, we formulate this model as a machine learning problem which we analyze by modeling the trustworthiness of web services using probabilistic models. We propose two approaches for trust learning of single and composed services; Bayesian Networks and Mixture of Multinomial Dirichlet Distributions (MMDD). The effectiveness of our approaches is empirically assessed using a simulation study. Our results show that representing the quality of a web service by Multinomial Dirichlet Distribution (MDD) provides high flexibility and accuracy in modeling trust. They also show that using our approaches to estimate trust enhances web services selection and composition.",2012,0, 5720,A Hybrid Diagnosis Approach for QoS Management in Service-Oriented Architecture,"Service flow in SOA systems need to detect quality of service (QoS) problems and to guarantee end-to-end performance. In previous work, we have proposed two faulty service identification methods: a dependency matrix based diagnosis and a Bayesian network based diagnosis. In this paper, we present a hybrid diagnosis to achieve high diagnosis accuracy and low diagnosis cost. The hybrid diagnosis reduces the problem size by applying dependency matrix based diagnosis result in Bayesian network and excluding services that are not critical to the end-to-end QoS from the diagnosis. 
Our experimental results show that the accuracy of the hybrid diagnosis is similar to the Bayesian network diagnosis yet reduces more than 90% of the diagnosis time.",2012,0, 5721,Location-Aware Collaborative Filtering for QoS-Based Service Recommendation,"Collaborative filtering is one of widely used Web service recommendation techniques. In QoS-based Web service recommendation, predicting missing QoS values of services is often required. There have been several methods of Web service recommendation based on collaborative filtering, but seldom have they considered locations of both users and services in predicting QoS values of Web services. Actually, locations of users or services do have remarkable impacts on values of QoS factors, such as response time, throughput, and reliability. In this paper, we propose a method of location-aware collaborative filtering to recommend Web services to users by incorporating locations of both users and services. Different from existing user-based collaborative filtering for finding similar users for a target user, instead of searching entire set of users, we concentrate on users physically near to the target user. Similarly, we also modify existing service similarity measurement of collaborative filtering by employing service location information. After finding similar users and services, we use the similarity measurement to predict missing QoS values based on a hybrid collaborative filtering technique. Web service candidates with the top QoS values are recommended to users. To validate our method, we conduct series of large-scale experiments based on a real-world Web service QoS dataset. Experimental results show that the location-aware method improves performance of recommendation significantly.",2012,0, 5722,An Approach of QoS-Guaranteed Web Service Composition Based on a Win-Win Strategy,"In Web service composition, the benefit conflicts between the user and the service provider ask the so-called both win to be supported. To address this concern, a novel QoS-guaranteed service composition approach based on a win-win strategy is proposed in this paper. First, a QoS model based on the probability interval is built to adapt to the dynamic nature of the Internet, and a corresponding user satisfaction evaluation method is designed. Next, a mathematical model of service composition based on game theory is proposed. Finally, a Genetic Algorithm (GA) is used to search an appropriate composite service with the Pareto optimum under the Nash equilibrium on both the user utility and the service provider utility achieved or approached. Simulation results have shown that the proposed approach is feasible and effective.",2012,0, 5723,Extending the Reliability of Wireless Sensor Networks through Informed Periodic Redeployment,"This paper investigates the reliability of wireless sensor networks, deployed over a square area, in regards to two aspects: network connectivity and node failures. Analyzing the phenomenon known as the border effects on the connectivity of such networks, we derive exact expressions for the expected effective connectivity degree of border nodes. We show that the relative average number of neighbors for nodes in the borders is independent of the node transmission range and of the overall network node density. 
Assuming a network composed of N uniformly distributed nodes over a square area of side L, our simulation experiments demonstrate that the connectivity of the overall network is dominated by the average node degree in the corner borders of the square network area. Using this result, and considering sensor node failure rates, we derive analytical expressions for the mean time to disconnect (MTTD) and the mean number of sensors remaining (MNSR) upon disconnection for a given network. For precise reliability estimates we also calculate the sensor redeployment period T and the number of sensors per redeployment N that should be effected in order to keep the network continuously connected with probability higher than 99%. We then run additional simulations for a network subject to sensor failures to obtain experimental MTTD and MNSR values, which we found to be very close to the analytically derived ones. These experiments also confirmed that periodic sensor redeployments characterized by the pair (N, T), resulting from our analysis, can continuously extend the reliability of wireless sensor networks.",2012,0, 5724,TIL: Mutation-based Statistical Test Inputs Generation for Automatic Fault Localization,"Automatic Fault Localization (AFL) is a process to locate faults automatically in software programs. Essentially, an AFL method takes as input a set of test cases including failed test cases, and ranks the statements of a program from the most likely to the least likely to contain a fault. As a result, the efficiency of an AFL method depends on the ""quality"" of the test cases used to rank statements. More specifically, in order to improve the accuracy of their ranking within test budget constraints, we have to ensure that program statements are executed by a reasonably large number of test cases which provide a coverage as uniform as possible of the input domain. This paper proposes TIL, a new statistical test inputs generation method dedicated to AFL, based on constraint solving and mutation testing. Using mutants where the locations of injected faults are known, TIL is able to significantly reduce the length of an AFL test suite while retaining its accuracy (i.e., the code size to examine before spotting the fault). In order to address the motivations stated above, the statistical generator objectives are two-fold: 1) each feasible path of the program is activated with the same probability, 2) the sub domain associated to each feasible path is uniformly covered. Using several widely used ranking techniques (i.e., Tarantula, Jaccard, Ochiai), we show on a small but realistic program that a proof-of-concept implementation of TIL can generate test sets with significantly better fault localization accuracy than both random testing and adaptive random testing. We also show on the same program that using mutation testing enables a 75% length reduction of the AFL test suite without decrease in accuracy.",2012,0, 5725,Semi-Automatic Security Testing of Web Applications from a Secure Model,"Web applications are a major target of attackers. The increasing complexity of such applications and the subtlety of today's attacks make it very hard for developers to manually secure their web applications. Penetration testing is considered an art; the success of a penetration tester in detecting vulnerabilities mainly depends on his skills. Recently, model-checkers dedicated to security analysis have proved their ability to identify complex attacks on web-based security protocols.
However, bridging the gap between an abstract attack trace output by a model-checker and a penetration test on the real web application is still an open issue. We present here a methodology for testing web applications starting from a secure model. First, we mutate the model to introduce specific vulnerabilities present in web applications. Then, a model-checker outputs attack traces that exploit those vulnerabilities. Next, the attack traces are translated into concrete test cases by using a 2-step mapping. Finally, the tests are executed on the real system using an automatic procedure that may request the help of a test expert from time to time. A prototype has been implemented and evaluated on WebGoat, an insecure web application maintained by OWASP. It successfully reproduced Role-Based Access Control (RBAC) and Cross-Site Scripting (XSS) attacks.",2012,0, 5726,An Investigation of Classification-Based Algorithms for Modified Condition/Decision Coverage Criteria,"During software development, white-box testing is used to examine the internal design of the program. One of the most important aspects of white-box testing is code coverage. Among various test coverage measurements, the Modified Condition/Decision Coverage (MC/DC) is a structural coverage measure and can be used to assess the adequacy and quality of the requirements-based testing (RBT) process. NASA has proposed a method to select the needed test cases for satisfying this criterion. However, there may be some flaws in NASA's method. That is, the selected test cases may not satisfy the original definition of the MC/DC criterion in some particular situations and perhaps cannot detect errors completely. On the other hand, NASA's method may fail to detect some operator errors. For example, we may not be able to detect cases where or is incorrectly coded for xor. Additionally, this method is too complex and could take a lot of time to obtain the needed test cases. In this paper, we propose a classification-based algorithm to select the needed test cases. First, test cases are classified based on the outcome value of the expression and the target condition. After classifying all test cases, MC/DC pairs can be found quickly, conveniently and effectively. Also, if there are some missing (unfound) test cases, our proposed classification-based method can suggest to developers what kinds of test cases have to be generated. Finally, some experiments are performed based upon real programs to evaluate the performance and effectiveness of our proposed classification-based algorithm.",2012,0, 5727,Study of Safety Analysis and Assessment Methodology for AADL Model,"This paper focuses on the safety modeling of embedded system architecture using AADL (Architecture Analysis and Design Language). For further integration of safety analysis and system modeling, we propose a new approach to evaluate and assess the safety property of embedded systems quantitatively. We establish the safety model of embedded systems by extending AADL with a fault model, identify causal relationships between elementary failure modes, put forward a formal method to transform this safety model into a DSPN (Deterministic Stochastic Petri Net) model for quantitative analysis, and define transformation rules to support safety assessment automatically.
A fire alarm system is discussed for further explanation.",2012,0, 5728,Paradigm in Verification of Access Control,"Access control (AC) is one of the most fundamental and widely used requirements for privacy and security. Given a subject's access request on a resource in a system, AC determines whether this request is permitted or denied based on AC policies (ACPs). This position paper introduces our approach to ensure the correctness of AC using verification. More specifically, given a model of an ACP, our approach detects inconsistencies between models, specifications, and expected behaviors of AC. Such inconsistencies represent faults (in the ACP), which we aim to detect before ACP deployment.",2012,0, 5729,A magnitude-phase detection method for grid-connected voltage of wind power generation system,"In order to maintain low-voltage ride-through under grid faults for a wind power generation system, the magnitude and phase information of the system's grid-connected voltage must be detected rapidly and precisely so that the control subsystem of the wind power generation system can tackle the faults. The conventional three-phase software phase-locked loop (SPLL) has a low speed of dynamic response because of the effects of negative-sequence and harmonic components. It also cannot accurately extract the fundamental positive-sequence magnitude and phase of the three-phase voltage. This paper proposes a novel three-phase magnitude-phase detection method based on the PQR transformation. The proposed method can decouple the fundamental positive-sequence component and negative-sequence component of the three-phase voltage, and detect their magnitudes and phases respectively. The simulation results verify that the proposed method can overcome the shortcomings of the conventional three-phase SPLL, track the magnitude and phase information of the fundamental positive-sequence voltage accurately, and improve the speed of dynamic response effectively under severe harmonic conditions. The novel magnitude-phase detection method can be used to accurately detect the grid-connected voltage of a wind power generation system.",2012,0, 5730,Low voltage testing for interconnect opens under process variations,"Advances in test methodologies to deal with the subtle behavior of some defect mechanisms as the technology scales are required. Among these, interconnect opens are an important defect mechanism that requires detailed knowledge of its physical properties. Furthermore, in nanometer processes variability is predominant and considering only nominal parameter values is not realistic. In this work the detection capability of Low Voltage Testing for interconnect opens, considering process variations, is evaluated using a statistical model. To account for this, the Probability of Detection of the defect is obtained. The proposed methodology is implemented in a software tool to determine the probability of detection of via opens for some ISCAS85 benchmark circuits. The results suggest that using Low Vdd in conjunction with favorable test vectors allows improving the Probability of Detection of interconnect opens, leading to better test quality.",2012,0, 5731,Non-intrusive fault tolerance in soft processors through circuit duplication,"The flexibility introduced by Commercial-Off-The-Shelf (COTS) SRAM based FPGAs in on-board system designs makes them an attractive option for military and aerospace applications.
However, the advances towards nanometer technology come together with a higher vulnerability of integrated circuits to radiation perturbations. In mission critical applications it is important to improve the reliability of applications by using fault-tolerance techniques. In this work, a non-intrusive fault tolerance technique has been developed. The proposed technique targets soft processors (e.g. LEON3), and its detection mechanism uses a Bus Monitor to compare the output data of a main soft-processor with its redundant module. In case of a mismatch, an error signal is activated, triggering the proposed fault tolerance strategy. This approach proves to be more efficient than the state-of-the-art Triple Modular Redundancy (TMR) and Software Implemented Hardware Fault Tolerance (SIHFT) approaches at detecting and correcting faults on the fly with low area overhead and with no major performance penalties. The chosen case study is an On-Board Computer (OBC) system under development, conceived to be employed in future missions of the Brazilian Institute of Space Research (INPE).",2012,0, 5732,Modelling and symmetry reduction of a target-tracking protocol using wireless sensor networks,"To achieve precise modelling of real-time systems, stochastic behaviours are considered, which leads towards probabilistic modelling. Probabilistic modelling has been successfully employed in a wide array of application domains including, for example, randomised distributed algorithms, communication, security and power management protocols. This study is an improvement over our previous work, which was based on the probabilistic analysis of a cluster-based fault tolerant target-tracking protocol (FTTT) using only a grid-based sensor node arrangement. Probabilistic modelling is chosen for the analysis of the FTTT protocol to facilitate the benefits of symmetry reduction in conjunction with modelling. It is believed that, for the first time, the correctness of a simplified version of a target-tracking protocol is verified by developing its continuous-time Markov chain (CTMC) model using a symbolic modelling language. The proposed probabilistic model of a target-tracking wireless sensor network will help to analyse the phases of the FTTT protocol on a limited scale with finite utilisation of time. There are three main contributions of this study: first, the consideration of synchronised events between the modules; second, the random placement of sensor nodes is taken into account in addition to the grid-based sensor node arrangement; third, the reduction in state space size through a symmetry reduction technique, which also makes it possible to analyse a larger network. Symmetry reduction on Probabilistic Symbolic Model checker (PRISM) models is performed by PRISM-symm and the generic representatives in PRISM (GRIP) tool. Modelling of the FTTT protocol proves better with the use of PRISM-symm after comparing the results of the PRISM model, PRISM-symm and GRIP.",2012,0, 5733,Available car parking space detection from webcam by using adaptive mixing features,"This paper presents a robust approach for the detection of available car parking spaces. With the low quality of video cameras such as webcams and dynamic changes of light around the car park, it is hard to accurately detect or recognize the cars. Moreover, the proposed appearance-based approach is more efficient than a recognition-based approach because it does not need to learn a huge number of multi-view objects.
In this paper, we propose adaptive background model-based object detection with dynamic mixing features of masked-area density and edge orientation histogram (EOH) density. The average variance of the variance of intensity change for the dynamic background model is used to change the ratio of the mixing features dynamically. The masked-area density is the density of a predefined area of a parking slot, weighted by a Gaussian mask for robust density computation, and the edge orientation histogram (EOH) density is the density of the EOH in the predefined area, which can be used on low-contrast images such as night scenes. The experiments are performed both on a simulation model and on real scenes. The results show the proposed approach can handle dynamic changes of light efficiently.",2012,0, 5734,Neural computing-based pedestrian detection from image sequence,"Preventing traffic accidents is a good way to solve many problems for the current generation, which is surrounded by many automotive technologies, because accidents cause many deaths. Such prevention has an important impact on every society by making many people safer and improving their quality of life. In fact, the primary cause is mostly drivers' carelessness and lack of control, which might be because of drug addiction, resulting in pedestrians walking on the road being injured and killed. Therefore, the problem can be addressed by using a computer to analyze the scene and make a decision as a human would, which is called pedestrian detection. This study uses several enhancement algorithms and detection processes integrated with an analysis based on neural computing to decide whether a detected object is a pedestrian. Moreover, the study shows experimental results indicating that the procedure is sufficient and efficient to serve as fundamental knowledge for future related work, including real-life traffic situations.",2012,0, 5735,A software monitoring framework for quality verification,"Software functional testing can unveil a wide range of potential malfunctions in applications. However, there is a significant fraction of errors that will hardly be detected through a traditional testing process. Problems such as memory corruptions, memory leaks, performance bottlenecks, low-level system call failures and I/O errors might not surface any symptoms in a tester's machine while causing disasters in production. On the other hand, many handy tools have been emerging on all popular platforms allowing a tester or an analyst to monitor the behavior of an application with respect to these dark areas in order to identify potential fatal problems that would go unnoticed otherwise. Unfortunately, these tools are not yet in widespread use due to a few reasons. First, the usage of tools requires a certain amount of expertise on system internals. Furthermore, these monitoring tools generate a vast amount of data even with elegant filtering and thereby demand a significant amount of time for an analysis even from experts. As the end result, using monitoring tools to improve software quality becomes a costly operation. Another facet of this problem is the lack of infrastructure to automate recurring analysis patterns. This paper describes the current state of ongoing research in developing a framework that automates a significant part of the process of monitoring various quality aspects of a software application with the utilization of tools and deriving conclusions based on results. To our knowledge this is the first framework to do this.
It formulates infrastructure for analysts to extract relevant data from monitoring tool logs, process those data, make inferences and present analysis results to a wide range of stakeholders in a project.",2012,0, 5736,UMAM-Q: An instrument to assess the intention to use software development methodologies,"The Software Engineering discipline has devoted much effort to the definition of new methods and paradigms that, even if empirically proven to provide certain gains in terms of process productivity and product quality, are difficult to transfer to industry. We claim that this fact is largely due to methodologists not taking into account the - largely subjective - set of variables that influence innovation adoption, together with its tailoring and operationalization to the particulars of software methodologies: we lack reliable and valid measurement instruments that allow for the early detection of methodologies weaknesses with respect to their ability to catch among practitioners. This paper reports on the development of one such instrument designed to measure the various perceptions that an individual may have with respect to adopting a software development methodology innovation. Our questionnaire is aimed at allowing methodologists not only to compare their methodologies with respect to others - in terms of variables such as ease of use, usefulness or compatibility - but also to avoid well-known mistakes such as forcing - as opposed to convincing - method adoption in organizations.",2012,0, 5737,TRHIOS: Trust and reputation in hierarchical and quality-oriented societies,"In this paper we present TRHIOS: a Trust and Reputation system for HIerarchical and quality-Oriented Societies. We focus our work on hierarchical medical organizations. The model estimates the reputation of an individual, RTRHIOS, taking into account information from three trust dimensions: the hierarchy of the system; the source of information; and the quality of the results. Besides the concrete reputation value, it is important to know how reliable that value is; for each of the three dimensions we calculate the reliability of the assessed reputations; and aggregating them, the reliability of the reputation of an individual. The modular approach followed in the definition of the different types of reputations provides the system with a high flexibility that allows adapting the model to the peculiarities of each society.",2012,0, 5738,A numerical simulation study on the CO2 leakage through the fault,"Many carbon capture and storage projects are underway all over the world to reduce greenhouse gas emissions. In this study, we numerically analyze the movement of injected CO2 through the faults undetected prior to injection. We use TOUGH2-MP ECO2N software to estimate the behavior of injected and leaked CO2. The storage site is 100 m thick saline aquifer located 850 m under the shallow continental shelf. It is assumed that CO2 is first leaked through the 1st fault located 400 m away from injection well and then leaked to the seabed through the 2nd fault located 400 m away from 1st fault. We vary the injection rate of CO2 to 0.25, 0.50, and 0.75 MtCO2/year to analyze the effect of injection rate. For 0.25 MtCO2/year injection rate, no leakage is calculated, however, for 0.50 and 0.75 MtCO2/year, the leakages of CO2 to seabed are detected. The starting times of leakage at 0.5 and 0.75 MtCO2/year injection rates are 22.9 and 17.8 years, respectively. 
The ratios of total leaked CO2 to total injected CO2 at 0.5 and 0.75 MtCO2/year injection rates are 8% and 16.4%, respectively.",2012,0, 5739,Automatic fault characterization via abnormality-enhanced classification,"Enterprise and high-performance computing systems are growing extremely large and complex, employing many processors and diverse software/hardware stacks. As these machines grow in scale, faults become more frequent and system complexity makes it difficult to detect and to diagnose them. The difficulty is particularly large for faults that degrade system performance or cause erratic behavior but do not cause outright crashes. The cost of these errors is high since they significantly reduce system productivity, both initially and by time required to resolve them. Current system management techniques do not work well since they require manual examination of system behavior and do not identify root causes. When a fault is manifested, system administrators need timely notification about the type of fault, the time period in which it occurred and the processor on which it originated. Statistical modeling approaches can accurately characterize normal and abnormal system behavior. However, the complex effects of system faults are less amenable to these techniques. This paper demonstrates that the complexity of system faults makes traditional classification and clustering algorithms inadequate for characterizing them. We design novel techniques that combine classification algorithms with information on the abnormality of application behavior to improve detection and characterization accuracy significantly. Our experiments demonstrate that our techniques can detect and characterize faults with 85% accuracy, compared to just 12% accuracy for direct applications of traditional techniques.",2012,0, 5740,BLOCKWATCH: Leveraging similarity in parallel programs for error detection,"The scaling of Silicon devices has exacerbated the unreliability of modern computer systems, and power constraints have necessitated the involvement of software in hardware error detection. Simultaneously, the multi-core revolution has impelled software to become parallel. Therefore, there is a compelling need to protect parallel programs from hardware errors. Parallel programs' tasks have significant similarity in control data due to the use of high-level programming models. In this study, we propose BLOCKWATCH to leverage the similarity in parallel program's control data for detecting hardware errors. BLOCKWATCH statically extracts the similarity among different threads of a parallel program and checks the similarity at runtime. We evaluate BLOCKWATCH on seven SPLASH-2 benchmarks to measure its performance overhead and error detection coverage. We find that BLOCKWATCH incurs an average overhead of 16% across all programs, and provides an average SDC coverage of 97% for faults in the control data.",2012,0, 5741,Power quality analysis based on LABVIEW for current power generation system,"With the development of the current power generation technology, power quality of the generation system becomes a new problem, because of ocean energy variable and converters with large power electronic device applied. A real-time signal detection and harmonic analysis is great significance to improve power quality. The paper mainly proposes how to use LABVIEW software platform to develop a power quality detection and analysis system to detect the signals from the current power generation. 
Moreover, a simulation model of the current power generation using MATLAB is built to generate the analog signals, so we can also monitor the analog signals with the power quality system. The system is successfully designed to detect the electric power quality indexes after test results.",2012,0, 5742,6th workshop on recent advances in intrusion tolerance and reSilience (WRAITS 2012),"Now entering its sixth consecutive year, the last four being in conjunction with DSN, the primary theme of WRAITS is intrusion tolerance (IT for short). IT starts with the premise that software-based components will always contain bugs and misconfigurations that can be discovered, exposed and enabled by the increasingly new ways in which distributed and networked computer systems are being created today. IT acknowledges that it is impossible to completely prevent intrusions and attacks, and it is often impossible to accurately detect the act of intrusion and stop it early enough. Intrusion tolerant systems therefore must have the means to continue to operate correctly despite attacks and intrusions, and deny the attacker/intruder the success they seek as much as possible. For instance, an intrusion tolerant system may suffer loss of service or resources due to the attack but it may continue to provide critical services in a degraded mode or trigger automatic mechanisms to regain and recover the compromised services and resources. Other descriptions used for similar themed research include Survivability, Resilience, Trustworthy Systems, Byzantine Fault Tolerance, and Autonomic Self-Healing Systems. Indeed, this year's workshop has been slightly renamed from its predecessors (by also including reSilience in the title) to explicitly underscore the breadth of the topics involved.",2012,0, 5743,TinyChecker: Transparent protection of VMs against hypervisor failures with nested virtualization,"The increasing amount of resources in a single machine constantly increases the level of server consolidation for virtualization. However, along with the improvement of server efficiency, the dependability of the virtualization layer is not being progressed towards the right direction; instead, the hypervisor level is more vulnerable to diverse failures due to the increasing complexity and scale of the hypervisor layer. This makes tens to hundreds of production VMs in a machine easily risk a single point of failure. This paper tries to mitigate this problem by proposing a technique called TinyChecker, which uses a tiny nested hypervisor to transparently protect guest VMs against failures in the hypervisor layer. TinyChecker is a very small software layer designated for transparent failure detection and recovery, whose reliability can be guaranteed by its small size and possible further formal verification. TinyChecker records all the communication context between VM and hypervisor, protects the critical VM data, detects and recovers the hypervisors among failures. TinyChecker is currently still in an early stage, we report our design consideration and initial evaluation results.",2012,0, 5744,The application of decay rate analysis for WSN buffer dimensioning,"The provisioning of a Quality of Service (QoS) for applications requiring real-time service demands low latency, low Bit Error Rate (BER) and bounded end to end delay guarantees. 
Facilitating this within a Wireless Sensor Network (WSN) are Guaranteed Time Slots (GTSs), activated using a beacon-enabled mode and under the governance of the accepted protocol of the IEEE 802.15.4 standard. However, the design limitation of only seven GTSs per superframe duration pre-determines a problem of allocation when the requirements of applications in need of good QoS exceed the supply of GTSs. This results in the QoS-aware traffic requiring differentiation of service in the buffer of the Personal Area Network Coordinator (PANC). A question arises as to the true ability to dimension the service of networks without prior knowledge of buffer allocation. In some WSNs the buffer shares memory with the application and radio stack. This memory requirement will vary depending on the application. This research complements a previous paper in which the application of Decay Rate Analysis in relation to the dimensioning of buffer allocation of the various nodes was analysed using simulation and analytical methodology. The results illustrated the probability of queue occupancy for a specific application, namely VoIP. This paper proposes that by employing decay rate analysis a better understanding of buffer requirements at the PANC can be determined for different sensor applications. This analysis will further enhance the pre-deployment dimensioning of a network for greater operational efficiency and overall cost effectiveness.",2012,0, 5745,HCMM - a maturity model for measuring and assessing the quality of cooperation between and within hospitals,"Increased competition and market dynamics in healthcare force hospitals to intensify their efforts toward specialization and cooperation with others. In this paper, a maturity model is discussed that assists hospitals in evolving the required strategic, organizational, and technical capabilities in a systematic way so that the formation of collaborative structures and processes is efficient and effective. The so-called Hospital Cooperation Maturity Model (HCMM) queries a total of 36 reference points reflecting 3 distinct organizational dimensions relevant for the ability to cooperate. On the one hand it can be used as a basis for benchmarking the quality of cooperation between a particular hospital and its business partners; on the other hand, it can also be applied as common ground for shared learning and improvement initiatives. In order to demonstrate its usability and applicability, an instantiation in the form of a software prototype is presented. The paper ends with recommendations for healthcare practice and future research.",2012,0, 5746,Taming of the Shrew: Modeling the Normal and Faulty Behaviour of Large-scale HPC Systems,"HPC systems are complex machines that generate a huge volume of system state data called ""events"". Events are generated without following a general consistent rule and different hardware and software components of such systems have different failure rates. Distinguishing between normal system behaviour and faulty situations relies on event analysis. Being able to quickly detect deviations from normality is essential for system administration and is the foundation of fault prediction. As HPC systems continue to grow in size and complexity, mining event flows becomes more challenging, and with the upcoming 10-petaflop systems there is a lot of interest in this topic.
Current event mining approaches do not take into consideration the specific behaviour of each type of event and, as a consequence, fail to analyze them according to their characteristics. In this paper we propose a novel way of characterizing the normal and faulty behaviour of the system by using signal analysis concepts. All analysis modules create ELSA (Event Log Signal Analyzer), a toolkit that has the purpose of modelling the normal flow of each state event during an HPC system's lifetime, and how it is affected when a failure hits the system. We show that these extracted models provide an accurate view of the system output, which improves the effectiveness of proactive fault tolerance algorithms. Specifically, we implemented a filtering algorithm and a short-term fault prediction methodology based on the extracted model and test them against real failure traces from a large-scale system. We show that by analyzing each event according to its specific behaviour, we get a more realistic overview of the entire system.",2012,0, 5747,High-Level Model for Educational Collaborative Virtual Environments Development,"The paper presents a proposed high-level model for the development of Educational Collaborative Virtual Environments based on software engineering and quality concepts for software development. A life cycle was identified, in which the phases of development were detailed, taking into account different techniques, methods and related documentation. The main goal of this research is to demonstrate how we can use the model to develop our applications using virtual world platforms. The model was conceived based on research carried out in the area of developing models that integrate all phases of the software development process and models for assessing collaborative virtual environments. The model contains a set of diagrams to support developer teams in their tasks, mainly in the creation of educational collaborative virtual environments to be used in an e-learning context.",2012,0, 5748,Systematic mapping study of quality attributes measurement in service oriented architecture,Background: Service oriented architecture (SOA) promotes software reuse and interoperability as its niche. Developers usually expect that the services being used are performing as promised. The capability to measure the quality of services helps developers to predict the software behavior. Measuring quality attributes of software can be treated as a way to ensure that the developed software will/does meet expected qualities.,2012,0, 5749,Fault Detection and Diagnosis for Component-based Robotic Systems,"In the software engineering domain, much work has been done for fault detection and diagnosis (FDD) and many methods and technologies have been developed, especially for safety-critical systems. In the meantime, component-based software engineering has emerged and been widely adopted as an effective way to deal with various issues of modern systems such as complexity, scalability, and reusability. Robotics is one of the representative domains where this trend appears. As technology advances, robots are beginning to inhabit the same physical space as humans and this makes the safety issue more important, even critical. However, the safety of recent component-based robotic systems has not been extensively investigated yet. One effective way to achieve system safety is fault tolerance based on FDD, which recent robot systems can benefit from.
For this purpose, we propose an FDD scheme for component-based software systems with the requirements of flexibility, extendability, and efficiency. The proposed FDD scheme consists of three main components: a filter and history buffer, a filtering pipeline, and an FDD pipeline. We implemented this scheme using the cisst framework and show how it can be systematically deployed in an actual system. As an illustrative example, an FDD pipeline is set up to detect a thread scheduling fault on various operating systems (Linux, RTAI, and Xenomai) and experimental results are presented. Although the target of this example is only one type of fault, it demonstrates how the proposed FDD scheme can be introduced to component-based environments in flexible and systematic ways and how system designers can define a fault and an FDD pipeline for it. It is obvious that the importance of dependability-especially safety-of robots will significantly increase as robots are deployed in our daily lives, directly operate on us, or interact closely with us. Thus, the FDD scheme proposed in this paper can be a useful basis for robot dependability research in the future.",2012,0, 5750,On the relationship between defects of nodes and characteristics of their neighbors in software structural network,"Software development is a complex task as there exist various relationships between different pieces of code. If software entities (e.g., classes) are considered as nodes and the relationships between entities (e.g., inheritance) are considered as edges, the static structure of software can be viewed as a software structural network. In this paper, the relationship between the quality of nodes and the characteristics of their neighbors in the software structural network has been investigated. The following observations have been made: On most occasions, the neighbors of defect-prone nodes tend to have higher code complexity, be more centralized in the software structural network, and undergo more frequent changes than those of nodes which are defect-free. The observations made in our research can be used to help software engineers assess risks during software evolution activities (e.g., adding new entities and relationships) with the purpose of improving software designs.",2012,0, 5751,Sole error locating array and approximate error locating array,"Combinatorial interaction testing (CIT) is a method to detect fault interactions among the parameters or components in a system. However, most works in the field of combinatorial interaction testing focus on detecting interaction faults rather than locating them. (d, t)-locating arrays and (d, t)-detecting arrays were proposed by C. J. Colbourn and D. W. McClary to locate and detect interaction faults. In this paper, we study the structure of the special fault locating arrays that are able to locate a sole interaction fault among the parameters or components, namely (1, t)-detecting arrays, and then propose the concept and construction method of approximate error locating arrays based on these special error locating arrays. AETG-like algorithms to generate these special arrays are provided.",2012,0, 5752,Prediction of software maintainability using fuzzy logic,"The relationship between object oriented metrics and software maintainability is complex and non-linear. Therefore, there is considerable research interest in the development and application of sophisticated techniques which can be used to build models for predicting software maintainability.
However, when predicting maintainability not only product quality measurements are surrounded with imprecision and uncertainty, but also the relationships between the external and internal quality attributes suffer from imprecision and uncertainty. The reason behind that, there are at least two important sources of information for building the prediction model: historical data and human experts. Therefore, in this paper an attempt has been made to utilize the capability of fuzzy logic in handling imprecision and uncertainty to come up with an efficient maintainability prediction model. The proposed model is constructed using object-oriented metrics data in Li and Henry's datasets collected from two different object-oriented systems.",2012,0, 5753,Recognising the Capacities of Dynamic Reconfiguration for the QoS Assurance of Running Systems in Concurrent and Parallel Environments,"Recognizing the impact of reconfiguration on the QoS of running systems is especially necessary for choosing an appropriate approach to dealing with dynamic evolution of mission-critical or non-stop business systems. The rationale is that the impaired QoS caused by inappropriate use of dynamic approaches is unacceptable for such running systems. To predict in advance the impact, the challenge is two-fold. First, a unified benchmark is necessary to expose QoS problems of existing dynamic approaches. Second, an abstract representation is necessary to provide a basis for modeling and comparing the QoS of existing and new dynamic reconfiguration approaches. Our previous work [8] has successfully evaluated the QoS assurance capabilities of existing dynamic approaches and provided guidance of appropriate use of particular approaches. This paper reinvestigates our evaluations, extending them into concurrent and parallel environments by abstracting hardware and software conditions to design an evaluation context. We report the new evaluation results and conclude with updated impact analysis and guidance.",2012,0, 5754,Dataflow Weaknesses Analysis of Scientific Workflow Based on Fault Tree,"If potential contributors leading to system failure can be identified when a scientific workflow is modeled, a lot of system weaknesses may thus be revealed and improved. In this paper, we first identify a number of data dependency patterns in scientific workflows and their corresponding state functions. Then, a method to transform the state functions into fault tree symbols is presented. We use fault tree analysis method to identify critical elements and elements combinations that lead to the incorrect state of a final output and calculate the probability of the incorrect state of a final output based on the probabilities of the basic events in the analyzed workflow. Moreover, an importance measure is designed to prioritize the contributors leading to the incorrect state of a final output. Finally, the feasibility and effectiveness of the proposed methods are proved by example and experiments.",2012,0, 5755,Studies on the Effect of Laser Cooling in Atom Lithography for Nanometrology,"To meet the requirement of nanoscale dimensional metrology, lengthy standards with features below 100 nanometers are indispensable instruments. Our group has successfully fabricated length standards through atom lithography. For further improvement of the quality of these standards, laser cooling of Chromium atom beam was studied through a transverse Doppler cooling scheme. 
Moreover, utilizing the software developing kit (SDK) of CCD camera, we developed an image collecting software, which had functions of real-time display for gray-scale image and image measure. In our experimental setup, with the software, we could detect the laser induced fluorescence spots from marginal beams to monitor and analyze the effect of laser cooling.",2012,0, 5756,Communication Library to Overlap Computation and Communication for OpenCL Application,"User-friendly parallel programming environments, such as CUDA and OpenCL are widely used for accelerators. They provide programmers with useful APIs, but the APIs are still low level primitives. Therefore, in order to apply communication optimization techniques, such as double buffering techniques, programmers have to manually write the programs with the primitives. Manual communication optimization requires programmers to have significant knowledge of both application characteristics and CPU-accelerator architecture. This prevents many application developers from effective utilization of accelerators. In addition, managing communication is a tedious and error-prone task even for expert programmers. Thus, it is necessary to develop a communication system which is highly abstracted but still capable of optimization. For this purpose, this paper proposes an OpenCL based communication library. To maximize performance improvement, the proposed library provides a simple but effective programming interface based on Stream Graph in order to specify an applications communication pattern. We have implemented a prototype system on OpenCL platform and applied it to several image processing applications. Our evaluation shows that the library successfully masks the details of accelerator memory management while it can achieve comparable speedup to manual optimization in which we use existing low level interfaces.",2012,0, 5757,Monitoring and Predicting Hardware Failures in HPC Clusters with FTB-IPMI,"Fault-detection and prediction in HPC clusters and Cloud-computing systems are increasingly challenging issues. Several system middleware such as job schedulers and MPI implementations provide support for both reactive and proactive mechanisms to tolerate faults. These techniques rely on external components such as system logs and infrastructure monitors to provide information about hardware/software failure either through detection, or as a prediction. However, these middleware work in isolation, without disseminating the knowledge of faults encountered. In this context, we propose a light-weight multi-threaded service, namely FTB-IPMI, which provides distributed fault-monitoring using the Intelligent Platform Management Interface (IPMI) and coordinated propagation of fault information using the Fault-Tolerance Backplane (FTB). In essence, it serves as a middleman between system hardware and the software stack by translating raw hardware events to structured software events and delivering it to any interested component using a publish-subscribe framework. Fault-predictors and other decision-making engines that rely on distributed failure information can benefit from FTB-IPMI to facilitate proactive fault-tolerance mechanisms such as preemptive job migration. We have developed a fault-prediction engine within MVAPICH2, an RDMA-based MPI implementation, to demonstrate this capability. 
Failure predictions made by this engine are used to trigger migration of processes from failing nodes to healthy spare nodes, thereby providing resilience to the MPI application. Experimental evaluation clearly indicates that a single instance of FTB-IPMI can scale to several hundreds of nodes with a remarkably low resource-utilization footprint. A deployment of FTB-IPMI that services a cluster with 128 compute-nodes sweeps the entire cluster and collects IPMI sensor information on CPU temperature, system voltages and fan speeds in about 0.75 seconds. The average CPU utilization of this service running on a single node is 0.35%.",2012,0, 5758,Towards High-Level Programming of Multi-GPU Systems Using the SkelCL Library,"Application programming for GPUs (Graphics Processing Units) is complex and error-prone, because the popular approaches - CUDA and OpenCL - are intrinsically low-level and offer no special support for systems consisting of multiple GPUs. The SkelCL library presented in this paper is built on top of the OpenCL standard and offers pre-implemented recurring computation and communication patterns (skeletons) which greatly simplify programming for multi-GPU systems. The library also provides an abstract vector data type and a high-level data (re)distribution mechanism to shield the programmer from the low-level data transfers between the system's main memory and multiple GPUs. In this paper, we focus on the specific support in SkelCL for systems with multiple GPUs and use a real-world application study from the area of medical imaging to demonstrate the reduced programming effort and competitive performance of SkelCL as compared to OpenCL and CUDA. Besides, we illustrate how SkelCL adapts to large-scale, distributed heterogeneous systems in order to simplify their programming.",2012,0, 5759,Laser Methane Sensor with the Function of Self-Diagnose,"Using the technology of tunable diode laser absorption spectroscopy and the technology of micro-electronics, a fiber laser methane sensor based on the microprocessor C8051F410 is presented. In this paper, we use a DFB laser as the light source of the sensor. By tuning the temperature and driver current of the DFB laser, we can scan the laser over the methane absorption line. Based on the Beer-Lambert law, by detecting the variation of the light power before and after absorption we realize methane detection. This enables real-time and online detection of methane concentration, and it has advantages such as high accuracy, immunity to other gases, a long calibration cycle and so on. The sensor has the functions of adaptive gain and self-diagnosis. By introducing digital potentiometers, the gain of the photo-electric conversion operational amplifier can be controlled by the microprocessor according to the light power. When the gain and the conversion voltage reach the set values, we can consider the sensor to be in a fault status, and the software will raise an alarm so that we check the status of the probe. In addition, we introduce temperature feedback to compensate for environmental effects, so we improved the dependability and stability of the measured results.
Finally, we give some analysis of the sensor based on field application and, based on the present work, we look ahead to our future work.",2012,0, 5760,Research and Develop on PCB Defect Intelligent Visual Inspection Robot,"In order to realize fast detection of printed circuit board quality, an intelligent automatic PCB defect detection device based on machine vision is analysed and designed. The paper introduces the robot system construction, the mechanical system, the electronic control system, and the design and realization of the vision imaging system; the software uses a fast iterative algorithm for image segmentation based on 2D maximum between-cluster variance and the template matching method for automatic detection and recognition of defects in PCB images. Experimental results on a model machine show that the designed intelligent visual inspection robot can detect PCB defects accurately and effectively, and it can also be used in actual online PCB inspection.",2012,0, 5761,Experimental implementation of dynamic spectrum access for video transmission using USRP,"In this paper we present the experimental implementation of a dynamic spectrum access (DSA) algorithm using a universal software radio peripheral (USRP) and GNU Radio. The setup contains two primary users and two cognitive radios or secondary users. One primary user is fixed and the other is allowed to change its position randomly. Depending upon the position of the primary user, the cognitive user will use the spectrum band where the detected energy is below a certain predefined threshold level. The cognitive radio users are also programmed to operate independently without interfering with each other, using an energy detection algorithm for spectrum sensing. The modulation scheme is set to GMSK for the secondary user performing data transmission. This experimental setup is used to analyze the quality of video transmission using DSA, which provides insight regarding the possibility of using free spectrum space to improve the performance of the system and its advantage over a non-DSA system. From the experiment it is shown that under congestion and interference DSA performs better than a non-DSA system.",2012,0, 5762,Multi-purpose systems: A novel dataflow-based generation and mapping strategy,"The manual creation of specialized hardware infrastructures for complex multi-purpose systems is error-prone and time-consuming. Moreover, lots of effort is required to define an optimized and heterogeneous components library. To tackle these issues, we propose a novel design flow based on the Dataflow Process Networks Model of Computation. In particular, we have combined the operation of two state of the art tools, the Multi-Dataflow Composer and the Open RVC-CAL Compiler, handling respectively the automatic mapping of a reconfigurable multi-purpose substrate and the high level synthesis of hardware components. Our approach guarantees runtime efficiency and on-chip area saving both on FPGAs and ASICs.",2012,0, 5763,"Real-time 360 panoramic views using BiCa360, the fast rotating dynamic vision sensor to up to 10 rotations per Sec","This paper presents a novel smart camera BiCa360 for real-time 360 panoramic views using a rotating dynamic vision sensor at up to 10 rotations per sec.
The system consists of (1) a dual-line dynamic vision sensor generating events at high temporal resolution, on-chip time stamping (1s resolution), having a high dynamic range and the sparse visual coding of the information, (2) a high-speed mechanical device rotating at up to 10 revolutions per sec (rps) where the sensor is mounted and (3) a real-time embedded software for panoramic reconstruction of the 360 panoramic views. Within this work, we show the capabilities of the system in terms of data quality (scene reconstruction). We made several experiments to assess the angular resolution as well as the visual quality of the data for rotations ranging from 1 to 10 rps. All evaluations were performed on natural scene with ambient illuminations. Within the live demonstration, we will show BiCa360 providing 360 panoramic views of the natural scene in real-time and at different rotations.",2012,0, 5764,Systematic literature reviews in global software development: A tertiary study,"Context: There has been an increase in research into global software development (GSD) and in the number of systematic literature reviews (SLRs) addressing this topic. Objective: The aim of this research is to catalogue GSD SLRs in order to identify the topics covered, the active researchers, the publication vehicles, and to assess the quality of the SLRs identified. Method: We performed a broad automated search to find SLRs dealing with GSD. We differentiate between SLR studies and papers reporting those studies. Data relating to each of the following was extracted and synthesized from each study: authors and their affiliation at the time of publication, the journal or conference in which the paper was published, the quality of each study and the main GSD study topic. Results: Twenty-four GSD SLR studies and 37 papers reporting those studies were identified. Major GSD topics covered include: (1) organizational environment, (2) project execution, and (3) project planning and control. The main research groups are based in Brazil (17), Ireland (8), and Sweden (7). Conclusions: GSD SLR studies are most frequently reported in the International Conference on Global Software Engineering and IEEE Software; the two most popular topics for research are risk factors due to the organizational environment and the development process. The most active researchers are based in Brazil. The quality of the SLR studies has not changed over time.",2012,0, 5765,Preliminary results of a systematic review on requirements evolution,"Background: Software systems must evolve in order to adapt in a timely fashion to the rapid changes of stakeholder needs, technologies, business environment and society regulations. Numerous studies have shown that cost, schedule or defect density of a software project may escalate as the requirements evolve. Requirements evolution management has become one important topic in requirements engineering research. Aim: To depict a holistic state-of-the-art of requirement evolution management. Method: We undertook a systematic review on requirements evolution management. Results: 125 relevant studies were identified and reviewed. This paper reports the preliminary results from this review: (1) the terminology and definition of requirements evolution; (2) fourteen key activities in requirements evolution management; (3) twenty-eight metrics of requirements evolution for three measurement goals. Conclusions: Requirements evolution is a process of continuous change of requirements in a certain direction. 
Most existing studies focus on how to deal with evolution after it happens. In the future, more research attention on exploring evolution laws and predicting evolution is encouraged.",2012,0, 5766,Reporting guidelines for simulation-based studies in software engineering,"Background: Some scientific fields, such as automotive, drug discovery or engineering, have used simulation-based studies (SBS) to speed up the observation of phenomena and evolve knowledge. All of them organize their working structure to perform computerized experiments based on explicit research protocols and evidence. The benefits have been many and great advancements are continuously obtained for society. However, could the same approach be observed in Software Engineering (SE)? Are there research protocols and evidence based models available in SE for supporting SBS? Are the study reports good enough to support their understanding and replication? AIM: To characterize SBS in SE and organize a set of reporting guidelines aiming at improving SBS' understandability, replicability, generalization and validity. METHOD: To undertake a secondary study to characterize SBS. Besides, to assess the quality of reports to understand the usually reported information regarding SBS. RESULTS: From 108 selected papers, several relevant initiatives regarding SBS in software engineering have been observed. However, most of the reports lack information concerning the research protocol, simulation model building and evaluation, and the data used, among others. SBS results are usually specific, making their generalization and comparison hard. No reporting standard has been observed. CONCLUSIONS: Advancements can be observed in SBS in Software Engineering. However, the lack of reporting consistency can reduce understandability, replicability and generalization, and compromise validity. Therefore, an initial set of guidelines is proposed aiming at improving SBS report quality. Further evaluation must be accomplished to assess the guidelines' feasibility when used to report SBS in Software Engineering.",2012,0, 5767,An experimental setup to assess design diversity of functionally equivalent services,"Background: A number of approaches leverage design diversity to tolerate software design faults in service-oriented applications. The use of design diversity depends on the assumption that functionally equivalent services, i.e., variant services, rarely fail on the same input case. However, there are no directives to assess whether variant services are actually diverse and fail on disjoint subsets of the input space. Aim: To provide proper assessment of service diversity in order to achieve a high level of reliability by employing either a diversity-based solution with the variant services or a single service that exhibits higher reliability than would be the case if design diversity was adopted. Method: We propose an experimental setup that encompasses (i) a set of directives to organize the preparation and execution of the experiment to investigate service diversity; (ii) investigation of whether variant services are actually diverse by using statistical tests; and (iii) an analysis of if and by how much the reliability of a diversity-based solution that leverages voters is an improvement over one that uses a single service. We evaluated the applicability and usefulness of the proposed experimental setup by employing it to assess diversity of variant services adhering to four different requirements specifications.
For each specification, we analysed three different services. Results: We found that the proposed directives were effective for the purposes of this assessment. Assessment results demonstrated that services implementing the four requirements specifications are actually diverse at a 0.05 significance level. For two of the specifications, coincident failures of two or more services are infrequent enough to promote gains in overall system reliability. Conclusions: Our findings reveal threats to the effectiven",2012,0, 5768,Aspect-oriented software maintenance metrics: A systematic mapping study,"Background: Despite the number of empirical studies that assess Aspect-Oriented Software Development (AOSD) techniques, more research is required to investigate, for example, how software maintainability is impacted when these techniques are employed. One way to minimize the effort and increase the reliability of results in further research is to systematize empirical studies in Aspect-Oriented Software Maintainability (AOSM). In this context, metrics are useful as indicators to quantify software quality attributes, such as maintenance. Currently, a high number of metrics have been used throughout the literature to measure software maintainability. However, there is no comprehensive catalogue showing which metrics can be used to measure AOSM. Aim: To identify an AOSM metrics suite to be used by researchers in AOSM research. Method: We performed a systematic mapping study based on Kitchenham and Charters' guidelines, which derived a research protocol, and used well known digital libraries engines to search the literature. Conclusions: A total of 138 primary studies were selected. They describe 67 aspect-oriented (AO) maintainability metrics. Also, out of the 575 object-oriented maintainability metrics that we analyzed, 469 can be adapted to AO software. This catalogue provides an objective guide to researchers looking for maintainability metrics to be used as indicators in their quantitative and qualitative assessments. We provide information such as authors, metrics description, and studies that used the metric. Researchers can use this information to decide which metrics are more suited for their studies.",2012,0, 5769,A mapping study of software code cloning,"Background: Software Code Cloning is widely used by developers to produce code in which they have confidence and which reduces development costs and improves the software quality. However, Fowler and Beck suggest that the maintenance of clones may lead to defects and therefore clones should be re-factored out. Objective: We investigate the purpose of code cloning, the detection techniques developed and the datasets used in software code cloning studies between the years of 2007 and 2011. This is to analyse the current research trends in code cloning to try and find techniques which have been successful in identifying clones used for defect prediction. Method: We used a mapping study to identify 220 software code cloning studies published from January 2007 to December 2011. We use these papers to answer six research questions by analysing their abstracts, titles and reading the papers themselves. Results: The main focus of studies is the technique of software code clone detection. In the past four years the number of studies being accepted at conferences and in journals has risen by 71%. Most datasets are only used once, therefore the performance reported by one paper is not comparable with the performance reported by another study. 
Conclusion: The techniques used to detect clones seem to be the main focus of studies. However it is difficult to compare the performance of the detection tools reported in different studies because the same dataset is rarely used in more than one paper. There are few benchmark datasets where the clones have been correctly identified. Few studies apply code cloning detection to defect prediction.",2012,0, 5770,Dynamic quality evaluation of elbow flowmeter,"In order to assess the dynamic quality of the elbow flowmeter, the influence of ambient temperature, inner-pipe medium temperature, and pipeline lengths of the front and rear straight sections on the geometrical shape of the elbow pipe was presented in the paper. The change rule of the curvature radius to inside diameter ratio (R/D) and the quality index in various operating conditions were acquired quantitatively. The significance of the dynamic quality evaluation lies in that the knowledge of the change rule can guide users to select the suitable flowmeters in some specific application, rather than simply choosing the meters with larger R/D, which may help to reduce the system cost. In addition, the actual R/D instead of the theoretical manufacturing R/D is used to calculate the volume of flow, which can improve the measurement accuracy. The evaluation system developed with VB, Access, and MATLAB has been applied to some thermal power plant, the good usage effect having been obtained.",2012,0, 5771,Haptic data compression based on quadratic curve prediction,"In this paper, a new haptic data compression method is presented. A quadratic curve predictor is constructed to improve the data reduction rate. Knowledge from human haptic perception is incorporated into the architecture to assess the perceptual quality of the compressed haptic signals. Experiments prove the effectiveness of the proposed approach in data reduction rate.",2012,0, 5772,The research on defect recognition method for rail magnetic flux leakage detecting,"In this paper, the ANSYS finite element software was used to establish a two-dimensional mathematical model of defect leakage field for the simulation of the cracks, cone-shaped, pits and other regular defects of rail with the FEM simulation method, through which axial and radial magnetic flux density distribution curves and other data of defect leakage field was acquired. By comparison of the simulation data and peak, peak-valley, peak-valley spacing and other characteristic values of the leakage magnetic flux density curves, the characteristics of different defect types and identical defect types of magnetic leakage signal was acquired to identify the defect information such as defect types, internal or external distribution, shapes and dimensions and other information.",2012,0, 5773,Fail-safe over-the-air programming and error recovery in wireless networks,"Wireless networks are an emerging technology which are employed for a growing number of applications. Maintenance and extensibility necessitate software updates after an initial deployment. Wireless updates are in most cases preferred, e.g. in large-scale networks or inaccessible deployments. Recent research either focused on reliable and fast transmission of a new firmware, or on a modular update of firmware parts. In this paper we additionally address the problem of error-prone software which can permanently disable the update functionality. 
A comprehensive and fail-safe update system is proposed fulfilling requirements for the usage in industrial environments.",2012,0, 5774,SecAgreement: Advancing Security Risk Calculations in Cloud Services,"By choosing to use cloud services, organizations seek to reduce costs and maximize efficiency. For mission critical systems that must satisfy security constraints, this push to the cloud introduces risks associated with cloud service providers not implementing organizationally selected security controls or policies. As internal system details are abstracted away as part of the cloud architecture, the organization must rely on contractual obligations embedded in service level agreements (SLAs) to assess service offerings. Current SLAs focus on quality of service metrics and lack the semantics needed to express security constraints that could be used to measure risk. We create a framework, called SecAgreement (SecAg), that extends the current SLA negotiation standard, WS-Agreement, to allow security metrics to be expressed on service description terms and service level objectives. The framework enables cloud service providers to include security in their SLA offerings, increasing the likelihood that their services will be used. We define and exemplify a cloud service matchmaking algorithm to assess and rank SecAg enhanced WS-Agreements by their risk, allowing organizations to quantify risk, identify any policy compliance gaps that might exist, and as a result select the cloud services that best meet their security needs.",2012,0, 5775,Improving Cloud Service Reliability -- A System Accounting Approach,"Nowadays an increasing number of companies only deploy their enterprise application services over the Internet. Software as a Service (SaaS) in a cloud computing environment allows these companies to focus on providing more competitive services instead of maintenance. As delivery of computing as a service, a trustworthy cloud service widely depends upon its reliability. For this reason, a newly defined Quality of Reliability (QoR) for cloud services is proposed in this paper. To achieve a good QoR, we not only analyze system events from both service consumers and providers, but also provide a layered composable system accounting architecture for cloud systems. A pipelined approach and a dependence estimation algorithm are introduced for pattern recognition and event analysis and prediction. A self-healing layer is also designed to achieve automatic recovery by re-composing services according to their functionalities and non-functional requirements. An implementation of this framework in an education services environment confirms the advantages over extant system accounting systems.",2012,0, 5776,A Framework for Detecting Anomalous Services in OSGi-Based Applications,"The service-centric applications are composed of third-party services. These services delivered by different vendors are usually black-box components which lack source code and design documents. It is difficult to evaluate their quality by static code analysis. Detecting anomalous services online is important to improve the reliability of these applications. This paper presents a framework for detecting anomalous services in the OSGi-based applications, followed by a method of monitoring services. We propose a method to monitor the resource utilization and interaction of services through tracing thread transfer. In addition, we detect anomalous services with XmR control charts. 
A prototype tool was implemented and applied in an application server. The experimental results show that our method 1) is of high accuracy for monitoring the resource utilization of the OSGi-based services; 2) does not introduce significant overhead; 3) can detect anomalous services effectively.",2012,0, 5777,ProPRED: A probabilistic model for the prediction of residual defects,"In this paper, we propose ProPRED, a probabilistic model for predicting residual defects based on Bayesian Networks (BN) in the software development lifecycle. With the chain rule for BN, ProPRED can be used to take the evidence of the influential factors to the activities (Analyze and Design, Development, Maintain, and Review and Test) that bring about the defects introduction and removal to reason and predict the probable residual defects. We refine and classify the influential factors to the four basic activities, and construct the ProPRED. Giving a case study, we conclude that the ProPRED improve its performance in reasoning under uncertainty and convenience in decision-making and quality control.",2012,0, 5778,SdDirM: A dynamic defect prediction model,"Defect prediction and estimation techniques play an important role in software reliability engineering. This paper proposes a dynamic defect prediction model named SdDirM (System dynamic based Defect injection and removal Model) to improve the quantitative defect management process. Using SdDirM, we can simulate defect introduction and removal processes, and predict and estimate the residual defects in different phases. We describe the modeling process, the validation and the results with the empirical and real project data compared with the other well known models. These experiments show that the managers can use the model to explore and analyze the potential improvements before the practice.",2012,0, 5779,Reducing Application-level Checkpoint File Sizes: Towards Scalable Fault Tolerance Solutions,"Systems intended for the execution of long-running parallel applications require fault tolerant capabilities, since the probability of failure increases with the execution time and the number of nodes. Checkpointing and rollback recovery is one of the most popular techniques to provide fault tolerance support. However, in order to be useful for large scale systems, current checkpoint-recovery techniques should tackle the problem of reducing checkpointing cost. This paper addresses this issue through the reduction of the checkpoint file sizes. Different solutions to reduce the size of the checkpoints generated at application level are proposed and implemented in a checkpointing tool. Detailed experimental results on two multicore clusters show the effectiveness of the proposed methods.",2012,0, 5780,Using distribution automation for a self-healing grid,"One of the principal characteristics of the modern grid identified by the National Energy Technology Laboratory (NETL) in its 2007 report to the US Department of Energy is self healing. That is, The modern grid will perform continuous self-assessments to detect, analyze, respond to, and as needed, restore grid components or network sections. (Reference: The NETL Modern Grid Initiative - Powering our 21st-Century Economy - Modern Grid Benefits; National Energy Technology Laboratory; August 2007.). Distribution Automation (DA) is able to assist in achieving this objective through the use of computer and communication technology, advanced software, and remotely operable high voltage switchgear. 
The Fault Location Isolation and Service Restoration (FLISR) application can improve reliability dramatically without compromising safety and asset protection. This article briefly describes the FLISR function, provides information on the major trends of the day and issues that need to be resolved, and suggests where the industry should go from here.",2012,0, 5781,Evaluating source trustability with data provenance: A research note,"One of the main challenges in intelligence work is to assess the trustworthiness of data sources. In an adversarial setting, in which the subjects under study actively try to disturb the data gathering process, trustworthiness is one of the most important properties of a source. The recent increase in usage of open source data has exacerbated the problem, due to the proliferation of sources. In this paper we propose computerized methods to help analysts evaluate the truthfulness of data sources (open or not). We apply methods developed in database and Semantic Web research to determine data quality (which includes truthfulness but also other related aspects like accuracy, timeliness, etc.). Research on data quality has made frequent use of provenance metadata. This is metadata related to the origin of the data: where it comes from, how and when it was obtained, and any relevant conditions that might help determine how it came to be in its current form. We study the application of similar methods to the particular situation of the Intelligence analyst, focusing on trust. This paper describes ongoing research; what is explained here is a first attempt at tackling this complex but very important problem. Due to lack of space, relevant work in the research literature is not discussed, and several technical considerations are omitted; finally, further research directions are only sketched.",2012,0, 5782,Design and implementation of detecting instrument for an airborne gun fire control computer,"This paper proposes a detecting instrument for an airborne gun fire control computer. Shooting system is a typical open-loop control system, whose most important part is fire control computer. The state of fire control computer directly affects shooting precision of airborne gun, therefore it's necessary to detect faults in cycle. Based on analysing interface characteristic of fire control computer, the paper proposed a testing scheme of in-situ detection and ex-situ detection, introduced testing principle, designed hardware and software of the detecting instrument, analysed testing precision and data. After application in the army, it is proved that the detecting instrument has high precision and speed, supports training and combat effectively. The design idea and realizing technique is a useful reference for detecting navy gun servo system.",2012,0, 5783,Interval-based algorithms to extract fuzzy measures for Software Quality Assessment,"In this paper, we consider the problem of automatically assessing software quality. We show that we can look at this problem, called Software Quality Assessment (SQA), as a multicriteria decision-making problem. Indeed, just like software is assessed along different criteria, Multi-Criteria Decision Making (MCDM) is about decisions that are based on several criteria that are usually conflicting and non-homogenously satisfied. Nonadditive (fuzzy) measures along with the Choquet integral can be used to model and aggregate the levels of satisfaction of these criteria by considering their relationships.
However, in practice, fuzzy measures are difficult to identify. An automated process is necessary and possible when sample data is available. Several optimization approaches have been proposed to extract fuzzy measures from sample data; e.g., genetic algorithms, gradient descent algorithms, and the Bees algorithm, all local search techniques. In this article, we propose a hybrid approach, combining the Bees algorithm and an interval constraint solver, resulting in a focused search expected to be less prone to falling into local results. Our approach, when tested on SQA decision data, shows promise and compares well to previous approaches to SQA that were using machine learning techniques.",2012,0, 5784,Toward optimizing static target search path planning,"Discrete static open-loop target search path planning is known to be a NP (non-deterministic polynomial) - Hard problem, and problem-solving methods proposed so far rely on heuristics with no way to properly assess solution quality for practical size problems. Departing from traditional nonlinear model frameworks, a new integer linear programming (ILP) exact formulation and an approximate problem-solving method are proposed to near-optimally solve the discrete static search path planning problem involving a team of homogeneous agents. Applied to a search and rescue setting, the approach takes advantage of objective function separability to efficiently maximize probability of success. A network representation is exploited to simplify modeling, reduce constraint specification and speed-up problem-solving. The proposed ILP approach rapidly yields near-optimal solutions for realistic problems using parallel processing CPLEX technology, while providing for the first time a robust upper bound on solution quality through Lagrangean programming relaxation. Problems with large time horizons may be efficiently solved through multiple fast subproblem optimizations over receding horizons. Computational results clearly show the value of the approach over various problem instances while comparing performance to a myopic heuristic.",2012,0, 5785,FAST: Formal specification driven test harness generation,"Full coverage testing is commonly perceived as a mission impossible because software is more complex than ever and produces vast space to cover. This paper introduces a novel approach which uses ACSL formal specifications to define and reach test coverage, especially in the sense of data coverage. Based on this approach, we create a tool chain named FAST which can automatically generate test harness code and verify program's correctness, turning formal specification and static verification into coverage definition and dynamic testing. FAST ensures completeness of test coverage and result checking by leveraging the formal specifications. We have applied this methodology and tool chain to a real-world mission critical software project that requires high quality standard. Our practice shows using FAST detects extra code bugs that escape from other validation methods such as manually-written tests and random/fuzz tests. It also costs much less human efforts with higher bug detection rate and higher code and data coverage.",2012,0, 5786,A comprehensive frequency domain identification of a coastal p atrol vessel,"In This paper detailed frequency-domain system identification method is applied to identify steering dynamics of a naval coastal patrol vessel using a data analysis software tool, called CIFER. 
Advanced features such as Chirp-Z transform and composite windowing techniques are used to extract high quality frequency responses. Accurate, linear and robust transfer function models are derived for yaw and roll dynamics of the vessel. In addition, to evaluate the accuracy of the identified model, time-domain responses from a 45-45 zig-zag test are compared with the responses predicted by the identified model. The model shows excellent predictive capability that is well suited for simulation applications as well as control design.",2012,0, 5787,Application of decay rate analysis for GTS provisioning in Wireless Sensor Networks,"In a Wireless Sensor Network (WSN) the provision of a Guaranteed Time Slot (GTS) in a beacon enabled mode is best suited for applications with low latency and high Quality of Service (QoS) requirements such as those of real time traffic. However the advancements in technology have dramatically increased the volume of applications which require access to this real time service provision in such areas as healthcare, asset tracking, environmental monitoring, and military projects. Each super-frame structure contains only seven GTS's with the remaining time slots provisioned for the Contention Access Period (CAP) where applications with higher tolerance of latency vie for a position to transmit data. The restrictions within the Contention Free Period (CFP) to seven time slots per superframe structure place enormous responsibilities on the network designers to dimension networks that can guarantee a high QoS. This work plans to apply decay rate analysis to GTS provisioning. Appropriate buffer dimensioning for specific applications can be achieved by predetermining the probability of buffer capacity. This will enable adequate parameter settings to be configured within a WSN to meet all real time traffic needs.",2012,0, 5788,Checkpointing in selected most fitted resource task scheduling in grid computing,"Grid applications run on environments that are prone to different kinds of failures. Fault tolerance is the ability to ensure successful delivery of services despite faults that may occur. Our research adds fault tolerance capacity with checkpointing and machine failure, to the current research, Selected Most Fitted (SMF) Task Scheduling for grid computing. This paper simulates one of fault tolerance techniques for grid computing, which is implementing checkpointing into Select Most Fitting Resource for Task Scheduling algorithm (SMF). We applied the algorithm of MeanFailure with Checkpointing in the SMF algorithm and named it MeanFailureCP-SMF. The MeanFailureCP-SMF is simulated using Gridsim with initial checkpointing interval at 20% job execution time. Results show that MeanFailureCP-SMF has reduced the average execution time (AET) compared to the current SMF and MeanFailure Algorithm.",2012,0, 5789,Assessment model for educational collaborative virtual environments,The paper presents a proposal model for assessing the quality of educational collaborative virtual environments. This model is based on the Quantitative Evaluation framework developed by Escudeiro in 2008 and it consists of five phases. The purpose is to establish a theoretical model that highlights a relevant set of requirements in relation to the quality in educational collaborative virtual environments in order to facilitate the assessment of educational collaborative virtual environments.
It is intended to apply the model during the lifecycle of product development and the selection of environment to support the learning/teaching process.,2012,0, 5790,Educational policy-making in managing undergraduate English majors' graduation thesis writing,"The frequent complaints from various levels about undergraduate thesis quality and its chronic problems over the years have brought up the current reformative policy exploration on teaching and tutoring undergraduate English majors' thesis writing online. This policy, characteristic of distributive, regulatory, evidence-based, democratic, sustainable and lifelong learning frameworks, was formulated after consulting higher level authorities, surveying students' and tutors' opinions, and then reviewed by the department academic committee before its implementation. It neither rejects nor belittles the teaching and tutoring of students' graduation thesis in the conventional way. Instead, it provides an alternative of distance learning for migrating seniors, opens up the possibility of writing and defending thesis online, facilitates the communication between all parties involved, and enables the administrators to monitor the learning, teaching and tutoring processes and to receive feedbacks from all anonymous users. It is argued that the e-governance of undergraduate English majors' graduation thesis writing can better counter possible policy risks with the combined efforts of all parties, and therefore predicts an optimistically foreseeable result.",2012,0, 5791,Transient Fault Tolerance for ccNUMA Architecture,"Transient fault is a critical concern in the reliability of microprocessors system. The software fault tolerance is more flexible and lower cost than the hardware fault tolerance. And also, as architectural trends point toward multi core designs, there is substantial interest in adapting parallel and redundancy hardware resources for transient fault tolerance. The paper proposes a process-level fault tolerance technique, a software centric approach, which efficiently schedule and synchronize of redundancy processes with ccNUMA processors redundancy. So it can improve efficiency of redundancy processes running, and reduce time and space overhead. The paper focuses on the researching of redundancy processes error detection and handling method. A real prototype is implemented that is designed to be transparent to the application. The test results show that the system can timely detect soft errors of CPU and memory that cause the redundancy processes exception, and meanwhile ensure that the services of application is uninterrupted and delay shortly.",2012,0, 5792,Fuzzy Fault Tree Based Fault Detection,"In this article, for the Linux operating system environment, with the characteristics of ambiguity and uncertainty for the occurrence probability of system failures, fuzzy theory is introduced into the fault tree analysis. The occurrence probability of basic events in the conventional fault tree is made fuzzy by introducing the concept of fuzzy sets. Using the upstream method for solving the minimum cut sets and transferring the different fuzzy numbers into triangular fuzzy numbers, the method is validated with the CPU error detection. 
This way provides a theoretical basis and implementation for system reliability evaluation, fault diagnosis and maintenance decisions.",2012,0, 5793,A Novel Framework of Self-Adaptive Fault-Tolerant for Pervasive Computing,"The increasing complexity of software and hardware resources and frequentative interaction among function components make fault-tolerant very challenging in pervasive computing system. In our paper, we propose a novel framework of self-adaptive fault-tolerant mechanism for pervasive computing environments. In our approach, the self-adaptive fault-tolerant mechanism is dynamically built according to various types of detected fault based on continuous monitoring, analysis of the component state. We put forward the architecture of fault-tolerant system and the policy-based fault-tolerant scheme, which adopt three-dimensional array of core features to capture spatial and temporal variability and the Event-Condition-Action rules. The mentioned mechanism has been designed and implemented on a prototype of office pervasive computing application systems, called POPCAS System. We have performed the experiments to evaluate the efficiency of the fault-tolerant mechanism. The results of the experiments show that the proposed mechanism can obviously improve reliability of the POPCAS System.",2012,0, 5794,Mean Opinion Score performance in classifying voice-enabled emergency communication systems,"Freedom Fone (FF) is an easy to use Interactive Voice Response (IVR) System that integrates with Global System for Mobile (GSM) telecommunications [1]. Sahana is a disaster management expert system [2]. Project intent was to interconnect the FF and Sahana free and open source software systems. The research adopted Emergency Data Exchange Language (EDXL) interoperable content standard [3] for data interchange between the two platforms. It was an initiative to enable Sarvodaya, Sri Lanka's largest humanitarian organization, with voice-enabled services for exchanging disaster information. An early automation challenge was introducing Sinhala and Tamil language Automatic Speech Recognition (ASR) and Text-To-Speech (TTS) software algorithms for interchanging information between the two disparate software systems [4]. Experiments with human substitution for ASR and TTS with decoupled less streamlined systems revealed inefficiencies [5]. Voice quality was a key factor affecting the Mean Time To Completion (MTTC). The research applied the International Telecommunication Union (ITU) recommended R800 Mean Opinion Score (MOS) and Difficulty Score (DS) voice quality evaluation methods [4]. The overall system was classified with a 3.52 MOS and predicted with a 29.44% DS [4]. This paper justifies the MOS classification accuracy in setting a 4.0 MOS threshold for differentiating good emergency communication IVRs from bad ones.",2012,0, 5795,Techniques for data-race detection and fault tolerance: A survey,"There are two primary methods for interactions among processes in concurrent software, i.e., shared memory and message passing. Both of these methods require synchronization routines implicitly or explicitly for concurrency control. Explicit synchronization techniques are language independent, while implicit techniques depend on the programming language. Synchronization techniques are prone to various types of faults which may cause the software to fail. Fault tolerance techniques have been effectively employed to tolerate such failures. 
In this paper, we present a critical analysis of the existing fault tolerance techniques designed to tolerate a particular type of synchronization failure that is caused by data race condition. Previous work shows that synchronization faults occur primarily due to large communication between processes. We provide an overview of techniques used for reducing communication and concurrency control faults. To analyze the existing fault tolerance techniques for synchronization failures, we have identified a set of criteria. The results of our evaluation have been summarized in a table at the end.",2012,0, 5796,PDF Scrutinizer: Detecting JavaScript-based attacks in PDF documents,"For a long time PDF documents have arrived in the everyday life of the average computer user, corporate businesses and critical structures, as authorities and military. Due to its wide spread in general, and because out-of-date versions of PDF readers are quite common, using PDF documents has become a popular malware distribution strategy. In this context, malicious documents have useful features: they are trustworthy, attacks can be camouflaged by inconspicuous document content, but still, they can often download and install malware undetected by firewall and anti-virus software. In this paper we present PDF Scrutinizer, a malicious PDF detection and analysis tool. We use static, as well as, dynamic techniques to detect malicious behavior in an emulated environment. We evaluate the quality and the performance of the tool with PDF documents from the wild, and show that PDF Scrutinizer reliably detects current malicious documents, while keeping a low false-positive rate and reasonable runtime performance.",2012,0, 5797,System Design of Perceptual Quality-Regulable H.264 Video Encoder,"In this work, a perceptual quality-regulable H.264 video encoder system has been developed. Exploiting the relationship between the reconstructed macro block and its best predicted macro block from mode decision, a novel quantization parameter prediction method is built and used to regulate the video quality according to a target perceptual quality. An automatic quality refinement scheme is also developed to achieve a better usage of bit budget. Moreover, with the aid of salient object detection, we further improve the quality on where human might focus on. The proposed algorithm achieves better bit allocation for video coding system by changing quantization parameters at macro block level. Compared to JM reference software with macro block layer rate control, the proposed algorithm achieves better and more stable quality with higher average SSIM index and smaller SSIM variation.",2012,0, 5798,Query Range Sensitive Probability Guided Multi-probe Locality Sensitive Hashing,"Locality Sensitive Hashing (LSH) is proposed to construct indexes for high-dimensional approximate similarity search. Multi-Probe LSH (MPLSH) is a variation of LSH which can reduce the number of hash tables. Based on the idea of MPLSH, this paper proposes a novel probability model and a query-adaptive algorithm to generate the optimal multi-probe sequence for range queries. Our probability model takes the query range into account to generate the probe sequence which is optimal for range queries. Furthermore, our algorithm does not use a fixed number of probe steps but a query-adaptive threshold to control the search quality. We do the experiments on an open dataset to evaluate our method. 
The experimental results show that our method can probe fewer points than MPLSH for getting the same recall. As a result, our method can get an average acceleration of 10% compared to MPLSH.",2012,0, 5799,Developing a Bayesian Network Model Based on a State and Transition Model for Software Defect Detection,"This paper describes a Bayesian Network model to diagnose the cause-effect of software defect detection in the process of software testing. The aim is to use the BN model to identify defective software modules for efficient software test in order to improve the quality of a software system. It can also be used as a decision tool to assist software developers to determine defect priority levels for each phase of a software development project. The BN tool can provide a cause-effect relationship between the software defects found in each phase and other factors affecting software defect detection in software testing. First, we build a State and Transition Model that is used to provide a simple framework for integrating knowledge about software defect detection and various factors. Second, we convert the State and Transition Model into a Bayesian Network model. Third, the probabilities for the BN model are determined through the knowledge of software experts and previous software development projects or phases. Last, we observe the interactions among the variables and allow for prediction of effects of external manipulation. We believe that both STM and BN models can be used as very practical tools for predicting software defects and reliability in varying software development lifecycles.",2012,0, 5800,A Study of Student Experience Metrics for Software Development PBL,"In recent years, the increased failure originated in the software defects, in various information systems causes a serious social problem. In order to build a high-quality software, cultivation of ICT (Information and Communication Technology) human resources like a software engineer is required. A software development PBL (Project-based Learning) is the educational technique which lets students acquire knowledge and skill spontaneously through practical software development. In PBL, on the other hand, it is difficult to evaluate not only the quality of the product but also the quality of the development process in the project. In this paper, we propose the student evaluation metrics to assess the development process in PBL. The student evaluation metrics represent LOC (Lines of Code) and development time for each product developed by a student. By using online storage, these metrics can be measured and visualized automatically. We conducted an experiment to evaluate the accuracy of the metrics about development time. As a result, we confirmed that development time metrics can be measured with approximately 20% of error.",2012,0, 5801,An Ensemble Approach of Simple Regression Models to Cross-Project Fault Prediction,"In software development, prediction of fault-prone modules is an important challenge for effective software testing. However, high prediction accuracy may not be achieved in cross-project prediction, since there is a large difference in distribution of predictor variables between the base project (for building prediction model) and the target project (for applying prediction model.) In this paper we propose a prediction technique called ""an ensemble of simple regression models"" to improve the prediction accuracy of cross-project prediction.
The proposed method uses weighted sum of outputs of simple (e.g. 1-predictor variable) logistic regression models to improve the generalization ability of logistic models. To evaluate the performance of the proposed method, we conducted 132 combinations of cross-project prediction using datasets of 12 projects from NASA IV&V Facility Metrics Data Program. As a result, the proposed method outperformed conventional logistic regression models in terms of AUC of the Alberg diagram.",2012,0, 5802,Detecting Bad SNPs from Illumina BeadChips Using Jeffreys Distance,"Current microarray technologies are able to assay thousands of samples over million of SNPs simultaneously. Computational approaches have been developed to analyse a huge amount of data from microarray chips to understand sophisticated human genomes. The data from microarray chips might contain errors due to bad samples or bad SNPs. In this paper, we propose a method to detect bad SNPs from the probe intensities data of Illumina Beadchips. This approach measures the difference among results determined by three software Illuminus, GenoSNP and Gencall to detect the unstable SNPs. Experiment with SNP data in chromosome 20 of Kenyan people demonstrates the usefulness of our method. This approach reduces the number of SNPs that are needed to check manually. Furthermore, it has the ability in detecting bad SNPs that have not been recognized by other criteria.",2012,0, 5803,Directional undervoltage pilot scheme for distribution generation networks protection,"The trends of the actual distribution networks are moving toward a high penetration of distributed generation and power electronics converters. These technologies modify contribution-to-fault current magnitude and raise concern about new protection systems to accurately detect faults on distribution networks. This paper proposes a directional under-voltage pilot scheme to detect faulted branches in distribution networks. The aim of the proposed scheme is to provide an efficient algorithm with functions for fault detection, fault localization and fault isolation. The fault detection is based on the voltage measurements at each node of the distribution network when a fault occurs, which are compared with the prefault values. Then, once the fault is detected, the proposed scheme locates the fault comparing the current direction at each node. As the direction of the current when a fault occurs is known, the scheme uses this information to locate the faulted branch. After that, a trip signal is sent to the corresponding breakers in order to isolate the branch under fault. Besides, the proposed scheme enables back up protection using communication between adjacents nodes. A distribution network has been modeled in PSCAD/EMTDC software to verify the proposed algorithm, taking into account distributed generation provided by both wind turbines (doubly fed induction generator and permanent magnet generator with full converter) and solar photovoltaic installations. The behavior of the under-voltage measurements and the current direction has been studied for both generation and loads nodes. This algorithm has been tested varying fault location and resistance along the modeled distribution network.",2012,0, 5804,Automated ontology construction from scenario based software requirements using clustering techniques,"Ontologies have been utilized in many different areas of software engineering. 
As software systems grow in size and complexity, the need to devise methodologies to manage the amount of information and knowledge becomes more apparent. Utilizing ontologies in requirement elicitation and analysis is very practical as they help to establish the scope of the system and facilitate information reuse. Moreover ontologies can serve as a natural bridge to transition from the requirements gathering stage to designing the architecture for the system. However manual construction of ontologies is time consuming, error prone and subjective. Therefore it is greatly beneficial to devise automated methodologies which allow knowledge extraction from system requirements using an automated and systematic approach. This paper introduces an approach to systematically extract knowledge from system requirements to construct different views of ontologies for the system as a part of a comprehensive framework to analyze and validate software requirements and design.",2012,0, 5805,Relationship of intangible assets and competitive advantage for software production: A Brazilian companies study,"The competitive strategies of software-producing organizations are generally based on the opportunities and risks of each business deal and do not sufficiently explore the potential of their intangible assets. This study aims to evaluate the impact of intangible resources management in the formulation of competitive strategies in software-production. To assess the existence, intensity, and conditions of the cause-effect relationship between intangible assets and competitive advantages, it was conducted an exploratory survey among Brazilian companies that produce software, belonging to the same economic cluster. The attributes of intangible resources were the same used in a study conducted in Europe in 2004 (value, rareness, imitation and substitution) and the results were statistically evaluated, using contingency tables and Chi-square tests. At same time, it was performed a case study with some of these Brazilian organizations to know the intensity and conditions under which the relationship could occur. The research found that intangible resources are structured according to the type of business involved, and they emphasize the basic management's elements: schedule - cost and quality as the most visible for the organization and customers. The intangible resource knowledge provides opportunities for the organization to identify effective partnerships and establish strong and long relationship.",2012,0, 5806,Steady-state and transient performances of Oman transmission system with 200 MW photovoltaic power plant,"The paper presents steady-state and transient studies to assess the impact of a 200 MW Photovoltaic Power plant (PVPP) connection on the Main Interconnected Transmission System (MITS) of Oman. The PVPP consists mainly of a large number of solar arrays, DC/DC converters, DC/AC inverters, filters, and step-up transformers. Two proposed locations are considered to connect the PVPP plant to MITS: Manah 132 kV and Adam 132/33 kV grid stations in Al-Dakhiliah region. The transmission grid model of 2016 has been updated to include the simulation of the proposed 200 MW PVPP at either Manah or Adam. The DIgSILENT PowerFactory professional software is used to simulate the system and to obtain the results. The results include percentage of transmission line loadings, percentage of transformer loadings, busbar voltages, grid losses, in addition to 3-phase and 1-phase fault levels. 
Also, simulation studies have been performed to assess the transmission system transient responses to the PVPP outage. Steady state and transient analyses have shown that the connection of the PVPP plant at Manah or Adam to the transmission system is acceptable. The transient responses have proved that the system remains stable when it is subjected to the PVPP forced outage.",2012,0, 5807,Android-based universal vehicle diagnostic and tracking system,"This system aims to provide a low-cost means of monitoring a vehicle's performance and tracking by communicating the obtained data to a mobile device via Bluetooth. Then the results can be viewed by the user to monitor fuel consumption and other vital vehicle electromechanical parameters. Data can also be sent to the vehicle's maintenance department which may be used to detect and predict faults in the vehicle. This is done by collecting live readings from the engine control unit (ECU) utilizing the vehicle's built in on-board diagnostics system (OBD). An electronic hardware unit is built to carry-out the interface between the vehicle's OBD system and a Bluetooth module, which in part communicates with an Android-based mobile device. The mobile device is capable of transmitting data to a server using cellular internet connection.",2012,0, 5808,Framework for effective utilization of e-content in engineering education,"Even though lot of useful content is available on the Internet that is relevant to engineering courses, the scattered nature of the content being available from different websites, wastage of time in finding the right content, absence of any mechanism that certifies the correctness of the accessed content, different levels of the content on a given topic, too-lengthy non-interactive text content, are some of the problems faced by learners, in using this available content. In this context it is important to have a system that can direct the learner to right content, which is presented in an easily understandable manner, with virtual experimentation options wherever applicable, along with appropriate assessments carried out to certify the assessed. In this backdrop, a framework is proposed in this paper that can offer the details of e-content, in terms of its type, relevance, level, correctness, extent of coverage, and usage statistics. By making this info available from authenticated portals like University websites, updating it frequently by sharing information from similar other portals, taking into account the user feedbacks and subject experts' ratings to decide the content quality, learners can be enabled to access good quality e-content in less time and effort. Software agents similar to citation agents are proposed to be used for collecting the details from multiple sites in Internet. For assessment also, the software clients that run on content pages can bring the result of such assessments to the portals where the data belonging to such other previous assessments taken by the user were also stored to offer assessment scores of the learner over a period of time on multiple subjects and skills. Overall, this system can reduce the burden of the learner in accessing the required content and can make his learning more interesting by having competitive learning with online assessments and credits obtained thereon.",2012,0, 5809,Condition Monitoring for Detecting VFC and STATCOM Faults in Wind Turbine Equipped with DFIG,"Condition monitoring of Doubly Fed Induction Generators (DFIG) is growing in importance for wind turbines. 
This paper investigates the effect of VFC (Variable Frequency Converter) and STATCOM Faults on wind turbine equipped with DFIG operation in order to condition monitoring of wind turbine. Consequently, a proposed method is used to detecting these faults means of harmonic components analyzing of DFIG rotor current. The simulation has been done with PSCAD/EMTDC software.",2012,0, 5810,A Study on Reclosing of Asaluyeh-Isfahan 765 kV Transmission Line Considering the Effect of Neutral Reactor in Reducing Resonant Voltages,"Shunt reactors are utilized on long transmission lines (in this case a 640km 765kV line) to reduce over voltages experienced under light load conditions. When the compensated line is opened to clear a fault, it is found that the open-phase voltage does not disappear. In some cases, a dangerous transient voltage with a resonant frequency between 30Hz and 55Hz can be seen which gradually reduces in magnitude. This over voltage can damage line connected equipments like shunt reactors and open circuit-breakers. This phenomenon is because of the trapped charges existing on transmission line during the dead time of secondary arc extinction. In case of high transient over voltages, we need additional equipments to reduce them. Normally, closing resistors are used to limit the over voltages. This solution is expensive and prone to failure. The proposed method is the usage of neutral reactors in the neutral of shunt reactors which is a cost effective solution. The technique is shown to reduce the resultant over voltages on the power system and allows for faster reclosing thus improving stability. In this research work, the investigation of a neutral reactor application is performed for transmission lines using transient simulation with appropriate models. The application of neutral reactors for reducing the resonant over voltages and reclosing over voltages during fault clearing is analyzed. The simulations are carried out by ATP/EMTP software and the advantages of using neutral reactors for Asaluyeh-Isfahan 765 kV transmission line are discussed.",2012,0, 5811,Research on the intelligent protection system of coal conveyor belt,"A sort of intelligent protection system for conveyor belt was designed, which mainly aimed at the phenomenon of belt slipped, belt broken, overloaded and coupling broken etc. It was designed by using intelligent instrument, monitoring software and fault detection technology, furthermore, the software and hardware were described. The protection system can automatically detect, diagnose corresponding fault, and give alarm signal with sound and light to stop the fault belt machine in due course. The application effect has proved that the protection system can improve intelligent degree of the whole burning coal transport and reduce the manual greatly and save a lot of funds for enterprise.",2012,0, 5812,Prediction of armored equipment maintenance support materials' consumption based on imitation,"On the foundation of equipment fault regulation's research, through the analysis of the influential main factors of equipment maintenance materials' consumption, making use of Anylogic software, combining the actual usage of armored equipment in the army, setting up a equipment peacetime usage model, and then to predict the consumption of armored equipment maintenance material. The result express that it can carry on prediction to armored equipment maintenance materials' consumption in this way. 
Then to make a solid foundation for armored equipment maintenance resources transportation and maintenance support ability estimation research.",2012,0, 5813,Analyzing system of electric signals in spot welding process,"Analyzing system of electric signals in spot welding process is a very important detecting instrument for developing high quality welding equipment. Based on general platform with Windows operation circumstance, high speed data acquisition card PCL1800, and industrial computer. A software for detecting and analyzing electric signals in spot welding process with high speed and multifunction has been developed by using Visual C++6.0. The system can detect current and voltage waveform of spot welding process, and extract dynamic characteristic information of spot welding process. Thus, it provides a powerful tool for developing high performance welding and optimizing parameters in spot welding process.",2012,0, 5814,Document Quality Checking Tool for Global Software Development,"Software development projects often utilize global resources to reduce costs. Typically a large volume of unstructured office documents are involved. Unfortunately, in many cases the low quality of unstructured documents due to various location-related barriers (e.g. time zones, languages, and cultures) can cause negative effects on the outcomes of projects. Several approaches have been introduced for document quality checking but they have not generalized well enough to handle various unstructured documents in a broad range of projects. Based on past experience, we have prepared guidelines, templates, rules, and document quality-checking tools for designing and developing global software development projects. In this paper we specifically focus on the effectiveness of our document quality checking tool. The challenges for such a checking tool are that it must be generally adaptive and also highly accurate to be practical for industrial use. Our approach is template-based and consists of an extraction process for the physical-syntactic structure, a transformation process for the logical-semantic structure and an analysis process. Our experiments inspected 66 authentic customer documents, detecting 118 errors. The accuracy as measured by the true-positive ratio (accurately detected true errors) was 98.3% and the true-negative ratio (accurately detected non-errors) was 99.4%.",2012,0, 5815,Temporal segmentation tool for high-quality real-time video editing software,"The increasing use of video editing software requires faster and more efficient editing tools. As a first step, these tools perform a temporal segmentation in shots that allows a later building of indexes describing the video content. Here, we propose a novel real-time high-quality shot detection strategy, suitable for the last generation of video editing software requiring both low computational cost and high quality results. While abrupt transitions are detected through a very fast pixel-based analysis, gradual transitions are obtained from an efficient edge-based analysis. Both analyses are reinforced with a motion analysis that helps to detect and discard false detections. This motion analysis is carried out exclusively over a reduced set of candidate transitions, thus maintaining the computational requirements demanded by new applications to fulfill user needs.",2012,0, 5816,Fault coverage of a timing and control flow checker for hard real-time systems,"Dependability is a crucial requirement of today's embedded systems.
To achieve a higher level of fault tolerance, it is necessary to develop and integrate mechanisms for a reliable fault detection. In the context of hard real-time computing, such a mechanism should also guarantee correct timing behavior, an essential requirement for these systems. In this paper, we present results of the fault coverage of a lightweight timing and control flow checker for hard real-time systems. An experimental evaluation shows that more than 30% of injected faults can be detected by our technique, while the number of errors leading to an endless loop is reduced by around 80 %. The check mechanism causes only very low overhead concerning additional memory usage (15.0% on average) and execution time (12.2% on average).",2012,0, 5817,OPAL 2: Rapid optical simulation of silicon solar cells,"The freeware program OPAL 2 computes the optical losses associated with the front surface of a Si solar cell. It calculates the losses for any angle of incidence within seconds, where the short computation time is achieved by decoupling the ray tracing from the Fresnel equations. Amongst other morphologies, OPAL 2 can be used to assess the random-pyramid texture of c-Si solar cells, or the `isotexture' of mc-Si solar cells, and to determine (i) the optimal thickness of an antireflection coating with or without encapsulation, (ii) the impact of imperfect texturing, such as non-ideal texture angles, over-etched isotexture, and flat regions, and (iii) the subsequent 1D generation profile in the Si. This paper describes the approach and assumptions employed by OPAL 2 and presents examples that demonstrate the dependence of optical losses on texture quality and incident angle.",2012,0, 5818,Modeling and simulation based study for on-line detection of partial discharge of solid dielectric,"Nowadays electric utilities are facing major problems due to the ageing and deterioration of high voltage (HV) power equipments in their operating service period. There are several solid materials are used in high voltage power system equipments for insulation purpose. The insulators used in HV power equipment always have a small amount of impurity inside it. The impurity is mainly in the form of solid, gas or liquid. In most cases the impurity is in the form of air bubbles (void) which creates a weak zone inside the insulator. Therefore, this void is the reason for the occurrence of partial discharge in high voltage power equipments while sustaining the high voltage. Ageing and deterioration is mainly occurs due to the presence of partial discharge in such insulator used in the high voltage power equipments. The presence of partial discharge for a long period of time is also causes the insulation failure of high voltage equipments used in power system. Therefore, the partial discharge detection and measurement is necessary for prediction and reliable operation of insulation in high voltage power equipments. In this work, to study the on-line detection of partial discharge an epoxy resin is taken as a solid dielectric for simulating and modeling purpose. This epoxy resin with small impurity (air bubble) under high voltage stress creates a source of partial discharge inside the dielectric. The generated partial discharge is continuously detected and monitored by using LabVIEW software. 
Simulation of real-time detection, de-noising and different analytic techniques for the partial discharge signal using LabVIEW software is proposed, which gives real-time visualization of the partial discharge signal produced inside the high voltage power equipment.",2012,0, 5819,Application of neural networks for transformer fault diagnosis,"The power transformer is one of the most important components in a power system. It experiences thermal and electrical stresses during its operation. The insulation system consisting of mineral oil and the insulation paper used in the transformer undergoes chemical changes under these stresses and gases are generated. These gases dissolve in the oil. The extracted dissolved gases are analysed in the laboratory using a gas chromatograph. Fault identification in a transformer is based on certain key-gas ratios. International standards such as IEEE and ASTM are in use for fault identification. However, these standards are not able to diagnose the fault under certain conditions. Hence, there is a need to improve the diagnostic accuracy. In this paper an attempt has been made to diagnose the faults in a power transformer using a three-level perceptron network. Three types of neural network simulation models are developed using MATLAB software and trained using the IEC TC 10 databases of faulty equipment inspected in service. The outputs of the neural network models are compared with the IEEE and ASTM methods. The comparison of the results indicates that the condition assessments offered by the models are capable of predicting the fault with a higher success rate than the conventional diagnostic methods.",2012,0, 5820,Compressed C4.5 Models for Software Defect Prediction,"Defects in any software must be handled properly, and the number of defects directly reflects the quality of the software. In recent years, researchers have applied data mining and machine learning methods to predicting software defects. However, in their studies, the method in which the machine learning models are directly adopted may not be precise enough. Optimizing the machine learning models used in defect prediction will improve the prediction accuracy. In this paper, aiming at the characteristics of the metrics mined from open source software, we propose three new defect prediction models based on the C4.5 model. The new models introduce Spearman's rank correlation coefficient as the basis for choosing the root node of the decision tree, which makes the models better at defect prediction. In order to verify the effectiveness of the improved models, an experimental scheme is designed. In the experiment, we compared the prediction accuracies of the existing models and the improved models, and the results showed that the improved models reduced the size of the decision tree by 49.91% on average and increased the prediction accuracy by 4.58% and 4.87% on the two modules used in the experiment.",2012,0, 5821,Efficient Refinement Checking for Model-Based Mutation Testing,"In model-based mutation testing, a test model is mutated for test case generation. The resulting test cases are able to detect whether the faults in the mutated models have been implemented in the system under test. For this purpose, a conformance check between the original and the mutated model is required. We have developed an approach for conformance checking of action systems, which are well-suited to specify reactive and non-deterministic systems. We rely on constraint solving techniques.
Both the conformance relation and the transition relation are encoded as constraint satisfaction problems. Earlier results showed the potential of our constraint-based approach to outperform explicit conformance checking techniques, which often face state space explosion. In this work, we go one step further and show optimisations that really boost our performance. In our experiments, we could reduce our runtimes by 80%.",2012,0, 5822,Reliability Prediction for Component-Based Systems: Incorporating Error Propagation Analysis and Different Execution Models,"Reliability, one of the most important quality attributes of a software system, should be predicted early in the development. This helps to improve the quality of the system in a cost-effective way. Existing reliability prediction methods for component-based systems use Markov models and are often limited to a model of stopping failures and sequential executions. Our approach relaxes these constraints by incorporating error propagation analysis and multiple execution models together consistently. We demonstrate the applicability of our approach by modeling the reliability of the reporting service of a document exchange server and conduct reliability predictions.",2012,0, 5823,Redefinition of Fault Classes in Logic Expressions,"Fault-based testing selects test cases to detect hypothesized faults. In logic expression testing, many fault classes have been defined by researchers based on the syntax of the expressions. Due to the syntactic nature of the logic expressions, some fault classes may exist in one form (say, disjunctive normal form - DNF) of the logic expressions but not in other forms (say, general form). As a result, different fault-based testing techniques have been developed for different types of logic expressions and these techniques have different fault detecting capabilities. For example, some have high detecting power in DNF but low detecting power in the general form. Another complication arises when software developers decide which forms of logic expressions should be used in the first place. Should software developers use the general form for flexibility but compromise that with fewer fault classes and less fault detection? Or should they use DNF for more fault classes and, hence, better fault detection (because software developers have more 'hypothesized faulty' scenarios to test) but sacrifice the generality of the expressions? In this paper, we propose a set of uniform definitions of fault classes such that they can be applied irrespective of the syntactic nature of the logic expressions, to produce consistent fault-based testing techniques and fault detection capabilities.",2012,0, 5824,Multi-valued Decision Diagrams for the Verification of Consistency in Automotive Product Data,"Highly customizable products and mass customization - as increasing trends of recent years - are mainly responsible for an immense growth of complexity within the digital representations of knowledge of car manufacturers. We developed a method to detect and analyze inconsistencies by employing a Multi-Valued Decision Diagram (MDD) which is used to encode the set of all valid product configurations. On this basis, we stated a number of consistency rules that are checked by a set-based verification scheme.",2012,0, 5825,Modular Heap Abstraction-Based Code Clone Detection for Heap-Manipulating Programs,"Code cloning is a prevalent activity during the development of software.
However, it is harmful to the maintenance and evolution of software. Current techniques for detecting code clones are mostly syntax-based, and cannot detect all code clones. In this paper, we present a novel semantic-based clone detection technique by obtaining the similarity of the precondition and postcondition of each procedure, which are computed by a context- and field-sensitive fixpoint iteration algorithm based on modular heap abstraction in heap-manipulating programs. Experimental evaluation on a set of C benchmark programs shows that the proposed approach scales to detect various clones that existing syntax-based clone detectors have missed.",2012,0, 5826,Fingertips Detection Algorithm Based on Skin Colour Filtering and Distance Transformation,"Multi-fingertip location is a difficult and active topic in finger-based human-computer interaction systems. There are two major difficulties in this field: 1) obtaining an accurate binary image of the hand, and 2) locating fingertips in that binary image. This article presents a real-time multi-fingertip tracking and location algorithm based on skin colour and distance transformation. The algorithm consists of the following steps. First, it uses an elliptical boundary model to detect the skin colour of the human hand in the YCbCr colour space. After that, distance transformation is used to filter the fingers from the hand, leaving only the palm area. Meanwhile, it uses the zeroth and first moments to calculate the center of gravity of the palm. Then the initial positions of fingertips and finger-roots can be located from the distances of hand-edge pixels to the center of gravity of the palm. Last, it accurately locates fingertips according to the position relationship between fingertips and finger-roots. Experimental results show fingertips can be located quickly and accurately by the algorithm, which fully meets the requirements of real-time computing tasks.",2012,0, 5827,Robustness validation of integrated circuits and systems,"Robust system design is becoming increasingly important, because of the ongoing miniaturization of integrated circuits, the increasing effects of aging mechanisms, and the effects of parasitic elements, both intrinsic and external. For safety reasons, particular emphasis is placed on robust system design in the automotive and aerospace sectors. Until now, the term robustness has been applied very intuitively and there has been no proper way to actually measure robustness. However, the complexity of contemporary systems makes it difficult to fulfill tight specifications. For this reason, robustness must be integrated into a partially automated design flow. In this paper, a new approach to robustness modeling is presented, in addition to new ways to quantify or assess the robustness of a design. To demonstrate the flexibility of the proposed approach, it is adapted and applied to several different scenarios. These include the robustness evaluation of digital circuits under aging effects, such as NBTI; the robustness modeling of analog and mixed signal circuits using affine arithmetic; and the robustness study of software algorithms on a high system level.",2012,0, 5828,Towards a lightweight model driven method for developing SOA systems using existing assets,"Developing SOA based systems and migrating legacy systems to SOA are difficult and error prone tasks, where approaches, methods and tools play a fundamental role. For this reason, several proposals have been brought forward in the literature to help SOA developers.
This paper sketches a novel method for the development of systems based on services, i.e., adhering to the SOA paradigm, which follows the model driven paradigm. Our method is based on a meet-in-the-middle approach that allows the reuse of existing assets (e.g., legacy systems). The starting point of this method is a UML model representing the target business process and the final result is a detailed design model of the SOA system. The method, explained in this paper using a simple running example, has been applied successfully within an industrial project.",2012,0, 5829,Hardware implementation of GMDH-type artificial neural networks and its use to predict approximate three-dimensional structures of proteins,"Implementations of artificial neural networks in software on general purpose computer platforms have been brought to an advanced level, both in terms of performance and accuracy. Nonetheless, neural networks are not so easily applied in embedded systems, especially when full retraining of the network is required. This paper shows the results of the implementation of artificial neural networks based on the Group Method of Data Handling (GMDH) in reconfigurable hardware, both in the steps of training and running. A hardware architecture has been developed to be applied as a co-processing unit and an example application has been used to test its functionality. The application has been developed for the prediction of approximate 3-D structures of proteins. A set of experiments has been performed on a PC using the FPGA as a co-processor accessed through sockets over the TCP/IP protocol. The design flow employed demonstrated that it is possible to implement the network in hardware to be easily applied as an accelerator in embedded systems. The experiments show that the proposed implementation is effective in finding good quality solutions for the example problem. This work represents the early results of the novel technique of applying GMDH algorithms in hardware for solving the problem of protein structure prediction.",2012,0, 5830,Binary-channel can covers defects detection system based on machine vision,"In order to realize the on-line detection and elimination of unqualified can covers, a binary-channel inspection system based on machine vision is proposed in this article. The system mainly consists of two illumination sources, two cameras, two sensors, an IPC, an interface circuit, two eliminating devices, a set of algorithms for image processing and software for the control of the independent binary channels. In operation, each channel of the system is placed directly above a conveyor, which transports can covers to the detection position so that the camera is triggered, and then an image is captured with a flash. The cover images are transferred to the IPC and then processed by the algorithm based on template matching and a variation model. Depending on the processing results, unqualified covers are eliminated. The system is proved to be pollution-free and low-cost, and defects such as double covers, no glue, shoulder scratches and distortion can be detected with 98.7% accuracy at a speed of 1200 covers per minute.",2012,0, 5831,Detecting the Onset of Dementia Using Context-Oriented Architecture,"In the last few years, Aspect Oriented Software Development (AOSD) and Context Oriented Software Development (COSD) have become interesting alternatives for the design and construction of self-adaptive software systems.
An analysis of these technologies shows them all to employ the principle of the separation of concerns, Model Driven Architecture (MDA) and Component-based Software Development (CBSD) for building high-quality software systems. In general, the ultimate goal of these technologies is to be able to reduce development costs and effort, while improving the adaptability and dependability of software systems. COSD has emerged as a generic development paradigm for constructing self-adaptive software by integrating MDA with a context-oriented component model. The self-adaptive applications are developed using a Context-Oriented Component-based Applications Model-Driven Architecture (COCA-MDA), which generates an Architecture Description Language (ADL) presenting the architecture as a component-based software system. COCA-MDA enables the developers to modularise the application based on its context-dependent behaviours, and separate the context-dependent functionality from the context-free functionality of the application. In this article, we wish to study the impact of the decomposition mechanism performed in MDA approaches on software self-adaptability. We argue that a significant advance in software modularity based on context information can increase software adaptability and improve performance and modifiability.",2012,0, 5832,Varying Topology of Component-Based System Architectures Using Metaheuristic Optimization,"Today's complex systems require software architects to address a large number of quality properties. These quality properties can be conflicting. In practice, software architects manually try to come up with a set of different architectural designs and then try to identify the most suitable one. This is a time-consuming and error-prone process. Also, this may lead the architect to suboptimal designs. To tackle this problem, metaheuristic approaches, such as genetic algorithms, for automating architecture design have been proposed. Metaheuristic approaches use degrees of freedom to automatically generate new solutions. In this paper we present how to address the topology of the hardware platform as a degree of freedom for system architectures. This aspect of varying architectures has not yet been addressed in existing metaheuristic approaches to architecture design. Our approach is implemented as part of the AQOSA (Automated Quality-driven Optimization of Software Architectures) framework. AQOSA aids architects by automatically synthesizing optimal solutions using multiobjective evolutionary algorithms and it reports the trade-offs between multiple quality properties as output. In this paper we use an example system to show that the hardware-topology degree of freedom helps the evolutionary algorithm explore a larger design space. It can find new architectural solutions which would not be found otherwise.",2012,0, 5833,TIRT: A Traceability Information Retrieval Tool for Software Product Lines Projects,"The Software Product Line approach has proven to be an effective methodology for developing a diversity of software products at lower costs, in shorter time, and with higher quality. However, the adoption and maintenance of traceability in the context of product lines is considered a difficult task, due to the large number and heterogeneity of assets developed during product line engineering. Furthermore, the manual creation and management of traceability relations is difficult, error-prone, time consuming and complex.
In this sense, the Traceability Information Retrieval Tool (TIRT) is proposed in order to mitigate the traceability maintenance problem. An experimental study was performed in order to assess the viability of the proposed tool and traceability scenarios.",2012,0, 5834,Path Coverage Criteria for Palladio Performance Models,"Component-based software engineering is supported by performance prediction approaches on the design level ensuring desired properties of systems throughout their entire lifecycle. The achievable prediction quality is a direct result of the quality of the used performance models, which is usually assured by validation. Existing approaches often rely solely on the expertise of performance engineers to determine if sufficient testing has occurred. There is a lack of quantitative criteria capturing which aspects of a model have been assessed and covered successfully. In this paper, we define path coverage criteria for Palladio performance models and show how the required testing effort can be estimated for arbitrary Palladio models. We demonstrate the applicability of effort estimation for each coverage criterion, provide estimates for a complex model from the Common Component Modelling Example, and show how these estimates can guide criteria selection.",2012,0, 5835,A Model-Driven Dependability Analysis Method for Component-Based Architectures,"Critical distributed real-time embedded component-based systems must be dependable and thus be able to avoid unacceptable failures. To efficiently evaluate the dependability of the assembly obtained by selecting and composing components, well-integrated and tool-supported techniques are needed. Currently, no satisfying tool-supported technique fully integrated in the development life-cycle exists. To overcome this limitation, we propose CHESS-FLA, which is a model-driven failure logic analysis method. CHESS-FLA allows designers to: model the nominal as well as the failure behaviour of their architectures, automatically perform dependability analysis through a model transformation, and, finally, ease the interpretation of the analysis results through back-propagation onto the original architectural model. CHESS-FLA is part of an industrial quality tool-set for the functional and extra-functional development of high integrity embedded component-based systems, developed within the EU-ARTEMIS funded CHESS project. Finally, we present a case study taken from the telecommunication domain to illustrate and assess the proposed method.",2012,0, 5836,Random Test Case Generation and Manual Unit Testing: Substitute or Complement in Retrofitting Tests for Legacy Code?,"Unit testing of legacy code is often characterized by the goal to find a maximum number of defects with minimal effort. In the context of restrictive time frames and limited resources, approaches for generating test cases promise increased defect detection effectiveness. This paper presents the results of an empirical study investigating the effectiveness of (a) manual unit testing conducted by 48 master students within a time limit of 60 minutes and (b) tool-supported random test case generation with Randoop. Both approaches have been applied on a Java collection class library containing 35 seeded defects.
With the specific settings, where time and resource restrictions limit the performance of manual unit testing, we found that (1) the number of defects detected by random test case generation is in the range of manual unit testing and, furthermore, (2) the randomly generated test cases detect different defects than manual unit testing. Therefore, random test case generation seems a useful aid to jump start manual unit testing of legacy code.",2012,0, 5837,From Assumptions to Context-Specific Knowledge in the Area of Combined Static and Dynamic Quality Assurance,"High-quality software is an indispensable requirement today. Low-quality products can result in high overall costs (e.g., due to rework). Quality assurance can help to reduce the number of defects before a software product is delivered. However, quality assurance itself can be a major cost driver, especially testing activities. One solution for balancing these costs is to focus testing on defect-prone parts, which is nowadays often done by using product and process metrics. However, data from static quality assurance activities that is available early is usually not considered when focusing testing activities. Integration of static and dynamic quality assurance activities is a promising strategy for exploiting synergy effects and, consequently, one way to reduce costs and effort. For effective and efficient integration, knowledge about the relationships between the integrated techniques is necessary, which is often not available. Thus, assumptions have to be stated and evaluated. Existing approaches for this typically describe procedures only on a high level. Therefore, this paper presents procedures for defining, deriving, and evaluating assumptions in a systematic and detailed manner for the integrated inspection and testing (In2Test) approach.",2012,0, 5838,Micro Pattern Fault-Proneness,"One of the goals of Software Engineering is to reduce, or at least to try to control, the defectiveness of software systems during the development phase. The aim of our study is to analyze the relationship between micro patterns (introduced by Gil and Maman) and faults in a software system. Micro patterns are similar to design patterns, but their characteristic is that they can be identified automatically, and are at a lower level of abstraction with respect to design patterns. Our study aims to show, through empirical studies of open source software systems, which categories of micro patterns are more strongly correlated with faults. Gil and Maman demonstrated, and subsequent studies confirmed, that 75% of the classes of a software system are covered by micro patterns. In our study we also analyze the relationship between faults and the remaining 25% of classes that do not match any micro pattern. We found that these classes are more likely to be fault-prone than the others. We also studied the correlation among all the micro patterns of the catalog, in order to verify the existence of relationships between them.",2012,0, 5839,Guiding Testing Activities by Predicting Defect-Prone Parts Using Product and Inspection Metrics,"Product metrics, such as size or complexity, are often used to identify defect-prone parts or to focus quality assurance activities. In contrast, quality information that is available early, such as information provided by inspections, is usually not used.
Currently, little experience is documented in the literature on whether data from early defect detection activities can support the identification of defect-prone parts later in the development process. This article compares selected product and inspection metrics commonly used to predict defect-prone parts. Based on initial experience from two case studies performed in different environments, the suitability of different metrics for predicting defect-prone parts is illustrated. These studies revealed that inspection defect data seems to be a suitable predictor, and a combination of certain inspection and product metrics led to the best prioritizations in our contexts.",2012,0, 5840,Towards a uniform evaluation of the science quality of SKA technology options: Polarimetric aspects,"We discuss how to evaluate SKA technology options with regard to science output quality. In this work we will focus on polarimetry. We review the SKA specification for polarimetry and assess these requirements. In particular we will use as an illustrative case study a comparison of two dish types combined with two different feeds. The dish types we consider are optimized axi-symmetric prime-focus and offset Gregorian reflector systems; and the two feeds are the Eleven-feed (wideband) and a choked horn (octave band). To evaluate the imaging performance we employ end-to-end simulations in which given sky models are, in software, passed through a model of the telescope design according to its corresponding radio interferometric measurement equation to produce simulated visibilities. The simulated visibilities are then used to generate simulated sky images. These simulated sky images are then compared to the input sky models and various figures-of-merit for the imaging performance are computed. A difficulty is the vast parameter space for observing modes and configurations that exists even when the technology is fixed. However, one can fix certain standard benchmark observation modes that can be applied across the board to the various technology options. The importance of standardized, end-to-end simulations, such as the one presented here, is that they address the high-level science output from SKA as a whole rather than low-level specifications of its individual parts.",2012,0, 5841,Capability of single hardware channel for automotive safety applications according to ISO 26262,"There is no doubt that electromobility will be the future. All-electric vehicles were already available on the market in 2011 and 14 new vehicles will be commercially available in 2012. Due to the fact that automotive applications are influenced by the safety requirements of ISO 26262, the use of new technologies nowadays requires more and more understanding of fail-safe and fault-tolerant systems due to increasingly complex systems. The safety of electric vehicles has the highest priority because it helps contribute to customer confidence and thereby ensures further growth of the electromobility market. Therefore, in series production redundant hardware concepts like dual core microcontrollers running in lock-step mode are used to meet ASIL D requirements given by ISO 26262.
In this paper, redundant hardware concepts and coded processing, which are listed in the current ISO 26262 standard as recommended safety measures, will be taken into account.",2012,0, 5842,An Empirical Study on Design Diversity of Functionally Equivalent Web Services,"A number of approaches based on design diversity moderate the communication between clients and functionally equivalent services, i.e., variant services, to tolerate software faults in service-oriented applications. Nevertheless, it is unclear whether variant services are actually diverse and fail on disjoint subsets of the input space. In a previous work, we proposed an experimental setup to assess design diversity of variant services that realize a requirements specification. In this work, we utilize the proposed experimental setup to assess the design diversity of a number of third-party Web services adhering to seven different requirements specifications. In this paper, we describe in detail the main findings and lessons learnt from this empirical study. Firstly, we investigate whether variant services are in fact diverse. Secondly, we investigate the effectiveness of service diversity for tolerating faults. The results suggest that there is diversity in the implementation of variant services. However, in some cases, this diversity might not be sufficient to improve system reliability. Our findings provide an important knowledge basis for engineering effective fault-tolerant service applications.",2012,0, 5843,A Critical Survey of Security Indicator Approaches,"To better control IT security in software engineering and IT management, we need to assess security qualities in the different phases of a system's lifecycle. To this end, various security indicators, measures, and metrics have been proposed by scientists and practitioners, but few have gained general acceptance. We surveyed the current state of the art in qualitative and quantitative security measurement to characterize the available measurement strategies, their maturity, and the conceptual or technical obstacles preventing further progress in this field of research. We classified the proposed security indicators with respect to their characteristic properties and derived a classification tree delineating the different security assessment strategies and their derived security measures. Based on this overview, we analyzed the relative merits and deficiencies of current approaches, and we suggested future steps towards better security metrics. This paper summarizes the main results of our survey.",2012,0, 5844,Type Classification against Fault Enabled Mutant in Java Based Smart Card,"Smart cards are often the target of software or hardware attacks. For instance, the most recent attacks are based on fault injection, which can modify the behavior of applications loaded in the card, changing them into mutant applications. In this paper, we propose a new protection mechanism which makes applications less prone to mutant generation. This countermeasure requires a transformation of the original program byte codes which remains semantically equivalent. It requires a modification of the Java Virtual Machine which remains backward compatible and a dedicated framework to deploy the applications.
Hence, our proposition improves the ability of the platform to resist Fault Enabled Mutants.",2012,0, 5845,Application of a reliability model generator to a pressure tank system,"A number of mathematical modelling techniques exist which are used to measure the performance of a given system, by assessing each individual component within the system. This can be used to determine the failure frequency or probability of failure of the system. Software is available to undertake the task of analysing these mathematical models after an individual or group of individuals manually creates the models. The process of generating these models is time consuming and reduces the impact of the model on the system design. One way to improve this would be to automatically generate the model. In this work the procedure to automatically construct a model, based on Petri nets, for systems undergoing a phased mission is applied to a pressure tank system undertaking a four-phase mission.",2012,0, 5846,A Linguistic Approach for Robustness in Context Aware Applications,"Context-aware applications are vulnerable to errors due to the devices and networks engaged in the systems, as well as the complex control and data structures in the applications. Although fault-tolerant technologies and software verification are widely used to prevent and remedy errors, we notice that the programming languages used in developing context-aware applications also play important roles in generating less error-prone programs. In this paper we introduce our recent efforts devoted to devising a programming language supporting safety-related features and formal semantics for context-aware applications.",2012,0, 5847,Performance Management of Virtual Machines via Passive Measurement and Machine Learning,"Virtualization is commonly used to efficiently operate servers in data centers. The autonomic management of virtual machines enhances the advantages of virtualization. For the development of such management, it is important to establish a method to accurately detect performance degradation in virtual machines. This paper proposes a method that detects degradation via the passive measurement of traffic exchanged by virtual machines. Using passive traffic measurement is advantageous because it is robust against heavy loads, nonintrusive to the managed machines, and independent of hardware/software platforms. From the measured traffic metrics, the performance state is determined by a machine learning technique that algorithmically determines the complex relationship between traffic metrics and performance degradation from training data. Moreover, the feasibility and effectiveness of the proposed method are confirmed experimentally.",2012,0, 5848,Cloud Resource Provisioning to Extend the Capacity of Local Resources in the Presence of Failures,"In this paper, we investigate Cloud computing resource provisioning to extend the computing capacity of local clusters in the presence of failures. We consider three steps in resource provisioning, including resource brokering, dispatch sequences, and scheduling. The proposed brokering strategy is based on the stochastic analysis of routing in distributed parallel queues and takes into account the response time of the Cloud provider and the local cluster while considering the computing cost of both sides. Moreover, we propose dispatching with probabilistic and deterministic sequences to redirect requests to the resource providers.
We also incorporate checkpointing into some well-known scheduling algorithms to provide a fault-tolerant environment. We propose two cost-aware and failure-aware provisioning policies that can be utilized by an organization that operates a cluster managed by virtual machine technology and seeks to use resources from a public Cloud provider. Simulation results demonstrate that the proposed policies improve the response time of users' requests by a factor of 4.10 under a moderate load with a limited cost on a public Cloud.",2012,0, 5849,Sensor Placement with Multiple Objectives for Structural Health Monitoring in WSNs,"Sensor placement plays a vital role in deploying wireless sensor networks (WSNs) for structural health monitoring (SHM) efficiently and effectively. Existing civil engineering approaches do not seriously consider WSN constraints, such as communication load, network connectivity, and fault tolerance. In this paper, we study the methodology of sensor placement optimization for SHM that addresses three key aspects: finding a high quality placement of a set of sensors that satisfies civil engineering requirements; ensuring the communication efficiency and low complexity for sensor placement; and reducing the probability of a network failure. Particularly, after the placement of a subset of sensors, we find some distance-sensitive, but unused, near optimal locations for the remaining sensors to achieve a communication-efficient WSN. By means of the placement, we present a 'connectivity tree' by which structural health state or network maintenance can be achieved in a decentralized manner. We then optimize the system performance by considering multiple objectives: lifetime prolongation, low communication cost, and fault tolerance. We validate the efficiency and effectiveness of this approach through extensive simulations and a proof-of-concept implementation on a real physical structure.",2012,0, 5850,A Partial Reconstruction of Connected Dominating Sets in the Case of Fault Nodes,"Node failure in a connected dominating set (CDS) is an event of non-negligible probability. For applications where fault tolerance is critical, a traditional dominating-set based routing may not be a desirable form of clustering. A typical localized algorithm to construct a CDS has a time complexity of O(Δ), where Δ is the maximum degree of the input graph. In this paper we inspect the problem of load balancing in dominating-set based routing. The motivation of load balancing is to prolong the network lifetime, while minimizing partitions of the network due to node failures, which cause interruptions in communication among nodes. The idea is to find alternative nodes within a restricted range and locally reconstruct the CDS to include them, instead of totally reconstructing a new CDS. The number of nodes which should be awakened during partial reconstruction is less than 2(Δ-1)p, where p denotes the nodes from the CDS in the neighborhood of the faulty node.",2012,0, 5851,Double Mutual-Aid Checkpointing for Fast Recovery,"Because of the enlarging system size and the increasing number of processors, the probability of errors increases and multiple simultaneous failures become the norm rather than the exception. Therefore, tolerating multiple failures is indispensable. Normally, most diskless checkpointing schemes need the maximum recovery overhead no matter how many failures happen at the same time. However, failures of a small number of processors happen more frequently than the worst case.
This study resolves the dilemma between more fault tolerance and fast recovery by presenting a novel diskless checkpointing scheme which makes use of double mutual-aid checkpoints. It not only gives the necessary and sufficient condition but also proposes a method for determining the setting of double mutual-aid checkpoints.",2012,0, 5852,Reliability Enhancement of Fault-prone Many-core Systems Combining Spatial and Temporal Redundancy,"The increasing transistor integration capacity will entail hundreds of processors on a single chip. Further, this will lead to an inherent susceptibility of these systems to errors. To obtain reliable systems again, various redundancy techniques can be applied. Of course, the usage of those techniques involves a significant overhead. Therefore, the identification of the optimal degree of redundancy is an important objective. In this paper we focus on core-level redundancy and checkpointing rollback-recovery. A model to determine the optimal degree of spatial and temporal redundancy regarding the minimal expected execution time will be introduced. Further, we will show that in several cases, the minimal expected execution time is achieved only by a simultaneous combination of both techniques, spatial redundancy and temporal redundancy.",2012,0, 5853,Accelerated aging experiments for capacitor health monitoring and prognostics,"This paper discusses experimental setups for health monitoring and prognostics of electrolytic capacitors under nominal operation and accelerated aging conditions. Electrolytic capacitors have higher failure rates than other components in electronic systems such as power drives and power converters. Our current work focuses on developing first-principles-based degradation models for electrolytic capacitors under varying electrical and thermal stress conditions. Prognostics and health management for electronic systems aims to predict the onset of faults, study causes for system degradation, and accurately compute remaining useful life. Accelerated life test methods are often used in prognostics research as a way to model multiple causes and assess the effects of the degradation process through time. It also allows for the identification and study of different failure mechanisms and their relationships under different operating conditions. Experiments are designed for aging of the capacitors such that the degradation pattern induced by the aging can be monitored and analyzed. Experimental setups and data collection methods are presented to demonstrate this approach.",2012,0, 5854,Intelligent software sensors and process prediction for glass container forming processes based on multivariate statistical process control techniques,"Glass container forming processes have attracted more attention over the past years due to the problem of lacking process information and correlation for key variables within the processes. In this paper an approach to develop process modeling and intelligent software sensing is presented for application based on multivariate statistical process control methods. The intelligent software sensors are able to provide real-time estimation of key variables, and Partial Least Squares (PLS) techniques have allowed for forward prediction of final product quality variables. An application of software sensors used for container forming blank temperature is presented along with PLS being applied to predict the wall and base dimensions of glass container products.
Initial results show that these methods are very promising in providing a significant improvement within this area, which is usually unmonitored and is susceptible to long time delays between forming and quality inspection.",2012,0, 5855,Finite element approach for performances prediction of a small synchronous generator using ANSYS software,"The paper presents a finite element (FE) based efficient analysis procedure for very small three-phase synchronous machines. Two FE formulation approaches are proposed to achieve this goal: the magnetostatic and the non-linear transient time stepped formulations. This combination allows us to predict the steady-state and the transient performance at no-load and in the case of a line-to-line short circuit fault. The method is successfully applied for replication and modeling of a small 120-VA, 4-salient pole, 208-V and 60-Hz, wound rotor laboratory synchronous generator. The closeness of FE simulated and experimental results greatly attests to the effectiveness of the proposed FE based small-generator modeling framework.",2012,0, 5856,Boundary-aided Extreme Value Detection based pre-processing algorithm for H.264/AVC fast intra mode prediction,"The mode decision in the intra prediction of an H.264/AVC encoder requires complex computations and a significant amount of time to select the best mode that achieves the minimum rate-distortion (RD). The complex computations for the mode decision cause difficulty in real-time applications, especially for software-based H.264/AVC encoders. This study proposes an efficient fast algorithm called Boundary-aided Extreme Value Detection (BEVD) to predict the best direction mode, excluding the DC mode, for fast intra-mode decision. The BEVD-based edge detection can predict luma-4x4, luma-16x16, and chroma-8x8 modes effectively. The first step involves using the pre-processing mode selection algorithm to find the primary mode that can be selected for fast prediction. The second step requires applying the selected few high-potential candidate modes to calculate the RD cost for the mode decision. The encoding time is largely reduced, and similar video quality is also maintained. Simulation results show that the proposed BEVD method reduces encoding time by 63%, requires a bit-rate increase of approximately 1.7%, and incurs a decrease in peak signal-to-noise ratio (PSNR) of approximately 0.06 dB in QCIF and CIF sequences, compared with the H.264/AVC JM 14.2 software. The proposed method achieves less PSNR degradation and bit-rate increase compared to previous methods with more encoding time reduction.",2012,0, 5857,Supporting Acceptance Testing in Distributed Software Projects with Integrated Feedback Systems: Experiences and Requirements,"During acceptance testing customers assess whether a system meets their expectations and often identify issues that should be improved. These findings have to be communicated to the developers -- a task we observed to be error prone, especially in distributed teams. Here, it is normally not possible to have developer representatives from every site attend the test. Developers who were not present might misunderstand insufficiently documented findings. This hinders fixing the issues and endangers customer satisfaction. Integrated feedback systems promise to mitigate this problem. They make it easy to capture findings and their context. Correctly applied, this technique could improve feedback, while reducing customer effort.
This paper collects our experiences from comparing acceptance testing with and without feedback systems in a distributed project. Our results indicate that this technique can improve acceptance testing -- if certain requirements are met. We identify key requirements feedback systems should meet to support acceptance testing.",2012,0, 5858,A Framework for Obtaining the Ground-Truth in Architectural Recovery,"Architectural recovery techniques analyze a software system's implementation-level artifacts to suggest its likely architecture. However, different techniques will often suggest different architectures for the same system, making it difficult to interpret these results and determine the best technique without significant human intervention. Researchers have tried to assess the quality of recovery techniques by comparing their results with authoritative recoveries: meticulous, labor-intensive recoveries of existing well-known systems in which one or more engineers is integrally involved. However, these engineers are usually not a system's original architects or even developers. This carries the risk that the authoritative recoveries may miss domain-, application-, and system context-specific information. To deal with this problem, we propose a framework comprising a set of principles and a process for recovering a system's ground-truth architecture. The proposed recovery process ensures the accuracy of the obtained architecture by involving a given system's architect or engineer in a limited, but critical fashion. The application of our work has the potential to establish a set of 'ground truths' for assessing existing and new architectural recovery techniques. We illustrate the framework on a case study involving Apache Hadoop.",2012,0, 5859,Automated Reliability Prediction from Formal Architectural Descriptions,"Quantitative assessment of quality attributes (i.e., non-functional requirements, such as performance, safety or reliability) of software architectures during design supports important early decisions and validates the quality requirements established by the stakeholder. In current practice, these quality requirements are most often manually checked, which is time-consuming and error-prone due to the overwhelmingly complex designs. We propose an automated approach to assess the reliability of software architectures. It consists in extracting a Markov model from the system specification written in an Architecture Description Language (ADL). Our approach translates the specified architecture to a high-level probabilistic model-checking language, supporting system validation and quantitative reliability prediction against usage profile, component arrangement and architectural styles. We validate our approach by applying it to different architectural styles and comparing those with two different quantitative reliability assessment methods presented in the literature: the composite and the hierarchical methods.",2012,0, 5860,TracQL: A Domain-Specific Language for Traceability Analysis,"Traceability analysis is used to improve quality in the software development process. As such an analysis is complex to implement and often requires a lot of dense code that is specific to the system being traced, there is a need for a framework to express traceability analysis tasks. This paper presents the Traceability Query Language TracQL, an expressive, extensible, representation-independent, and fast domain-specific language. Known approaches do not fulfill all these requirements.
We examine TracQL and compare it to other approaches on a software ageing problem, namely detecting divergence between architecture and code. The necessary TracQL code is much shorter (by a factor of 1.7) and about twice as fast as what known approaches can achieve.",2012,0, 5861,Documenting Early Architectural Assumptions in Scenario-Based Requirements,"In scenario-based requirement elicitation techniques such as quality attribute scenario elicitation and use case engineering, the requirements engineer is typically forced to make some implicit early architectural assumptions. These architectural assumptions represent initial architectural elements such as supposed building blocks of the envisaged system. Such implicitly specified assumptions are prone to ambiguity, vagueness, duplication, and contradiction. Furthermore, they are typically scattered across and tangled within the different scenario-based requirements. This lack of modularity hinders navigability of the requirement body as a whole. This paper discusses the need to explicitly document otherwise implicit architectural assumptions. Such an explicit intermediary between quality attribute scenarios and use cases enables the derivation and exploration of interrelations between these different requirements. This is essential to lower the mental effort required to navigate these models and facilitates a number of essential activities in the early development phases such as the selection of candidate drivers in attribute-driven design, architectural trade-off analysis and architectural change impact analysis.",2012,0, 5862,Workload-aware System Monitoring Using Performance Predictions Applied to a Large-scale E-Mail System,"Offering services on the internet requires dependable operation of the underlying software systems with guaranteed quality of service. The workload of such systems typically varies significantly throughout the day and thus leads to changing resource utilisations. Existing system monitoring tools often use fixed threshold values to determine if a system is in an unexpected state. Especially in low load situations, deviations from the system's expected behaviour are detected too late if fixed value thresholds (leveled for peak loads) are used. In this paper, we present our approach of a workload-aware performance monitoring process based on performance prediction techniques. This approach allows early detection of performance problems before they become critical. We applied our approach to the e-mail system operated by Germany's largest e-mail provider, the 1&1 Internet AG. This case study demonstrates the applicability of our approach and shows its accuracy in the predicted resource utilisation with an error of mostly less than 10%.",2012,0, 5863,Extracting and Facilitating Architecture in Service-Oriented Software Systems,"In enterprises using service-oriented architecture (SOA), architectural information is used for various activities including analysis, design, governance, and quality assurance. Architectural information is created, stored and maintained in various locations like enterprise architecture management tools, design tools, text documents, and service registries/repositories. Capturing and maintaining this information manually is time-intensive, expensive and error-prone. To address this problem we present an approach for automatically extracting architectural information from an actual SOA implementation.
The extracted information represents the currently implemented architecture and can be used as the basis for quality assurance tasks and, through synchronization, for keeping architectural information consistent in various other tools and locations. The presented approach has been developed for a SOA in the banking domain. Aside from presenting the main drivers for the approach and the approach itself, we report on experiences in applying the approach to different applications in this domain.",2012,0, 5864,On a Feature-Oriented Characterization of Exception Flows in Software Product Lines,"Exception Handling (EH) is a widely used mechanism for building robust systems. In the Software Product Line (SPL) context it is no different. As EH mechanisms are embedded in most mainstream programming languages, we can find exception signalers and handlers spread over code assets associated with common and variable SPL features. When exception signalers and handlers are added to an SPL in an unplanned way, one of the possible consequences is the generation of faulty family instances (i.e., instances on which common or variable features signal exceptions that are mistakenly caught inside the system). This paper reports a first systematic study, based on manual inspection and static code analysis, in order to categorize the possible ways exceptions flow in SPLs, and analyze their consequences. Fault-prone exception handling flows were consistently detected during this study, such as flows on which a variable feature signaled an exception and a different variable feature handled it.",2012,0, 5865,An Introspection Mechanism to Debug Distributed Systems,"Distributed systems are hard to debug due to the difficulty of collecting, organizing and relating information about their behavior. When a failure is detected, the task of inferring the system's state and the operations that have some connection with the problem is often quite difficult, and usual debugging techniques often do not apply and, when they do, they are not very effective. This work presents a mechanism based on event logs annotated with contextual information, allowing visualization tools to organize events according to the context of interest for the system operator. We applied this mechanism to a real system, and the effort and cost to detect and diagnose the cause of problems were dramatically reduced.",2012,0, 5866,ET-DMD: An Error-Tolerant Scheme to Detect Malicious File Deletion on Distributed Storage,"Distributed storage is a scheme to store data in networked storage services, which is the basis of popular cloud storage solutions. Although this scheme has huge benefits in reducing maintenance and operation cost, it has several security concerns. Among them, malicious file deletion by the storage providers is a top concern. In this paper, we develop a novel error-tolerant solution, ET-DMD, to effectively detect malicious file deletion behaviors in distributed storage services. Our approach prevents malicious servers from forging evidence to bypass the data auditing test. In addition, our approach does not limit the number of challenges made by the client. Our approach also has low computation and communication overhead.",2012,0, 5867,"Some framework, Architecture and Approach for analysis a network vulnerability","Network administrators must rely on labour-intensive processes for tracking network configurations and vulnerabilities, which requires a lot of expertise and is error prone.
Organizational network vulnerabilities and interdependencies are so complex that traditional vulnerability analysis becomes inadequate. Decision support capabilities let analysts make tradeoffs between security and optimum availability, and indicate how best to apply limited security resources. Recent work in network security has focused on the fact that a combination of exploits is the typical way in which an intruder breaks into a network. Researchers have proposed various algorithms to generate attack trees (or graphs). In this paper, we present a framework, architecture and approach for network vulnerability analysis.",2012,0, 5868,A comparison on fish freshness determination method,"Basically, freshness is a major factor in the quality of fishery products. Several methods have been used to measure fish freshness: sensory analysis, chemical methods and physical methods. The aim of the study is to make a comparison between a fish freshness meter and quantification of RGB color indices in order to detect fish freshness. The sensor used in this study is the Torrymeter, which measures three types of species, while quantification of RGB color is focused on the fish eyes and gills.",2012,0, 5869,Heterogeneous tasks and conduits framework for rapid application portability and deployment,"Emerging heterogeneous and homogeneous processing architectures demonstrate significant increases in throughput for scientific applications over traditional single core processors. Each of these processing architectures varies widely in its processing capabilities, memory hierarchies, and programming models. Determining the system architecture best suited to an application or deploying an application that is portable across a number of different platforms is increasingly complex and error prone within this rapidly increasing and evolving design space. Quickly and easily designing portable, high-performance applications that can function and maintain their correctness properly across these widely varied systems has become paramount. To deal with these programming challenges, there is a great need for new models and tools to be developed. One example is MIT Lincoln Laboratory's Parallel Vector Tile Optimizing Library (PVTOL) which simplifies the task of developing software in C++ for these complex systems. This work extends the Tasks and Conduits framework in PVTOL to support GPU architectures and other heterogeneous platforms supported by the NVIDIA CUDA and OpenCL programming models. This allows the rapid portability of applications to a very wide range of architectures and clusters. Using this framework, porting applications from a single CPU core to a GPU requires a change of only 5 source lines of code (SLOC) in addition to the CUDA or OpenCL kernel. Using GPU-PVTOL we have achieved a 22x speedup in an application of Monte Carlo simulations of photon propagation through a biological medium, and a 60x speedup of a 3D cone beam computed tomography (CT) image reconstruction algorithm.",2012,0, 5870,A path selection decision-making model at the application layer for Multipath Transmission,"With the development of web applications, users place much higher requirements on network Quality of Service (QoS). This paper proposes a path selection decision-making model used at the application layer for Multipath Transmission (MPT) to overcome the challenge that the QoS of the network layer can be corrupted by a multipath transmission mechanism. The P2FT method is proposed in this paper.
This module decides path selection according to the characteristics of the application layer. We dynamically determine the data that should be transmitted on each path. Further, due to the characteristics of multiple-path transmission, the model macroscopically regulates the application layer to choose a more appropriate path in real time. In the performance evaluation, we obtain the transmission time by calculation in the simulation. The simulation results illustrate that our proposed model decreases transmission time by 61.627% and achieves a successful transmission probability of 1.0000 compared with the traditional bearer network. The paths cooperate with each other to achieve network resource optimization, high efficiency, high-speed transmission, and QoS-guaranteed transmission.",2012,0, 5871,Adaptive Random Test Case Generation for Combinatorial Testing,"Random testing (RT), a fundamental software testing technique, has been widely used in practice. Adaptive random testing (ART), an enhancement of RT, performs better than the original RT in terms of fault detection capability. However, not much work has been done on the effectiveness analysis of ART in combinatorial test spaces. In this paper, we propose a novel family of ART-based algorithms for generating combinatorial test suites, mainly based on fixed-size-candidate-set ART and restricted random testing (that is, ART by exclusion). We use an empirical approach to compare the effectiveness of test sets obtained by our proposed methods and a random selection strategy. Experimental data demonstrate that the ART-based tests cover all possible combinations at a given strength more quickly than randomly chosen tests, and often detect more failures earlier and with fewer test cases in simulations.",2012,0, 5872,"Trustworthiness of Open Source, Open Data, Open Systems and Open Standards","A compelling direction for improving the trustworthiness of software-based systems is to open their ingredients: open source software, open data sets, and open system interfaces like open technical standards allow constructing major or even all elements of a software-based system. This makes it possible to use the wisdom of the crowd to assess and evaluate the quality, security and trustworthiness of software components. In addition, the software components can mature along a continuous feedback and revision loop with the crowd.",2012,0, 5873,An Effective Defect Detection and Warning Prioritization Approach for Resource Leaks,"Failing to release unneeded system resources such as I/O streams can result in resource leaks, which can lead to performance degradation and system crashes. Existing resource-leak detectors are usually based on predefined defect patterns to detect resource leaks in software. However, they typically report too many false positives and negatives, and also lack effective warning prioritization. Our empirical investigation shows that their predefined defect patterns are not precise enough and, moreover, the defect detection processes they use are not suitable enough for the defect patterns. In our approach, we introduce a novel Expressive Defect Pattern Specification Notation (EDPSN).
With EDPSN, a resource-leak defect pattern can be defined more precisely by specifying conditional method calls and more expressively by including guiding information for the defect detection and warning prioritization process, such as the characteristics of its preferred defect detection process and the effective prioritization impact factors for its related warnings. Based on the EDPSN-based defect pattern, our approach tries to flexibly tune a suitable defect detection and warning prioritization process. Through evaluations on three real-world projects (Eclipse-3.0.1, JBoss-3.0.6, and Weka-3.6.4), we show that our approach achieves high average precision (96%) and recall (74%), 26% and 49% higher than existing approaches, respectively.",2012,0, 5874,CATest: A Test Automation Framework for Multi-agent Systems,"Agents are difficult to test because it is notoriously complicated to observe their proactive, autonomous and non-deterministic behaviours and hard to judge their correctness in dynamic environments. This paper proposes a specification-based test automation framework and presents a tool called CATest for testing multi-agent systems (MAS). The agent-based formal specification language SLABS plays three roles in the framework. First, it is used to guide the instrumentation of the agent under test so that its behaviour can be observed and recorded systematically. Second, the correctness of the agent's behaviours recorded during test executions is automatically checked against the formal specifications. Finally, the test adequacy is measured by the coverage of the specification and determined according to a set of adequacy criteria specifically designed for testing MAS. An experiment with the tool has demonstrated its capability of detecting faults in MAS.",2012,0, 5875,Towards Dynamic Random Testing for Web Services,"In recent years, Service Oriented Architecture (SOA) has been increasingly adopted to develop applications in the context of the Internet. To develop reliable SOA-based applications, an important issue is how to ensure the quality of Web services. In this paper, we propose a dynamic random testing (DRT) technique for Web services, which is an improvement of the widely practiced random testing. We examine key issues when adapting DRT to the context of SOA and develop a prototype for such an adaptation. Empirical studies are reported where DRT is used to test two real-life Web services and mutation analysis is employed to measure the effectiveness. The experimental results show that DRT can save up to 24% of test cases in terms of detecting the first seeded fault, and up to 21% of test cases in terms of detecting all seeded faults, both in the cases of uniform mutation analysis and distribution-aware mutation analysis, which refer to faults being seeded in an even or clustered way, respectively. The proposed DRT and the prototype provide an effective approach to testing Web services.",2012,0, 5876,"Software Testing, Software Quality and Trust in Software-Based Systems","In our daily life we increasingly depend on software-based systems deployed as embedded software control systems in the automotive domain, or in numerous health or government applications. Software-based systems are more and more developed from reusable components available as commercial off-the-shelf components or open source components. The successful introduction of such integrated systems into businesses, however, depends on whether we trust the system or not.
Trust, and therewith the quality, of software-based systems is determined by many properties such as completeness, consistency, maintainability, security, safety, reliability, and usability, among others. However, during the development of software-based systems there are many opportunities to introduce errors in the different phases of the software development lifecycle. Testing is commonly applied as the predominant activity in industry to ensure high software quality, providing a wide variety of methods and techniques to detect different types of errors in software-based systems. The panel's goal is to discuss software testing strategies and techniques to improve the quality of the software and at the same time to build trust with customers. The panel will discuss the experts' views on what the key factors are in developing high quality software-based systems. Through the panel, the discussions shall include the impact of testing on software quality within several domains and their businesses.",2012,0, 5877,Using Program Dynamic Analysis for Weak Algorithm Detection,"Most of the proposed software testing approaches have concentrated on identifying faults in the implemented programs with the premise that the program may be implemented incorrectly. Although such a premise is valid for many developed software systems and applications, it has been observed that many detected defects are caused by poorly designed software. In this paper we present a dynamic analysis approach that identifies program deficiencies that may be caused by a poorly designed algorithm. Detecting such deficiencies may help in identifying a weakly implemented program algorithm by identifying places in the program that may cause such weakness and suggesting to the software designer/developer that the program be redesigned with a better algorithm.",2012,0, 5878,Image Analysis Using Machine Learning: Anatomical Landmarks Detection in Fetal Ultrasound Images,"Accurate and robust image analysis software is crucial for assessing the quality of ultrasound images of fetal biometry. In this work, we present the results of our automated image analysis method, based on a machine learning algorithm, in detecting important anatomical landmarks employed in manual scoring of ultrasound images of the fetal abdomen. Experimental results on 2384 images are promising, and the clinical validation using 300 images demonstrates a high level of agreement between the automated method and experts.",2012,0, 5879,CASViD: Application Level Monitoring for SLA Violation Detection in Clouds,"Cloud resources and services are offered based on Service Level Agreements (SLAs) that state usage terms and penalties in case of violations. Although there is a large body of work in the area of SLA provisioning and monitoring at the infrastructure and platform layers, SLAs are usually assumed to be guaranteed at the application layer. However, application monitoring is a challenging task because monitored metrics of the platform or infrastructure layer cannot be easily mapped to the required metrics at the application layer. Sophisticated SLA monitoring among those layers to avoid costly SLA penalties and maximize the provider profit is still an open research challenge. This paper proposes an application monitoring architecture named CASViD, which stands for Cloud Application SLA Violation Detection architecture. The CASViD architecture monitors and detects SLA violations at the application layer, and includes tools for resource allocation, scheduling, and deployment.
Different from most of the existing monitoring architectures, CASViD focuses on application level monitoring, which is relevant when multiple customers share the same resources in a Cloud environment. We evaluate our architecture in a real Cloud testbed using applications that exhibit heterogeneous behaviors in order to investigate the effective measurement intervals for efficient monitoring of different application types. The achieved results show that our architecture, with a low intrusion level, is able to monitor, detect SLA violations, and suggest effective measurement intervals for various workloads.",2012,0, 5880,Workload-Aware Online Anomaly Detection in Enterprise Applications with Local Outlier Factor,"Detecting anomalies is essential for improving the reliability of enterprise applications. Current approaches set thresholds for metrics or model correlations between metrics, and anomalies are detected when the thresholds are violated or the correlations are broken. However, we have found that the dynamic workload fluctuating over multiple time scales causes system metrics and their correlations to change. Moreover, it is difficult to model various metric correlations in complex applications. This paper addresses these problems and proposes an online anomaly detection approach for enterprise applications. A method is presented for recognizing workload patterns with an incremental clustering algorithm. The Local Outlier Factor (LOF) based on the specific workload pattern is adopted for detecting anomalies. Our approach is evaluated on a testbed running the TPC-W benchmark. The experimental results show that our approach can capture workload fluctuations accurately and detect the typical faults effectively.",2012,0, 5881,Photo-generated carriers decay behavior of nano-crystalline -SiC thin film grown at different substrate bias voltage,"The photo-generated carrier decay behavior of nanocrystalline -SiC film grown by a helicon wave plasma-enhanced chemical vapor deposition process at different substrate bias voltages is measured by a microwave absorption technique. The probability of nanosecond fast decay of the photo-generated carrier concentration increases with substrate bias. The fast decay of photo-generated carriers is related to the radiative recombination process, and substrate bias led to an increase in trap depth in the slow decay. The decay of nano-SiC thin films at different substrate bias voltages with different time constants indicates that the increase of substrate negative bias led to an increase in the density of deep trap levels, with different trap level positions related to different decay times. Photo-carrier transient behavior is closely related to the film microstructure. The high defect state density at nano-silicon carbide grain boundaries leads to an increase in the carrier trapping probability and a decrease in the nonradiative recombination probability.",2012,0, 5882,Automated Web Service Composition Using Genetic Programming,"Automated web service composition can largely reduce human effort in business integration. We present an approach to fully automate web service composition without a workflow or knowledge of the semantic meaning of atomic web services. The experiment results show that the accuracy of our composition method using Genetic Programming (GP), in terms of the number of times an expected composition can be derived versus the total number of runs, can be over 90%.
Based on the traditional GP used in web service composition, our algorithm achieved improvements in three aspects: 1. We do black-box testing on each individual in each population. The success rate of tests is taken into account by the fitness function of GP so that the convergence rate can be faster; 2. We comply with service knowledge rules such as the service dependency graph (SDG) when generating individual web service compositions in each population to improve the convergence process and population quality; 3. We choose the cross-over or mutation operation based on the parent individuals' input and output analysis instead of by probability as typically done in related work. In this way, GP can generate better children even from the same parents.",2012,0, 5883,A Prototype of Network Failure Avoidance Functionality for SAGE Using OpenFlow,"A tiled display wall (TDW), which is a single large display device composed of multiple sets of computers and displays, has recently gained the attention of scientists. In particular, SAGE, which is a middleware for building TDWs, allows scientists to browse multiple series of visualized results of computer simulation and analysis through the use of network streaming. Each visualized result can be generated on a different remote computer. For this reason SAGE has been increasingly hailed as a promising visualization technology that will solve the geographical distribution problem of computational and data resources. SAGE depends heavily on a network streaming technique in its architecture, but does not have any recovery mechanism for cases of network problems. This research, therefore, aims at realizing network failure avoidance functionality for SAGE, focusing on OpenFlow to address SAGE's vulnerability to network failure. Specifically, the functionality is designed and developed as a composition of the following three functions: network failure detection, network topology understanding, and a packet forwarding control function. The key concept behind our design is that the network control function from OpenFlow should be built into SAGE. The evaluation in the paper confirms that the proposed and prototyped network failure avoidance functionality can detect failures on network routes and then reroute network streaming for visualization on the TDW.",2012,0, 5884,A Reputation System for Trustworthy QoS Information in Service-Based Systems,"During service-based systems (SBS) development, the qualities of service (QoS) are significant factors in composing high-quality workflows, and the QoS claimed by service providers may not be trustworthy enough. In this paper, a reputation system supervising both service providers and service clients without much monitoring cost is introduced. The approach is based on the deviation of the claimed QoS from the feedback QoS reported by monitors or clients and is able to provide more trustworthy predicted QoS. An experiment validates that the reputation system is effective and efficient.",2012,0, 5885,Simplifying the Design of Signature Workflow with Patterns,"Signatures, responsible for authentication, authorization, etc., are important in many workflow applications. Most studies associated with signatures are focused on digital signatures only, and the modeling of signature workflows is seldom studied. However, the dependencies between signatures can be complex, and thus modeling signature workflows becomes time consuming and error-prone. In this paper, we propose six patterns to simplify the design of signature workflows.
All the patterns are described in BPMN, and a case study is made to illustrate how to apply these patterns in the construction of a workflow. A method for applying these patterns in the development of workflows is sketched, and the advantages of simplifying the construction of a workflow with BPMN are also revealed in the case study.",2012,0, 5886,SE-EQUAM - An Evolvable Quality Metamodel,"Quality has become a key assessment factor for organizations to determine if their software ecosystems are capable of meeting constantly changing environmental factors and requirements. Many quality models exist to assess the evolvability or maintainability of software systems. Common to these models is that they, contrary to the software ecosystems they are assessing, are not evolvable or reusable. In this research, we introduce SE-EQUAM, a novel ontology-based quality assessment metamodel that was designed from the ground up to support model reuse and evolvability. SE-EQUAM takes advantage of Semantic Web technologies such as support for the open world assumption, incremental knowledge population, and knowledge inference. We present a case study that illustrates the reusability and evolvability of our SE-EQUAM approach.",2012,0, 5887,Integration and Analysis of Design Artefacts in Embedded Software Development,"In model-based development of embedded software product lines, artefacts, i.e. the requirements document, implementation model, and tests, often become extremely complex w.r.t. size and dependencies. Moreover, the interrelationships among the artefacts are not obvious, and information about development, design decisions as well as variability-related aspects is missing. Hence, engineers have to thoroughly analyse such dependencies to incorporate changes during evolution of the product (line) to assure quality. As this task is time-intensive and error-prone, such analysis efforts have to be automated. This paper presents a comprehensive and extensible framework under development which provides (1) artefact integration and (2) analysis functionality to address these issues by following an approach based on a central database.",2012,0, 5888,Internet-Based Evaluation and Prediction of Web Services Trustworthiness,"As most Web services are delivered by third parties over the unreliable Internet and are late bound at run-time, it is reasonable and useful to evaluate and predict the trustworthiness of Web services. In this paper, we propose a novel approach to evaluate and predict Web services trustworthiness using comprehensive trustworthy evidence collected from the Internet. First, we use an effective way to collect comprehensive trustworthy evidence from the Internet, which includes both objective evidence (e.g. QoS) and subjective evidence (e.g. reputation). Second, Web services trustworthiness is evaluated with the collected trustworthy evidence on a regular basis. Finally, the cumulative evaluation records are modeled as time series, and we propose a multi-step Web services trustworthiness prediction process, which can automatically and iteratively identify and optimize the model to fit the trustworthiness series data.
Experiments conducted on a large-scale real-world dataset show that our method manages to collect comprehensive trustworthy evidence from the Internet and can effectively evaluate and predict the trustworthiness of Web services, which helps users to reuse Web services.",2012,0, 5889,Applying Distributed Object Technology to Distributed Embedded Control Systems,"In this paper, we describe our Java RMI inspired Object Request Broker architecture, MicroRMI, for use with networked embedded devices. MicroRMI relieves the software developer from the tedious and error-prone job of writing communication protocols for interacting with such embedded devices. MicroRMI supports easy integration of high-level application specific control logic with low-level device specific control logic. Our experience from applying MicroRMI in the context of a distributed robotics control application clearly demonstrates that it is feasible to use distributed object technology in developing control systems for distributed embedded platforms possessing severe resource restrictions.",2012,0, 5890,Enhance Software Quality Using Data Mining Algorithms,"In recent decades software projects have become very large, and their production is costly and time consuming; during the phases of software development, bugs are introduced. Some of these errors can be detected in the initial phases, while others may not be seen until the final phases or even until the next generation of the software. Errors found in later phases increase cost and time, and as projects grow larger, the error in estimating software cost becomes higher and higher. Nowadays, detecting possible defects is one of the considerations for relying on software quality. So there is a need to create a prediction model, and we can use data mining methods to predict defects. This paper examined ways of imposing clustering on various projects and putting them in groups with similar characteristics. By using this pattern we can choose a defect prediction model that is able to predict the defects of a whole group.",2012,0, 5891,Code Smell Detecting Tool and Code Smell-Structure Bug Relationship,"This paper proposes an approach for detecting the so-called bad smells in software, known as Code Smells. In considering software bad smells, object-oriented software metrics were used to analyze the source code, and Eclipse Plugins were developed for detecting the locations in Java source code where bad smells appeared so that software refactoring could then take place. The detected source code was classified into 7 types: Large Class, Long Method, Parallel Inheritance Hierarchy, Long Parameter List, Lazy Class, Switch Statement, and Data Class. This work conducted an analysis using 323 Java classes to ascertain the relationship between code smells and structural defects of software by using the data mining techniques of Naive Bayes and Association Rules. The result of the Naive Bayes test showed that the Lazy Class caused structural defects in DLS, DE, and Se. Also, Data Class caused structural defects in UwF, DE, and Se, while Long Method, Large Class, Data Class, and Switch Statement caused structural defects in UwF and Se. Finally, Parallel Inheritance Hierarchy caused structural defects in Se. However, Long Parameter List caused no structural defects whatsoever.
The results of the Association Rules test found that the Lazy Class code smell caused structural defects in DLS and DE, which corresponded to the results of the Naive Bayes test.",2012,0, 5892,Efficient binary representation of delta Quantization Parameter for High Efficiency Video Coding,"This paper proposes an efficient binary representation of the delta Quantization Parameter (QP) for High Efficiency Video Coding (HEVC). Video encoders adapt the QPs of coding blocks for visual quality optimization and rate control. Although they send only delta QPs (dQPs) obtained by causal prediction, the side information overhead is expensive. Therefore the HEVC design necessitates efficient dQP coding. The proposed scheme converts a dQP to a binary string in which the first and second bins indicate the significance and sign of the dQP respectively and the rest represents the magnitude minus 1. Furthermore, it detects and truncates redundant bins in the binary strings by using the sign and an admissible dQP range. Thus it reduces the length of dQP binary strings and improves dQP coding efficiency. Simulation results using the HEVC reference software demonstrate that the proposed scheme improves the dQP coding efficiency by 6% while reducing its bin rates by 25%.",2012,0, 5893,Location of DC line faults in conventional HVDC systems with segments of cables and overhead lines using terminal measurements,"Summary form only given. This paper presents a novel algorithm to determine the location of DC line faults in an HVDC system with mixed transmission media consisting of overhead lines and cables, using only the measurements taken at the rectifier and inverter ends of the composite transmission line. The algorithm relies on the travelling wave principle, and requires the fault generated surge arrival times at the two ends of the DC line as inputs. With accurate surge arrival times obtained from time synchronized measurements, the proposed algorithm can accurately predict the faulty segment as well as the exact fault location. Continuous wavelet transform coefficients of the input signal are used to determine the precise time of arrival of travelling waves at the DC line terminals. Two possible input signals, the DC voltage measured at the converter terminal and the current through the surge capacitors connected at the DC line end, are examined and both signals are found to be equally effective for detecting the travelling wave arrival times. Performance of the proposed fault-location scheme is analyzed through detailed simulations carried out using the electromagnetic transient simulation software PSCAD. The impact of measurement noise on the fault location accuracy is also studied in the paper.",2012,0,5449 5894,Guidelines for selection of an optimal structuring element for Mathematical Morphology based tools to detect power system disturbances,"Mathematical Morphology (MM) has been reported as a promising application to detect power system disturbances. Real-time applications of MM based tools for detecting disturbances have also been reported. However, there is no clear guideline for selection of the structuring element for a particular application, despite the fact that the structuring element is a key component of any MM based tool. This paper shows a method to generalize and numerically optimize the structuring element to detect power system disturbances.
Power system fault cases are simulated using professional time-domain software, and the current and voltage waveforms from these cases are used to illustrate the methodology. Results are observed and analyzed. Some guidelines to select an optimum structuring element to detect power system disturbances are provided based on the results.",2012,0, 5895,Model-based integration technology for next generation electric grid simulations,"Simulation-based evaluation of the behavior of the electric grid is complex, as it involves multiple, heterogeneous, interacting cyber-physical-system-like domains. Each simulation domain has sophisticated tools, but their integration into a coherent framework is a very difficult, time-consuming, labor-intensive, and error-prone task. This means that computational studies cannot be done rapidly and the process does not provide timely answers to the planners, operators and policy makers. Furthermore, grid behavior has to be tested against a number of scenarios and situations, meaning that a huge number of simulations must be executed covering the potential space of possibilities. Designing and efficiently deploying such computational experiments by utilizing multi-domain tools for an integrated smart grid is a major challenge. This paper addresses these important issues by integrating multiple modeling tools from diverse domains in a single coherent framework for integrated simulation of smart grids.",2012,0, 5896,An analysis of free Web-based PHRs functionalities and I18n,"The growth of the Internet, Web technologies, and other electronic tools is allowing the public to become more informed and actively engaged in their health care than was possible in the past. Personal Health Records (PHR) offer users the possibility of managing their own health data. Many patients are using PHRs to communicate with doctors in order to improve healthcare quality and efficiency. A large number of companies have emerged to provide consumers with the opportunity to use online PHRs within a healthcare platform, proposing different functionalities and services. This paper analyzes and assesses the functionalities and internationalization (i18n) of free Web-based PHRs.",2012,0, 5897,Detecting flash artifacts in fundus imagery,"In a telemedicine environment for retinopathy screening, a quality check is needed on initial input images to ensure sufficient clarity for proper diagnosis. This is true whether the system uses human screeners or automated software for diagnosis. We present a method for the detection of flash artifacts found in retina images. We have collected a set of retina fundus imagery from February 2009 to August 2011 from several clinics in the mid-South region of the USA as part of a telemedical project. These images have been screened with a quality check that sometimes omits specific flash artifacts, which can be detrimental for automated detection of retina anomalies. A multi-step method for detecting flash artifacts in the center area of the retina was created by combining characteristic colorimetric information and morphological pattern matching. The flash detection was tested on a dataset of 5218 images representative of the population. The system achieved a sensitivity of 96.54% and specificity of 70.16% for the detection of the flash artifacts. The flash artifact detection can serve as a useful tool in quality screening of retina images in a telemedicine network.
The flash detection can be expected to improve automated detection by providing special handling for these images, possibly in combination with a flash mitigation or removal method.",2012,0, 5898,Online monitoring and diagnosis of RFID readers and tags,"The need for fault tolerance in RFID systems is increasing with the increased use of this technology in critical domains such as real-time processing fields. Although many efforts have been made to make this technology more dependable, it is still unreliable. In this paper, we propose a complementary approach to existing ones. It consists of probabilistic monitoring that detects failures of RFID system components and a verification process that refines the diagnosis by finding the causes of the detected failures.",2012,0, 5899,An approach of attribute selection for reducing false alarms,"Defect prediction is one of the methods in SQA (Software Quality Assurance) that attracts developers because it can reduce testing effort as well as development time. One problem in defect prediction is the `curse of dimensionality', as a dataset in a software repository can contain hundreds of attributes. In this paper we analyze whether there is any way to remove more attributes after attribute selection, and the effect of this reduction of attributes on the performance of defect prediction. We found that the false positive rate (false alarms) is reduced by using our method of attribute selection, which in turn can be used to reduce the resource allocation for detecting defective modules.",2012,0, 5900,Investigating object-oriented design metrics to predict fault-proneness of software modules,"This paper empirically investigates the relationship of class design level object-oriented metrics with the fault proneness of an object-oriented software system. The aim of this study is to evaluate the capability of the design attributes related to coupling, cohesion, complexity, inheritance and size, with their corresponding metrics, in predicting fault proneness on both an independent and a combined basis. In this paper, we conducted two sets of systematic investigations using publicly available project datasets over multiple subsequent releases, and used four machine learning techniques to validate our results. The first set of investigations consisted of applying univariate logistic regression (ULR), Spearman's correlation and AUC (Area under ROC curve) analysis on four PROMISE datasets. This investigation evaluated the capability of each metric to predict fault proneness when used in isolation. The second set of experiments consisted of applying the four machine learning techniques on the next two subsequent versions of the same project datasets to validate the effectiveness of the metrics. Based on the results of the individual performance of the metrics, we used only those metrics that were found significant to build multivariate prediction models. Next, we evaluated the significant metrics related to design attributes both in isolation and in combination to validate their capability of predicting fault proneness. Our results suggested that models built on coupling and complexity metrics are better and more accurate than those built using the rest of the metrics.",2012,0, 5901,A quantitative model for the evaluation of reengineering risk in infrastructure perspective of legacy system,"The competitive business environment demands that existing legacy systems be transformed into self-adaptive ones.
Nowadays, legacy system reengineering has emerged as a well-known system renovation technique. Reengineering is rapidly replacing legacy development as a way of keeping up with modern business and user requirements. However, renovation of a legacy system through reengineering is a risky and error-prone mission due to the widespread changes it requires in the majority of cases. Quantifiable risk measures are necessary for the measurement of reengineering risk in order to decide when the modernization of a legacy system through reengineering will be successful. We present a quantifiable measurement model to measure the comprehensive impact of different reengineering risks arising from the infrastructure perspective of a legacy system. The model consists of five reengineering risk components: Deployment Risk, Organizational Risk, Resource Risk, Development Process Risk and Personal Risk. The results of the proposed measurement model provide guidance for deciding about the evolution of a legacy system through reengineering.",2012,0, 5902,Incorporating fault dependent correction delay in SRGM with testing effort and release policy analysis,"Software reliability growth models help the software community in predicting and analyzing software products in terms of quality. In this context, several software reliability growth models have been proposed in the literature. The majority of models concentrate on the fault detection process, ignoring correction. Error detection, correction and dependency are important phenomena for software reliability models. In this paper we propose a new SRGM based on correction lag and error dependency, incorporating testing effort. All numerical calculations are carried out on real datasets and the results are analyzed. The analysis shows that our proposed model fits the datasets well.",2012,0, 5903,Impedance angle changes analysis applied to short circuit fault detection,"Induction motor winding faults are among the most frequent faults and one of the most important reasons that traction motors go out of order. In this paper, an appropriate and effective method based on impedance angle changes is proposed to detect turn-to-turn faults. To do this, the finite element method (FEM), with the help of ANSOFT software, is used to create different turn-to-turn fault conditions. A 1.5 kW, 3-phase squirrel cage induction motor has been used for experimental tests and for verifying the simulation results.",2012,0, 5904,Changes in submerged macrophyte communities in southern Lake Garda in the last 14-years,"In this study, in situ data and hyperspectral MIVIS (Multispectral Infrared and Visible Imaging Spectrometer) images collected over a period of 14 years were used to assess changes in submerged macrophyte colonization patterns in southern Lake Garda.",2012,0, 5905,Artificial neural network-based metric selection for software fault-prone prediction model,"The identification of a module's fault-proneness is very important for minimising cost and improving the effectiveness of the software development process. How to obtain the relation between software metrics and a module's fault-proneness has been the focus of much research. One technical challenge in obtaining this relation is that software metrics are interrelated. To overcome this problem, the authors propose a dimensionality reduction phase, which can be generally implemented in any software fault-prone prediction model.
In this study, the authors present applications of artificial neural networks (ANN) and support vector machines in software fault-prone prediction using metrics. A new evaluation function for computing the contribution of each metric is also proposed in order to adapt to the characteristics of software data. The vital characteristic of this approach is the automatic determination of the ANN architecture during metric selection. Four software datasets are used for evaluating the performance of the proposed model. The experimental results show that the proposed model can establish the relation between software metrics and modules' fault-proneness. Moreover, it is also very simple because its implementation requires neither extra cost nor expert knowledge. The proposed model has good performance, and can provide software project managers with trustworthy indicators of fault-prone components.",2012,1, 5906,Cloud Monitor: Monitoring Applications in Cloud,"With the advent of cloud computing applications, monitoring becomes a valid concern. Monitoring for failures in a cloud application is difficult because of multiple failure points spanning both hardware and software. Moreover, the cluster nature of a cloud application increases the scope of failure and makes failures even harder to detect. This paper presents Cloud Monitor - a scalable framework for monitoring cloud applications. Cloud Monitor monitors cluster nodes for errors. It supports dependent monitors, redundancy, multiple notification levels and auto-healing. Cloud Monitor supports a flexible architecture where users can add custom monitors and associated self-heal actions.",2012,0, 5907,Detecting Workload Hotspots and Dynamic Provisioning of Virtual Machines in Clouds,"One of the primary goals of Cloud Computing is to provide reliable QoS. The users of cloud applications may access their applications from any Region. The cloud infrastructure must be Elastic enough to meet the QoS requirements. In order to provide reliable QoS, the cloud infrastructure must be able to detect potential workload hotspots for various cloud applications across Regions and take appropriate measures. This paper presents an approach to detect workload hotspots in the cloud using an application access pattern based method. This paper also presents how the existing VDN based Virtual Machine provisioning approach [1] can be used to provision new Virtual Appliances dynamically and efficiently at the detected hotspots to improve the QoS.",2012,0, 5908,Anomaly Teletraffic Intrusion Detection Systems on Hadoop-Based Platforms: A Survey of Some Problems and Solutions,"Telecommunication networks are becoming more important in our social lives because many people want to share their information and ideas. Thanks to the rapid development of the Internet and ubiquitous technologies, including mobile devices such as smart phones, mobile phones and tablet PCs, the quality of our lives has been greatly influenced and rapidly changed in recent years. Internet users have increased exponentially as well. Meanwhile, the explosive growth of teletraffic, called big data, for user services threatens the current networks, and we face menaces from various kinds of intrusive incidents through the Internet. A variety of network attacks on network resources have continuously caused serious damage. Thus, active and advanced technologies for early detection of anomaly teletraffic on Hadoop-based platforms are required.
In this paper, a survey of some problems and technical solutions for anomaly teletraffic intrusion detection systems based on the open-source software platform Hadoop is presented.",2012,0, 5909,Transient-Error Detection and Recovery via Reverse Computation and Checkpointing,"The integration of error detection and recovery mechanisms becomes mandatory as the probability of the occurrence of transient errors increases. The current study proposes a software-based fault tolerant technique that achieves both detection and recovery. The proposed technique is based on two main mechanisms, namely, reverse computation and checkpointing. This study is the first to introduce reverse computation for error detection by comparing the input data of the original computation and the output data of the reverse computation. Live variable analysis is introduced to reduce the overhead of the checkpointing technique. A translation tool is implemented to make the original source code fault tolerant with automatic error detection and recovery abilities. Fault injection and performance overhead experiments are performed to evaluate the proposed technique. Experimental results show that most errors can be recovered with relatively low performance overhead.",2012,0, 5910,Partial discharge monitoring system for PD characteristics of typical defects in GIS using UHF method,"GIS is now widely used in power systems because of its compact structure, easy maintenance, and reliable operation. It is necessary to detect partial discharge (PD) in GIS because PD causes deterioration of insulation and losses in the power system. The UHF method has the great advantages of high sensitivity, strong anti-interference ability, the ability to locate PD sources, and the ability to recognize the defect type. In this paper, a partial discharge monitoring system for GIS based on the UHF method is introduced, including both the hardware and software systems. A PD detection and diagnosis test system for GIS is also established based on the monitoring system above. The PD characteristics of two typical defects in GIS are researched, and the Phase Resolved Partial Discharge (PRPD) spectrograms are drawn after processing the PD pulse sequences of the typical defect models, by which the type of defect can be identified.",2012,0, 5911,Discrete wavelet transform and probabilistic neural network algorithm for classification of fault type in underground cable,"This paper proposes an algorithm based on a combination of the discrete wavelet transform (DWT) and a probabilistic neural network (PNN) for classifying fault types on underground cables. Simulations and the training process for the PNN are performed using ATP/EMTP and MATLAB. The mother wavelet Daubechies 4 (db4) is employed to decompose the high frequency components from these signals. The maximum DWT coefficients of phases A, B, C and the zero sequence for post-fault current waveforms are used as the input for the training pattern. Various case studies based on Thailand electricity distribution underground systems have been investigated so that the algorithm can be implemented. The DWT coefficients are also compared with those of the PNN in this paper.
The results show that the proposed algorithm is capable of performing the fault classification with satisfactory accuracy.",2012,0, 5912,Using the fuzzy analytic hierarchy process to the balanced scorecard: A case study for the elementary and secondary schools' information department of south Taiwan,"The purpose of this study is to establish a balanced scorecard (BSC) for performance measurement of elementary and secondary schools' MIS Departments. We take a broader definition of the elementary and secondary schools' MIS Department as an assembly which brings forth some specific functional activities to fulfill the task of MIS. The BSC, used as a measurement tool to assess study subjects according to the strategy and goals formed by their assignment property, can be divided into four dimensions: finance, customer, internal process, and learning and growth, which provide a timely, efficient, flexible, simple, accurate, and highly reliable overall measurement tool. In order to extract the knowledge and experience of related experts to pick out important evaluation criteria and opinions, this study combines fuzzy theory and the analytic hierarchy process (AHP) to calculate the weights. After completing the weighted calculation of every dimension and indicator, the BSC model is thus established. The findings of this study show that the indicator weightings between and among all the levels are not the same; rather, there exists a certain amount of difference. In order of importance, the dimensions are customer, financial, internal process, and learning and growth. After comprehensively analyzing the indicators of performance measurement included in every level, the five most highly valued indicators for dimension performance measurement in elementary and secondary schools' MIS Departments are Rationalize software and hardware and maintenance expenses, Budget satisfy and control, Quick response and handling, Improve service quality, and High effective information system.",2012,0, 5913,Narrowing the gaps in Concern-Driven Development,"Concern-Driven Development (CDD) promises improved productivity, reusability, and maintainability because high-level concerns that are important to stakeholders are encapsulated regardless of how these concerns are distributed over the system structure. However, to truly capitalize on the benefits promised by CDD, concerns need to be encapsulated across software development phases, i.e., across different types of models at different levels of abstraction. Model-Driven Engineering plays an important role in this context as the automated transformation of concern-oriented models (a) allows a software engineer to use the most appropriate modeling notation for a particular task, (b) automates error-prone tasks, and (c) avoids duplication of modeling effort. The earlier transformations can be applied in a CDD process, the greater the potential cost savings. Hence, we report on our experiences in applying tool-supported transformations from scenario-based requirements models to structural and behavioral design models during CDD.
While automated model transformations certainly contribute to the three benefits mentioned above, they can also lead to more clearly and succinctly defined modeling activities at each modeling level and aid in the precise definition of the semantics of the modeling notations used.",2012,0, 5914,Impact assessment of AC and DC electric rope shovels on coal mine power distribution system,"Electric rope shovels are major pieces of equipment in coal mines. They consume significant amounts of power, and each of their motions requires a few thousand kVA. Therefore, commissioning a new shovel in the existing power system of a mine has to be done with very careful analysis of its impact on the network. Aspects such as load flow studies, fault analysis, protection co-ordination, harmonic analysis and arc flash studies are of great interest for the power system engineers on site. The aim of this research paper is to present the modeling process used to simulate the operation of both an electric rope shovel in operation and another new shovel to be commissioned in a coal mine, and to assess their impact on the power system.",2012,0, 5915,Classification and tendencies of evaluations in e-learning,"The use of information technology in education is a very promising field which has led to many paradigms, educational models and the implementation of e-learning systems that are effectively used in education and training. For educators, training professionals and corporate managers, it is always challenging to choose the right approach and, accordingly, the e-learning system that fits their actual business needs. This is mainly due to the lack of comprehensive studies that assess and compare existing systems and that would guide stakeholders in choosing the right e-learning system for the right learners in the right learning context. This paper is a contribution in this direction; it proposes an overview of approaches that have been used to assess e-learning systems. From this study we propose a classification of approaches and e-learning systems that is based on four main criteria: i) Who: deals with the stakeholders and actors of learning systems; ii) What: represents the elements of an e-learning system to evaluate; iii) When: addresses the phase of development of the e-learning system in which the assessment is done; iv) Which: the method used to evaluate the systems. The results of this work are presented and discussed in this paper.",2012,0, 5916,Implementation of remote temperature-measuring by using a thermopile for Wireless Sensor Network,"Wireless Sensor Networks (WSN) are an important platform for building the intelligent houses of the future. This paper describes a non-contact thermometer that detects an object's radiant power. We attempt to design a portable device for remotely measuring temperature, and the device will then be developed as a sensing node for a WSN. A thermopile equipped with a lens was used in the implementation. The detection range is from 0°C to 300°C. The average error is within 3°C. In this study, LabVIEW software was used to implement thermistor linearization and data logging.
This study may provide a useful reference for researchers attempting to increase the quality of remote radiometry.",2012,0, 5917,Context and policy based fault-tolerant scheme in mobile ubiquitous computing environment,"In ubiquitous computing systems, the increasing mobility and dynamics of software and hardware resources and the frequent interaction among functional components make fault-tolerant design very challenging. In this paper, we propose a context and policy based self-adaptive fault-tolerant mechanism for a mobile ubiquitous computing environment such as a mobile ad hoc network. In our approach, the fault-tolerant mechanism is dynamically built according to the various types of detected faults, based on continuous monitoring and analysis of the component states. We put forward the architecture of the fault-tolerant system and the context-based and policy-based fault-tolerant scheme, which adopts an ontology-based context modeling method and Event-Condition-Action execution rules. The mechanism has been designed and implemented as self-adaptive fault-tolerant middleware, called SAFTM for short, on a preliminary prototype for a dynamic ubiquitous computing environment such as a mobile ad hoc network. We have performed experiments to evaluate the efficiency of the fault-tolerant mechanism. The results of the experiments show that the performance of the self-adaptive fault-tolerant mechanism is realistic.",2012,0, 5918,An Empirical Analysis on Fault-Proneness of Well-Commented Modules,"Comment statements are useful to enhance the readability and/or understandability of software modules. However, some comments may compensate for the poor readability/understandability of code fragments that are too complicated and hard to understand - a kind of code smell. Consequently, some well-written comments may be signs of poor-quality modules. This paper focuses on the lines of comments written in modules, and performs an empirical analysis with three major open source software systems and their fault data. The empirical results show that the risk of being faulty in well-commented modules is about 2 to 8 times greater than in non-commented modules.",2012,0, 5919,Locating Source Code to Be Fixed Based on Initial Bug Reports - A Case Study on the Eclipse Project,"In most software development, a Bug Tracking System is used to improve software quality. Based on bug reports managed by the bug tracking system, triagers, who assign a bug to fixers, and the fixers themselves need to pinpoint the buggy files that should be fixed. However, if triagers do not know the details of the buggy file, it is difficult to select an appropriate fixer. If fixers can identify the buggy files, they can fix the bug in a short time. In this paper, we propose a method to quickly locate the buggy file in a source code repository using 3 approaches (text mining, code mining, and change history mining) to rank files that may be causing bugs. (1) The text mining approach ranks files based on the textual similarity between a bug report and the source code. (2) The code mining approach ranks files based on prediction of the fault-prone module using source code product metrics. (3) The change history mining approach ranks files based on prediction of the fault-prone module using change process metrics. Using Eclipse platform project data, our proposed model gains around 20% in TOP1 prediction. This result means that the buggy files are ranked first in 20% of bug reports.
Furthermore, bug reports that consist of a short description and many specific words make it easy to identify and locate the buggy file.",2012,0, 5920,Predicting Fault-Prone Modules Using the Length of Identifiers,"Identifiers such as variable names and function names in source code are essential information for understanding code. The naming of identifiers affects code understandability; thus, we expect that it also affects software quality. In this study, we examine the relationship between the length of identifiers and the existence of software faults in a software module. The results of an experiment using the random forest technique show that there is a positive relationship between identifier length and the existence of software faults.",2012,0, 5921,Table of contents,The following topics are dealt with: software engineering; source code location; QORAL; fault-prone modules prediction; service-oriented MSR integration; and coding patterns.,2012,0,6968 5922,A Static Approach to Prioritizing JUnit Test Cases,"Test case prioritization is used in regression testing to schedule the execution order of test cases so as to expose faults earlier in testing. Over the past few years, many test case prioritization techniques have been proposed in the literature. Most of these techniques require data on dynamic execution in the form of code coverage information for test cases. However, the collection of dynamic code coverage information on test cases has several associated drawbacks including cost increases and reduction in prioritization precision. In this paper, we propose an approach to prioritizing test cases in the absence of coverage information that operates on Java programs tested under the JUnit framework - an increasingly popular class of systems. Our approach, JUnit test case Prioritization Techniques operating in the Absence of coverage information (JUPTA), analyzes the static call graphs of JUnit test cases and the program under test to estimate the ability of each test case to achieve code coverage, and then schedules the order of these test cases based on those estimates. To evaluate the effectiveness of JUPTA, we conducted an empirical study on 19 versions of four Java programs ranging from 2K to 80K lines of code, and compared several variants of JUPTA with three control techniques, and several other existing dynamic coverage-based test case prioritization techniques, assessing the abilities of the techniques to increase the rate of fault detection of test suites. Our results show that the test suites constructed by JUPTA are more effective than those in random and untreated test orders in terms of fault-detection effectiveness. Although the test suites constructed by dynamic coverage-based techniques retain fault-detection effectiveness advantages, the fault-detection effectiveness of the test suites constructed by JUPTA is close to that of the test suites constructed by those techniques, and the fault-detection effectiveness of the test suites constructed by some of JUPTA's variants is better than that of the test suites constructed by several of those techniques.",2012,0, 5923,AltAnalyze - An Optimized Platform for RNA-Seq Splicing and Domain-Level Analyses,"The deep sequencing of transcriptomes has revolutionized our ability to detect known and novel RNA variants at a never before observed resolution.
To capitalize on these ever-improving technologies, we require functionally rich methods of annotation to predict and evaluate the consequences of RNA isoform variation at the level of proteins, domains and microRNA binding sites. We introduce a new version of the popular open-source application AltAnalyze, capable of analyzing RNA-Sequencing (RNA-Seq) datasets as well as splicing-sensitive or conventional arrays. This software can be run through an intuitive graphical user interface or command-line. Over 60 species and data from various RNA-Seq alignment workflows are immediately supported without any specialized configuration. AltAnalyze provides multiple options for gene expression quantification, filtering, quality control and biological interpretation. Hierarchical clustering heatmaps, principal component analysis plots, lineage correlation diagrams and visualization of enriched pathways are automatically produced for differentially expressed genes. For detection of alternative splicing, promoter or polyadenylation events, AltAnalyze combines both reciprocal-junction and alternative-exon expression approaches to identify annotated and novel RNA variation. By connecting these regulated splicing events with optimal inclusion and exclusion isoforms, AltAnalyze is able to evaluate the impact of alternative RNA expression on protein domains, annotated motifs and binding sites for microRNAs. From a broader perspective, AltAnalyze examines the enrichment of affected domains and microRNA binding sites, to highlight the global impact of alternative splicing. Together, AltAnalyze provides an efficient, streamlined and comprehensive set of analysis results, to determine the biological impact of transcriptome regulation.",2012,0, 5924,ACCGen: An Automatic ArchC Compiler Generator,"The current level of circuit integration has led to complex designs encompassing full systems on a single chip, known as System-on-a-Chip (SoC). In order to predict the best design options and reduce the design costs, designers are required to perform a large design space exploration in early stages of the design. To speed up this process, Electronic Design Automation (EDA) tools are employed to model and experiment with the system. ArchC is an """"Architecture Description Language"""" (ADL) and a set of tools that can be leveraged to automatically build SoC simulators based on high-level system models, enabling easy and fast design space exploration in early stages of the design. Currently, ArchC is capable of automatically generating hardware simulators, assemblers, and linkers for a given architecture model. In this work, we present ACCGen, an automatic Compiler Generator for ArchC, the missing link in the automatic generation of compiler toolchains for ArchC. Our experimental results show that compilers generated by ACCGen are correct for MiBench applications. We also compare the quality of the generated code with that of LLVM and gcc, two well-known open-source compilers.
We also show that ACCGen is fast and has little impact on the design space exploration turnaround time, allowing the designer to, using an easy and fully automated workflow, completely assess the outcome of architectural changes in less than 2 minutes.",2012,0, 5925,A Framework for Generating Integrated Component Fault Trees from Architectural Views,"Safety is a property of a system which can only be assessed by conducting analysis which reveals how interacting components create situations that are unsafe, because components that individually fulfill their requirements do not ensure safety at the system level. CFTs (Component Fault Trees) [1], which are specialized fault trees, have been used as models to analyze systems. Systems today are typically built by groups of people with expertise in different disciplines. One of the problems of the current state of the art is that there is no structured way of combining information obtained from experts in various disciplines who have different views of a system into a CFT. We provide a framework with which one can semi-automatically combine CFTs created by several stakeholders/experts into a single integrated CFT. This enables one to effectively combine the experience and wisdom of experts obtained from diverse perspectives of the system into a single, more complete CFT. The resulting integrated CFT (which we call an iCFT) allows safety engineers or other stakeholders to see the influences that components have on one another in a manner that would not have been revealed unless the system was viewed from varied perspectives.",2012,0, 5926,Using Tool-Supported Model Based Safety Analysis -- Progress and Experiences in SAML Development,"Software controls in technical systems are becoming more and more important and complex. Model based safety analysis can give provably correct and complete results, often in a fully automatic way. These methods can answer both logical and probabilistic questions. In common practice, the needed models must be specified in different input languages of different tools depending on the chosen verification tool for the desired aspect. This is time consuming and error-prone. To cope with this problem we developed the safety analysis modeling language (SAML). In this paper, we present a new tool to intuitively create probabilistic, non-deterministic and deterministic specifications for formal analysis. The goal is to give tool support during modeling and thus make building a formal model less error-prone. The model is then automatically transformed into the input language of state-of-the-art verification engines. We illustrate the approach on a case study from the nuclear power plant domain.",2012,0, 5927,Real-Time Anomaly Detection in Streams of Execution Traces,"For deployed systems, software fault detection can be challenging. Generally, faulty behaviors are detected based on execution logs, which may contain a large volume of execution traces, making analysis extremely difficult. This paper investigates and compares the effectiveness and efficiency of various data mining techniques for software fault detection based on execution logs, including clustering based, density based, and probabilistic automata based methods. However, some existing algorithms suffer from high complexity and do not scale well to large datasets. To address this problem, we present a suite of prefix tree based anomaly detection techniques. The prefix tree model serves as a compact lossless data representation of execution traces.
Also, the prefix tree distance metric provides an effective heuristic to guide the search for execution traces having close proximity to each other. In the density based algorithm, the prefix tree distance is used to confine the K-nearest neighbor search to a small subset of the nodes, which greatly reduces the computing time without sacrificing accuracy. Experimental studies show a significant speedup in our prefix tree based and prefix tree distance guided approaches, from days to minutes in the best cases, in automated identification of software failures.",2012,0, 5928,An Autonomic Reliability Improvement System for Cyber-Physical Systems,"System reliability is a fundamental requirement of cyber-physical systems. Unreliable systems can lead to disruption of service, financial cost and even loss of human life. Typical cyber-physical systems are designed to process large amounts of data, employ software as a system component, run online continuously and retain an operator-in-the-loop because of human judgment and accountability requirements for safety-critical systems. This paper describes a data-centric runtime monitoring system named ARIS (Autonomic Reliability Improvement System) for improving the reliability of these types of cyber-physical systems. ARIS employs automated online evaluation, working in parallel with the cyber-physical system to continuously conduct automated evaluation at multiple stages in the system workflow and provide real-time feedback for reliability improvement. This approach enables effective evaluation of data from cyber-physical systems. For example, abnormal input and output data can be detected and flagged through data quality analysis. As a result, alerts can be sent to the operator-in-the-loop, who can then take actions and make changes to the system based on these alerts in order to achieve minimal system downtime and higher system reliability. We have implemented ARIS in a large commercial building cyber-physical system in New York City, and our experiment has shown that it is effective and efficient in improving building system reliability.",2012,0, 5929,On the development of Software-Based Self-Test methods for VLIW processors,"Software-Based Self-Test (SBST) approaches are an effective solution for detecting permanent faults; this technique has been widely used with good success on generic processors and processor-based architectures; however, when VLIW processors are addressed, traditional SBST techniques and algorithms must be adapted to each particular VLIW architecture. In this paper, we present a method that formalizes the development flow to write effective SBST programs for VLIW processors, starting from known algorithms addressing traditional processors. In particular, the method addresses the parallel Functional Units, such as ALUs and MULs, embedded into a VLIW processor. Fault simulation campaigns confirm the validity of the proposed method.",2012,0, 5930,Software exploitable hardware Trojans in embedded processor,"The growing threat of hardware Trojan attacks in untrusted foundries or design houses has motivated researchers around the world to analyze the threat and develop effective countermeasures. In this paper, we focus on analyzing a specific class of hardware Trojans in embedded processors that can be enabled by software or data to leak critical information. These Trojans pose a serious threat in pervasively deployed embedded systems.
An attacker can trigger these Trojans to extract valuable information from a system during field deployment. We show that an adversary can design a low-overhead, hard-to-detect Trojan that can leak either secret keys stored in a processor, the code running in it, or the data being processed.",2012,0, 5931,Old wine in new wineskins: Upgrading the liquids reflectometer instrument user control software at the Spallation Neutron Source,"The Liquids Reflectometer (LR) Instrument installed at the Spallation Neutron Source (SNS) enables observations of chemical kinetics, solid-state reactions, phase-transitions and chemical reactions in general [1]. The ability of the instrument to complete measurements quickly and therefore process many samples is a key capability inherent in the system design [2]. Alignment and sample environment management are a time-consuming and error-prone process that has led to the development of automation in the control software operating the instrument. In fact, the original LR user interface, based on the Python scripting language, has been modularized and adapted to become the standard interface on many other instruments. A project to convert the original Python [3] implementation controlling the LR instrument into the modular version standardized at SNS was undertaken in the spring of 2012. The key features of automated sample alignment and a robot-driven sample management system enable the instrument to reduce the manual labor required to prepare and execute observations, freeing up precious time for analysis and reporting activity. We present the modular PyDas control system [4], its implementation for the LR, and the lessons learned during the upgrade process.",2012,0, 5932,Stealth assessment of hardware Trojans in a microcontroller,"Many experimental hardware Trojans from the literature explore the potential threat vectors, but do not address the stealthiness of the malicious hardware. If a Trojan requires a large amount of area or power, then it can be easier to detect. Instead, a more focused attack can potentially avoid detection. This paper explores the cost in both area and power consumption of several small, focused attacks on an Intel 8051 microcontroller implemented with a standard cell library. The resulting cost in total area varied from a 0.4% increase in the design, down to a 0.150% increase in the design. Dynamic and leakage power showed similar results.",2012,0, 5933,Probe-based distributed algorithms for deadlock detection,"The distributed algorithms discussed in this paper provide better performance than other well-known algorithms designed for detecting deadlock, with respect to the communication bus load and fault tolerance, while the same assumptions for the operating conditions are preserved. Several concepts are defined for a group of deadlocked distributed tasks: probe messages, the propagation law of a probe message, centralized and distributed management of probe messages, and the cyclic trace of a probe message. Also, a propagation law for probe messages and two theorems are formulated and tested to establish valuable conditions of deadlock detection in distributed applications.",2012,0, 5934,Failure analysis of distributed scientific workflows executing in the cloud,"This work presents models characterizing failures observed during the execution of large scientific applications on Amazon EC2. Scientific workflows are used as the underlying abstraction for application representations.
As scientific workflows scale to hundreds of thousands of distinct tasks, failures due to software and hardware faults become increasingly common. We study job failure models for data collected from 4 scientific applications by our Stampede framework. In particular, we show that a Naive Bayes classifier can accurately predict the failure probability of jobs. The models allow us to predict job failures for a given execution resource and then use these failure predictions for two higher-level goals: (1) to suggest a better job assignment, and (2) to provide quantitative feedback to the workflow component developer about the robustness of their application codes.",2012,0, 5935,Autoconfiguration of enterprise-class application deployment in virtualized infrastructure using OVF activation mechanisms,"IT-based services existing today, such as the ones supporting e-commerce systems or corporate applications, demand complex architectures to address enterprise-class requirements (high availability, vast user demand, etc.). In particular, most enterprise-class applications are multi-tiered and multi-node, i.e. composed of many independent systems with complex relationships among them. In recent years, virtualization technologies have brought many advantages to enterprise-class application implementation, such as cost consolidation and ease of management. However, even when using virtualization, the configuration operations associated with the deployment of enterprise-class applications are still a challenging task since they constitute a mostly manual, time-consuming and error-prone process. In this paper we propose a solution to that problem based on the automation of that procedure by means of the OVF activation mechanism. Our work focuses on a practical case, which has been used to assess the feasibility of our solution and to extract valuable lessons that we expose as part of the article.",2012,0, 5936,Power quality event source directivity detection based on V-I scatter graph,"This paper presents a power quality event source directivity detection method based on the V-I scatter graph. The proposed method is capable of detecting both voltage sag and transient power quality event source directions, in terms of upstream and downstream, based on one bus measurement. The V-I scatter graph is plotted based on the RMS voltage and current magnitudes at the same time sample onto the voltage versus current graph. Based on the basic principle of power flow, the power quality event source direction is determined by the sector where the plotted V-I samples exceed the voltage and current limits set on the V-I scatter graph. The proposed method is evaluated using simulation model waveforms. The evaluation shows very promising results of power quality event source direction detection for voltage sag caused by line faults and transients caused by capacitor bank energizing.",2012,0, 5937,Real time power system harmonic distortion assessment virtual instrument,"This paper presents a real time power system harmonic distortion assessment instrument implemented using the virtual instrument concept. The measurement hardware consists of measurement probes with a signal conditioning circuit connected to a data acquisition module. The data acquisition module is interfaced to a computer through a USB interface. A harmonic distortion assessment algorithm using the Fourier transform was implemented using MATLAB. A graphical user interface was developed to display the harmonic distortion assessment in real time.
The harmonic distortion assessment is able to evaluate individual harmonic orders up to the 50th order and the total harmonic distortion according to the IEEE 519, IEC 61000-2-2 or EN 50160 standards selected by the user. An acceptability index is also proposed in this paper for trend assessment. The proposed real time harmonic distortion assessment virtual instrument was evaluated through field testing. The test results show promising performance for the proposed real time power system harmonic distortion assessment virtual instrument.",2012,0, 5938,Jaguar: Time shifting air traffic scenarios using a genetic algorithm,"This paper describes the redesign of the Federal Aviation Administration's implementation of a genetic algorithm used for time shifting flights in air traffic scenarios. Time shifted scenarios are used in testing decision support tools that predict the potential loss of separation between aircraft. This paper describes the improvements that resulted when this application was redesigned and coded in Java. The improvements described in this paper include the following: maintainability improved as a result of a modular design using object-oriented techniques; usability improved as a result of more efficient logging techniques, configuration methods, and user interfaces; quality of the solution improved as a result of a more accurate method for calculating aircraft-to-aircraft conflicts; and timeliness for obtaining a solution improved as a result of using modern software engineering techniques, such as distributing the fitness function across multiple processors and caching fitness scores.",2012,0, 5939,A feasibility study for ARINC 653 based operational flight program development,"Aircraft manufacturers are constantly driving to reduce manufacturing lead times and cost at the same time as the product complexity increases and technology continues to change. As avionics systems have evolved, particularly over the past two or three decades, the level of functional integration has increased dramatically. Integrated modular avionics (IMA) is a solution that allows the aviation industry to manage their avionics complexity. IMA defines an integrated system architecture that preserves the fault containment and `separation of concerns' properties of the federated architectures, where independent functional chains share a common computing resource. On the software side, the air transport industry has developed the ARINC 653 specification as a standardized real time operating system (RTOS) interface definition for IMA. It allows hosting multiple applications of different software levels with partitions on the same hardware in the context of the IMA architecture. The primary components of ARINC 653 are the core and application software. This paper describes a study that assessed the feasibility of developing an ARINC 653 based operational flight program (OFP) prototype and will provide valuable lessons learned through OFP development. The OFP architecture consists of two distinct modules: a core that interfaces and monitors the hardware and provides a standard and common environment for software applications; and an application module that performs the avionics functions.
The prototype OFP is being integrated with the FA-50 simulator at the avionics laboratory of Korea Aerospace Industries.",2012,0, 5940,Maximizing fault tolerance in a low-SWaP data network,"The BRAIN (Braided Ring Availability/Integrity Network) is a radically different type of data network technology that uses a combination of a braided ring topology and high-integrity message propagation mechanisms. The BRAIN was originally designed to tolerate two passive failures or one passive and one active failure (including a Byzantine failure). In recent developments, the BRAIN's fault tolerance has been increased to the level where it can tolerate two active failures (including two Byzantine failures), as long as the two failures are not colluding. A colluding failure is an active failure that supports one or more other active failures to cause a system failure. To be effective, these active failures must be syntactically correct - i.e., cannot be detected by inline error detection, such as CRCs, checksums, physical encoding (e.g. 8B/10B), protocol rules, or reasonableness checks. The probability of colluding failures happening is so low that this new BRAIN, for all practical purposes, is a two-fault tolerant network. This improvement in fault tolerance comes at no additional cost. That is, it uses exactly the same minimal amount of hardware as the original BRAIN. As an example comparison, this new version of the BRAIN requires less size, weight, and power (SWaP) than a typical two-channel AFDX network, while tolerating more faults and more types of faults. The nodes used by the BRAIN are simplex (they require no redundancy in themselves for integrity) and the fault tolerance provided by the BRAIN can be made transparent to all application software. The BRAIN can check that redundant nodes (e.g. pair-wise adjacent nodes) produce bit-for-bit identical outputs, without resorting to clock-step self-checking pair processing that is rapidly becoming technologically infeasible due to the higher speeds of modern processors. The BRAIN also simplifies the creation of architectures with dissimilar redundancy. The design of these BRAIN improvements was guided by the use of the Symbolic Analysis Laboratory (SAL) model-checker in a novel use of formal methods for exploratory development early in the design cycle of a new protocol.",2012,0, 5941,Semi-supervised learning of decision making for parts faults to system-level failures diagnosis in avionics system,"Supervised fault detection and fault diagnosis are the techniques for recognizing small faults with abrupt or incipient time behavior in closed loops. Thus the scale of the acquired data and software has grown so large that active fault diagnosis can hardly handle the data. After decades of Artificial Intelligence development, AI technology has achieved significant results. Machine learning methods in AI have been widely used and developed in the field of fault diagnosis and prognosis. This paper discusses and demonstrates a complete machine learning fault diagnosis structure based on support vector regression, neural gas clustering, multi-class support vector machines, and Bayesian fuzzy fault trees, which are semi-supervised to isolate and predict faults from a component to a system/subsystem when some faults are partly uncertain, and finally to provide a decision for maintenance. It is crucial that machine learning methods are applied in fault detection and prediction.
Furthermore, diagnostic intelligence can be found in multi-dimensional empirical data and in the granularity partitioning of the avionics system, based on the discovered knowledge and its representation. Therefore, symptom-knowledge-information is suitable for representing the faults or failures in a system. The presented structure is generic and can be extended to the verification and validation of other diagnosis and prognostic algorithms on different platforms. It has been successful in preventing aircraft system/subsystem failures and in identifying and predicting failures that will occur, which provides a real application for producing health management information and decisions.",2012,0, 5942,A QoS-Aware Service Optimization Method Based on History Records and Clustering,"The number of alternative Web services that provide the same functionality but differ in non-functional characteristics, i.e., Quality of Service (QoS) parameters, is growing larger and larger, and service providers cannot always deliver their promised quality. In view of this challenge, we propose a QoS-aware service optimization method based on history records and clustering, named QHRC. In this method, we take advantage of the history records of services' past performance quality rather than using the tentative QoS values provided by the service providers. An adaptive hierarchical fuzzy clustering algorithm named H2D-SC is adopted to cluster the QoS history records for each Web service, and the centroids of the subclusters are then used to generate QoS history-record based composition plans. Our method aims at ranking the service composition plans by the corresponding QoS history-record based composition plans to select the most qualified service composition plan. Finally, we assess the efficiency of the proposed method with an example and experiments.",2012,0, 5943,A QoS-Aware Performance Prediction for Self-Healing Web Service Composition,"As a composition consists of different Web Service invocations, when one component service fails, the composite Web Service will not operate appropriately. The easy solution to this problem is to reselect the service every time a service fails. However, this is not feasible due to the high complexity of the reselection, which interrupts the execution of the composite service, leads to extra delay and influences the performance of the composite service. In this paper we propose an approach to Quality of Service (QoS) aware performance prediction for self-healing Web Service composition. In our approach, we first propose a self-healing cycle which has three phases: monitoring, diagnostics and repair. Next, in order to minimize the number of reselections, we propose a Decision Tree based performance prediction approach. With our approach, the component services which have previously violated QoS parameter values can be predicted. We demonstrate that the proposed solution performs better in supporting self-healing Web Service composition compared to the traditional approach.",2012,0, 5944,Visualizing concurrency faults in ARINC-653 real-time applications,"The ARINC-653 standard architecture for flight software specifies an application executive (APEX) which provides an application programming interface and defines a hierarchical framework which provides health management for error detection and recovery.
In every partition of the architecture, however, processes may have to deal with asynchronous real-time signals from peripheral devices or may communicate with other processes through blackboards or buffers. This configuration may lead programs into concurrency faults such as unintended race conditions, which are common and difficult to remove by testing. Unfortunately, existing tools for reporting concurrency faults in applications that use concurrent signal handlers can neither represent the complex interactions between an ARINC-653 application and its error handlers nor provide effective means for understanding the dynamic behavior of concurrent signal handlers involved in data races. Thus, this paper presents an intuitive tool that visualizes the partial ordering of runtime events to detect concurrency faults in an ARINC-653 application that uses concurrent signal handlers. It uses vertically parallel arrows with different colors to capture the logical concurrency between the application, its error handlers and concurrent signal handlers, and materializes synchronization operations with differently colored horizontal arrows. Our visualization tool allows users to visually detect data races at a glance and provides a good understanding of the program internals for an easy debugging process.",2012,0,5945 5945,Visualizing concurrency faults in ARINC-653 real-time applications,"Presents a collection of slides covering the following topics: the ARINC-653 standard; defines an application executive (APEX) to provide services for integrated modular avionics; provides temporal- and spatial-partitioning to enable applications, each executing in a partition, to run simultaneously and independently on the same architecture; health monitor to detect and provide recovery mechanisms for hardware and software failures.",2012,0, 5946,A model-driven approach for configuring and deploying Systems of Systems,"Configuration and deployment of systems for defense and air traffic control is often a complex task because a System of Systems (SoS) is always distributed over different geographic areas, composed of hundreds of components (e.g. applications, processes, services, hosts), running under multiple hardware constraints, on different resources, and subject to mission critical requirements. The configuration of such an SoS, or a part of it, involves the production of many configuration files describing the structure of the SoS in general, the configuration parameters of each component, and how each component has to interact with the others. Due to the considerable size and complexity of the configuration files (i.e. hundreds of lines of code), the adoption of a manual approach is clearly error prone. This work presents a model-driven approach for supporting the configuration of a mission-critical SoS or part of it.",2012,0, 5947,CDA: A Cloud Dependability Analysis Framework for Characterizing System Dependability in Cloud Computing Infrastructures,"Cloud computing has become increasingly popular by obviating the need for users to own and maintain complex computing infrastructure. However, due to their inherent complexity and large scale, production cloud computing systems are prone to various runtime problems caused by hardware and software failures. Dependability assurance is crucial for building sustainable cloud computing services.
Although many techniques have been proposed to analyze and enhance the reliability of distributed systems, there is little work on understanding the dependability of cloud computing environments. As virtualization has been an enabling technology for the cloud, it is imperative to investigate the impact of virtualization on cloud dependability, which is the focus of this work. In this paper, we present a cloud dependability analysis (CDA) framework with mechanisms to characterize failure behavior in cloud computing infrastructures. We design failure-metric DAGs (directed acyclic graphs) to analyze the correlation of various performance metrics with failure events in virtualized and non-virtualized systems. We study multiple types of failures. By comparing the generated DAGs in the two environments, we gain insight into the impact of virtualization on cloud dependability. This paper is the first attempt to study this crucial issue. In addition, we exploit the identified metrics for failure detection. Experimental results from an on-campus cloud computing test bed show that our approach can achieve high detection accuracy while using a small number of performance metrics.",2012,0, 5948,Entropy-Based Detection of Incipient Faults in Software Systems,"This paper develops and validates a methodology to detect small, incipient faults in software systems. Incipient faults such as memory leaks slowly deteriorate the software's performance over time and, if left undetected, the end result is usually a complete system failure. The proposed method combines tools from information theory and statistics: entropy and principal component analysis (PCA). The entropy calculation summarizes the information content associated with the collected low-level metrics and reduces the computational burden incurred by the subsequent PCA step which detects underlying patterns and correlations present in the multivariate data, as well as distortions in the correlations indicative of an incipient fault. We use the technique to detect memory bloat within the Trade6 enterprise application under dynamic workload patterns, showing that small leaks can be detected quickly and with a low false alarm rate. Our method is also robust to the periodic/seasonal patterns affecting the metrics used to detect the fault.",2012,0, 5949,Assuring software quality by code smell detection,"In this retrospective we will review the paper """"Java Quality Assurance by Detecting Code Smells"""" that was published ten years ago at WCRE. The work presents an approach for the automatic detection and visualization of code smells and discusses how this approach could be used in the design of a software inspection tool. The feasibility of the proposed approach was illustrated with the development of jCOSMO, a prototype code smell browser that detects and visualizes code smells in JAVA source code. It was the first tool to automatically detect code smells in source code, and we demonstrated the application of this tool in an industrial quality assessment case study. In addition to reviewing the WCRE 2002 work, we will discuss subsequent developments in this area by looking at a selection of papers that were published in its wake. In particular, we will have a look at recent related work in which we empirically investigated the relation between code smells and software maintainability in a longitudinal study where professional developers were observed while maintaining four different software systems that exhibited known code smells.
We conclude with a discussion of the lessons learned and opportunities for further research.",2012,0, 5950,Can Lexicon Bad Smells Improve Fault Prediction?,"In software development, early identification of fault-prone classes can save a considerable amount of resources. In the literature, source code structural metrics have been widely investigated as one of the factors that can be used to identify faulty classes. Structural metrics measure code complexity, one aspect of source code quality. Complexity might affect program understanding and hence increase the likelihood of inserting errors in a class. Besides the structural metrics, we believe that the quality of the identifiers used in the code may also affect program understanding and thus increase the likelihood of error insertion. In this study, we measure the quality of identifiers using the number of Lexicon Bad Smells (LBS) they contain. We investigate whether using LBS in addition to structural metrics improves fault prediction. To conduct the investigation, we assess the prediction capability of a model while using i) only structural metrics, and ii) structural metrics and LBS. The results on three open source systems, ArgoUML, Rhino, and Eclipse, indicate that there is an improvement in the majority of the cases.",2012,0, 5951,A Framework to Compare Alert Ranking Algorithms,"To improve software quality, rule checkers statically check whether software contains violations of good programming practices. On a real-sized system, the alerts (rule violations detected by the tool) may number in the thousands. Unfortunately, these tools generate a high proportion of """"false alerts"""", which, in the context of a specific software system, should not be fixed. Huge numbers of false alerts may render impossible the finding and correction of """"true alerts"""" and dissuade developers from using these tools. In order to overcome this problem, the literature provides different ranking methods that aim at computing the probability of an alert being a """"true one"""". In this paper, we propose a framework for comparing these ranking algorithms and identify the best approach to rank alerts. We have selected six algorithms described in the literature. For comparison, we use a benchmark covering two programming languages (Java and Smalltalk) and three rule checkers (FindBugs, PMD, SmallLint). Results show that the best ranking methods are based on the history of past alerts and their location. We could not identify any significant advantage in using statistical tools such as linear regression or Bayesian networks or ad-hoc methods.",2012,0, 5952,The Secret Life of Patches: A Firefox Case Study,"The goal of the code review process is to assess the quality of source code modifications (submitted as patches) before they are committed to a project's version control repository. This process is particularly important in open source projects to ensure the quality of contributions submitted by the community; however, the review process can promote or discourage these contributions. In this paper, we study the patch lifecycle of the Mozilla Firefox project. The model of a patch lifecycle was extracted from both the qualitative evidence of the individual processes (interviews and discussions with developers), and the quantitative assessment of the Mozilla process and practice. We contrast the lifecycle of a patch in pre- and post-rapid release development.
A quantitative comparison showed that while the patch lifecycle remains mostly unchanged after switching to rapid release, the patches submitted by casual contributors are disproportionately more likely to be abandoned compared to those from core contributors. This suggests that patches from casual developers should receive extra care to both ensure quality and encourage future community contributions.",2012,0, 5953,Software Aging Detection Based on NARX Model,"Software aging poses a severe threat to software reliability. In this paper, we present a method based on nonlinear autoregressive models with exogenous inputs (NARX) to detect the aging phenomenon of a software system. This method considers the relationships among multiple variables and the influence of delays in the historical data. The experimental analysis shows that the NARX model can be effectively applied to software aging detection.",2012,0, 5954,Quality Measurement for Cloud Based E-commerce Applications,"Cloud based e-Commerce applications are favored over traditional systems due to their capacity to reduce costs in various aspects, such as resource usage, operating costs, capital costs, and maintenance and service costs. The core functionality of optimizing performance and automatic system recovery is crucial in web applications. Using a cloud platform for web applications increases productivity and decreases the replication of business documents, saving businesses money in the current economic climate. A stable system is needed to achieve this, and quality measurement is crucial to establish baselines to help predict resources for the future of the business. The proposed quality measurement model is one designed for Cloud based e-Commerce applications. It aims to create a repository, or an error-Knowledge Management System (e-KMS), for known online defects with the capacity to add future defects as they occur when using the applications. By mapping these defects directly to the quality factors affected, accurate quality measurement can be achieved.",2012,0, 5955,Functional safety aspects of pattern detection algorithms,"Pattern detection algorithms may be used as part of safety-relevant processes employed by industrial systems. Current approaches to functional safety mainly focus on random faults in hardware and the avoidance of systematic faults in both software and hardware. In this paper we build on the concepts of the international standard for functional safety IEC 61508 to extend safety-relevant notions to numerical and logical processes (algorithms) employed in pattern detection systems. In particular, we target the uncertainty pertaining to face detection systems where incorrect detection affects the overall system performance. We discuss a dual channel system that comprises two of the most commonly used and widely available face detection algorithms, Viola-Jones and Kienzle et al. We present a method for deriving the probability of failure on demand (PFD) from the combination of these two channels using both 1oo2 and 2oo2 voting schemes.
Finally, we compare experimental results from both the perspectives of availability and safety, and present conclusions with respect to the appropriate choice of information combination schemes and system architectures.",2012,0, 5956,Dynamic redeployment of control software in distributed industrial automation systems during runtime,"Current research on the reconfiguration of automated industrial manufacturing systems focuses mainly on the reconfiguration of the production process. For uninterrupted operation, the distributed control system also has to allow for reconfiguration during runtime. In this paper a method for dynamic redeployment of control software during runtime is presented, comprising a structural and behavioral concept. To enable this dynamic redeployment, control hardware faults are detected and the control software deployment is adjusted to the smaller number of controllers. Likewise, additional hardware resources are integrated into the system. An implementation and evaluation within a demonstration scenario shows the feasibility of the concept.",2012,0, 5957,Variable neighborhood search-based subproblem solution procedures for a parallel shifting bottleneck heuristic for complex job shops,"The shifting bottleneck heuristic (SBH) for complex job shops decomposes the overall scheduling problem into a series of scheduling problems related to machine groups. These smaller, more tractable, scheduling problems are called subproblems. The heuristic is based on a disjunctive graph that is used to model the relationship between the sub-problems. In this paper, we use a parallel implementation of the SBH to assess the impact of a variable neighborhood search-based subproblem solution procedure (SSP) on global performance measures like the total weighted tardiness. Based on designed simulation experiments, we demonstrate the advantage of using high-quality SSPs.",2012,0, 5958,Controlling Hardware Synthesis with Aspects,"The synthesis and mapping of applications to configurable embedded systems is a notoriously hard process. Tools have a wide range of parameters, which interact in very unpredictable ways, thus creating a large and complex design space. When exploring this space, designers must understand the interfaces to the various tools and apply, often manually, a sequence of tool-specific transformations, making this an extremely cumbersome and error-prone process. This paper describes the use of aspect-oriented techniques for capturing synthesis strategies for tuning the performance of applications' kernels. We illustrate the use of this approach when designing application-specific architectures generated by a high-level synthesis tool. The results highlight the impact of the various strategies when targeting custom hardware and expose the difficulties in devising these strategies.",2012,0, 5959,Analytical Design Space Exploration Based on Statistically Refined Runtime and Logic Estimation for Software Defined Radios,"The exploration of the design space for complex hardware-software systems requires accurate models for the system components, which are often not available in early design phases, resulting in error-prone resource estimations. For a HW/SW system with a finite set of design points, we present an analytical approach to evaluate the quality of a distinctive design point choice.
Our approach enables the designer to gain a measure of statistical confidence as to whether an application with real-time requirements can be successfully implemented on a chosen set of processors and reconfigurable logic. By a statistical evaluation of runtime, latency, logic resources and memory requirements, a probability metric for each realization alternative in the system is derived that gives a realization probability for different mappings and different combinations of chips. We apply our principles to an FPGA/DSP digital radio receiver system and evaluate the realization probabilities for different combinations of chip sizes and mappings. Finally, we compare our approach against conventional estimation techniques, such as worst-case evaluation.",2012,0, 5960,PROCOMON: An Automatically Generated Predictive Control Signal Monitor,"Today, security and safety applications are often a large conglomerate of different complex components. Because of a strong trend to high system integration to fulfill financial and production cost constraints, as many of these components as possible are combined to form large-scale systems-on-chip. Risks of dependability and security problems caused by device degradation and adversaries have led to a wide range of research on fault detection and recovery techniques in recent years. Especially in safety systems, the concurrent use of different checking techniques to protect the integrity of the operation is preferred. Standard duplication or triplication methods for such critical devices do not completely fulfill this property, raising a need for new on-line testing and recovery methodologies. Furthermore, the smart-card sector produces a strong need for new checking techniques with a low resource footprint. Therefore, this paper presents a novel automated hardware generation flow to create a predictive control signal monitor unit in an automated way. Depending on the instruction loaded by the processor pipeline, this unit predicts the signature of the following control signal changes. Hence, a new way of fault detection, weak checking, is implemented without introducing any large additional hardware blocks. A case study using an open-source processor is also presented to show the applicability of our approach.",2012,0, 5961,Concurrent error detection scheme for HaF hardware,"HaF (Hash Function) is a dedicated cryptographic hash function considered for verification of the integrity of data. It is suitable for both software and hardware implementation. HaF has an iterative structure. This implies that even a single transient error at any stage of the computation of the hash value results in a large number of errors in the final hash value. Hence, detection of errors becomes a key design issue. In the hardware design of cryptographic algorithms, concurrent error detection (CED) techniques have been proposed to protect the encryption and decryption process not only from random faults but also from faults intentionally injected by attackers. In this paper, we show the propagation of errors in the VHDL model of HaF-256 and then we propose and analyse some error detection schemes. In the proposed CED scheme, all the components are protected and all single and multiple, transient and permanent bit-flip faults will be detected.",2012,0, 5962,Detecting partially fallen-out magnetic slot wedges in AC machines based on electrical quantities only,"The winding system of high voltage machines is usually composed of pre-formed coils.
To facilitate the winding fitting process, stator slots are usually wide open. These wide-open slots are known to cause disturbances of the magnetic field distribution. Thus losses are increased and the machine's efficiency is reduced. A common way to counteract this drawback is to place magnetic slot wedges in the slots. During operation the wedges are exposed to high magnetic and mechanical forces. As a consequence, wedges can come loose and finally fall out into the air-gap. State-of-the-art missing slot wedge detection techniques suffer from the drawback that the machine must be disassembled, which is usually very time consuming. In this paper a method is investigated which provides the possibility of detecting missing magnetic slot wedges based only on measurements of electrical quantities and without disassembling the machine. The method is based on exploiting the machine's reaction to transient voltage excitation. The resulting current response contains information on the machine's magnetic state. This information is composed of several machine asymmetries, including the fault (missing wedge) induced asymmetry. A specific signal processing chain provides a distinct separation of all asymmetry components and delivers a highly sensitive fault indicator. Measurements for several fault cases are presented and discussed. A sensitivity analysis shows the high accuracy of the method and the ability to detect even partially missing slot wedges.",2012,0, 5963,On the expressiveness of business process modeling notations for software requirements elicitation,"Business process models have proved to be useful for requirements elicitation. Since software development depends on the quality of the requirements specifications, generating high-quality business process models is therefore critical. A key factor for achieving this is the expressiveness in terms of completeness and clarity of the modeling notation for the domain being modeled. The Bunge-Wand-Weber (BWW) representation model is frequently used for assessing the expressiveness of business process modeling notations. This article presents some propositions to adapt the BWW representation model to allow its application to the software requirements elicitation domain. These propositions are based on the analysis of the Guide to the Software Engineering Body of Knowledge (SWEBOK) and the Guide to the Business Analysis Body of Knowledge (BABOK). The propositions are validated next by experts in business process modeling and software requirements elicitation. The results show that the BWW representation model needs to be specialized by including concepts specific to software requirements elicitation.",2012,0, 5964,From requirements to software trustworthiness using scenarios and finite state machine,"The notion of software trustworthiness evaluation in the literature is inherently subjective. It depends on how the software is used and in what context it is used. Moreover, different users evaluate a software system according to different criteria, points of view and backgrounds. Therefore, to assess software trustworthiness, it is not wise to look for a general set of characteristics and parameters; instead, there is a need to define a model that is tailored to the functional and quality requirements that the software has to fulfill. This paper shows a way to model software trustworthiness by using Finite State Machine (FSM) notation and scenarios.
The approach introduces a novel behavioristic model for verifying software trustworthiness based on scenarios of interactions between the software and its users and environment. These interactions consist of simple scenarios of examples or counterexamples of desired behavior. The approach supports incremental changes in requirements/scenarios. An experiment applying the model for verifying software trustworthiness based on the scenarios of interactions between the software and its users and environment is presented in a separate case study [40].",2012,0, 5965,A Survey of Key Factors Affecting Software Maintainability,"Today, software quality is a major point of concern and it is important to be able to assess the maintainability of a software system. Maintainability is the ability of the system to undergo modifications with a degree of ease. These changes could impact interfaces, components, and features when adding or modifying the functionality and meeting new customer requirements to cope with the changing environment. This paper describes several types of vital factors that affect the maintainability of software systems as proposed by different researchers in different software maintainability models. These factors play a crucial role in maintainability assessments. The maintainability models can be used to improve the quality of the software product so that maintenance can be performed efficiently and effectively.",2012,0, 5966,Reliability Analysis of Task Model in Real-Time Fault-Tolerant Systems,"One notable advantage of the Model-Driven Architecture (MDA) method is that software developers can do sufficient analysis and tests on software models in the design phase, which helps build high confidence in the expected software performance and behaviors. In this paper, we present a general reliability model, based on the relationship between real-time requirements and the time costs of fault tolerance, to analyze the reliability of the task execution model in the real-time software design phase when using the MDA method. This reliability model defines arrival rates of faults and fault-tolerant mechanisms to model non-permanent faults and the corresponding fault handling costs. By analyzing the probability of tasks being schedulable in the worst-case execution scenario, reliability and schedulability are combined into a unified analysis framework, and an algorithm for reliability analysis is also given under static priority scheduling. When no assumptions about fault occurrences are made on the task model, this reliability model regresses to a generic schedulability model.",2012,0, 5967,Building Useful Program Analysis Tools Using an Extensible Java Compiler,"Large software companies need customized tools to manage their source code. These tools are often built in an ad-hoc fashion, using brittle technologies such as regular expressions and home-grown parsers. Changes in the language cause the tools to break. More importantly, these ad-hoc tools often do not support uncommon-but-valid code patterns. We report our experiences building source-code analysis tools at Google on top of a third-party, open-source, extensible compiler. We describe three tools in use on our Java code base. The first, Strict Java Dependencies, enforces our dependency policy in order to reduce JAR file sizes and testing load. The second, error-prone, adds new error checks to the compilation process and automates repair of those errors at a whole-code base scale.
The third, Thindex, reduces the indexing burden for a Java IDE so that it can support Google-sized projects.",2012,0, 5968,Impact Analysis in the Presence of Dependence Clusters Using Static Execute after in WebKit,"Impact analysis based on code dependence can be an integral part of software quality assurance by providing opportunities to identify those parts of the software system that are affected by a change. Because changes usually have far-reaching effects in programs, effective and efficient impact analysis is vital, which has different applications including change propagation and regression testing. Static Execute After (SEA) is a relation on program elements (procedures) that is efficiently computable and accurate enough to be a candidate for use in impact analysis in practice. To assess the applicability of SEA in terms of capturing real defects, we present results on integrating it into the build system of WebKit, a large, open source software system, and on related experiments. We show that a large number of real defects can be captured by impact sets computed by SEA, albeit many of them are large. We demonstrate that this is not an issue in applying it to regression test prioritization, but generally it can be an obstacle in the path to efficient use of impact analysis. We believe that the main reason for large impact sets is the formation of dependence clusters in code. As apparently dependence clusters cannot be easily avoided in the majority of cases, we focus on determining the effects these clusters have on impact analysis.",2012,0, 5969,When Does a Refactoring Induce Bugs? An Empirical Study,"Refactorings are - as defined by Fowler - behavior preserving source code transformations. Their main purpose is to improve maintainability or comprehensibility, or also reduce the code footprint if needed. In principle, refactorings are defined as simple operations that are """"unlikely to go wrong"""" and introduce faults. In practice, refactoring activities can carry risks, as other changes do. This paper reports an empirical study carried out on three Java software systems, namely Apache Ant, Xerces, and ArgoUML, aimed at investigating to what extent refactoring activities induce faults. Specifically, we automatically detect (and then manually validate) 15,008 refactoring operations (of 52 different kinds) using an existing tool (Ref-Finder). Then, we use the SZZ algorithm to determine whether it is likely that refactorings induced a fault. Results indicate that, while some kinds of refactorings are unlikely to be harmful, others, such as refactorings involving hierarchies (e.g., pull up method), tend to induce faults very frequently. This suggests more accurate code inspection or testing activities when such specific refactorings are performed.",2012,0, 5970,Using Coding-Based Ensemble Learning to Improve Software Defect Prediction,"Using classification methods to predict software defect proneness with static code attributes has attracted a great deal of attention. The class-imbalance characteristic of software defect data makes the prediction much more difficult; thus, a number of methods have been employed to address this problem. However, these conventional methods, such as sampling, cost-sensitive learning, Bagging, and Boosting, could suffer from the loss of important information, unexpected mistakes, and overfitting because they alter the original data distribution.
This paper presents a novel method that first converts the imbalanced binary-class data into balanced multiclass data and then builds a defect predictor on the multiclass data with a specific coding scheme. A thorough experiment with four different types of classification algorithms, three data coding schemes, and six conventional imbalance data-handling methods was conducted over the 14 NASA datasets. The experimental results show that the proposed method with a one-against-one coding scheme is on average superior to the conventional methods.",2012,1, 5971,Test effectiveness index: Integrating product metrics with process metrics,"Defect measurement is an important method in the improvement of software quality. Recent approaches to defect measurement are unsuitable for small software organizations because of their complexity. This paper gives a simple approach to defect measurement, which integrates the power of product metrics with process metrics, i.e., it can not only detect the defect-prone modules, but also find the problems in the software process. This approach uses the results of two successive rounds of testing to create the test effectiveness index constructively. A case study is conducted and the results indicate that the defect-prone modules can be identified and problems of the testing process can be discovered by the test effectiveness index.",2012,0, 5972,De novo co-assembly of bacterial genomes from multiple single cells,"Recent progress in DNA amplification techniques, particularly multiple displacement amplification (MDA), has made it possible to sequence and assemble bacterial genomes from a single cell. However, the quality of single cell genome assembly has not yet reached the quality of normal multicell genome assembly due to the coverage bias and errors caused by MDA. Using a template of more than one cell for MDA or combining separate MDA products has been shown to improve the result of genome assembly from a few single cells, but providing identical single cells, as a necessary step for these approaches, is a challenge. As a solution to this problem, we give an algorithm for de novo co-assembly of bacterial genomes from multiple single cells. Our novel method not only detects the outlier cells in a pool, it also identifies and eliminates their genomic sequences from the final assembly. Our proposed co-assembly algorithm is based on the colored de Bruijn graph, which has been recently proposed for de novo structural variation detection. Our results show that de novo co-assembly of bacterial genomes from multiple single cells outperforms single cell assembly of each individual one in all standard metrics. Moreover, co-assembly outperforms mixed assembly in which the input datasets are simply concatenated. We implemented our algorithm in a software tool called HyDA which is available from http://compbio.cs.wayne.edu/software/hyda.",2012,0, 5973,A method of zero self-modification and temperature compensation for indoor air quality detection based on a software model,"It is very difficult to apply a non-dispersive infrared sensor to detect the indoor air quality and maintain very low zero and temperature drift over long periods. Frequent manual zero setting and calibration are required. To solve the issues of zero and temperature drift of the non-dispersive infrared sensor, a software model based on zero gas intensity, reference channel intensity, standard temperature, environmental temperature, temperature drift coefficient, etc.
has been established to automatically modify and compensate for the zero and temperature drift existing in the long-term continuous operation of the infrared sensor. The test result and long-term application indicate the detection precision of the instrument is less than 5% F.S. in various changing environmental conditions. The average detection precision of carbon dioxide has been improved from 9.26% before comprehensive processing to 1.23% after processing, while the average detection precision of methane has been improved from 10.61% before comprehensive processing to 0.70% after processing. As a result, the disadvantages existing in many gas detectors, including poor stability and a short calibration cycle, have been overcome, thus effectively improving the detection precision and stability of the instrument and reducing the maintenance cost.",2012,0, 5974,Initial results of web based blended learning in the field of air cargo security,"With the currently implemented high standards in passenger screening, air cargo is being perceived as the security chain's weakest link in civil aviation and therefore becomes an attractive target for terrorists. Detailed regulations exist to harden air cargo against terrorist attacks. Blended learning training methods can be used to enable screeners to detect suspicious consignments even in situations when technical measures (e.g. x-ray) do not indicate any threat. In this study, blended learning was conducted at a handling agent's premises at a Swiss airport in three courses (seven trainees in total) and evaluated subsequently. Results show a very high satisfaction with the training and very high scores in the final exam. However, trainees repeatedly skipped text inside the web based training (WBT), leading to the conclusion that the WBT has to be optimized in terms of presentation modes. Suggestions on how to create even more engaging WBT content can be found in various methods of classification of computer based training (CBT) and are discussed in this paper.",2012,0, 5975,Method of Information Extracting from Section Map Based on ActiveX Technology,"In the production of hydrographic surveying, a lot of inside work, such as dimensioning of Basal Area and tables, is still completed manually and is time-consuming and error-prone. This paper discusses extracting river cross-sectional information from CAD with ActiveX software integration technology; some information is also processed and excavated, and the Basal Area and earthwork table are completed automatically. Therefore, this work achieves greater efficiency and quality. Undoubtedly, it has great applied value.",2012,0, 5976,The Forensics for Detecting Manipulation on Part of Text,"The advent of sophisticated photo editing software has made it increasingly easier to manipulate text in digital images. Visual inspection cannot definitively distinguish the forgeries from authentic photographs. In the imaging process, due to defocusing, atmospheric turbulence, diffraction and the defects of the imaging device, a digital image cannot reproduce the details of texture perfectly, while the text image obtained by photo-editing software will do better, and will show no difference from the authentic image in appearance. In this paper, we describe a new forensics technique that focuses on manipulation on part of text.
By decomposing the text and extracting the characteristics of edge points in the text image, we use a support vector machine (SVM) to train a classification model which is used to identify the authenticity of text messages in images. Our test result indicates that this method can detect the manipulation on part of text, and the general detection rate is promising.",2012,0, 5977,Research on Dynamic Message Routing for ESB Based on Message System,"This paper applies the message routing components of the Enterprise Integration Patterns to the routing model of the enterprise service bus, and builds a reliable, fault-tolerant dynamic message routing model. We give the core modules, algorithms and principles of the message router, and compare it with the existing reactive routing model and the predictive dynamic routing model. On this basis, we analyze the problems to be solved for dynamic routing in the enterprise service bus as well as future research directions.",2012,0, 5978,Fault Testing Device of Fire Control System Based on Virtual Instrumentation Technology,"The fire control system of the mine dispenser vehicle is the core of rocket launching and mine configuration. However, it is prone to various faults and failures due to the complex battlefield environments. In this paper, a fault testing device for the fire control system with diversified functions, easier portability and better performance is designed and developed. The main work is as follows. The hardware platform of the fault testing device consists of an upper computer and a lower computer. The upper computer undertakes the display of the diagnosis process, fault causes and troubleshooting results. It is mainly composed of an industrial processing computer (IPC), touch screen, LCD and other peripheral interfaces. The lower computer is responsible for the acquisition, conditioning and connection of the electrical control signals. It contains two CPLD chips, a signal conditioning circuit, a serial interface, as well as input interfaces for a large amount of electrical control signals of the fire control system. In addition, the fault testing software application implements the display of the diagnosis process, fault causes and troubleshooting results. It is developed with the virtual instrumentation software platform LabVIEW. The fault testing device offers an effective technical method for improving the repair and protection skills of the mine dispenser vehicle.",2012,0, 5979,Pattern-Based Modifiability Analysis of EJB Architectures,"Over the last years, several techniques to evaluate modifiability of software architectures have been developed. One such technique is change impact analysis (CIA), which aids developers in assessing the effects of change scenarios on architectural modules. However, CIA does not take into account the pattern structures behind those modules. In architectural frameworks, such as the Enterprise Java Beans (EJB) architecture, the use of patterns is a key practice to achieve modifiability goals. Although patterns can be easily understood individually, when an application combines several pattern instances the analysis is not straightforward. In practice, many EJB designs are assessed in an ad-hoc manner, relying on the developers' experience. A way of dealing with this problem is through the integration of modifiability analysis models and patterns. We propose a knowledge-based approach that explicitly links the EJB patterns to a scenario-based analysis for multi-tier architectures.
Specifically, we have developed a modifiability reasoning framework that reifies the EJB patterns present in a given design solution and, for a set of predetermined scenarios, the reasoning framework identifies which architectural elements can be affected by the scenarios. The reasoning framework outputs metrics for each of the scenarios regarding specific EJB tiers. The main contribution of this approach is that it assists developers in evaluating EJB alternatives, providing quantitative information about the modifiability implications of their decisions. A preliminary evaluation has shown that the reasoning framework is viable for analyzing EJB designs.",2012,0, 5980,A Semantic Web based approach for design pattern detection from source code,"Design patterns provide experience reusability and increase the quality of object-oriented designs. Knowing which design patterns are implemented in a software system is important in comprehending, maintaining and refactoring its design. However, despite the interest in using design patterns, traditionally, their usage is not explicitly documented. Therefore, a method is required to reveal this information from some artifacts of the systems (e.g. source code, models, and executables). In this paper, an approach is proposed which uses Semantic Web technologies for automatically detecting design patterns from Java source code. It is based on a semantic data model as the internal representation, and on SPARQL query execution as the analysis mechanism. Experimental evaluations demonstrate that this approach is both feasible and effective, and it reduces the complexity of detecting design patterns to creating a set of SPARQL queries.",2012,0, 5981,Linear parameter-varying model identification with structure selection for autonomic web service systems,"Information technology (IT) systems must provide their users with prescribed quality of service (QoS) levels, usually defined in terms of application performance. QoS requirements are in general difficult to satisfy, since the system workload may vary by orders of magnitude within the same business day. To meet QoS requirements, resources have to be dynamically allocated among running applications, re-configuring them at run-time. To deal with resource allocation so as to manage system overload issues, admission control and server virtualization are typically used: the former is a protection mechanism that rejects requests under peak workload, whereas the latter allows partitioning physical resources such as CPU and disks into multiple virtual ones. For designing effective controllers to ensure the desired QoS levels, a reliable model of the server dynamics is needed. The given systems, while retaining the time-varying nature that allows one to model workload variations. To address this issue, a constrained black-box subspace identification approach endowed with a novel structure selection is designed, and the performance of the identified models is assessed on experimental data.",2012,0, 5982,Software complexity: A fuzzy logic approach,Software complexity is one of the important quality attributes and predicting complexity is a difficult task for software engineers. Current measures can be used to compute complexity but these methods are not sufficient. New methods or paradigms are being sought for predicting complexity because complexity prediction can help us in estimating many other quality attributes like testability and maintainability.
The main goal of this paper is to explore the role of new paradigms like fuzzy logic in complexity prediction. In this paper we have proposed a fuzzy-logic-based approach to predict software complexity.,2012,0, 5983,An automatic transient detection system which can be incorporated into an algorithm to accurately determine the fault level in networks with DG,"The use of distributed generation is on the increase within the United Kingdom and the Distribution Network Operators (DNOs) require a novel approach to assessing potential fault levels in near real-time to assist with network planning and design. The short circuit current is the current expected to flow into a short circuit fault at a known point on the system, and therefore, the fault level is the product of the open circuit voltage and short circuit current. Recent techniques used by the industry involve power system software that calculates the fault level in accordance with BS EN 60909; however, this frequently provides a conservative answer and possibly this will be a factor restricting future connections of distributed generation. This paper will describe the initial stages of the development of an algorithm which can be used alongside a digital signal controller (a Texas Instruments TMS320F28335) to calculate in near real-time the fault level at a specified point on the distribution network. Matlab & Simulink are utilised both to simulate source faults and to create the initial elements of the algorithm, which are analysed utilising the test program. The implementation of a Short Time Fourier Transform (STFT) to determine when a fault occurs is discussed. Finally, the results from these simulations are examined and presented alongside a discussion of future work.",2012,0, 5984,Evaluating the reliability & availability of more-electric aircraft power systems,"With future aircraft designs increasingly embracing the more-electric concept, there is likely to be a greater reliance on electrical systems for safe flight. More-electric aircraft (MEA) will have a greater number of electrical loads which are critical to the aircraft flight. It is therefore essential that the design of aircraft power systems embraces new technologies and methods in order to achieve targets for aircraft certification. The various design drivers (e.g. weight, space) for aircraft will also have to be considered when incorporating reliability into future MEA. This paper will investigate options for future platforms to meet reliability and availability targets whilst continuing to improve overall efficiency. The paper proposes a software tool that has the ability to determine the reliability of a number of potential alternative design architectures. This paper outlines present designs of such systems and how reliability is enhanced through the use of redundancy and back-up generation. The regulatory challenges associated with aircraft are summarised, including a discussion on reliability targets for various loads. Techniques for assessing the reliability of aircraft systems are described and simple examples of their application to aircraft electrical systems are provided.
These examples will highlight the advantages and drawbacks attributed to each method and the reasoning behind the selection of these techniques on which an analysis tool can be based.",2012,0, 5985,SNR estimation techniques for low SNR signals,"Radio receivers contain a set of adaptive algorithms that estimate the received signal's unknown parameters required by the receiver to demodulate the signal. Often missing from the standard parameter list is Signal to Noise ratio (SNR) or equivalently Eb/No. The SNR estimate is the ubiquitous scale factor associated with all maximum likelihood estimators. The SNR qualifies the signal quality, letting the estimator algorithms know whether the observables are reliable, and hence should make a significant contribution to the estimate, or are unreliable and should make a limited contribution to the estimate. Error correcting algorithms also use SNR to set soft decision probabilities and likelihood ratios. Many SNR estimates are accurate at high SNR when we really don't need the estimates and are inaccurate at low SNR when we have most need for them. This paper discusses two SNR estimator techniques which maintain estimation accuracy down to very low SNR values.",2012,0, 5986,Virtual synchrophasor monitoring network,"Synchrophasors are considered one of the most important measured quantities in the future of power systems. The main reason for synchrophasor monitoring is to prevent or detect in advance events that may cause failure or even damage to the transmission network. That is why Phasor Measurement Units (PMUs) are getting increased attention nowadays. The main function of a PMU is to measure and report the synchrophasors at widely dispersed locations in the power system network. The main PMU features in terms of evaluation and reporting are described in the IEEE C37.118 document. A Phasor Data Concentrator (PDC) receives the data streams from PMUs, synchronizes them, and provides an output data stream with a defined reporting rate. The paper describes a developed software suite which simulates a set of PMUs and a Phasor Data Concentrator (PDC). The aim of this work was to create an educative implementation of a PMU and a PDC in the graphical programming language LabVIEW. This software enables the creation of a virtual synchrophasor monitoring network without using any hardware other than a PC and can be used as a teaching tool.",2012,0, 5987,Transient stability assessment of synchronous generator in power system with high-penetration photovoltaics,"As photovoltaic (PV) capacity in the power system increases, the capacity of synchronous generators needs to be reduced relatively. This leads to lower system inertia and higher generator reactance, and hence the generator transient stability may be negatively affected. In particular, the impact on the transient stability may become more serious when considerable amounts of PV systems are disconnected simultaneously during a voltage sag. In this work, the generator transient stability in a power system with significant PV penetration is assessed by a numerical simulation. In order to assess the impact from various angles, simulation parameters such as levels of PV penetration, variety of power sources (inverter or rotational machine), and existence of LVRT capability are considered. The simulation is performed using the PSCAD/EMTDC software.",2012,0, 5988,Taking control: Modular and adaptive robotics process control systems,"Robotics systems usually comprise sophisticated sensor and actuator systems with no less complex control applications.
These systems are subject to frequent modifications and extensions and have to adapt to their environment. While automation systems are tailored to particular production processes, autonomous vehicles must adaptively switch their sensors and controllers depending on environmental conditions. However, when designing and implementing the process control system, traditional control theory focuses on the control problem at hand without having this variability in mind. Thus, the resulting models and implementation artefacts are monolithic, additionally complicating the real-time system design. In this paper, we present a modularisation approach for the design of robotics process control systems, which not only aims for variability at design-time but also for adaptivity at run-time. Our approach is based on a layered control architecture, which includes an explicit interface between the two domains involved: control engineering and computer science. Our architecture provides separation of concerns in terms of independent building blocks and data flows. For example, the replacement of a sensor no longer involves the tedious modification of downstream filters and controllers. Likewise, the error-prone mapping of high-level application behaviour to the process control system can be omitted. We validated our approach by the example of an autonomous vehicle use case. Our experimental results demonstrate ease of use and the capability to maintain quality of control on par with the original monolithic design.",2012,0, 5989,SOA-based platform implementing a structural modelling for large-scale system fault detection: Application to a board machine,"This paper presents a tool designed for analysing fault propagation and fault impact on large-scale process performances. The analysis is based on a structural description of the process. The main physical variables are associated with each subsystem and a relational model linking these variables for all the different functioning modes of the system is determined. In large-scale systems, every component must provide a certain function in order to make the overall system work satisfactorily. When a fault or a badly tuned parameter affects a control loop, the required function cannot be fulfilled, which may cause a failure. Therefore, some Loop Performance Indexes (LPI) indicating if the control loops operate properly are necessary to evaluate the impact of the failure on the overall process performances, represented by a high-level index, the Key Performance Index (KPI). Structural models provide an interesting approach for the analysis of a system and also for studying the impact of a fault, because they only need limited knowledge about the behaviour of the system. Generic component models can be used to describe the system architecture. At the first level, different statistical tests are applied to the KPI. When a set of LPI or KPI deviate from their nominal or desired values, the elements which are the source of a possible malfunction can be found in the structural graph by searching the nodes' predecessors. The selected LPI are tested in their turn by means of statistical tests. A node is declared to be faulty if the value of the corresponding LPI is out of the acceptable (pre-defined) limits. The procedure is iterated until the last level of the model is reached. This procedure searches for the possible cause of a significant deviation in the KPI value. The procedure was applied to a board machine.
In this process, the main KPI is the moisture value of the board at the end of the production chain. The corresponding structural model, which relates the moisture (to-node) to the control loops (nodes), has been developed. In order to validate the large-scale capabilities of such an approach, the model has been integrated within PREDICT's SOA (Services Oriented Architecture) software platform: KASEM (Knowledge and Advanced Services for E-Monitoring). The platform enables the on-line statistical tests to be applied to the KPI and LPIs of the board machine and supports the iterative procedure. Indeed, the iterative procedure based on the structural graph was integrated as one of the KASEM diagnostic tools with a dynamic and animated graph and used during the KASEM workflow to solve the problem.",2012,0, 5990,Configurable RTL model for level-1 caches,"Level-1 (L1) cache memories are complex circuits that tightly integrate memory, logic, and state machines near the processor datapath. During the design of a processor-based system, many different cache configurations that vary in, for example, size, associativity, and replacement policies, need to be evaluated in order to maximize performance or power efficiency. Since the implementation of each cache memory is a time-consuming and error-prone process, a configurable and synthesizable model is very useful as it helps to generate a range of caches in a quick and reproducible manner. Comprising both a data and instruction cache, the RTL cache model that we present in this paper has a wide array of configurable parameters. Apart from different cache size parameters, the model also supports different replacement policies, associativities, and data write policies. The model is written in VHDL and fits different processors in ASICs and FPGAs. To show the usefulness of the model, we provide an example of cache configuration exploration.",2012,0, 5991,Integrated didactic software package for computer based analysis of power quality,"The current state and operation of the Romanian power system call for a certain expertise in detecting the causes of electromagnetic perturbations and their evaluation in power grids. Consequently, all the aspects regarding power quality issues became a common characteristic of the power systems' curricula within Romanian power engineering faculties. The students attending these classes are involved in computer-based laboratory work. This paper describes the authors' contribution regarding the development of an integrated software package for power quality analysis at the Faculty of Electrical Engineering, University of Craiova. This software structure is built on a link between professional specialized software packages and software subroutines conceived by the authors. The parameters related to voltage/current harmonics can be analyzed using MATLAB subroutines and the EDSA package - Electrical Power System Design Software. The results can be visualized as different types of reports. They can be further exported to the EDSA program and/or MATLAB subroutines in order to size the harmonic filters and evaluate their effect on power quality in the analyzed power grids.
The EDSA program package, as well as the subroutines developed in the MATLAB environment, are traditional tools used by the students attending the Power Quality classes within the Faculty of Electrical Engineering.",2012,0, 5992,An FPGA-based probability-aware fault simulator,"A recent approach to deal with the challenges that come along with the shrinking feature size of CMOS circuits is probabilistic computing. Those challenges, such as noise or process variations, result in a certain probabilistic behavior of the circuit and its gates. Probabilistic Computing, also referred to as pCMOS, does not try to avoid the occurrence of errors, but tries to determine the probability of errors at the output of the circuit, and to limit it to a value that the specific application can tolerate. Past research has shown that probabilistic computing has the potential to drastically reduce the power consumption of circuits by scaling the supply voltage of gates to a value where they become non-deterministic, while tolerating a certain amount of probabilistic behavior at the output. Therefore, one main task in the design of pCMOS circuits is to determine the error probabilities at the output of the circuit, given a combination of error probabilities at the gates. In earlier work, pCMOS circuits have been characterized by memory-consuming and complex analytical calculations or by time-consuming software-based simulations. Hardware-accelerated emulators exist in large numbers, but lack support for injecting errors with specified probabilities into as many circuit elements as the user specifies at the same time. In this paper, we propose an FPGA-based fault simulator that allows for fast error probability classification and injection of errors at gate- and RT-level, and that is furthermore independent of the target architecture. Moreover, we demonstrate the usefulness of such a simulator by characterizing the probabilistic behavior of two benchmark circuits and revealing their energy-saving capability.",2012,0, 5993,From off-Line to continuous on-line maintenance,"Summary form only given. Software is the cornerstone of modern society. Many human activities rely on software systems that shall operate seamlessly 24/7, and failures in such systems may cause severe problems and considerable economic loss. To efficiently address a growing variety of increasingly complex activities, software systems rely on sophisticated technologies. Most software systems are assembled from modules and subsystems that are often developed by third-party organizations, and sometimes are not even available at system build time. This is the case, for example, of many Web applications that link Web services built and changed independently by third-party organizations while the Web applications are running. The progress of software engineering in the last decades has increased productivity, reduced costs and improved the reliability of software products, but has not eliminated the occurrence of field failures. Detecting and removing all faults before deployment is practically too expensive even for systems that are simple and fully available at design time, and impossible when systems are large and complex, and are dynamically linked to modules that may be developed and distributed only after the deployment of the system.
The classic stop-and-go maintenance approaches that locate and fix field faults offline before deploying new system versions are important, but not sufficient to guarantee a seamless 24/7 behavior, because the faulty systems remain in operation until the faults have been removed and new systems redeployed [1]. On the other hand, classic fault tolerant approaches that constrain developers' freedom and rely on expensive mechanisms to avoid or mask faults do not match the cost requirements of many modern systems, and do not extend beyond the set of safety critical systems [2]. Self-healing systems and autonomic computing tackle these new challenges by moving activities from design to runtime. In self-healing systems, the borderline between design and runtime activities fades, and both design and maintenance activities must change to enable activities such as fault diagnoses and fixes to be performed fully automatically and at runtime. Maintenance activities rely on information that is usually available at design time but is not part of the system runtime infrastructure. For example, corrective maintenance requires some knowledge about the expected system behavior to locate and fix the faults, while adaptive and perfective maintenance requires some knowledge about libraries and components to identify new modules that better cope with the changes in the requirements and in the environment. In classic maintenance approaches, this knowledge is mastered by the developers, who gather and use the required information offline to deal with the emerging maintenance problems. In self-healing systems the knowledge required for maintenance activities shall be available at runtime. Self-healing systems shall be designed with enough embedded knowledge to deal with unplanned events, and shall be able to exploit this information automatically and at runtime to recover from unexpected situations, like field failures. The challenge of designing powerful self-healing systems lies in the ability to minimize the amount of extra knowledge to be provided at design time, while feeding a powerful automatic recovery mechanism. An interesting approach relies on the observation that software systems are redundant by nature, and exploits the intrinsic redundancy of software systems to fix faults, thus minimizing the extra effort required at design time to feed the self-healing mechanism [3]. The intrinsic redundancy of software stems from design and reusability practice: the reuse of libraries may result in different ways to achieve the same or similar results, the design for modularity may produce methods with equivalent behavior, backward compatibility may keep deprecated and new implementations in t",2012,0, 5994,"Leveraging natural language analysis of software: Achievements, challenges, and opportunities","Summary form only given. Studies continue to report that more time is spent reading, locating, and comprehending code than actually writing code. The increasing size and complexity of software systems make it significantly more challenging for humans to perform maintenance tasks on software without automated and semi-automated tools to support them, especially in the error-prone tasks. Thus, software engineers increasingly rely on software engineering tools to automate maintenance tasks as much as possible.
The program analyses that drive today's software engineering tools have historically focused on analyzing the program's data and control flow, dependencies, and other structural information about the program to uncover and prove program properties. Yet, a software system is more than just the source code and its structure. To build effective software tools, the underlying automated analyses need to use all the information available to make the tools as intelligent and useful as possible. By adapting natural language processing (NLP) to source code analysis, and integrating information retrieval (IR), NLP, and traditional program analyses, we can expect significant improvement in automated and semi-automated software engineering tools for many different software engineering tasks. In this talk, I will overview research in text analysis of software and discuss our achievements to date, the challenges faced in text analysis, and the opportunities for text analysis of software in the future.",2012,0, 5995,Finding errors from reverse-engineered equality models using a constraint solver,"Java objects are required to honor an equality contract in order to participate in standard collection data structures such as List, Set, and Map. In practice, the implementation of equality can be error-prone, resulting in subtle bugs. We present a checker called EQ that is designed to automatically detect such equality implementation bugs. The key to EQ is the automated extraction of a logical model of equality from Java code, which is then checked, using Alloy Analyzer, for contract conformance. We have evaluated EQ on four open-source, production code bases in terms of both scalability and usefulness. We discuss in detail the detected problems, their root causes, and the reasons for false alarms.",2012,0, 5996,Assessing the effect of requirements traceability for software maintenance,"Advocates of requirements traceability regularly cite advantages like easier program comprehension and support for software maintenance (i.e., software change). However, despite its growing popularity, there exists no published evaluation of the usefulness of requirements traceability. It is important, if not crucial, to investigate whether the use of requirements traceability can significantly support development tasks to eventually justify its costs. We thus conducted a controlled experiment with 52 subjects performing real maintenance tasks on two third-party development projects: half of the tasks with and the other half without traceability. Our findings show that subjects with traceability performed on average 21% faster on a task and created on average 60% more correct solutions - suggesting that traceability not only saves downstream cost but can profoundly improve software maintenance quality. Furthermore, we aimed for an initial cost-benefit estimation and set the measured time reductions by using traceability in relation to the initial costs for setting up traceability in the evaluated systems.",2012,0, 5997,Modelling the Hurried bug report reading process to summarize bug reports,"Although bug reports are frequently consulted project assets, they are communication logs, by-products of bug resolution, and not artifacts created with the intent of being easy to follow. To facilitate bug report digestion, we propose a new, unsupervised, bug report summarization approach that estimates the attention a user would hypothetically give to different sentences in a bug report, when pressed for time.
We pose three hypotheses on what makes a sentence relevant: discussing frequently discussed topics, being evaluated or assessed by other sentences, and keeping focused on the bug report's title and description. Our results suggest that our hypotheses are valid, since the summaries have as much as 12% improvement in standard summarization evaluation metrics compared to the previous approach. Our evaluation also asks developers to assess the quality and usefulness of the summaries created for bug reports they have worked on. Feedback from developers not only shows the summaries are useful, but also points out important requirements for this, and any, bug summarization approach, and indicates directions for future work.",2012,0, 5998,Domain specific warnings: Are they any better?,"Tools to detect coding standard violations in source code are commonly used to improve code quality. One of their original goals is to prevent bugs, yet a high number of false positives is generated by the rules of these tools, i.e., most warnings do not indicate real bugs. There is empirical evidence supporting the intuition that the rules enforced by such tools do not prevent the introduction of bugs in software. This may occur because the rules are too generic and do not focus on domain specific problems of the software under analysis. We carried out an investigation of rules created for a specific domain based on expert opinion to understand if such rules are worthwhile enforcing in the context of defect prevention. In this paper, we performed a systematic study to investigate the relation between generic and domain specific warnings and observed defects. From our experiment on a real case, a long-term evolution software system, we have found that domain specific rules provide better defect prevention than generic ones.",2012,0, 5999,A structured approach to assess third-party library usage,"Modern software systems build on a significant number of external libraries to deliver feature-rich and high-quality software in a cost-efficient and timely manner. As a consequence, these systems contain a considerable amount of third-party code. External libraries thus have a significant impact on maintenance activities in the project. However, most approaches that assess the maintainability of software systems largely neglect this important factor. Hence, risks may remain unidentified, threatening the ability to effectively evolve the system in the future. We propose a structured approach to assess the third-party library usage in software projects and identify potential problems. Industrial experience strongly influences our approach, which we designed in a lightweight way to enable easy adoption in practice. We present an industrial case study showing the applicability of the approach to a real-world software system.",2012,0, 6000,Time-leverage point detection for time sensitive software maintenance,"Correct real-time behavior is an important aspect for time-sensitive software, but it is difficult to get right. Time faults can be introduced not just during software development but also during maintenance. So software maintainers without time information tend to have more chances to introduce unintended time behaviors. In this paper, we propose time change impact analysis to help maintainers estimate the potential influence of time changes on programs before the software evolves.
Our main insight is that by being reminded and warned that a small time change at some places in the source code will largely affect the whole task execution time, maintainers can be more cautious when updating such places. Because these places have a leverage effect that multiplies the task execution time in a subtle way, we call them time-leverage points. We give an approach to detect the time-leverage points based on a dynamic testing method, which instruments the program at a point for introducing a small delay and observes its impact on the task execution time. We implement a prototype tool and empirically evaluate the approach.",2012,0, 6001,Move code refactoring with dynamic analysis,"In order to reduce coupling and increase cohesion, we refactor program source code. Previous research efforts for suggesting candidates of such refactorings are based on static analysis, which obtains relations among classes or methods from source code. However, these approaches cannot obtain runtime information such as the repetition count of a loop, dynamic dispatch and the actual execution path. Therefore, previous approaches might miss some refactoring opportunities. To tackle this problem, we propose a technique to find refactoring candidates by analyzing method traces. We have implemented a prototype tool based on the proposed technique and evaluated the technique on two software systems. As a result, we confirmed that the proposed technique could detect some refactoring candidates, which increase code quality.",2012,0, 6002,Applying technical stock market indicators to analyze and predict the evolvability of open source projects,"For decades, stock market traders have made financially critical buy/sell decisions depending on external and internal factors affecting the price of individual stocks. Moving Averages combined with technical analysis patterns are some of the most basic and widely used indicators to support buy/sell decisions. In this research, we present a novel cross-disciplinary approach that uses these technical stock market indicators for analyzing the community evolvability of open source software systems.",2012,0, 6003,Adapting Linux for mobile platforms: An empirical study of Android,"To deliver a high quality software system in a short release cycle time, many software organizations choose to reuse existing mature software systems. Google has adapted one of the most reused computer operating systems (i.e., Linux) into an operating system for mobile devices (i.e., Android). The Android mobile operating system has become one of the most popular adaptations of the Linux kernel with approximately 60 million new mobile devices running Android each year. Despite many studies on Linux, none have investigated the challenges and benefits of reusing and adapting the Linux kernel to mobile platforms. In this paper, we conduct an empirical study to understand how Android adapts the Linux kernel. Using software repositories from Linux and Android, we assess the effort needed to reuse and adapt the Linux kernel into Android. Results show that (1) only 0.7% of files from the Linux kernel are modified when reused for a mobile platform; (2) only 5% of Android files are affected by the merging of changes on files from the Linux repository to the Android repository; and (3) 95% of bugs experienced by users of the Android kernel are fixed in the Linux kernel repository.
These results can help development teams to better plan software adaptations.",2012,0, 6004,The demacrofier,"C++ programs can be rejuvenated by replacing error-prone usage of C Preprocessor macros with type-safe C++11 declarations. We have developed a classification of macros that directly maps to corresponding C++11 expressions, statements, and declarations. We have built a set of tools that replaces macros with equivalent C++ declarations and iteratively introduces the refactorings into the software build.",2012,0, 6005,Improving Coverage-Based Localization of Multiple Faults Using Algorithms from Integer Linear Programming,"Coverage-based fault localization extends the utility of testing from detecting the presence of faults to their localization. While coverage-based fault localization has shown good evaluation results for the single fault case, its ability to localize several faults at once appears to be limited. In this paper, we show how two partitioning procedures borrowed from integer linear programming can help improve the accuracy of standard coverage-based fault locators in the presence of multiple faults by breaking down the localization problem into several smaller ones that can be dealt with independently. Experimental results suggest that our approach is indeed useful, the more so as its cost appears to be negligible.",2012,0, 6006,What Is System Hang and How to Handle It,"Almost every computer user has encountered an unresponsive system failure or system hang, which leaves the user no choice but to power off the computer. In this paper, the causes of such failures are analyzed in detail and one empirical hypothesis for detecting system hang is proposed. This hypothesis exploits a small set of system performance metrics provided by the OS itself, thereby avoiding modifying the OS kernel and introducing additional cost (e.g., hardware modules). Under this hypothesis, we propose SHFH, a self-healing framework to handle system hang, which can be deployed on the OS dynamically. One unique feature of SHFH is that its ""light-heavy"" detection strategy is designed to make intelligent tradeoffs between the performance overhead and the false positive rate induced by system hang detection. Another feature is that its diagnosis-based recovery strategy offers a better granularity to recover from system hang. Our experimental results show that SHFH can cover 95.34% of system hang scenarios, with a false positive rate of 0.58% and 0.6% performance overhead, validating the effectiveness of our empirical hypothesis.",2012,0, 6007,A Light-Weight Defect Classification Scheme for Embedded Automotive Software and Its Initial Evaluation,"Objective: Defect classification is an essential part of software development process models as a means of early identification of patterns in defect inflow profiles. Such classification, however, may often be a tedious task requiring analysis work in addition to what is necessary to resolve the issue. To increase classification efficiency, adapted schemes are needed. In this paper a light-weight defect classification scheme adapted for minimal process footprint -- in terms of learning and classification effort -- is proposed and initially evaluated. Method: A case study was conducted at Volvo Car Corporation to adapt the IEEE Std. 1044 for automotive embedded software. An initial evaluation was conducted by applying the adapted scheme to defects from an existing software product with industry professionals as subjects.
Results: The results showed that the classification scheme was quick to learn and understand -- required classification time stabilized around 5-10 minutes already after practicing on 3-5 defects. The results also showed that the patterns in the classified defects were interesting for the professionals, although in order to apply statistical methods more data was needed. Conclusions: We conclude that the adapted classification scheme captures what is currently tacit knowledge and has the potential of revealing patterns in the defects detected in different project phases. Furthermore, we were, in the initial evaluation, able to contribute with new information about the development process. As a result we are currently in the process of incorporating the classification scheme into the company's defect reporting system.",2012,0, 6008,Static Analysis of Model Transformations for Effective Test Generation,"Model transformations are an integral part of several computing systems that manipulate interconnected graphs of objects called models in an input domain specified by a metamodel and a set of invariants. Test models are used to look for faults in a transformation. A test model contains a specific set of objects, their interconnections and values for their attributes. Can we automatically generate an effective set of test models using knowledge from the transformation? We present a white-box testing approach that uses static analysis to guide the automatic generation of test inputs for transformations. Our static analysis uncovers knowledge about how the input model elements are accessed by transformation operations. This information is called the input metamodel footprint due to the transformation. We transform footprint, input metamodel, its invariants, and transformation pre-conditions to a constraint satisfaction problem in Alloy. We solve the problem to generate sets of test models containing traces of the footprint. Are these test models effective? With the help of a case study transformation we evaluate the effectiveness of these test inputs. We use mutation analysis to show that the test models generated from footprints are more effective (97.62% avg. mutation score) in detecting faults than previously developed approaches based on input domain coverage criteria (89.9% avg.) and unguided generation (70.1% avg.).",2012,0, 6009,Oracle-Centric Test Case Prioritization,"Recent work in testing has demonstrated the benefits of considering test oracles in the testing process. Unfortunately, this work has focused primarily on developing techniques for generating test oracles, in particular techniques based on mutation testing. While effective for test case generation, existing research has not considered the impact of test oracles in the context of regression testing tasks. Of interest here is the problem of test case prioritization, in which a set of test cases are ordered to attempt to detect faults earlier and to improve the effectiveness of testing when the entire set cannot be executed. In this work, we propose a technique for prioritizing test cases that explicitly takes into account the impact of test oracles on the effectiveness of testing. Our technique operates by first capturing the flow of information from variable assignments to test oracles for each test case, and then prioritizing to ``cover'' variables using the shortest paths possible to a test oracle. As a result, we favor test orderings in which many variables impact the test oracle's result early in test execution. 
Our results demonstrate improvements in rate of fault detection relative to both random and structural coverage based prioritization techniques when applied to faulty versions of three synchronous reactive systems.",2012,0, 6010,The Nature of the Times to Flight Software Failure during Space Missions,"The growing complexity of mission-critical space mission software makes it prone to suffer failures during operations. The success of space missions depends on the ability of the systems to deal with software failures, or to avoid them in the first place. In order to develop more effective mitigation techniques, it is necessary to understand the nature of the failures and the underlying software faults. Based on their characteristics, software faults can be classified into Bohrbugs, non-aging-related Mandelbugs, and aging-related bugs. Each type of fault requires different kinds of mitigation techniques. While Bohrbugs are usually easy to fix during development or testing, this is not the case for non-aging-related Mandelbugs and aging-related bugs due to their inherent complexity. Systems need mechanisms like software restart, software replication or software rejuvenation to deal with failures caused by these faults during the operational phase. In a previous study, we classified space mission flight software faults into the three above-mentioned categories based on problems reported during operations. That study concentrated on the percentages of the faults of each type and the variation of these percentages within and across different missions. This paper extends that work by exploring the nature of the times to software failure due to Bohrbugs and non-aging-related Mandelbugs for eight JPL/NASA missions. We start by applying trend tests to the times to failure to check if there is any reliability growth (or decay) for each type of failure. For those times to failure sequences with no trend, we fit distributions to the data sets and carry out goodness-of-fit tests. The results will be used to guide the development of improved operational failure mitigation techniques, thereby increasing the reliability of space mission software.",2012,0, 6011,On the Use of Boundary Scan for Code Coverage of Critical Embedded Software,"Code coverage tools are becoming increasingly popular as valuable aids in assessing and improving the quality of software structural tests. For some industries, such as aeronautics or space, they are mandatory in order to comply with standards and to help reduce the validation time of the applications. These tools usually rely on code instrumentation, thus introducing important time and memory overheads that may jeopardize its applicability to embedded and real-time systems. This paper explores the use of IEEE 1149.1 (boundary scan) infrastructure and on-chip debugging facilities from embedded processors for collecting the program execution trace during tests, without the introduction of any extra code, and then extracting detailed code coverage analysis and profiling information. We are currently developing an extension to the csXception tool to include such capabilities, in order to study the advantages, difficulties and impediments of using boundary scan for code coverage.",2012,0, 6012,Using Non-redundant Mutation Operators and Test Suite Prioritization to Achieve Efficient and Scalable Mutation Analysis,"Mutation analysis is a powerful and unbiased technique to assess the quality of input values and test oracles. 
However, its application domain is still limited due to the fact that it is a time-consuming and computationally expensive method, especially when used with large and complex software systems. Addressing these challenges, this paper makes several contributions to significantly improve the efficiency of mutation analysis. First, it investigates the decrease in generated mutants by applying a reduced, yet sufficient, set of mutants for replacing conditional (COR) and relational (ROR) operators. The analysis of ten real-world applications, with 400,000 lines of code and more than 550,000 generated mutants in total, reveals a reduction in the number of mutants created of up to 37% and more than 25% on average. Yet, since the isolated use of non-redundant mutation operators does not ensure that mutation analysis is efficient and scalable, this paper also presents and experimentally evaluates an optimized workflow that exploits the redundancies and runtime differences of test cases to reorder and split the corresponding test suite. Using the same ten open-source applications, an empirical study convincingly demonstrates that the combination of non-redundant operators and prioritization leveraging information about the runtime and mutation coverage of tests reduces the total cost of mutation analysis further by as much as 65%.",2012,0, 6013,Mutation Testing of Event Processing Queries,"Event processing queries are intended to process continuous event streams. These queries are partially similar to traditional SQL queries, but provide the facilities to express rich features (e.g., pattern expression, sliding window of length and time). An error while implementing a query may result in abnormal program behaviors and lost business opportunities. Moreover, queries can be generated with unsanitized inputs and the structure of intended queries might be altered. Thus, a tester needs to test the behavior of queries in the presence of malicious inputs. Mutation testing has been found to be effective for assessing test suite quality and generating new test cases. Unfortunately, there has been no effort to perform mutation testing of event processing queries. In this work, we propose mutation-based testing of event processing queries. We choose Event Processing Language (EPL) as our case study and develop the necessary mutation operators and killing criteria to generate high quality event streams and malicious inputs. Our proposed operators modify different features of EPL queries (pattern expression, windows of length and time, batch processing of events). We develop an architecture to generate mutants for EPL and perform mutation analysis. We evaluate our proposed EPL mutation testing approach with a developed benchmark set containing diverse types of EPL queries. The evaluation results indicate that the proposed operators and mutant killing criteria are effective for generating test cases capable of revealing anomalous program behaviors (e.g., event notification failure, delay of event reporting, unexpected event), and SQL injection attacks. Moreover, the approach incurs less manual effort and can complement other testing approaches such as random testing.",2012,0, 6014,"An Empirical Study of the Effectiveness of ""Forcing"" Diversity Based on a Large Population of Diverse Programs","Use of diverse software components is a viable defence against common-mode failures in redundant software-based systems.
Various forms of ""Diversity-Seeking Decisions"" (""DSDs"") can be applied to the process of developing, or procuring, redundant components, to improve the chances of the resulting components not failing on the same demands. An open question is how effective these decisions, and their combinations, are for achieving large enough reliability gains. Using a large population of software programs, we studied experimentally the effectiveness of specific ""DSDs"" (and their combinations) mandating differences between redundant components. Some of these combinations produced much better improvements in system probability of failure per demand (PFD) than ""uncontrolled"" diversity did. Yet, our findings suggest that the gains from such ""DSDs"" vary significantly between them and between the application problems studied. The relationship between DSDs and system PFD is complex and does not allow for simple universal rules (e.g. ""the more diversity the better"") to apply.",2012,0, 6015,Software at Scale for Building Resilient Wireless Sensor Networks,"Wireless Sensor Networks (WSNs) are widely recognized as a promising solution to build next-generation monitoring systems. Their industrial uptake is however still compromised by the low level of trust on their performance and dependability. Whereas analytical models represent a valid means to assess nonfunctional properties via simulation, their wide use is still limited by the complexity and dynamicity of WSNs, which lead to unaffordable modeling costs. This paper proposes an approach to characterize the resiliency of WSN software to failures. The focus is on providing a procedure and related tools to assess i) how the node software, hardware platforms, topology and routing protocols impact on the failure behavior of nodes and of the network, and, vice-versa, ii) how the failure of a node mutates the behavior of the running software and routing protocol. The approach adopts a software characterization process which is based on i) Failure Mode and Effect Analysis, ii) automated fault injection experiments, iii) high-level description of the fault tolerant mechanisms of WSN software in a proposed framework.",2012,0, 6016,Early Performance Estimation for Industrial Component-Based Design of Reliable Software Defined Radio System,"The growing complexity of software applications, combined with increasing reliability requirements and constant quality and time-to-market constraints, creates new challenges for performance engineering practices in the area of real-time embedded systems. It is namely expected that delivered products combine timing guarantees with fault tolerant behavior, e.g. by switching to fault tolerant modes in case of errors, while respecting strict real-time requirements. When developing such real-time systems according to a traditional application of the ""Va""-cycle, performance verification and validation activities start only when development and integration are completed. As a consequence, performance issues are detected at a late stage. At this time, they are more difficult and expensive to fix. At Thales, we have therefore focused on the automation of performance engineering activities and their application as early as possible in the industrial design process of reliable Software Defined Radio (SDR) systems, as a means to shorten the design time and reduce risks of timing failures.
The SDR particularity consists in implementing the signal modulation of a communication radio using software rather than hardware. It is namely much easier to reprogram the modulation carrier in order to fit different situations with the same hardware radio (e.g. each national army typically uses specific waveforms to guarantee the confidentiality of communications). The implementation of a reliable software waveform can be a difficult task, as it involves real-time constraints as well as embedded aspects while dealing with complex algorithms including those for fault recovery. Thales created a software framework family, named MyCCM (Make your Component Container Model), to support the implementation of such real-time embedded software. MyCCM is a tailorable component based design approach that takes inspiration from the Lightweight Component Container Model (LwCCM) defined by the OMG. It implements the concept of functional components that encapsulate algorithms. The components correspond to passive code controlled by an underlying runtime, and are connected through communication ports to create a complete application. This allows the construction of applications by assembling independent functional components. It also enforces the separation of concerns between the functional aspects described by component ports and the non functional elements that stay outside the components (message chaining across the whole component architecture, FIFO sizes, task priorities, communications mechanisms, execution times, etc). MyCCM models can be designed either using Thales internal modeling tools or using UML modelers with plain UML. Our developed framework for the early performance estimation of SDR MyCCM models is represented in the figure below. The first step consists in extending the SDR MyCCM model with performance properties, i.e. timing and behavior characteristics of the application (e.g. execution times and activation frequencies for threads and methods, data dependencies and communication protocols between threads) and execution characteristics of the hardware platform (e.g. speed, scheduling policy, etc). The OMG standard MARTE is a key language for this purpose. However, due to the complexity of its syntax, it may result in very complex and confusing diagrams and models. We have therefore adapted the MARTE syntax based on the Thales SDR designers' feedback, thus allowing representing the performance properties in an easier and much more intuitive manner. We have opted for scheduling analysis techniques for the performance estimation of the extended MyCCM models, which is the next step in our framework. These techniques are well adapted for this purpose, since they rely on an abstraction of the timing relevant characteristics and behaviors. From these characteristics, the scheduling analysis systematically derives worst-case scheduling scenarios, and timi
If the release quality prediction models, developed early in the development branches integration phase, indicate a likely upcoming quality problem in the field, another set of predictive models ('playbook' models) are then developed and used by our team to identify development or test practices that are in need of improvement. These playbook models are key components of what we call 'quality playbooks,' that are designed to address several objectives: . Identify 'levers' that positively influence feature release quality. Levers are in-process engineering metrics that are associated with specific development or test processes/practices and measure their adoption and effectiveness. . If possible, identify levers that can be invoked early in the lifecycle, to enable the development and test teams to improve deficient practices and remediate the current release under development. If it is not possible to identify early levers but possible to identify levers later in the lifecycle, we can only change deficient practices to improve the quality of future successor releases. . Determine the potential quality impact of changes suggested by the profile of significant levers. Low impact levers are likely not to be addressed by development teams. . Determine the resource and schedule investments needed to change and implement practices: Training, disruption, additional engineering time, etc. . Using impact and investment calculations identify which practices to change, either for the current release or just for subsequent releases. Develop a prioritization/ROI scheme to provide planning guidance to development and test teams. . Identify specific practice changes needed, or new practices to adopt. . Design and plan pilot programs to test the models, including the impact and investment components. Using this 'playbook' approach, our team has developed models for 31 major feature releases that are resident on 11 different hardware platforms. These models have identified six narrowly-defined classes of metrics that include both actionable levers and 'indicator' metrics that correlate well with release quality. (Indicator metrics do also correlate well, but are less specifically actionable.) The models for these six classes of metrics (and their associated practices) include strong levers and strong indicators for all releases and platforms thus far examined. Impact and investment results are also described in this paper, as are pilot programs that have tested the validity of the modeling and business calculation results. 
Two additional large-scale pilots of the 'playbook' approach are underway, and these are also described.",2012,0, 6018,Assessing Product Quality through PMR Analysis: A Perspective,"Success of a software product in the marketplace is defined by quality of the product, among other factors. Assessing the current quality of the product is essential before it can be improved. The assessment needs to be accurate in order to have the right action plan to improve quality. Test methods and code reviews used to identify quality gaps in pre-release software are inadequate. Product quality can be assessed by analysing PMRs (Problem Management Reports) in post-release software: a) A PMR is a well defined document that is used to report a customer found issue with the product and track the customer issue to closure. b) Analysis of PMRs opened in the past is a clear indication of: i) product components that are most commonly used and have the most bugs, classified by sub-component, versus those least commonly used; ii) sub-components to be focused on, to provide a visible improvement in the overall product quality. c) PMR analysis improves the accuracy of the assessment as it is based on actual product usage, which in turn improves the accuracy / relevance of the action plan formulated to improve quality. The paper outlines the quality assessment of a software product through PMR analysis.",2012,0, 6019,The Effect of Testability on Fault Proneness: A Case Study of the Apache HTTP Server,"Numerous studies have identified measures that relate to the fault-proneness of software components. An issue practitioners face in implementing these measures is that the measures tend to provide predictions at a very high level, for instance the per-module level, so it is difficult to provide specific recommendations based on those predictions. We examine a more specific measure, called software testability, based on work in test case generation. We discuss how it could be used to make more specific code improvement recommendations at the line-of-code level. In our experiment, we compare the testability of fault prone lines with unchanged lines. We apply the experiment to Apache HTTP Server and find that developers more readily identify faults in highly testable code. We then compare testability as a fault proneness predictor to McCabe's cyclomatic complexity and find testability has higher recall.",2012,0, 6020,Debugging Spreadsheets: A CSP-based Approach,"Despite being staggeringly error prone, spreadsheets can be viewed as a highly flexible end-user programming environment. As a consequence, spreadsheets are widely adopted for decision making, and may have a serious economical impact for the business. Hence, approaches for aiding the process of pinpointing the faulty cells in a spreadsheet are of great value. We present a constraint-based approach, CONBUG, for debugging spreadsheets. The approach takes as input a (faulty) spreadsheet and a test case that reveals the fault and computes a set of diagnosis candidates for the debugging problem we are trying to solve. To compute the set of diagnosis candidates we convert the spreadsheet and test case to a constraint satisfaction problem. From our experimental results, we conclude that CONBUG can be of added value for the end user to pinpoint faulty cells.",2012,0, 6021,Predicting Data Dependences for Slice Inspection Prioritization,"Data dependences play a central role in program debugging and comprehension. 
They serve as building blocks for program slicing and statistical fault localization, among other debugging approaches. Unfortunately, static analysis reports many data dependences that, in reality, are infeasible or unlikely to occur at runtime. This phenomenon is exacerbated by the extensive use of pointers and object-oriented features in modern software. Dynamic analysis, in contrast, reports only data dependences that occur in an execution but misses all other dependences that can occur in the program. To tackle the imprecision of data-dependence analysis, we present a novel static analysis that predicts the likelihood of occurrence of data dependences. Although it is hard to predict execution frequencies accurately, our preliminary results suggest that our analysis can distinguish the data dependences most likely to occur from those less likely to occur, which helps engineers prioritize their inspection of dependences in slices. These are promising results that encourage further research.",2012,0, 6022,Wielding Statistical Fault Localization Statistically,"Program debugging is a laborious but necessary phase of software development. It generally consists of fault localization, bug fix, and regression testing. Statistical software fault localization automates the manual and error-prone first task. It predicts fault locations by analyzing dynamic program spectrum captured in program runs. Previous studies mostly focused on how to provide reliable input data to such a technique and how to process the data accurately, but inadequately studied how to wield the output result of such a technique. In this work, we raise the assumption of symmetric distribution on the effectiveness of such a technique in locating faults, based on empirical results. We use maximum likelihood estimate and linear programming to develop a tuning method to enhance the result of a statistical fault localization technique. Experiments with two representative such techniques on two realistic UNIX utility programs validate our assumption and show our method effective.",2012,0, 6023,Automated Risk-Based Testing by Integrating Safety Analysis Information into System Behavior Models,"The development of safety-critical software-intensive systems requires systematic quality assurance on all stages of the development process. Executable development artifacts are validated against the system specifications. Risk-based test approaches enable the distribution of test effort in a specific way to cover critical system parts, functions, and requirements. The development process of safety-critical systems usually implies analysis activities for determining and understanding hazards and risks. Moreover, it requires a systematic design of the system structure and behavior based on the specification. For achieving a high degree of automation of test case derivation, existing formal models from the risk analysis and system design phases are combined. The approach presented here focuses on integration of fault trees into state-based behavior models. Therefore, fault trees are analyzed and their elements are assessed for their validity and significance for the test modeling. The approach systematically transforms the relevant fault tree elements like single critical basic events, system states, or sequences of events into elements of the state-based behavior model. The resulting model enables the automated generation of test cases considering risk-based test purposes such as the coverage of critical states, transitions, or sub-models. 
The feasibility of the approach is shown in a small case study.",2012,0, 6024,Assessing AUTOSAR Systems Using Fault Injection,"This fast abstract introduces a fault injection approach that achieves the mentioned objective while facing a number of facts and technical limitations discussed onwards. While commercially available AUTOSAR basic software (BSW) implementations are certified and ISO 26262 compliant, third party hardware and application software might not have gone through the same rigorous and extensive non-simulation-based validation activities. Also, AUTOSAR was built without explicitly taking fault injection needs into account, which resulted in the lack of required accessibility to either hardware or software interfaces in order to support the injection of faults.",2012,0, 6025,An LED Monitoring System Based on the Real-Time Power Consumption Detection Technology,"A kind of LED lighting system using the AD7755, a microcontroller and Ethernet drives is presented to control multiple LEDs in remote places and detect the power consumption of the system. The hardware and software designs are also described. The system can not only adjust the three-color LED light group brightness of each node through the host computer, but also get the energy consumption data of each node in time. All the information can be processed in the host computer. The results of experiments show that the system works steadily and has good communication quality, achieving the purpose of controlling LED light brightness and monitoring LED light power consumption.",2012,0, 6026,Notice of Violation of IEEE Publication Principles
Services Selection of Transactional Property for Web Service Composition,"Notice of Violation of IEEE Publication Principles

""Services Selection of Transactional Property for Web Service Composition""
by Guojun Zhang
in the Proceedings of the Eighth International Conference on Computational Intelligence and Security, November 2012, pp. 605-608

After careful and considered review of the content and authorship of this paper by a duly constituted expert committee, this paper has been found to be in violation of IEEE's Publication Principles.

This paper has copied significant portions of the original text from the paper cited below. The original text was copied without attribution (including appropriate references to the original author(s) and/or paper title) and without permission.

""TQoS: Transactional and QoS-Aware Selection Algorithm for Automatic Web Service Composition""
by Joyce El Haddad, Maude Manouvrier, and Marta Rukoz
in the IEEE Transactions on Services Computing, Vol 3, No. 1, March 2010, pp. 73-85

Web service composition enables seamless and dynamic integration of business applications on the web. Due to the inherent autonomy and heterogeneity of component Web services it is difficult to predict the behavior of the overall composite service. Therefore, transactional properties are crucial for selecting the web services to take part in the composition. Transactional properties ensure reliability of the composite Web service. In this paper we propose a novel selection approach based on transactional properties to ensure reliability. We build a model to implement transaction-aware service selection, and use the model to guarantee reliable execution of the composite Web service. We evaluate our approach experimentally using both real and synthetically generated datasets.",2012,0, 6027,Improved Differential Fault Analysis of SOSEMANUK,"We present a more efficient differential fault analysis (DFA) attack on SOSEMANUK, a new synchronous software-oriented stream cipher, which is contained in the current eSTREAM Portfolio. In the previous study, around 6144 faults, 2^48 SOSEMANUK iterations and 2^38.17 bytes of storage are required to recover the secret inner state of the cipher. We offer an improved attack and show that only around 4608 faults, 2^35.16 SOSEMANUK iterations and 2^23.46 bytes of storage are needed under the same or even weaker fault model. The simulation results of the proposed attack show that it takes about 11.35 hours when using a PC.",2012,0, 6028,Streamlining Service Levels for IT Infrastructure Support,"For IT Infrastructure Support (ITIS), it is crucial to identify opportunities for reducing service costs and improving service quality. We focus on streamlining service levels i.e., finding the right resolution level for each ticket, to reduce time, efforts and cost for ticket handling, without affecting workloads and user satisfaction. We formalize this problem and present two statistics-based search algorithms for identifying problems suitable for left-shift (from expensive, expertise intensive L2 level to cheaper, simpler L1 level) and right-shift (from L1 to L2). The approach is domain-driven: it produces directly usable and often novel results, without any trial-and-error experimentation, along with detailed justifications and predicted impacts. This helps in acceptance among end-users and more active use of the results. We discuss one real-life case-study of results produced by the algorithms.",2012,0, 6029,Independent Assessment of Safety-Critical Systems: We Bring Data!,"Safety-critical systems are systems where failures lead to catastrophic results: resulting in loss of life, significant property damage, or damage to the environment. These systems range from aerospace on-board control systems, ground flight control systems, medical devices, nuclear power plant control, automotive systems, military systems, just to name a few. Other information systems are becoming more and more ""safety-critical"" due to the financial impacts of failures and the fact that human lives depend on them. The constant technology evolution is making these systems more complex and more common, and we depend more on them. Thus, we need to guarantee maximum dependability and safety properties with better processes and tools. The systems complexity is usually boosted by the flexibility of software, and thus software is becoming both a solution and a problem. 
Keeping dependability of software at the highest level requires evolutions that cover the processes and tools and all the development/qualification life-cycle phases: Specification; Architecture; Coding; Verification and Validation. Independent (software) verification and validation (ISVV) activities have been used and evolving since the seventies to ensure high safety and dependability and to take advantage of organizational and technical independence by avoiding biased assessments. Critical Software has been involved in developing and applying ISVV methods and techniques since the early 2000's, and this experience collected a significant amount of data that cover different domains: space on-board and ground systems, aeronautics, transportation, financial and banking, amongst others. This industrial paper will not cover the technical details of the processes, methods and tools applied, but will instead present important metrics and subsequent findings from the collected data. The results presented cover both technical issues found in each phase of the development life-cycle and the effort required to perform the independent assessments. The outcome is interesting since it allows comparing data between similar or different industries (process/organizational maturities), same and different domains/criticalities, software developed either by experienced industrial partners or less experienced, life-cycle phase where more problems are detected or where they are easily detected, software/system dependability evolution after the first assessments, efficiency of the applied techniques and return on investment according to consultants' level, initial systems maturity, criticality, project life-cycle phase, etc. All these factors can be analyzed from the collected metrics, and we can also conclude on the number of non-common issues found, that include abnormal behavior of the systems (for example under non-nominal conditions), significant organizational factors (yes, they are really important) or human factors (operator related risks, security threats, etc). A study from Johnson and Holloway over some of the major aviation and maritime accidents in North America during 1996-2006 concluded that the proportion of causal and contributory factors related to organizational issues exceeded those due to human errors. The study showed that the causal and contributory factors in the USA aviation accidents have the following distribution: 48% is related to organizational factors, the equivalent human factors represented 37%, equipment factors represented 12%, other causes represented 3%. The same exercise for maritime accidents classified: 53% due to organizational factors, 24-29% as human error, 10-19% to equipment failures, and 2-4% as other causes. The data presented and analyzed in this industrial paper comes from dozens of projects, and originated over 3000 issues. This article will present the facts related to safety-critical software development quality metrics performed by independent assessments of quite mature systems, and will also infer return on investment (R",2012,0, 6030,AFD: Adaptive failure detection system for cloud computing infrastructures,"Cloud computing has become increasingly popular by obviating the need for users to own and maintain complex computing infrastructure. However, due to their inherent complexity and large scale, production cloud computing systems are prone to various runtime problems caused by hardware and software failures. 
Autonomic failure detection is a crucial technique for understanding emergent, cloud-wide phenomena and self-managing cloud resources for system-level dependability assurance. To detect failures, we need to monitor the cloud execution and collect runtime performance data. These data are usually unlabeled, and thus a prior failure history is not always available in production clouds, especially for newly managed or deployed systems. In this paper, we present an Adaptive Failure Detection (AFD) framework for cloud dependability assurance. AFD employs data description using hypersphere for adaptive failure detection. Based on the cloud performance data, AFD detects possible failures, which are verified by the cloud operators. They are confirmed as either true failures with failure types or normal states. AFD adapts itself by recursively learning from these newly verified detection results to refine future detections. Meanwhile, AFD exploits the observed but undetected failure records reported by the cloud operators to identify new types of failures. We have implemented a prototype of the AFD system and conducted experiments in an on-campus cloud computing environment. Our experimental results show that AFD can achieve more efficient and accurate failure detection than other existing schemes.",2012,0, 6031,Novel approach for Interference Management in cognitive radio,"Cognitive radio, based on software-defined radio, is an emerging wireless communication system today. It is regarded as a fifth generation (5G) mobile system. Cognitive radio research can be categorized into three parts, which are Spectrum Management, Intelligence Management and Interference Management. Spectrum management detects white space and manages different spectrum issues between primary and secondary users. Intelligence management, on the other hand, uses different artificial intelligence techniques such as Neural Networks, Rule-based systems, Genetic Algorithms, etc. in order to develop an efficient cognitive engine. Finally, Interference management focuses on implementation issues of cognitive radio by dealing with channel awareness, link quality and resource allocation, which mainly depend on the right choice of transmit power. This paper proposes an algorithm based on MAC scheduling techniques and uses two parallel processes to control transmit power. The first process goes through different mathematical calculations of SINR, channel capacity, etc. of all links between different radio nodes within a network. The second process deals with environmental conditions with the help of Fuzzy logic. Both processes work together to adjust the value of transmit power of every node in order to mitigate interference and maintain QoS of the service.",2012,0, 6032,Region-Based perceptual quality regulable bit allocation and rate control for video coding applications,"In this paper, a perceptual quality regulable H.264 video encoder system has been developed. We use structure similarity index as the quality metric for distortion-quantization modeling and develop a bit allocation and rate control scheme for enhancing regional perceptual quality. Exploiting the relationship between the reconstructed macroblock and its best predicted macroblock from mode decision, a novel quantization parameter prediction method is built and used to regulate the video quality of the processing macroblock according to a target perceptual quality. Experimental results show that the model can achieve high accuracy. 
Compared to the JM reference software with macroblock layer rate control, the proposed encoding system can effectively enhance perceptual quality for target video regions.",2012,0, 6033,An Optimal Approach Towards Recognizing Broken Thai Characters in OCR Systems,"This paper presents a novel technique for recognizing broken Thai characters found in degraded Thai text documents by modeling it as a set-partitioning problem (SPP). The technique searches for the optimal set-partition of the connected components by which each subset yields a reconstructed Thai character. Given the non-linear nature of the objective function needed for optimal set-partitioning, we design an algorithm we call Heuristic Incremental Integer Programming (HIIP), which employs integer programming (IP) with an incremental approach using heuristics to hasten the convergence. To generate corrected Thai words, we adopt a probabilistic generative approach based on a Thai dictionary corpus. The proposed technique is applied successfully to a Thai historical document and a poor-quality Thai fax document with promising accuracy rates over 93%.",2012,0, 6034,Hierarchical prosodic boundary prediction for Uyghur TTS,"Correct prosodic boundary prediction is crucial for the quality of synthesized speech. This paper presents the prosodic hierarchy of the Uyghur language, which is an agglutinative language. A two-layer bottom-up hierarchical approach based on conditional random fields (CRF) is used for predicting prosodic word (PW) and prosodic phrase (PP) boundaries. In order to disambiguate the confusion between different prosodic boundaries at punctuation sites, a CRF-based prosodic boundary determination model is used and integrated with the bottom-up hierarchical approach. The word suffix feature is considered useful for prosodic boundary prediction and is added to the feature sets. The experimental results show that the proposed method successfully resolves the confusion between different prosodic boundaries. Consequently, it further enhances the accuracy of prosodic boundary prediction.",2012,0, 6035,New scientific contributions to the prediction of the reliability of critical systems which based on imperfect debugging method and the increase of quality of service,"This paper presents a new method by which it is possible to realistically predict the software reliability of critical systems. The main feature of this method is that it allows estimating the number of remaining critical faults in the software. The algorithm employs well-known methods such as Imperfect Debugging and it provides a more reliable prognosis than the methods conventionally used for this purpose. Furthermore, the new approach describes two processes of handling critical failures (one for detection and one for correction). The new algorithm also takes into account the so-called repair time, a measurement that is vitally important for a reliable prognosis. For use in the prediction model, it is mathematically described as a time function. As every programmer knows, it can be difficult to have even the simplest program run without faults. So-called software reliability models (SRM's), based on stochastics and aiming to predict the reliability of both software and hardware, have been used since the 70's. SRM's rely on certain model assumptions some of which cannot be deemed realistic anymore. Hence, for today's reliability engineering, these models are insufficient. 
At this point in time, though, there are hardly any methods that enable us to obtain predictions as to how the reliability of critical faults or the failure rate of critical systems behaves over time. Currently, there is no mathematical model distinguishing between critical and non-critical faults, and only a few models consider Imperfect Debugging (ID). The method presented here, however, is based on ID and it is able to distinguish between critical and non-critical software faults. Moreover, this new method employs a so-called Time-Delay and thus two new processes have to be designed. Mathematically, these processes describe the detection of faults and their correction, respectively. It is necessary to define appropriate distribution functions and to clearly state the requisite model assumptions.",2012,0, 6036,Software Aging in Virtualized Environments: Detection and Prediction,"Software aging has been cited in many scenarios including Operating Systems, Web Servers, and Real-time Systems. However, few studies have been conducted in long running virtualized environments where more and more software is being delivered as a service. Furthermore, state-of-the-art methods lack the ability to deal with miscellaneous upper applications and underlying systems transparently in virtualized scenarios. In this paper, we detect the aging phenomenon by conducting experiments in physical and virtual machines and identify the differences between the two, and propose a feature code-based methodology for failure prediction through system calls, then implement a prototype in the virtual machine manager layer to predict failure time and rejuvenate transparently, which is suitable in virtualized scenarios. The evaluation shows the prediction deviation against reality is less than 10%.",2012,0, 6037,Improving accuracy of DGA interpretation of oil-filled power transformers needed for effective condition monitoring,"The probability of equipment failure increases over time as the age and rate of use increase. Since faults are the major cause of these failures, there are several ways and means used towards predicting fault occurrence and thus preventing the equipment from failing by diagnosing its condition. In oil-filled transformers, Dissolved Gas Analysis (DGA) is used as one of the well-established techniques to predict incipient faults inside the enclosure. With the existence of more than 6 known methods of DGA fault interpretation, there is the likelihood that they may give different conditions for the same sample. Using a combination of many of the diagnostic methods will therefore increase the accuracy of the interpretation and so increase the certainty of the transformer condition. This paper presents a computer program based condition diagnosis system developed to combine four DGA assessment techniques: Rogers Ratio Method, IEC Basic Ratio Method, Duval Triangle Method and Key Gas Method. A user-friendly GUI is presented to give a visual display of the four techniques and the output of the combined interpretation. The result of the prediction analyses done to test the accuracy of the program shows an overall DGA prediction accuracy of 97.03%, compared to the 91% of the most reliable individual method, the Duval Triangle, and near elimination of the `no prediction' condition.",2012,0, 6038,Improving design quality by automatic verification of activity diagram syntax,"The quality of the product is an important issue in software development and quality assurance is an important aspect of any software design. 
One of the factors that affect the software quality is the correctness of its design. Any defect in the design can lead to high cost for defect correction. Activity diagrams are used to model the dynamic or behavioral aspects of the system. In this paper, an algorithm that analyzes activity diagrams and automatically verifies the syntax of each of its components is presented. An incomplete workflow can lead to incorrect results and a missing edge can lead to an incomplete workflow. A mismatch in a fork-join pair can lead to concurrency issues and synchronization problems. Detection of such errors in the design phase ensures product quality. The activity diagram is transformed to its components and analysis is performed on the components based on the syntactic specifications to detect errors. The workflow in the diagram and the syntactic correctness of the control flow are analyzed by the algorithm. Errors, if any, in the diagram are identified and a log of the errors is maintained in the error table. Analysis of the activity diagram and verification of its syntax can help in the development of a product whose quality is assured.",2012,0, 6039,Entropy based bug prediction using support vector regression,"Predicting software defects is one of the key areas of research in software engineering. Researchers have devised and implemented a plethora of defect/bug prediction approaches, namely code churn, past bugs, refactoring, number of authors, file size and age, etc., by measuring the performance in terms of accuracy and complexity. Different mathematical models have also been developed in the literature to monitor the bug occurrence and fixing process. These existing mathematical models, named software reliability growth models, are either calendar time or testing effort dependent. The occurrence of bugs in the software is mainly due to the continuous changes in the software code. The continuous changes in the software code make the code complex. The complexity of the code changes has already been quantified in terms of entropy, as in Hassan [9]. In the available literature, few authors have proposed entropy based bug prediction using the conventional simple linear regression (SLR) method. In this paper, we have proposed an entropy based bug prediction approach using support vector regression (SVR). We have compared the results of the proposed models with the existing ones in the literature and have found that the proposed models are good bug predictors, as they show a significant improvement in performance.",2012,0, 6040,An efficient programming rule extraction and detection of violations in software source code using neural networks,The larger size and complexity of software source code build many challenges in bug detection. Data mining based bug detection methods eliminate the bugs present in software source code effectively. Rule violations and copy-paste related defects are the main concerns for a bug detection system. Traditional data mining approaches such as frequent Itemset mining and frequent sequence mining are relatively good but they are lacking in accuracy and pattern recognition. Neural networks have emerged as advanced data mining tools in cases where other techniques may not produce satisfactory predictive models. The neural network is trained for the possible set of errors that could be present in software source code. From the training data the neural network learns how to predict the correct output. 
The processing elements of neural networks are associated with weights which are adjusted during the training period.,2012,0, 6041,A Compositional Trust Model for Predicting the Trust Value of Software System QoS Properties,"Trust of a software system (i.e., the degree of confidence that a system conforms to its specification) can be defined in terms of the composition of trust values for each individual software component (or service) used in the creation of that software system and their interaction patterns. This paper therefore presents the different composition patterns, and the common associations between various quality-of-service (QoS) properties of the composed system for the identified composition patterns. It also presents composition rules for deriving systemic trust values for each composition pattern. Lastly, results from applying the composition patterns to a case study show our trust composition model can predict trust of a composed system with 55%-70% uncertainty that increases with system complexity.",2012,0, 6042,Intelligent system for predicting wireless sensor network performance in on-demand deployments,"The need for advanced tools that provide efficient design and planning of on-demand deployment of wireless sensor networks (WSN) is critical for meeting our nation's demand for increased intelligence, reconnaissance, and surveillance in numerous safety-critical applications. For practical applications, WSN deployments can be time-consuming and error-prone, since they have the utmost challenge of guaranteeing connectivity and proper area coverage upon deployment. This creates an unmet demand for decision-support systems that help manage this complex process. This paper presents research-in-progress to develop an advanced decision-support system for predicting the optimal deployment of wireless sensor nodes within an area of interest. The proposed research will have significant impact on the future application of WSN technology, specifically in the emergency response, environmental quality, national security, and engineering education domains.",2012,0, 6043,High-performance scalable information service for the ATLAS experiment,"The ATLAS experiment is being operated by a highly distributed computing system which is constantly producing a lot of status information which is used to monitor the experiment operational conditions as well as to assess the quality of the physics data being taken. For example, the ATLAS High Level Trigger (HLT) algorithms are executed on the online computing farm consisting of about 1500 nodes. Each HLT algorithm produces a few thousand histograms, which have to be integrated over the whole farm and carefully analyzed in order to properly tune the event rejection. In order to handle such non-physics data the Information Service (IS) facility has been developed in the scope of the ATLAS Trigger and Data Acquisition (TDAQ) project. The IS provides a high-performance scalable solution for information exchange in a distributed environment. In the course of an ATLAS data taking session the IS handles about a hundred gigabytes of information which is being constantly updated with the update interval varying from a second to a few tens of seconds. IS provides access to any information item on request as well as distributing notification to all the information subscribers. In the latter case IS subscribers receive information within a few milliseconds after it was updated. 
IS can handle arbitrary types of information including histograms produced by the HLT applications and provides C++, Java and Python APIs. The Information Service is a primary and in most cases a unique source of information for the majority of the online monitoring analysis and GUI applications, used to control and monitor the ATLAS experiment. The Information Service provides streaming functionality allowing efficient replication of all or part of the managed information. This functionality is used to duplicate the subset of the ATLAS monitoring data to the CERN public network with a latency of a few milliseconds, allowing efficient real-time monitoring of the data taking from outside the protected ATLAS network. Each information item in IS has an associated URL which can be used to access that item online via the HTTP protocol. This functionality is being used by many online monitoring applications which can run in a WEB browser, providing real-time monitoring information about the ATLAS experiment over the globe. This paper will describe the design and implementation of the IS and present performance results which have been taken in the ATLAS operational environment.",2012,0, 6044,Experience with the custom-developed ATLAS offline trigger monitoring framework and reprocessing infrastructure,"The offline trigger monitoring of the ATLAS experiment is assessing the data quality and analyses those events where no trigger decision could be made. Within the offline monitoring, which is started shortly after the data acquisition has finished, the online data quality assessment is reflected. Additionally, a reprocessing system tests changes to the trigger software and configuration to ensure their stability and reliability before they can become operational. This note explains the activities performed to provide a flawless monitoring of the operation of the ATLAS trigger system and how to assess the quality of the recorded data.",2012,0, 6045,An application using micro TCA for real-time event assembly,"The Electronic Systems Engineering Department of the Computing Sector at the Fermi National Accelerator Laboratory has undertaken the effort of designing an AMC that meets the specifications within the MicroTCA framework. The application chosen to demonstrate the hardware is the real-time event assembly of data taken by a particle tracking pixel telescope. In the past, the telescope would push all of its data to a PC where the data was stored to disk. Then event assembly, geometry inference, and particle tracking were all done at a later time. This approach made it difficult to efficiently assess the quality of the data as it was being taken - at times, resulting in wasted test beam time. Now, we can insert in the data path, between the telescope and the PC, a commercial MicroTCA crate housing our AMC. The AMC receives, buffers, and processes the data from the tracking telescope and transmits complete, assembled events to the PC in real-time. In this paper, we report on the design approach and the results achieved when the MicroTCA hardware was employed for the first time during a test beam run at the Fermi Test Beam Facility in 2012.",2012,0, 6046,Simplistic concept of low-cost virtual training environment capable of automatic user input data evaluation/validation,"Evaluation and validation of user data during practical exams, trainings, interviews or other types of testing is a tedious, error-prone mission. 
This mission becomes even more difficult when there is more than one way for a job-seeker or student to do the desired task. Setting up the environment consisting of multiple nodes on which the challenge should be accomplished is equally tedious. This paper presents a novel approach which makes this mission easier. We present an easy way to set up an environment based on open-source software which can be used in real-world scenarios like training, testing or practicing.",2012,0, 6047,Sleep in the cloud: On how to use available heart rate monitors to track sleep and improve quality of life,"As the modern society accumulates sleep debt that jeopardizes health, performance and wellbeing, people become increasingly interested in self-assessment. We aim to enable sleep self-evaluation using available Heart Rate (HR) monitors, mobile and cloud technology. Sleep was evaluated using a proprietary ECG-based validated sleep diagnostic software adapted to HR data obtained from HR monitor belts (HRMs) which are widely used to monitor HR during physical activity. Data were transmitted and stored on an iPhone, using a dedicated application. Two wireless communication channels are used for HRMs: (P1) Wearlink and (P2) ANT+. The stored information was uploaded to the cloud and automatically analyzed. The functionality of an automated sleep monitoring and analysis, using HRMs, with either P1 or P2 transmission, iPhone, and cloud based SleepRate software analysis has been checked. HR belts with ANT+ were the most suitable for recording HR during sleep. Millions of people own HRMs to assess their training. Now they can use the same device to evaluate and improve their sleep, thus improving their daytime physical and cognitive performance, wellbeing and overall health.",2012,0, 6048,Quantitative 3D evaluation of myocardial perfusion during regadenoson stress using multidetector computed tomography,"We tested the hypothesis that quantitative 3D analysis of myocardial perfusion from MDCT images obtained during regadenoson stress would more accurately detect the presence of significant coronary artery disease (CAD) than the same analysis when performed on resting MDCT images. Fifty consecutive patients referred for CT coronary angiography (CTCA) underwent additional imaging with regadenoson (0.4mg, Astellas) using prospective gating (256-channel, Philips). Custom software was used to calculate for each myocardial segment an index of severity and extent of perfusion abnormality, Qh, which was compared to perfusion defects predicted by the presence and severity of coronary stenosis on CTCA. In segments supplied by arteries with luminal narrowing >50%, myocardial attenuation was slightly reduced compared to normally perfused segments at rest (91±21 vs. 93±26 HU, NS), and to a larger extent at stress (102±21 vs. 112±20 HU, p<0.05). In contrast, index Qh was significantly increased in these segments at rest (0.40±0.48 vs. 0.26±0.41, p<0.05) and reached a nearly 3-fold difference at stress (0.66±0.74 vs. 0.28±0.51, p<0.05). The addition of regadenoson improved the diagnosis of CAD, as reflected by an increase in sensitivity (from 0.57 to 0.91) and improvement in accuracy (0.65 to 0.77). 
In conclusion, quantitative 3D analysis of MDCT images allows objective detection of CAD, the accuracy of which is improved by regadenoson stress.",2012,0, 6049,A management system for adult cardiac surgery,"A new system for the computerized management of the surgical path was developed by the Tuscany Gabriele Monasterio Foundation at the Heart Hospital of ""G. Pasquinucci"" Massa. The system has been in operation since 2009 and manages the paths of more than 2500 surgical patients / year in cardiology. The system was developed from the need to make operating room activities, which are related / linked to the waiting lists and the availability of medical resources (beds, staff, implantable devices, etc.), more efficient and flexible. The surgical path is characterized by many professionals and clinical settings that make it difficult to maintain a timely and efficient global unity. In addition to this, to assess the quality of the hospital and take action for improvement, it is also necessary to extend the surgical path to the postoperative period.",2012,0, 6050,On the use of failure detection and recovery mechanism for network failure,"The future of the internet is predicted to be multi-interfaced. Site Multi-homing by IPv6 Intermediation-Shim6 is a proposal presented in the IETF to provide multi-homing support in IPv6 based networks. Although initially intended for static networks, it has recently been tested to provide end host mobility. Failure detection and recovery in Shim6 is performed through the REAchability (REAP) Protocol. This protocol shows significant improvements over MIPv6. Recently, due to inherited flaws in MIPv6, more protocols and combinations of protocols have been implemented and tested. In this contribution we implemented a LinShim6 test bed and observed the behavior of switching locators under the use of REAP. This work is done keeping in view how the QoS factors are affected when REAP is used for network failure and recovery in a home environment. REAP is activated when the communication path fails. Sometimes a host has access to multiple ISPs through its multiple locators. If one locator fails, communication is shifted to another locator. In the Shim6 context this is called a locator change. We experimentally validated the working of LinShim6 and the condition of the network considering packet loss, jitter, throughput and data transferred. In addition we propose that if an intelligent approach is used, switching delay can be reduced. We called this the Shim6 Assisted Mobility Scheme (SAMS). Some initial experimental results are also presented in this work.",2012,0, 6051,Microcontroller based automatic fault identification in a distribution system,A delta-star transformer connected to a distribution system has been analyzed with the approach of the generalized theory of electrical machines and expressions for symmetrical and different unsymmetrical faults are derived. The detailed theoretical analysis shows that the magnitude and phase angle of the fault current vary depending on the nature and type of fault. A microcontroller based continuous monitoring unit is developed to detect and identify types of faults in the transformer connected distribution system. Software executed in the microcontroller evaluates the nature of a fault based on measured magnitudes and phase angles of voltages and currents under faulty conditions. 
Theoretical analysis and experimental results validate the acceptability of the developed unit.,2012,0, 6052,Impact weighted uptime in hierarchical LTE networks: Application and measurement,"The success of 4G LTE networks will depend upon the quality of new real-time high bandwidth applications like streaming video, on-line gaming, and Telepresence. These applications are very sensitive to short duration outages (SDOs) of network elements. The classical availability metric is not adequate for 4G LTE network evaluation because it is not sensitive enough to SDOs. We introduce a new metric: the Impact Weighted Network Uptime (IWNU), whereby uptime at each hierarchical level is weighted by the respective number of base stations affected as a result of that level's failure. We illustrate its usefulness for segments of LTE networks having up to three levels of hierarchy. The reliability model of each hierarchical level is described by an absorbing Markov chain whose absorption states correspond to failures affecting base stations. We use the level uptime to compare different redundancy configurations at upper hierarchical levels and provide numerical results demonstrating that additional redundancy may not visibly increase the level uptime in the presence of silent failures which are not immediately detected. We also propose a new method for tracking the field reliability of the 4G LTE service by utilizing existing software features and protocols readily available within Cisco network elements. Service impacting hardware and software outages are registered by this method with an accuracy of one second. The outage data is then used for calculation of the downtime and the proposed IWNU metric which are not currently automated by any tool.",2012,0, 6053,Automated Visual Quality Analysis for Media Production,"Automatic quality control for audiovisual media is an important tool in the media production process. In this paper we present tools for assessing the quality of audiovisual content in order to decide about the reusability of archive content. We first discuss automatic detectors for the common impairments noise and grain, video breakups, sharpness, image dynamics and blocking. For the efficient viewing and verification of the automatic results by an operator, three approaches for user interfaces are presented. Finally, we discuss the integration of the tools into a service oriented architecture, focusing on the recent standardization efforts by EBU and AMWA's Joint Task Force on a Framework for Interoperability of Media Services in TV Production (FIMS).",2012,0, 6054,SAFER: System-level Architecture for Failure Evasion in Real-time Applications,"Recent trends towards increasing complexity in distributed embedded real-time systems pose challenges in designing and implementing a reliable system such as a self-driving car. The conventional way of improving reliability is to use redundant hardware to replicate the whole (sub)system. Although hardware replication has been widely deployed in hard real-time systems such as avionics, space shuttles and nuclear power plants, it is significantly less attractive to many applications because the amount of necessary hardware multiplies as the size of the system increases. The growing needs of flexible system design are also not consistent with hardware replication techniques. 
To address the needs of dependability through redundancy operating in real-time, we propose a layer called SAFER (System-level Architecture for Failure Evasion in Real-time applications) to incorporate configurable task-level fault-tolerance features to tolerate fail-stop processor and task failures for distributed embedded real-time systems. To detect such failures, SAFER monitors the health status and state information of each task and broadcasts the information. When a failure is detected using either time-based failure detection or event-based failure detection, SAFER reconfigures the system to retain the functionality of the whole system. We provide a formal analysis of the worst-case timing behaviors of SAFER features. We also describe the modeling of a system equipped with SAFER to analyze timing characteristics through a model-based design tool called SysWeaver. SAFER has been implemented on Ubuntu 10.04 LTS and deployed on Boss, an award-winning autonomous vehicle developed at Carnegie Mellon University. We show various measurements using simulation scenarios used during the 2007 DARPA Urban Challenge. Finally, we present a case study of failure recovery by SAFER when node failures are injected.",2012,0, 6055,Model-Driven Comparison of State-Machine-Based and Deferred-Update Replication Schemes,"In this paper, we analyze and experimentally compare state-machine-based and deferred-update (or transactional) replication, both relying on atomic broadcast. We define a model that describes the upper and lower bounds on the execution of concurrent requests by a service replicated using either scheme. The model is parametrized by the degree of parallelism in either scheme, the number of processor cores, and the type of requests. We analytically compared both schemes and a non-replicated service, considering bcast- and request-execution-dominant workloads. To evaluate transactional replication experimentally, we developed Paxos STM, a novel fault-tolerant distributed software transactional memory with programming constructs for transaction creation, abort, and retry. For state-machine-based replication, we used JPaxos. Both systems share the same implementation of atomic broadcast based on the Paxos algorithm. We present the results of performance evaluation of both replication schemes, and a non-replicated (thus prone to failures) service, considering various workloads. The key result of our theoretical and experimental work is that neither system is superior in all cases. We discuss these results in the paper.",2012,0, 6056,FORTRESS: Adding Intrusion-Resilience to Primary-Backup Server Systems,"Primary-backup replication enables arbitrary services, which need not be built as deterministic state machines, to be reliable against server crashes. Further, when the primary does not crash, the performance can be close to that of an un-replicated, 1-server system and is arguably far better than what state machine replication can offer. These advantages have made primary-backup replication a widely used technique in commercial provisioning of services, even though the technique assumes that residual software bugs in a server system can lead only to crashes and cannot result in state corruption. This assumption cannot hold against an attacker intent on exploiting vulnerabilities and corrupting the service state when attacks lead to intrusions. This paper presents a system, called FORTRESS, which can encapsulate a primary-backup system and safeguard it from being intruded. 
At its core, FORTRESS applies proactive obfuscation techniques in a manner appropriate to primary-backup replication and deploys proxy servers for additional defence. Gain in intrusion resilience is shown to be substantial when assessed through analytical evaluations and simulations for a range of attacker scenarios. Further, by implementing two web-based applications, the average performance drop is demonstrated to be in the order of tens of milliseconds even when obfuscation intervals are as small as tens of seconds.",2012,0, 6057,AAD: Adaptive Anomaly Detection System for Cloud Computing Infrastructures,"Cloud computing has become increasingly popular by obviating the need for users to own and maintain complex computing infrastructure. However, due to their inherent complexity and large scale, production cloud computing systems are prone to various runtime problems caused by hardware and software failures. Autonomic failure detection is a crucial technique for understanding emergent, cloudwide phenomena and self-managing cloud resources for system-level dependability assurance. To detect failures, we need to monitor the cloud execution and collect runtime performance data. These data are usually unlabeled, and thus a prior failure history is not always available in production clouds, especially for newly managed or deployed systems. In this paper, we present an Adaptive Anomaly Detection (AAD) framework for cloud dependability assurance. It employs data description using hypersphere for adaptive failure detection. Based on the cloud performance data, AAD detects possible failures, which are verified by the cloud operators. They are confirmed as either true failures with failure types or normal states. The algorithm adapts itself by recursively learning from these newly verified detection results to refine future detections. Meanwhile, it exploits the observed but undetected failure records reported by the cloud operators to identify new types of failures. We have implemented a prototype of the algorithm and conducted experiments in an on-campus cloud computing environment. Our experimental results show that AAD can achieve more efficient and accurate failure detection than other existing scheme.",2012,0, 6058,On atomicity enforcement in concurrent software via Discrete Event Systems theory,"Atomicity violations are among the most severe and prevalent defects in concurrent software. Numerous algorithms and tools have been developed to detect atomicity bugs, but few solutions exist to automatically fix such bugs. Some existing solutions add locks to enforce atomicity, which can introduce deadlocks into programs. Our recent work avoids deadlock bugs in concurrent programs by adding control logic synthesized using Discrete Event Systems theory. In this paper, we extend this control framework to address single-variable atomicity violation bugs. We use the same class of Petri net models as in our prior work to capture program semantics, and handle atomicity violations by control specifications in the form of linear inequalities. We propose two methodologies for synthesizing control logic that enforces these linear inequalities without causing deadlocks; the resulting control logic is embedded into the program's source code by program instrumentation. These results extend the scope of concurrency bugs in software systems that can be handled by techniques from control engineering. 
Case studies involving two real Java programs demonstrate our solution procedure.",2012,0, 6059,Checking consistency between documents of requirements engineering phase,"During the initial software specification phase, a requirements document, use case descriptions and interface prototypes can be generated as a way to aid in the construction of system data. The consistency among these documents is a quality attribute which must be emphasized at this phase of the software development process. The QualiCES method is presented herein; it allows assessing the consistency among these software documents, and is supported by a checklist and by a consistency metric developed to this end. As benefits, it provides defect detection and a software quality guarantee from the beginning of software development. The method was executed in a case study. Based on the results, the viability of applying the method can be verified, as well as the degree of innovation of the proposal.",2012,0, 6060,Mining crosscutting concerns with ComSCId: A rule-based customizable mining tool,"One of the first steps when reengineering legacy systems into aspect-oriented ones is to identify the crosscutting concerns (CCC) present in the architecture of the former, a process known as aspect mining. However, this is a time-consuming and error-prone task when conducted manually. In this paper, we present a customizable mining tool, called ComSCId, which searches for the CCC in legacy Java systems in an automatic way. ComSCId has a repository which stores all the rules used as the basis for the mining process. In this repository there are pre-defined rules for some common CCC such as persistence, buffering and logging. Moreover, the main characteristic of this repository is its flexibility, since it allows adding new rules or customizing the existing ones to specific contexts or domains. We conducted two studies to evaluate ComSCId and we have observed high percentages of identification coverage when using this tool in an incremental way.",2012,0, 6061,An assessment of security requirements compliance of cloud providers,"Cloud provider assessment is important for cloud consumers to determine, when outsourcing computing work, which providers can serve their business and system requirements. This paper presents an initial attempt to assess the security requirements compliance of cloud providers by following the Goal Question Metric approach and defining a weighted scoring model for the assessment. The security goals and questions that address the goals are taken from Cloud Security Alliance's Cloud Controls Matrix and Consensus Assessments Initiative Questionnaire. We then transform such questions into more detailed ones and define metrics that help provide quantitative answers to the transformed questions based on evidence of security compliance provided by the cloud providers. The scoring is weighted by the quality of evidence, i.e. its compliance with the associated questions and its completeness. We propose a scoring system architecture which utilizes CloudAudit and assess Amazon Web Services as an example.",2012,0, 6062,SLA-driven capacity planning for Cloud applications,"The cloud computing paradigm has become the solution for providing good service quality and exploiting economies of scale. However, the management of such elastic resources, with different Quality-of-Service (QoS) levels combined with on-demand self-service, is a complex issue.
This paper proposes an approach driven by Service Level Agreements (SLAs) for optimizing capacity planning for Cloud applications. The main challenge for a service provider is to determine the best trade-off between profit and customer satisfaction. In order to address this issue, we adopt a queueing network approach and present an analytical performance model to predict Cloud service performance. Based on a utility function and a capacity planning method, our solution calculates the optimal configuration of a Cloud application. We rely on autonomic computing to continuously adjust the configuration. Simulation experiments indicate that our model i) faithfully captures the performance of Cloud applications for a number of workloads and configurations and ii) successfully keeps the best trade-off.",2012,0, 6063,CloudGuide: Helping users estimate cloud deployment cost and performance for legacy web applications,"With the cloud business growing, many companies are joining the market as cloud service providers. Most providers offer similar services with slightly different pricing models, and performance data remains scarce. This leaves cloud users with the puzzle of guessing what costs they will need to pay to run their legacy applications in a cloud environment. CloudGuide is a tool suite that provides users with an estimated cost of running a legacy application on various cloud providers based on specific performance requirements. CloudGuide predicts the cloud computing resources required by a targeted application based on a queuing model and estimates the deployment cost for the application. CloudGuide allows users to explore cloud configurations that meet different performance requirements and cost constraints, and can be used to find a new configuration when the workload changes. The experiments presented in this study evaluated a multi-tiered network application and showed that CloudGuide can choose high-quality cloud configurations and can be used to assist system administrators with dynamic provisioning decisions.",2012,0, 6064,Classification of color objects like fruits using probability density function (PDF),"Fruits like apples are valued based on their appearance (i.e., color, size, shape, presence of surface defects) and hence are classified into different grades. The grading process helps in achieving better standards and quality of fruits. Of the many available color models, the HSI model provides a highly effective color evaluation, particularly for analyzing biological products. Human assessment furnishes only qualitative data, and such inspection is time consuming and cost-intensive. Machine vision systems with specialized image processing software provide a solution that may satisfy the demand. The analysis, carried out on images of 187 apples, shows that classification is done based on the median of the PDF. In order to avoid mismatches in grading, the fruit is further classified using Histogram Intersection, which determines the closeness between two images, i.e., 1 if the two images are similar and 0 if they are dissimilar.",2012,0, 6065,Design and Realization of Timely Auto-Detection Based on High-precision Photoelectric Encoder,"A high-speed processing circuit based on a DSP is designed to meet the increasing demand for the reliability of high-precision encoders, and all analog signals are acquired by an A/D converter. The processing circuit can automatically detect, diagnose and repair fault points online in a timely manner.
The time needed for diagnosis and maintenance is greatly reduced while the reliability is enhanced. A high-precision, highly intelligent and highly reliable photoelectric encoder is achieved by this method in practical application, and the expected results are obtained.",2012,0, 6066,RC-Finder: Redundancy Detection for Large Scale Source Code,"Redundant code not only causes noise in code debugging which confuses developers, but also correlates with the presence of traditional severe software errors. RC-Finder, a redundancy detection system for large-scale code, is proposed to detect six kinds of redundancy. This paper analyzes each kind of redundant code and provides a detailed algorithm for each. The experiments on large scale open source software systems show that RC-Finder can find redundant code efficiently. With RC-Finder, it is very convenient for developers to detect and correct these kinds of defects, and thereby to further guarantee the software quality.",2012,0, 6067,Detecting Bad Smells with Weight Based Distance Metrics Theory,"Detecting bad smells in program design and implementation is a challenging task. Manual detection has proved to be time-consuming and inaccurate in complex situations. Weight-based distance metrics and related concepts are introduced in this paper, and an automatic approach for bad smell detection based on Jaccard distance is proposed. The concept of distance between entities and classes is defined, and the corresponding formulas are applied in detection. A new weight-based distance metrics theory is proposed to detect the feature envy bad smell. This improved approach can express design quality and invocation relationships in more detail than the original distance metrics theory. With these improvements, bad smell detection can be automated with high accuracy. The approach is then applied to detect bad smells in the JFreeChart open source code. The experimental results show that the weight-based distance metrics theory can detect bad smells more accurately with low time complexity.",2012,0, 6068,Optimized Collaborative Filtering Algorithm Based on Item Rating Prediction,"The collaborative filtering recommendation algorithm is currently the most widely used personalized recommendation algorithm. The sparsity of user rating data means that the recommendation quality of traditional collaborative filtering algorithms is far from ideal. To solve this problem, the paper first uses a cloud model and item characteristic attributes to calculate the similarity between items, taking into account both rating similarity and characteristic attribute similarity, and then predicts ratings for unrated items. Finally, the cloud model is used to calculate the similarity between users to obtain the target user's nearest neighbors. Experimental results show that the algorithm improves the accuracy of the calculated item similarity, effectively alleviates the data sparsity problem, and improves the quality of the recommendations.",2012,0, 6069,Spectrum-based fault diagnosis for service-oriented software systems,"Due to the loosely coupled and highly dynamic nature of service-oriented systems, the actual configuration of such a system only fully materializes at runtime, rendering many of the traditional quality assurance approaches useless.
In order to enable service systems to recover from and adapt to runtime failures, an important step is to detect failures and diagnose problematic services automatically. This paper presents a lightweight, fully automated, spectrum-based diagnosis technique for service-oriented software systems that is combined with a framework-based online monitor. An experiment with a case system is set up to validate the feasibility of pinpointing problematic service operations. The results indicate that this approach is able to identify problematic service operations correctly in 72% of the cases.",2012,0, 6070,Revenue maximization with quality assurance for composite web services,"Service composition is one of the major approaches in service oriented architecture (SOA) based systems. Due to the inherent stochastic nature of the services' execution environment, the issue of composite service quality assurance within SOA is a very challenging one. Such a heterogeneous environment requires dynamic, run-time composition of services. We show how to determine a policy that satisfies the quality assurance for the composite service provider with the aim of revenue maximization for this provider. The quality assurance is defined as the probability that the end-to-end deadline will be met, while taking into account service availability, (composite) service response-time and costs. The calculated policy is determined using dynamic programming and allows fast decision making for run-time composition. In addition, we determine the end-to-end response-time distributions resulting from the determined policies. We illustrate the proposed solution with a number of experiments.",2012,0, 6071,BFT-r: A proactive Byzantine Fault-Tolerant agreement with rotating coordinator and mutable blacklist mechanism,"With the advent of the replication-based approach for distributed environments, a major coordination problem, i.e., consensus, can be solved in the presence of some malicious replicas. Therefore, we attempt to design an agreement algorithm with proactive detection of such malicious replicas. The paper presents an algorithm, BFT-r, i.e., Byzantine Fault Tolerance with rotating coordinator. The basic idea is to rotate the role of the primary coordinator among all the participating replicas. Undoubtedly, assigning each participating replica the role of primary increases the possibility of a faulty replica being selected as primary. Therefore, in order to avoid such a problem, our protocol runs a mutable blacklist mechanism in which an array of previously detected faulty replicas is maintained and propagated among the different nodes so as to avoid decisions from a faulty replica. The mutable blacklist mechanism is in line with the proactive nature of the proposed protocol. The necessary correctness proof is also presented along with the simulation analysis. The protocol is robust and exhibits better efficiency for long-lived applications/systems.",2012,0, 6072,Using host criticalities for fault tolerance in mobile agent systems,"Monitoring is a crucial factor for the smooth running of distributed systems such as mobile agent based systems. Various activities in such systems, such as performance analysis and tuning, scheduling strategies and fault detection, require monitoring. In this paper we present a monitoring and fault tolerance technique for mobile agent based systems. We present a mobile agent based fault prevention and detection technique in which a team of mobile agents monitors each host in the mobile agent based system.
This research focuses on building an automatic, adaptive and predictive policy in which critical hosts are identified in advance by monitoring agents to avoid their failures. The novelty of the proposed approach is the constant collection and updating of local as well as global information about the system. This policy is determined by calculating weights that take into account the criticality of the hosts; their monitoring agents keep updating these weights. These weights are used for checkpointing decisions. These monitoring mobile agents act together to detect undesirable behaviors and also provide support for restoring the system back to normalcy. We also report results on the reliability and performance of our proposed approach.",2012,0, 6073,Application of Custom Power Park to improve power quality of sensitive loads,"This paper presents the performance of custom power devices in a Custom Power Park (CPP) using series and shunt compensation in a distribution supply system. A static transfer switch (STS) provides an alternate feeder supply to park loads, while a shunt active power filter (SAPF) and a dynamic voltage restorer (DVR) are employed to compensate for current harmonics, voltage sags, and voltage interruptions. A novel voltage detection controller based on a single-phase synchronous d-q reference frame is developed which can detect single- or three-phase balanced/unbalanced voltage imperfections. Extensive simulation studies are carried out using PSCAD/EMTDC software, and the ability of the CPP to provide a group of customers with premium-quality power is analyzed.",2012,0, 6074,Fault tolerant gripper in robotics,"The work presented in this article shows, through a case study, the different stages of designing a fault tolerant system; a system which can tolerate faults without losing its operational capability. This system includes a gripper mounted on the wrist of a robot manipulator. First the faults were identified, then the residual generation stage was designed. Analyzing this residual allows us to show that artificially injected faults can be detected and that an alarm is generated for an emergency stop command. With regard to the gripper, we have only dealt with the faults we were able to solve thanks to the accommodation stage, which must control both hardware and software reconfigurations. The considered faults originated in the electronic interfaces, and for efficient supervision we have designed the whole gripper control. The electronic interfaces concerned by fault occurrence are duplicated to ensure robot task continuity.",2012,0, 6075,A Cloud Service Design for Quality Evaluation of Design Patterns,"In recent years, the influences of design patterns on software quality have attracted increasing attention in the area of software engineering, as design patterns encapsulate valuable knowledge to resolve design problems and, more importantly, to improve the design quality. Our previous research proposed an effectiveness evaluation method to assess a design pattern's quality. The proposed method performs experiments on open source projects. As the versions of an open source project grow, the analysis time increases in pace with the increasing source code size, so the computational performance is not applicable to actual real-time data analysis. In this research, we develop an effectiveness evaluation cloud service based on powerful computing capability to process large amounts of data.
We collect design pattern applications in open source projects and provide programmers with valuable, real-time analysis information to help them inspect the value of deployed design patterns.",2012,0, 6076,FPGA based RNG for random WOB method in unit cube capacitance calculation,"The Monte Carlo (MC) method is widely used in resolving mathematical problems that are too complicated to solve analytically. The method involves sampling random numbers and probabilities to estimate the result. As the MC method depends on a large number of good quality random numbers to produce a highly accurate result, developing a good random number generator (RNG) is vital. Most random number generators are software based; with the improvement of Field Programmable Gate Array (FPGA) density and speed in recent years, implementing an RNG directly in hardware is feasible. Random Walk on the Boundary (WOB) is one of the MC methods applied to calculate the unit cube capacitance. The unique requirement of this method is that the random numbers produced by the RNG must follow a Gaussian distribution, with decimal values in the range [0, 1]. Thus, in this paper we present a novel hardware RNG for the random WOB method to calculate unit cube capacitance on an FPGA. The RNG is implemented in floating point, using a combination of a Cellular Automata Shift Register (CASR) and a Linear Feedback Shift Register (LFSR), then channeled through the Box-Muller transformation. A linear approximation is used to compute the logarithmic function in the Box-Muller transformation. Based on statistical tests, the generated random numbers resemble the standard normal distribution with nearly 97.5% fit.",2012,0, 6077,Assessing and improving software quality in safety critical systems by the application of a SOFTWARE TEST MATURITY MODEL,"Drawing on wide experience over many years, BitWise has evolved a SOFTWARE TEST MATURITY MODEL. This is of particular value in the testing of safety critical software. It brings significant benefits in terms of cost and effective quality. This paper explains the Model and enables development groups to assess their current capabilities and plan any required improvements.",2012,0, 6078,"Remote prognosis, diagnosis and maintenance for automotive architecture based on least squares support vector machine and multiple classifiers","Software issues related to automotive controls account for an increasingly large percentage of the overall vehicles recalled. To alleviate this problem, vehicle diagnosis and maintenance are increasingly being performed remotely, that is, while the vehicle is being driven and without the need for a factory recall, and there is strong consumer interest in Remote Diagnosis and Maintenance (RD&M) systems. Such systems are developed with different building blocks/elements and various capabilities. This paper presents a novel automotive RD&M system and prognosis architecture. The elements of the proposed system include vehicles, smart phones, maintenance service centers, the vehicle manufacturer, RD&M experts, RD&M service centers, logistics carry centers, and emergency centers. The system promotes the role of smart phones used to run prognosis and diagnosis tools based on Least Squares Support Vector Machine (LS-SVM) multiple classifiers. During the prognosis phase, the smart phone stores the history of any forecast failures and sends them, only if any failure already occurred during the diagnosis, to the RD&M service centre.
The latter will then forward it to RD&M experts as real failure data to improve the training data used in prognosis classification and prediction of the remaining useful life (RUL). LS-SVM is widely used in prognostics and system health management of in-orbit spacecraft, where it is applied to monitor spacecraft performance, detect faults, identify the root cause of a fault, and predict RUL. The same approach is applied in this paper. Finally, the RD&M software architectures for the vehicle and the smart phone are presented.",2012,0, 6079,Application stress testing: Achieving cyber security by testing cyber attacks,"Application stress testing applies the concept of computer network penetration testing to software applications. Since software applications may be attacked - from inside or outside a protected network boundary - they are threatened by actions and conditions which cause delays, disruptions, or failures. Stress testing exposes software systems to simulated cyber attacks, revealing potential weaknesses and vulnerabilities in their implementation. By using such testing, these internal weaknesses and vulnerabilities can be discovered earlier in the software development life cycle and corrected prior to deployment, leading to improved software quality. Application stress testing is a process and software prototype for verifying the quality of software applications under severe operating conditions. Since stress testing is rarely - if at all - performed today, the possibility of deploying critical software systems that have been stress tested provides a much stronger indication of their ability to withstand cyber attacks. Many possible attack vectors against critical software can be verified as true threats and mitigated prior to deployment. This improves software quality and serves as a tremendous risk reduction for critical software systems used in government and commercial enterprises. The software prototype models and verifies failure conditions of a system under test (SUT). The SUT is first executed in a virtual environment and its normal operational modes are observed. A normal behavior model is generated in order to predict failure conditions based on attack models and external SUT interfaces. Using off-the-shelf software tools, the predictions are verified in the virtual environment by stressing the executing SUT with attacks against the SUT. Results are presented to testers and system developers for dispensation or mitigation.",2012,0, 6080,Learning features for predicting OCR accuracy,"In this paper, we present a new method for assessing the quality of degraded document images using unsupervised feature learning. The goal is to build a computational model to automatically predict the OCR accuracy of a degraded document image without a reference image. Current approaches for this problem typically rely on hand-crafted features whose design is based on heuristic rules that may not be generalizable. In contrast, we explore an unsupervised feature learning framework to learn effective and efficient features for predicting OCR accuracy. Our experimental results, on a set of historic newspaper images, show that the proposed method outperforms a baseline method which combines features from previous works.",2012,0, 6081,Teaching software inspection effectiveness: An active learning exercise,"This paper discusses a novel active learning exercise which teaches students how to perform and assess the effectiveness of formal software inspections.
In this exercise, students are responsible for selecting an artifact from their senior capstone design projects. The students then use fault injection to strategically place faults within the artifact that should be caught by the inspection exercise. Based on the needs of the team, students prepare an inspection packet consisting of a set of inspection instructions, applicable checklists, and the inspection artifact. Students then hire a set of inspectors based on classmates' backgrounds and experiences. The team leader then holds two inspection meetings and reports the results. The results are then used to assess the effectiveness of the inspection. Overall, in analyzing 5 years' worth of data from this exercise, it is found that students are capable of selecting appropriate materials for inspection and performing appropriate software inspections. The yield of students is lower than an experienced professional might achieve, and the inspection rates tend to be slightly higher than desired for their experience. However, the yield is related to individual preparation time. Students overall find this to be a highly educational experience and highly recommend it be continued for future classes.",2012,0, 6082,Work in progress: Engaging faculty for program improvement via EvalTools: A new software model,"In this paper we present our experiences with a new software-tool model for program assessment and evaluation, and for improving engineering programs by assessing learning effectiveness. In the past, the assessment plan was based on the typical practice of assigning a person in the department to oversee the whole process. Non-collaborative evaluation managed by a single program leader is ineffective and problematic. To involve more faculty, and in an effort to prepare for the most recent ABET visit, we decided to adopt EvalTools in fall of 2010. EvalTools is designed and developed according to ABET standards to provide a mechanism for collecting and analyzing data about the program, students' performance and their learning achievements. In addition, EvalTools is instrumental in providing a mechanism to simplify the process of inspecting the assessment results as well as identifying strengths and shortcomings of the program before ABET review. More importantly, getting faculty members excited about results and involved in the process of program improvement is a major accomplishment. Our experience via a first-time implementation of EvalTools shows very useful results for this model that can be easily disseminated for various programs in various disciplines. In this paper we will show: how relevant features were best used to streamline faculty time in data collection as well as evaluation; our results and how we succeeded in improving our program quality in an effective, efficient and systematic way; that simple curriculum revisions for multiple programs undergoing ABET review are possible as a result of using EvalTools; how the effective training needed for faculty and staff was captured in a simple manner; and the faculty's experience in a constructive and engaging manner.",2012,0, 6083,Integration of Safety Verification with Conformance Testing in Real-Time Reactive System,"In the paper, we propose a method that can be applied to verify implementations in real-time reactive systems. Different from other software model checking approaches, our method is based on testing.
This approach allows the verification of safety properties to be conducted directly on real code instead of on models extracted from the final implementation. Verifying such models is hard work and can only be applied to parts of the implementation. The method works by establishing a connection between safety verification and conformance testing in real-time systems. We first prove a theorem that, in a real-time system, under the input-enabled precondition, if an implementation conforms to its specification and the specification satisfies the safety properties, then the implementation satisfies them as well. Then, based on the contrapositive of this conclusion, we present a test case generation framework which forms the basis for generating test cases that can be used to detect violations of safety properties in the implementation. In addition, this test generation framework can also detect more nonconformance defects when compared with other real-time test generation methods. The method is illustrated with a train gate control system.",2012,0, 6084,A Preference and Honesty Aware Trust Model for Web Services,"Trust is one of the most critical factors for a service requestor when selecting the best one from a large pool of services. However, existing web service trust models either do not focus on satisfying customers' preferences for different quality of service (QoS) attributes, or do not pay enough attention to the impact of malicious ratings on trust evaluation. To address these gaps, a dynamic trust evaluation model considering customers' preferences and false ratings is proposed in this paper. The model introduces an approach that automatically mines customers' preferences from their requirements. The preferences are used to determine the weights on each QoS attribute when integrating trust across the multi-dimensional QoS attributes. The local trust of a service for a customer is derived by combining the trust of the QoS attributes and the customer's ratings. Then, the customers are divided into different groups according to their preferences, and the honesty of each group is assessed by filtering out dishonest customers based on a hybrid approach combining rating consistency clustering and an averaging method. Finally, the weight on ratings is dynamically adjusted according to the results of the honesty assessment when calculating the global trustworthiness of a service for the user group. The simulation results indicate that the model works well on personalized evaluation of trust, and it can effectively dilute the influence of malicious ratings.",2012,0, 6085,Software-Based Online Monitoring of Cache Contents on Platforms without Coherence Fabric,"In favor of smaller chip areas and associated fabrication costs, designers of embedded multi-core systems on occasion decide not to include cache coherence logic in the hardware design. However, handling all cache coherence exclusively in software is error-prone, and there are presently no tools supporting developers in this task. Thus, we propose a new software testing method, based on online inspection of the cache contents, to pinpoint programming mistakes related to cache handling. This concept helps localize the causing data symbol even for complicated cache handling errors, e.g. where the causing and manifesting code locations of an error differ. Our solution is a pure software solution and does not require any specialized hardware.
We evaluate our approach by using it in a large application, and show that we can detect typical cache-related errors.",2012,0, 6086,An Empirical Study on Improving Severity Prediction of Defect Reports Using Feature Selection,"In software maintenance, severity prediction on defect reports is an emerging issue gaining research attention due to the considerable triaging cost. In past research work, several text mining approaches have been proposed to predict the severity using advanced learning models. Although these approaches demonstrate the effectiveness of predicting the severity, they do not discuss the problem of how to find indicators of good quality. In this paper, we discuss whether feature selection can benefit the severity prediction task with three commonly used feature selection schemes, Information Gain, Chi-Square, and Correlation Coefficient, based on the Multinomial Naive Bayes classification approach. We have conducted empirical experiments with four open-source components from Eclipse and Mozilla. The experimental results show that these three feature selection schemes can further improve the prediction performance in over half the cases.",2012,0, 6087,Linking Functions and Quality Attributes for Software Evolution,"Software quality properties, normally derived from non-functional requirements, are becoming more important for software. A main reason for software evolution is dissatisfaction with software quality properties. When improving these properties through software evolution, it is essential to know whether software functions are affected and by how much. This paper proposes an approach to linking the functions with the quality properties of software for evolution via software architecture styles, aiming at contributing to (1) predicting evolution efforts and (2) transforming software for improving its quality.",2012,0, 6088,Validating the Effectiveness of Object-Oriented Metrics over Multiple Releases for Predicting Fault Proneness,"In this paper, we empirically investigate the relationship of existing class level object-oriented metrics with fault proneness over multiple releases of the software. Here we first evaluate each metric for its potential to predict faults independently by performing univariate logistic regression analysis. Next, we perform cross-correlation analysis between the significant metrics to find the subset of these metrics that gives improved performance. The obtained metric subset was then used to predict faults over the subsequent releases of the same project datasets. In this study, we used five publicly available project datasets over their multiple successive releases. Our results show that the identified metric subset demonstrated improved fault prediction with higher accuracy and reduced misclassification errors.",2012,0, 6089,An Empirical Analysis of the Impact of Comment Statements on Fault-Proneness of Small-Size Module,"Code size metrics are commonly useful in predicting fault-prone modules, and larger modules tend to be more faulty. In other words, small-size modules are considered to have lower risks of fault. However, since the majority of modules in a software system are often small, many ``small but faulty'' modules have been found in the real world. Hence, another fault-prone module prediction method, intended for small-size modules, is also required. Such a new method for small-size modules should use metrics other than code size since all the modules are small.
This paper focuses on ``comments'' written in the source code from the novel perspective of size-independent metrics; comments have not drawn much attention in the field of fault-prone module prediction. The empirical study collects 11,512 small-size modules, whose LOC are less than the median, from three major open source software projects, and analyzes the relationship between the lines of comments and the fault-proneness in the set of small-size modules. The empirical results show the following: 1) A module in which some comments are written is more likely to be faulty than a non-commented one; the fault rate of commented modules is about 1.8-3.5 times higher than that of non-commented ones. 2) Writing one to four lines of comments appears to be the threshold for the above tendency.",2012,0, 6090,An Approach to Estimating Cost of Running Cloud Applications Based on AWS,"Estimating the cost of running services in clouds is important for cloud application developers, and becomes even more important when a certain service level must be maintained at the same time. Though much work has been done to predict the cost and performance of cloud applications, most of it is performed either before application design or after application construction, which leads to either imprecise estimation or irreparable design faults. In this paper, we propose an approach to estimate the cost of running typical applications in the Amazon Web Services (AWS) cloud during the design phase. We propose a UML Activity-extended model (AeModel) to describe the execution of application services and introduce an algorithm to extract the information contained in the AeModel automatically. We propose a cost model on AWS, with an algorithm to produce suitable purchase solutions automatically, which can help developers estimate operating cost during the design phase and satisfy performance needs. We perform case studies using a web-based business application to show the effectiveness of our approach, and find that it can help developers reduce cost by adjusting application models.",2012,0, 6091,A Guided Mashup Framework for Rapid Software Analysis Service Composition,"Historical data about software projects is stored in repositories such as version control, bug tracking and mailing lists. Analyzing such data is vital to discover unthought-of yet interesting insights into a software project. Even though a wide range of software analysis techniques are already available, the integration of such analyses is yet to be systematically addressed. Inspired by the recently introduced concept of Software as a Service, our research group investigated the concept of Software Analysis as a Service (SOFAS), a distributed and collaborative software analysis platform. SOFAS allows software analyses to be accessed, composed into workflows, and executed over the Internet. However, traditional service composition is a complex, time consuming and error-prone process, which requires experts in both composition languages and existing standards. In this paper, we propose a mashup platform to address the problem of software analysis composition in a light-weight, programming-free, process-centric way. Our proposed mashup platform provides design-time guidance to the users throughout the mashup design by integrating a continuous feedback mechanism.
It requires exploiting semantic web technologies and Software Engineering Ontologies (SEON).",2012,0, 6092,Assessing Platform Suitability for Achieving Quality in Guest Applications,"Selecting a computing platform, such as a private cloud or a stand-alone virtualization-based platform, is arguably a very critical task in an enterprise. It impacts several aspects of software systems -- from architecture to post-deployment support and operations. The emergence of various virtualization and cloud based platforms has added to the complexity of this assessment and selection process. The main reason for this complexity is that each such platform possesses unique characteristics, and each such characteristic impacts the Quality Attributes (QA) achievable by the guest applications. A novel method is presented to perform assessment of platforms on QA criteria. This method makes use of fuzzy set techniques for performing multi-criteria evaluation of platforms. Taking a set of platforms and QA criteria as inputs, this method produces an ordered ranking of platforms. This output can be used in architecture design activities. The efficacy of the proposed approach has been demonstrated by assessing several variants of virtualization and cloud based platforms on a set of QA criteria.",2012,0, 6093,A Heuristic Rule Reduction Approach to Software Fault-proneness Prediction,"Background: Association rules are more comprehensive and understandable than fault-prone module predictors (such as logistic regression models, random forests and support vector machines). One of the challenges is that there are usually too many similar rules to be extracted by the rule mining. Aim: This paper proposes a rule reduction technique that can eliminate complex (long) and/or similar rules without sacrificing the prediction performance as much as possible. Method: The idea of the method is to remove long and similar rules unless their confidence level, used as a heuristic, is sufficiently higher than that of shorter rules. For example, it starts by selecting rules of the shortest length (length=1), then continues with the selection of the 2nd shortest rules (length=2) based on the current confidence level; this process is repeated for longer rules until no rules are worth including. Result: An empirical experiment has been conducted with the Mylyn and Eclipse PDE datasets. The result on the Mylyn dataset showed the proposed method was able to reduce the number of rules from 1347 down to 13, while the delta of the prediction performance was only 0.015 (from 0.757 down to 0.742) in terms of the F1 prediction criterion. In the experiment with the Eclipse PDE dataset, the proposed method reduced the number of rules from 398 to 12, while the prediction performance even improved (from 0.426 to 0.441). Conclusion: The novel technique introduced resolves the rule explosion problem in association rule mining for software fault-proneness prediction, which is significant and provides a better understanding of the causes of faulty modules.",2012,0, 6094,Quality-Aware Academic Research Tool Development,"Many organizations have adopted several different kinds of commercial software tools for the purpose of developing quality software, reducing time-to-market, and automating labor intensive and error-prone tasks. Academic researchers have also developed various types of tools, primarily as a means toward providing a prototype reference implementation that corresponds to some new research concept.
In addition, academic researchers also use the tool building task itself as a mechanism for students to learn and practice various software engineering principles (e.g., requirements management, design, implementation, testing, configuration management, and release management) from building the tools. Although some academic tools have been developed with observance of sound software engineering practices, most academic research tool development still remains an ad hoc process because tools tend to be developed quickly and without much consideration for quality. In this paper, we present several quality factors to be considered when developing software tools for academic research purposes. We also present a survey of tools that have been presented at major conferences to examine the status quo of academic research tool development in terms of these factors.",2012,0, 6095,Generation of Character Test Input Data Using GA for Functional Testing,"Typically, test input data is created manually for the functional testing of software applications, which is a time consuming and error prone activity. In this paper, we present an approach to generate test input data from structured requirements specification models such as UML use case activity diagrams (UCADs). We propose a Constraint Representation Syntax (CRS) for framing software attribute properties as a part of structuring the Software Requirements Specifications (SRS). Then, the structured models are parsed into a set of functional paths along with their predicates containing attribute constraints. A genetic algorithm is used to generate test input data that satisfy these predicates. Based on our approach, a prototype tool has been developed and the results of a case study are evaluated.",2012,0, 6096,An Interval-Based Model for Detecting Software Defect Using Alias Analysis,"Alias analysis is a branch of static program analysis aiming at computing variables which are aliases of each other. It is a basis for many analyses and optimizations in software engineering and compiler construction. Precise modeling of alias analysis is fundamental for software analysis. This paper presents two practical approximation models for representing and computing aliases: the memory-sensitive model (MSM) and the value-sensitive model (VSM). Based on defect-oriented detection, we present a method to detect software defects using VSM and MSM, which realizes inter-procedural detection via procedure summaries. According to whether the type of analysis object associated with a defect is value-sensitive or memory-sensitive, we propose two detection algorithms based on the two alias models respectively. One is for memory leaks (ML) based on MSM, and the other is for invalid arithmetic operations (IAO) based on VSM. We apply a defect testing system (DTS) to six C++ open source projects to demonstrate our models' effectiveness. Experimental results show that applying our technique to detect IAO and ML defects can improve detection efficiency while reducing potential false positives and false negatives.",2012,0, 6097,A Two-Level Prioritization Approach for Regression Testing of Web Applications,A test case prioritization technique reschedules test cases for regression testing in an order that achieves specific goals like early fault detection. We propose a new two-level prioritization approach to prioritize test cases for web applications as a whole.
Our approach automatically selects modified functionalities in a web application and executes test cases on the basis of the impact of modified functionalities. We suggest several new prioritization strategies for web applications and examine whether these prioritization strategies improve the rate of fault detection for web applications. We propose a new automated test suite prioritization model for web applications that selects test cases related to modified functionalities and reschedules them using our new prioritization strategies to detect faults early in test suite execution.,2012,0, 6098,"E-Net, system for energy management","Energy management software E-Net, developed by Quartz Matrix, is a powerful tool for evaluating energy efficiency and discovering consumption reduction opportunities. Based on a modular architecture, service oriented, E-Net software offers users a powerful environment for generating consumption forecasting, tracking budgets and assessing quality energy distribution. At the same time it is a technical, commercial and managerial tool.",2012,0, 6099,Faults detection on a wound rotor induction machines by principal components analysis,"This paper deals with faults detection and localization of wound rotor induction machines based on principal components analysis method. Both, localization and detection approaches consist in analyzing a detection index which is established on principal components. Once the faults are detected, the affected state variables are localized. The EWMA filter is applied to improve the fault detection quality by reducing the rate of false alarms. An accurate analytical modeling of the wound rotor induction machines is proposed and implemented on the software Matlab to obtain the state variables data of both healthy and faulted machines. Several simulation results are presented and analyzed.",2012,0, 6100,Cognitive behavior analysis framework for fault prediction in cloud computing,"Complex computing systems, including clusters, grids, clouds and skies, are becoming the fundamental tools of green and sustainable ecosystems of future. However, they can also pose critical bottlenecks and ignite disasters. The complexity and high number of variables could easily go beyond the capacity of any analyst or traditional operational research paradigm. In this work, we introduce a multi-paradigm, multi-layer and multi-level behavior analysis framework which can adapt to the behavior of a target complex system. It not only learns and detects normal and abnormal behaviors, it could also suggest cognitive responses in order to increase the system resilience and its grade. The multi-paradigm nature of the framework provides a robust redundancy in order to cross-cover possible hidden aspects of each paradigm. After providing the high-level design of the framework, three different paradigms are discussed. We consider the following three paradigms: Probabilistic Behavior Analysis, Simulated Probabilistic Behavior Analysis, and Behavior-Time Profile Modeling and Analysis. To be more precise and because of paper limitations, we focus on the fault prediction in the paper as a specific event-based abnormal behavior. We consider both spontaneous and gradual failure events. The promising potential of the framework has been demonstrated using simple examples and topologies. 
The framework can provide an intelligent approach to balance between green and high probability of completion (or high probability of availability) aspects in computing systems.",2012,0, 6101,Tutorial on building M&S software based on reuse,"The development of software for modeling and simulation is still a common step in the course of projects. Thereby any software development is error prone and expensive and it is very likely that the software produced contains flaws. This tutorial will show which techniques are needed in modeling and simulation software independent from application domains and model description means and how reuse and the use of state of the art tools can improve the software production process. The tutorial is based on our experiences made on developing and using JAMES II, a flexible framework created for building specialized M&S software products, for research on modeling and simulation, and for applying modeling and simulation.",2012,0, 6102,ARMISCOM: Autonomic reflective middleware for management service composition,"In services composition the failure of a single service generates an error propagation in the others services involved, and therefore the failure of the system. Such failures often cannot be detected and corrected locally (single service), so it is necessary to develop architectures to enable diagnosis and correction of faults, both at individual (service) as global (composition levels). The middlewares, and particularly reflective middlewares, have been used as a powerful tool to cope with inherent heterogeneous nature of distributed systems in order to give them greater adaptability capacities. In this paper we propose a middleware architecture for the diagnosis of fully distributed service compositions called ARMISCOM, which is not coordinated by any global diagnoser. The diagnosis of faults is performed through the interaction of the diagnoser present in each service composition, and the repair strategies are developed through consensus of each repairer distributed equally in each service composition.",2012,0, 6103,SR-SIM: A fast and high performance IQA index based on spectral residual,"Automatic image quality assessment (IQA) attempts to use computational models to measure the image quality in consistency with subjective ratings. In the past decades, dozens of IQA models have been proposed. Though some of them can predict subjective image quality accurately, their computational costs are usually very high. To meet real-time requirements, in this paper, we propose a novel fast and effective IQA index, namely spectral residual based similarity (SR-SIM), based on a specific visual saliency model, spectral residual visual saliency. SR-SIM is designed based on the hypothesis that an image's visual saliency map is closely related to its perceived quality. Extensive experiments conducted on three large-scale IQA datasets indicate that SR-SIM could achieve better prediction performance than the other state-of-the-art IQA indices evaluated. Moreover, SR-SIM can have a quite low computational complexity. The Matlab source code of SR-SIM and the evaluation results are available online at http://sse.tongji.edu.cn/linzhang/IQA/SR-SIM/SR-SIM.htm.",2012,0, 6104,Anti-ghost of differently exposed images with moving objects,"In a typical image synthesis where multiple differently exposed images are captured for processing, it is important to design an anti-ghost algorithm so as to prevent ghosting artifacts from appearing in the final image. 
An anti-ghost algorithm is usually composed of a detection module and a correction module. In this paper, a new detection module is proposed to detect non-consistent pixels of all input images without predefining any initial reference image. The proposed module is suitable when an interactive mode is desired. In addition, a bidirectional approach is introduced to correct the non-consistent pixels in the correction module. Compared with existing unidirectional correction methods, the proposed bidirectional correction approach uses information from two adjacent images of a detected image to correct its non-consistent pixels. This leads to a quality improvement in the final image.",2012,0, 6105,Automation System for Validation of Configuration and Security Compliance in Managed Cloud Services,"Validation of configuration and security compliance at the time of creating new service is an important part of service management process and governance in most IT delivery organizations. It is performed to ensure that security risks, governance controls and vulnerabilities are proactively managed through the lifecycle of the services, and to guarantee that all discovered problems and issues are addressed and remediated for quality assurance before the services are delivered to customers. The validation process is complex and is typically carried out by following a checklist with questions and answers through manual steps that are time consuming and error prone. This lengthy process is particularly troublesome when providing managed cloud services to enterprise customers with a pre-specified request fulfillment time in SLA. In order to improve the timeliness and quality of cloud services, we have introduced an automation system to orchestrate the validation process with executable scripts to be executed against the services. We will describe a novel policy mechanism to capture exception rules for eliminating possible interference in security configuration contained in the scripts. We will explain how our system is designed and implemented to fulfill the needs of large enterprises from both the service provider's and the service consumer's vantage points.",2012,0, 6106,Efficient Data Tagging for Managing Privacy in the Internet of Things,"The Internet of Things creates an environment where software systems are influenced and controlled by phenomena in the physical world. The goal is invisible and natural interactions with technology. However, if such systems are to provide a high-quality personalised service to individuals, they must by necessity gather information about those individuals. This leads to potential privacy invasion. Using techniques from Information Flow Control, data representing phenomena can be tagged with their privacy properties, allowing a trusted computing base to control access based on sensitivity and the system to reason about the flows of private data. For this to work well, tags must be assigned as soon as possible after phenomena are detected. Tagging within resource-constrained sensors raises worries that computing the tags may be too expensive and that useful tags are too large in relation to the data's size and the data's sensitivity. 
This paper assuages these worries, giving code templates for two small micro controllers (PIC and AVR) that effect meaningful tagging.",2012,0, 6107,Towards a Fault-Tolerant Wireless Sensor Network Using Fault Injection Mechanisms: A Parking Lot Monitoring Case,"A Wireless Sensor Network (WSN) requires a high level of robust and fault tolerant sensing and actuating capabilities, specially when the application aims to gather delicate and urgent data with reasonable latency. Hence, verifying the behavior properties under the presence of faults remains an important step in developing an application over a WSN. A comprehensive study on characterization and understanding of all the possible faults is required in order to generate and inject 'any' known error to the system. In order to ensure appearance of all the faults and possible bugs in the system, conception and developing a fault injector which generates and injects any requested fault to the system is promising. This becomes more important and critical when the fault happens very rarely, while due to Murphy's law it happens certainly along the network life. Considering that occurrence of faults depends heavily on the specifications of the use case, in this paper we concentrate on a sensor network which aims to detect the presence of vehicles on parking lots. We try to categorize and characterize the faults driven by this system as the first step of developing a fault injector.",2012,0, 6108,Design and modeling of a non-blocking checkpointing system,"As the capability and component count of systems increase, the MTBF decreases. Typically, applications tolerate failures with checkpoint/restart to a parallel file system (PFS). While simple, this approach can suffer from contention for PFS resources. Multi-level checkpointing is a promising solution. However, while multi-level checkpointing is successful on today's machines, it is not expected to be sufficient for exascale class machines, which are predicted to have orders of magnitude larger memory sizes and failure rates. Our solution combines the benefits of non-blocking and multi-level checkpointing. In this paper, we present the design of our system and model its performance. Our experiments show that our system can improve efficiency by 1.1 to 2.0x on future machines. Additionally, applications using our checkpointing system can achieve high efficiency even when using a PFS with lower bandwidth.",2012,0, 6109,Cost- and deadline-constrained provisioning for scientific workflow ensembles in IaaS clouds,"Large-scale applications expressed as scientific workflows are often grouped into ensembles of inter-related workflows. In this paper, we address a new and important problem concerning the efficient management of such ensembles under budget and deadline constraints on Infrastructure- as-aService (IaaS) clouds. We discuss, develop, and assess algorithms based on static and dynamic strategies for both task scheduling and resource provisioning. We perform the evaluation via simulation using a set of scientific workflow ensembles with a broad range of budget and deadline parameters, taking into account uncertainties in task runtime estimations, provisioning delays, and failures. We find that the key factor determining the performance of an algorithm is its ability to decide which workflows in an ensemble to admit or reject for execution. 
Our results show that an admission procedure based on workflow structure and estimates of task runtimes can significantly improve the quality of solutions.",2012,0, 6110,A data-driven model for software reliability prediction,"In actual software development, failure data is rarely purely linear or nonlinear; it is usually formed by linear and nonlinear patterns at the same time. Software reliability models (SRMs) can be divided into two main categories: analytical models and data-driven models. Analytical SRMs are proposed based on underlying assumptions about the nature of software faults, the stochastic behavior of the software processes and the development environments. In contrast, the so-called data-driven models, borrowing heavily from artificial intelligence techniques, rely directly on the collected data describing input and output characteristics. Compared to analytical SRMs, data-driven models make far fewer impractical assumptions and are much better able to make abstractions and generalizations of the software failure process. It has been recognized that the autoregressive integrated moving average (ARIMA) model and the support vector machine (SVM) perform fairly well in predicting linear and nonlinear time series data. Therefore, we propose a hybrid approach to software reliability forecasting using both ARIMA and SVM models.",2012,0, 6111,High performance automatic number plate recognition in video streams,"We present a range of image and video analysis techniques that we have developed in connection with license plate recognition. Our methods focus on two areas - efficient image preprocessing to improve the detection rate on low-quality input and combining the detection results from multiple frames to improve the accuracy of the recognized license plates. To evaluate our algorithms, we have implemented a complete ANPR system that detects and reads license plates. The system can process up to 110 frames per second on a single CPU core and scales well to at least 4 cores. The recognition rate varies depending on the quality of the video streams (amount of motion blur, resolution), but approaches 100% for clear, sharp license plate input data. The software is currently marketed commercially as CarID1. Some of our methods are more general and may have applications outside of the ANPR domain.",2012,0, 6112,Quality of Estimations - How to Assess Reliability of Cost Predictions,"Software Project Cost Prediction is one of the unresolved problems of mankind. While today's civil engineering work is more or less under control, software projects are not. Cost overruns are so frequent that it is wise never to trust any initial cost estimate but to make provision for higher cost. Nevertheless, finance managers need reliable estimates in order to be able to fund software and ICT projects without running risks. Estimates are usually readily available - for instance based on functional size and benchmarking. However, the question of how reliable these estimations are is often left out, or answered in a purely statistical manner that gives practitioners no clue what these overall statistical variations mean for them. This paper explains how to make use of Six Sigma's transfer functions that map cost drivers defined by a committee of GUFPI-ISMA onto project cost. Transfer functions reverse the process of estimation: they show how much a project costs under suitable assumptions for the cost drivers. 
If cost drivers can be measured, and transfer functions can be determined with known accuracy, not only can project cost be predicted but also the range and probability for such cost to occur.",2012,0, 6113,Measuring and Evaluating a DotNet Application System to Better Predict Maintenance Effort,"The ISO standard 9126 defines the basic quality criteria for evaluating a software product and suggests a suite of metrics for measuring them; however, it remains for the user of the standard to apply those metrics to his particular situation. This paper describes how the metrics were extended to assess the static quality criteria as well as the complexity of a large Dot Net application. In addition, the size of the software was measured to be able to compare it with similar systems of the same type. The result was a comparative evaluation to aid the owners of that system in planning further maintenance and evolution activities. Besides that, cost estimations were made for maintenance and further development. The measurement project described here is a practical example of how metrics can be applied to assess existing software systems.",2012,0, 6114,Metrics Based Software Supplier Selection - Best Practice Used in the Largest Dutch Telecom Company,"This article provides insight into a 'best practice' used for the selection of software suppliers at the largest Dutch telecom operator, KPN[1]. It explains the metrics rationale applied by KPN when selecting only one preferred supplier (system integrator) per domain instead of the various suppliers that were previously active in each domain. Presently (Q2 2012) the selection and contracting process is entering its final phase. In this paper, the model that was built and used to assess the productivity of the various suppliers and the results of the supplier selection process are discussed. In addition, a number of lessons learned and recommendations are shared.",2012,0, 6115,"Incremental Sampling Process for Actual Function Points Validation in a Contract, An Empirical Experiment","Customer verification of functional size measures provided by the supplier in the acceptance phase is a critical task for the correctness of contract execution. A lack of control by the customer, both in depth and in scope, can lead to relevant deviations of the actual unitary price if compared to that accepted in the bid assignment process, with potential consequences in terms of unfairness or, in some cases, illegality. In this paper we summarize an efficient and well defined approach to validate the supplier's functional size measurements in order to present the validation experiment. This approach was extensively presented at SMEF 2012. The approach, although statistically based, is rigorous, since it defines clear and unambiguous game roles, and efficient, in order to spend the adequate effort to achieve the expected confidence about the supplier's functional size measurement capabilities. The approach consists in applying a variation of the Incremental Sampling Method that allows the customer to tune the validation effort based on the quality level of the size measures provided by the supplier, detected by the gap between these measures and the ones checked and validated on a sampled basis. 
An empirical validation experiment, which is the focus of the present paper, is presented to illustrate the advantages of the approach.",2012,0, 6116,Verification of Spatio-Temporal Role Based Access Control using Timed Automata,"The verification of Spatio-Temporal Role Based Access Control policies (STRBAC) during the early development life cycle improves the security of the software. It helps to identify inconsistencies in the Access Control policies before proceeding to other phases where the cost of fixing defects is augmented. This paper proposes a formal method for an automatic analysis of STRBAC policies. It ensures that the policies are consistent and conflict-free. The method proposed in this paper makes the use of Timed Automata to verify the STRBAC policies. This is done by translating the STRBAC model into Timed Automata, and then the produced Timed Automata is implemented and verified using the model checker UPPAAL. This paper presents a security query expressed using TCTL to detect inconsistency caused due to the interaction between STRBAC policies. With the help of an example, this paper shows how we convert STRBAC model to Timed Automata models and verify the resulting models using the UPPAAL to identify an erroneous design.",2012,0, 6117,Studying volatility predictors in open source software,"Volatile software modules, for the purposes of this work, are defined as those that are significantly more change-prone than other modules in the same system or subsystem. There is significant literature investigating models for predicting which modules in a system will become volatile, and/or are defect-prone. Much of this work focuses on using source code-related characteristics (e.g., complexity metrics) and simple change metrics (e.g., number of past changes) as inputs to the predictive models. Our work attempts to broaden the array of factors considered in such prediction approaches. To this end, we collected data directly from development personnel about the factors they rely on to foresee what parts of a system are going to become volatile. In this paper, we describe a focus group study conducted with the development team of a small but active open source project, in which we asked this very question. The results of the focus group indicate, among other things, that a period of volatility in a particular area of the system is often predicted by a pattern characterized by inactivity in a certain area (resulting in that area becoming less mature than others), increased communication between developers regarding opportunities for improvement in that area, and then the emergence of a champion who takes the initiative to start working on those improvements. The initial changes lead to more changes (both to extend the improvements already made and to fix problems introduced), thus leading to volatility.",2012,0, 6118,Predicting defect numbers based on defect state transition models,"During software maintenance, a large number of defects could be discovered and reported. A defect can enter many states during its lifecycle, such as NEW, ASSIGNED, and RESOLVED. The ability to predict the number of defects at each state can help project teams better evaluate and plan maintenance activities. In this paper, we present BugStates, a method for predicting defect numbers at each state based on defect state transition models. In our method, we first construct defect state transition models using historical data. 
We then derive a stability metric from the transition models to measure a project's defect-fixing performance. For projects with stable defect-fixing performance, we show that we can apply Markovian method to predict the number of defects at each state in future based on the state transition model. We evaluate the effectiveness of BugStates using six open source projects and the results are promising. For example, when predicting defect numbers at each state in December 2010 using data from July 2009 to June 2010, the absolute errors for all projects are less than 28. In general, BugStates also outperforms other related methods.",2012,0, 6119,How many individuals to use in a QA task with fixed total effort?,"Increasing the number of persons working on quality assurance (QA) tasks, e.g., reviews and testing, increases the number of defects detected - but it also increases the total effort unless effort is controlled with fixed effort budgets. Our research investigates how QA tasks should be configured regarding two parameters, i.e., time and number of people. We define an optimization problem to answer this question. As a core element of the optimization problem we discuss and describe how defect detection probability should be modeled as a function of time. We apply the formulas used in the definition of the optimization problem to empirical defect data of an experiment previously conducted with university students. The results show that the optimal choice of the number of persons depends on the actual defect detection probabilities of the individual defects over time, but also on the size of the effort budget. Future work will focus on generalizing the optimization problem to a larger set of parameters, including not only task time and number of persons but also experience and knowledge of the personnel involved, and methods and tools applied when performing a QA task.",2012,0, 6120,A comparison of database fault detection capabilities using mutation testing,"Mutation testing involves systematically generating and introducing faults into an application to improve testing. A quasi-experimental study is reported comparing the fault-detection capabilities of realworld database application test suites to those of an SQL vendor test suite (NIST SQL) based on mutation scores. The higher the mutation score the more successful the test suite will be at detecting faults. The SQLMutation tool was used to generate query mutants from beginner-level sample schemas obtained from three popular real-world database test suite vendors - MySQL, SQL Server, and Oracle. Four SQLMutation operators were applied to both realworld and NIST SQL vendor compliance test suites - SQL Clause (SC), Operator Replacement (OR), NULL (NL) and Identifier Replacement (IR). Two mutation operators, SC and NL generated significantly lower mutation scores in real-world test suites than for those in the vendor test suite. The IR operator generated significantly higher mutation scores in real-world test suites than for those in the vendor test suite. 
The OR operator produced roughly the same mutation scores in both the real-world and vendor test suites.",2012,0, 6121,Methodology to assess the influence of PV systems as a distributed generation technology,"This paper shows a simplified methodology for assess the impacts of photovoltaic systems connected to a supply of low voltage distribution power system, analyzing voltage variations outside the limits according to the NTC 1340, the chargeability and losses through a development software implemented in DIgSILENT. The development includes the characterization of solar resource and ambient temperature to estimate the generation of the photovoltaic system, with this procedure it can find an optimum rate of penetration PV on the electric network under two scenarios: first, a circuit located for residential loads and second, a commercial circuit, both in the city of Bogota, Colombia.",2012,0, 6122,Investigating Automatic Static Analysis Results to Identify Quality Problems: An Inductive Study,"Background: Automatic static analysis (ASA) tools examine source code to discover """"issues"""", i.e. code patterns that are symptoms of bad programming practices and that can lead to defective behavior. Studies in the literature have shown that these tools find defects earlier than other verification activities, but they produce a substantial number of false positive warnings. For this reason, an alternative approach is to use the set of ASA issues to identify defect prone files and components rather than focusing on the individual issues. Aim: We conducted an exploratory study to investigate whether ASA issues can be used as early indicators of faulty files and components and, for the first time, whether they point to a decay of specific software quality attributes, such as maintainability or functionality. Our aim is to understand the critical parameters and feasibility of such an approach to feed into future research on more specific quality and defect prediction models. Method: We analyzed an industrial C# web application using the Resharper ASA tool and explored if significant correlations exist in such a data set. Results: We found promising results when predicting defect-prone files. A set of specific Resharper categories are better indicators of faulty files than common software metrics or the collection of issues of all issue categories, and these categories correlate to different software quality attributes. Conclusions: Our advice for future research is to perform analysis on file rather component level and to evaluate the generalizability of categories. We also recommend using larger datasets as we learned that data sparseness can lead to challenges in the proposed analysis process.",2012,0, 6123,Model-Driven Development of Secure Service Applications,"The development of a secure service application is a difficult task and designed protocols are very error-prone. To develop a secure SOA application, application-independent protocols (e.g. TLS or Web service security protocols) are used. These protocols guarantee standard security properties like integrity or confidentiality but the critical properties are application-specific (e.g. 'a ticket can not be used twice'). For that, security has to be integrated in the whole development process and application-specific security properties have to be guaranteed. This paper illustrates the modeling of a security-critical service application with UML. 
The modeling is part of an integrated software engineering approach that encompasses model-driven development. Using the approach, an application based on service-oriented architectures (SOA) is modeled with UML. From this model, executable code as well as a formal specification to prove the security of the application is generated automatically. Our approach, called SecureMDD, supports the development of security-critical applications and integrates formal methods to guarantee the security of the system. The modeling guidelines are demonstrated with an online banking example.",2012,0, 6124,Three dimension Time-Frequency approach for diagnosing eccentricity faults in Switched Reluctance motor,"This paper presents an analysis of the effects of dynamic air-gap eccentricity on the performance of a 6/4 Switched Reluctance Machine (SRM) through finite element analysis (FEA) based on a FEMM package associated with MATLAB/SIMULINK software. Among the various Time-Frequency methods used for the detection of defects, the Time-Frequency Representation (TFR) is an appropriate tool to detect mechanical failures through torque analysis, allowing a better representation independent of the type of fault. Simulation results of healthy and faulty cases are discussed and illustrate the effectiveness of the proposed approach.",2012,0, 6125,Applying evolution programming Search Based Software Engineering (SBSE) in selecting the best open source software maintainability metrics,"The nature of the Open Source Software development paradigm forces individual practitioners and organizations to adopt software through a trial-and-error approach. This leads to the problem of adopting software and then abandoning it after realizing that it lacks the qualities needed to suit their requirements, or of facing serious challenges in maintaining the software. These problems are compounded by the lack of recognized guidelines to lead practitioners in selecting, out of the dozens of available metrics, the best metric(s) to measure OSS quality. In this study, the novel results provide guidelines that lead to the development of a metrics model that can select the best metric(s) to predict the maintainability of Open Source Software.",2012,0, 6126,Simulation study of a simple flux saturation controller for high-frequency transformer link full-bridge DC-DC converters,"High-Frequency Transformer Link Full-bridge DC-DC converter systems such as those used in plasma cutting applications are prone to transformer flux saturation. It can cause unit shutdown due to over-current protection or even catastrophic failure under extreme situations [1]. This is especially unacceptable in large plasma cutting applications where unexpected production stoppages can lead to severe economic loss to the customer. A simple flux control method that overcomes the disadvantages of current flux saturation control methods has been presented in [2]. Here, the transformer flux is controlled without affecting the dynamics of the main control loop. The proposed method is verified by simulating a 16 kW DC-DC full-bridge converter circuit model in ORCAD-PSPICE software. Results of this exercise show a 50% improvement in dynamic response, a 25% reduction in transformer size and weight, and improvements in system reliability and efficiency when compared with the conventional approach. It is seen that the proposed method can be retro-fitted on an existing power supply whether voltage or current controlled, with minimal change to its circuitry. 
In addition, it can also be extended to converter topologies like the push-pull as well.",2012,0, 6127,Opportunities and challenges of static code analysis of IEC 61131-3 programs,"Static code analysis techniques analyze programs by examining the source code without actually executing them. The main benefits lie in improving software quality by detecting potential defects and problematic code constructs in early development stages. Today, static code analysis is widely used and numerous tools are available for established programming languages like C/C++, Java, C# and others. However, in the domain of PLC programming, static code analysis tools are still rare. In this paper we present an approach and tool support for static code analysis of PLC programs. The paper discusses opportunities static code analysis can offer for PLC programming, it reviews techniques for static analysis, and it describes our tool that implements a rule-based analysis approach for IEC 61131-3 programs.",2012,0, 6128,A status protocol for system-operation in a fault-tolerant system Verification and testing with SPIN,This paper presents a status protocol for a fault-tolerant distributed real-time system. The protocol aims to give all nodes a consistent view of the status of processing operations during one communication cycle; despite the occurrence of asymmetric omission failures. The system consists of nodes interconnected with a time-triggered network. A part of the protocol is performed only on-demand i.e. when failure is detected and can thus make use of event-triggered messages in e.g. FlexRay. The protocol is studied in several configurations of nodes and processes. Model checking with SPIN shows that it is not possible to guarantee a consistent decision when more than one failure occurs. SPIN is then used to enumerate the success-ratio (at least 90%) of the protocol in failure scenarios for a number of configurations of the protocol.,2012,0, 6129,Reliability correlation between physical and virtual cores at the ISA level,"The proliferation of highly-configurable FPGA technology has allowed the implementation of dedicated systems of diverse configurations and fueled the software to hardware migration paradigm. This work demonstrates how the hardware implementation of virtualization technology affects the system reliability at several levels of abstraction. By correlating faults between the physical and virtual, the reliability impact of hardware-assisted virtualization is shown, as well as how runtime faults are capable of breaching virtualization. ISA profiling is used to assess reliability at early design stages and how its use can serve as a robustness guideline for hardware and software designers is explained.",2012,0, 6130,Semantic design and integration of simulation models in the industrial automation area,"Simulations are software tools approximating and predicting the behavior of real industrial plants. Unlike real plants, the utilization of simulations cannot cause damages and it saves time and costs during series of experiments. A shortcoming of current simulation models is the complicated runtime integration into legacy industrial systems and platforms, as well as ad-hoc design phase, introducing manual and error-prone work. This paper contributes to improve the efficiency of simulation model design and integration. It utilizes a semantic knowledge base, implemented by ontologies and their mappings. 
The integration uses the Automation Service Bus and the paper explains how to configure the runtime integration level semantically. The main contributions are the concept of semantic configuration of the service bus and the workflows of simulation design and integration.",2012,0, 6131,An autonomous recovery software module for protecting embedded OS and application software,"Embedded systems have been widespread for novel technologies which bring people more convenience and hence become more relevant to our life. When embedded systems are utilized on safety-critical applications, their availability and reliability issues must be addressed and systems must be protected by effective techniques. One primary cause of the embedded system crash is the data corruption error. In this study, the embedded system crashes caused by data corruption errors are resolved by an autonomous recovery software methodology (ARSM). ARSM is composed by system monitor, bad block salvage, autonomous recovery mechanism and OS initial backup. ARSM performs all-operation system monitoring. Once any application software and operation system crash is detected, the autonomous recovery mechanism will be activated to recover the embedded system back to normal operation. For verification of the ARSM, we adopt a car event data recorder to be the case demonstration, and generate data corruption errors to validate the efficiency of the ARSM.",2012,0, 6132,Reliability asssessment of SIPS based on a Safety Integrity Level and Spurious Trip Level,"As the number and complexity of System Integrity Protection Schemes (SIPS) in operation increases very rapidly, it must be ensured that their performance meets the reliability requirements of electrical utilities, in terms of dependability and security. A procedure based on Markov Modeling and Fault Tree Analysis is proposed for assessing SIPS reliability. Many operators tend to have SIPS permanently in service; this reduces the probability the arming software or the human operator will fail to arm the SIPS. Whilst this can decrease the probability of dependability-based misoperation, it may increase the probability of security-based misoperation. Therefore, the impact of having SIPS always armed is examined and compared with the impact of arming the schemes only when required. In addition, two reliability indices are introduced for quantifying the level of SIPS reliability: Safety Integrity Level and Spurious Trip Level. The proposed method is illustrated using the South of Lugo N-2 SIPS, which is part of the South California Edison grid.",2012,0, 6133,QoS and performance optimization with VM provisioning approach in Cloud computing environment,"Cloud computing is the computing paradigm which delivers IT resources as a service, hence user are free from setting up the infrastructure and managing hardware etc. Cloud Computing provides dynamic provisioned resources and presented as one or more integrated computing resources based on constraints. The process of provisioning in Clouds requires the application provisioner to compute the best software and hardware configuration so as to ensure that Quality of Services (QoS) target of application services are achieved, without compromising efficiency and utilization of whole system. This paper presents a dynamic provisioning technique, adapting to peak-to-peak workload changes related to applications to offer end-users guaranteed Quality of Services (QoS) in highly dynamic environments. 
Behavior and performance of applications and Cloud-based IT resources are modeled to adaptively serve end-user requests. Analytical performance (queuing network system model) and workload information are used to supply intelligent input about the physical infrastructure which causes improvement in efficiency. VM provisioning technique detects changes in workload intensity that occurs over time and makes appropriate changes in allocations of multiple virtualized IT resources to achieve application QoS targets.",2012,0, 6134,To what extent could we detect field defects? an empirical study of false negatives in static bug finding tools,"Software defects can cause much loss. Static bug-finding tools are believed to help detect and remove defects. These tools are designed to find programming errors; but, do they in fact help prevent actual defects that occur in the field and reported by users? If these tools had been used, would they have detected these field defects, and generated warnings that would direct programmers to fix them? To answer these questions, we perform an empirical study that investigates the effectiveness of state-of-the-art static bug finding tools on hundreds of reported and fixed defects extracted from three open source programs: Lucene, Rhino, and AspectJ. Our study addresses the question: To what extent could field defects be found and detected by state-of-the-art static bug-finding tools? Different from past studies that are concerned with the numbers of false positives produced by such tools, we address an orthogonal issue on the numbers of false negatives. We find that although many field defects could be detected by static bug finding tools, a substantial proportion of defects could not be flagged. We also analyze the types of tool warnings that are more effective in finding field defects and characterize the types of missed defects.",2012,0, 6135,An automated approach to forecasting QoS attributes based on linear and non-linear time series modeling,"Predicting future values of Quality of Service (QoS) attributes can assist in the control of software intensive systems by preventing QoS violations before they happen. Currently, many approaches prefer Autoregressive Integrated Moving Average (ARIMA) models for this task, and assume the QoS attributes' behavior can be linearly modeled. However, the analysis of real QoS datasets shows that they are characterized by a highly dynamic and mostly nonlinear behavior to the extent that existing ARIMA models cannot guarantee accurate QoS forecasting, which can introduce crucial problems such as proactively triggering unrequired adaptations and thus leading to follow-up failures and increased costs. To address this limitation, we propose an automated forecasting approach that integrates linear and nonlinear time series models and automatically, without human intervention, selects and constructs the best suitable forecasting model to fit the QoS attributes' dynamic behavior. Using real-world QoS datasets of 800 web services we evaluate the applicability, accuracy, and performance aspects of the proposed approach, and results show that the approach outperforms the popular existing ARIMA models and improves the forecasting accuracy by on average 35.4%.",2012,0, 6136,Predicting recurring crash stacks,"Software crash is one of the most severe bug manifestations and developers want to fix crash bugs quickly and efficiently. The Crash Reporting System (CRS) is widely deployed for this purpose. 
Even with the help of CRS, fixes are largely by manual effort, which is error-prone and results in recurring crashes even after the fixes. Our empirical study reveals that 48% of fixed crashes in Firefox CRS are recurring mostly due to incomplete or missing fixes. It is desirable to automatically check if a crash fix misses some reported crash traces at the time of the first fix. This paper proposes an automatic technique to predict recurring crash traces. We first extract stack traces and then compare them with bug fix locations to predict recurring crash traces. Evaluation using the real Firefox crash data shows that the approach yields reasonable accuracy in prediction of recurring crashes. Had our technique been deployed earlier, more than 2,225 crashes in Firefox 3.6 could have been avoided.",2012,0, 6137,Code patterns for automatically validating requirements-to-code traces,"Traces between requirements and code reveal where requirements are implemented. Such traces are essential for code understanding and change management. Unfortunately, traces are known to be error prone. This paper introduces a novel approach for validating requirements-to-code traces through calling relationships within the code. As input, the approach requires an executable system, the corresponding requirements, and the requirements-to-code traces that need validating. As output, the approach identifies likely incorrect or missing traces by investigating patterns of traces with calling relationships. The empirical evaluation of four case study systems covering 150 KLOC and 59 requirements demonstrates that the approach detects most errors with 85-95% precision and 82-96% recall and is able to handle traces of varying levels of correctness and completeness. The approach is fully automated, tool supported, and scalable.",2012,0, 6138,Can I clone this piece of code here?,"While code cloning is a convenient way for developers to reuse existing code, it may potentially lead to negative impacts, such as degrading code quality or increasing maintenance costs. Actually, some cloned code pieces are viewed as harmless since they evolve independently, while some other cloned code pieces are viewed as harmful since they need to be changed consistently, thus incurring extra maintenance costs. Recent studies demonstrate that neither the percentage of harmful code clones nor that of harmless code clones is negligible. To assist developers in leveraging the benefits of harmless code cloning and/or in avoiding the negative impacts of harmful code cloning, we propose a novel approach that automatically predicts the harmfulness of a code cloning operation at the point of performing copy-and-paste. Our insight is that the potential harmfulness of a code cloning operation may relate to some characteristics of the code to be cloned and the characteristics of its context. Based on a number of features extracted from the cloned code and the context of the code cloning operation, we use Bayesian Networks, a machine-learning technique, to predict the harmfulness of an intended code cloning operation. We evaluated our approach on two large-scale industrial software projects under two usage scenarios: 1) approving only cloning operations predicted to be very likely of no harm, and 2) blocking only cloning operations predicted to be very likely of harm. In the first scenario, our approach is able to approve more than 50% cloning operations with a precision higher than 94.9% in both subjects. 
In the second scenario, our approach is able to avoid more than 48% of the harmful cloning operations by blocking only 15% of the cloning operations for the first subject, and avoid more than 67% of the cloning operations by blocking only 34% of the cloning operations for the second subject.",2012,0, 6139,Using GUI ripping for automated testing of Android applications,"We present AndroidRipper, an automated technique that tests Android apps via their Graphical User Interface (GUI). AndroidRipper is based on a user-interface driven ripper that automatically explores the app's GUI with the aim of exercising the application in a structured manner. We evaluate AndroidRipper on an open-source Android app. Our results show that our GUI-based test cases are able to detect severe, previously unknown, faults in the underlying code, and the structured exploration outperforms a random approach.",2012,0, 6140,Detection of embedded code smells in dynamic web applications,"In dynamic Web applications, there often exists a type of code smells, called embedded code smells, that violate important principles in software development such as software modularity and separation of concerns, resulting in much maintenance effort. Detecting and fixing those code smells is crucial yet challenging since the code with smells is embedded and generated from the server-side code. We introduce WebScent, a tool to detect such embedded code smells. WebScent first detects the smells in the generated code, and then locates them in the server-side code using the mapping between client-side code fragments and their embedding locations in the server program, which is captured during the generation of those fragments. Our empirical evaluation on real-world Web applications shows that 34%-81% of the tested server files contain embedded code smells. We also found that the source files with more embedded code smells are likely to have more defects and scattered changes, thus potentially require more maintenance effort.",2012,0, 6141,Predicting common web application vulnerabilities from input validation and sanitization code patterns,"Software defect prediction studies have shown that defect predictors built from static code attributes are useful and effective. On the other hand, to mitigate the threats posed by common web application vulnerabilities, many vulnerability detection approaches have been proposed. However, finding alternative solutions to address these risks remains an important research problem. As web applications generally adopt input validation and sanitization routines to prevent web security risks, in this paper, we propose a set of static code attributes that represent the characteristics of these routines for predicting the two most common web application vulnerabilities-SQL injection and cross site scripting. In our experiments, vulnerability predictors built from the proposed attributes detected more than 80% of the vulnerabilities in the test subjects at low false alarm rates.",2012,0, 6142,Software defect prediction using semi-supervised learning with dimension reduction,"Accurate detection of fault prone modules offers the path to high quality software products while minimizing non essential assurance expenditures. This type of quality modeling requires the availability of software modules with known fault content developed in similar environment. Establishing whether a module contains a fault or not can be expensive. 
The basic idea behind semi-supervised learning is to learn from a small number of software modules with known fault content and supplement model training with modules for which the fault information is not available. In this study, we investigate the performance of semi-supervised learning for software fault prediction. A preprocessing strategy, multidimensional scaling, is embedded in the approach to reduce the dimensional complexity of software metrics. Our results show that the semi-supervised learning algorithm with dimension-reduction preforms significantly better than one of the best performing supervised learning algorithms, random forest, in situations when few modules with known fault content are available for training.",2012,0, 6143,Healing online service systems via mining historical issue repositories,"Online service systems have been increasingly popular and important nowadays, with an increasing demand on the availability of services provided by these systems, while significant efforts have been made to strive for keeping services up continuously. Therefore, reducing the MTTR (Mean Time to Restore) of a service remains the most important step to assure the user-perceived availability of the service. To reduce the MTTR, a common practice is to restore the service by identifying and applying an appropriate healing action (i.e., a temporary workaround action such as rebooting a SQL machine). However, manually identifying an appropriate healing action for a given new issue (such as service down) is typically time consuming and error prone. To address this challenge, in this paper, we present an automated mining-based approach for suggesting an appropriate healing action for a given new issue. Our approach generates signatures of an issue from its corresponding transaction logs and then retrieves historical issues from a historical issue repository. Finally, our approach suggests an appropriate healing action by adapting healing actions for the retrieved historical issues. We have implemented a healing suggestion system for our approach and applied it to a real-world product online service that serves millions of online customers globally. The studies on 77 incidents (severe issues) over 3 months showed that our approach can effectively provide appropriate healing actions to reduce the MTTR of the service.",2012,0, 6144,MaramaAI: tool support for capturing and managing consistency of multi-lingual requirements,"Requirements captured by Requirements Engineers are commonly inconsistent with their client's intended requirements and are often error prone especially if the requirements are written in multiple languages. We demonstrate the use of our automated inconsistency-checking tool MaramaAI to capture and manage the consistency of multi-lingual requirements in both the English and Malay languages for requirements engineers and clients using a round-trip, rapid prototyping approach.",2012,0, 6145,GZoltar: an eclipse plug-in for testing and debugging,"Testing and debugging is the most expensive, error-prone phase in the software development life cycle. Automated testing and diagnosis of software faults can drastically improve the efficiency of this phase, this way improving the overall quality of the software. In this paper we present a toolset for automatic testing and fault localization, dubbed GZoltar, which hosts techniques for (regression) test suite minimization and automatic fault diagnosis (namely, spectrum-based fault localization). 
The toolset provides the infrastructure to automatically instrument the source code of software programs to produce runtime data. Subsequently the data was analyzed to both minimize the test suite and return a ranked list of diagnosis candidates. The toolset is a plug-and-play plug-in for the Eclipse IDE to ease world-wide adoption.",2012,0, 6146,Software Defect Prediction Scheme Based on Feature Selection,"Predicting defect-prone software modules accurately and effectively are important ways to control the quality of a software system during software development. Feature selection can highly improve the accuracy and efficiency of the software defect prediction model. The main purpose of this paper is to discuss the best size of feature subset for building a prediction model and prove that feature selection method is useful for establishing software defect prediction model. Mutual information is an outstanding indicator of relevance between variables, and it has been used as a measurement in our feature selection algorithm. We also introduce a nonlinear factor to our evaluation function for feature selection to improve its performance. The results of our feature selection algorithm are validated by different machine learning methods. The experiment results show that all the classifiers achieve higher accuracy by using the feature subset provided by our algorithm.",2012,0, 6147,EasyBuild: Building Software with Ease,"Maintaining a collection of software installations for a diverse user base can be a tedious, repetitive, error-prone and time-consuming task. Because most end-user software packages for an HPC environment are not readily available in existing OS package managers, they require significant extra effort from the user support team. Reducing this effort would free up a large amount of time for tackling more urgent tasks. In this work, we present EasyBuild, a software installation framework written in Python that aims to support the various installation procedures used by the vast collection of software packages that are typically installed in an HPC environment - catering to widely different user profiles. It is built on top of existing tools, and provides support for well-established installation procedures. Supporting customised installation procedures requires little effort, and sharing implementations of installation procedures becomes very easy. Installing software packages that are supported can be done by issuing a single command, even if dependencies are not available yet. Hence, it simplifies the task of HPC site support teams, and even allows end-users to keep their software installations consistent and up to date.",2012,0, 6148,"Abstract: cTuning.org: Novel Extensible Methodology, Framework and Public Repository to Collaboratively Address Exascale Challenges","Innovation in science and technology is vital for our society and requires faster, more power efficient and reliable computer systems. However, designing and optimizing such systems has become intolerably complex, ad-hoc, costly and error prone due to ever increasing number of available design and optimization choices combined with complex interactions between all software and hardware components, multiple strict requirements placed on characteristics of new computer systems, and a large number of ever-changing and often incompatible analysis and optimization tools. 
Auto-tuning, run-time adaptation and machine learning based approaches have been demonstrating good promise to address above challenges for more than a decade but are still far from the widespread production use due to unbearably long exploration and training times, lack of a common experimental methodology, and lack of public repositories for unified data collection, analysis and mining.",2012,0, 6149,"Poster: Collective Tuning: Novel Extensible Methodology, Framework and Public Repository to Collaboratively Address Exascale Challenges","Designing and optimizing novel computing systems became intolerably complex, ad-hoc, costly and error prone due to an unprecedented number of available tuning choices, and complex interactions between all software and hardware components. I present a novel holistic methodology, extensible infrastructure and public repository (cTuning.org and Collective Mind) to overcome the rising complexity of computer systems by distributing their characterization and optimization among multiple users. This technology effectively combines online auto-tuning, run-time adaptation, data mining and predictive modeling to collaboratively analyze thousands of codelets and datasets, explore large optimization spaces and detect abnormal behavior. It then extrapolates collected knowledge to suggest program optimizations, run-time adaptation scenarios or architecture designs to balance performance, power consumption and other characteristics. This technology has been recently successfully validated and extended in several academic and industrial projects with NCAR, Intel Exascale Lab, IBM and CAPS Entreprise, and we believe that it will be vital for developing future Exascale systems.",2012,0, 6150,Comparison of genome-scale reconstructions using ModeRator,"The Computational Intelligence is one of the main tools in biochemical network modeling that help predict or optimize engineering means of objectives to achieve. For this purpose, reconstructions (which tend to increase in number and size) of different genome-scale metabolic networks can be used. Consequently, realizing different tasks it is necessary to evaluate alternative models and to assess the quality, similarity and usefulness of the combination. This article provides an in depth look into reconstruction comparison software tool ModeRator for detection of inconsistencies, duplicate reactions and visualization of comparison results. A case study shows comparison results of two representative genome-scale metabolic network reconstructions containing 600 and 747 reactions. The obtained results show how using various options the threshold of comparison strictness can be lowered to reveal similar or probably equal reactions. The application, user manual and sample reconstructions can be downloaded from http://biosystems.lv/moderator2/. The ModeRator2 is implemented in Python and is freely available.",2012,0, 6151,Market-Awareness in Service-Based Systems,"Service-based systems are applications built by composing pre-existing services. During design time and according to the specifications, a set of services is selected. Both, service providers and consumers exist in a service market that is constantly changing. Service providers continuously change their quality of services (QoS), and service consumers can update their specifications according to what the market is offering. Therefore, during runtime, the services are periodically and manually checked to verify if they still satisfy the specifications. 
Unfortunately, humans are overwhelmed by the degree of change exhibited by the service market. Consequently, verification of the compliance specification and execution of the corresponding adaptations when deviations are detected cannot be carried out in a manual fashion. In this work, we propose a framework to enable online awareness of changes in the service market in both consumers and providers by representing them as active software agents. At runtime, consumer agents concretize QoS specifications according to the available market knowledge. Service agents are collectively aware of themselves and of the consumers' requests. Moreover, they can create and maintain virtual organizations to react actively to demands that come from the market. In this paper we show preliminary results that allow us to conclude that the creation and adaptation of service-based systems can be carried out by a self-organized service market system.",2012,0, 6152,Visual field monitoring of road defect and modeling of pavement road vibration from moving truck traffic,"A number of flexible pavement structures experience deterioration due to high traffic volume and growing weights. Thus, there is a need to model pavement responses to various types of overweight truck traffic by taking into account axle loads, configuration and traffic operations in order to provide a comprehensive understanding and to assess the existing pavement performance and expected service life. The data used are 16-hour traffic volumes at these stations, collected twice a year for seven years by the Highway Planning Unit (HPU) using manual counting, and are used to investigate the relationship between the volume of heavy vehicles and the damage that occurs. The empirical modeling was performed to ascertain the relationships among three parameters: vertical ground-borne vibration, vehicular speed and the traffic noise generated by the movement of vehicles. As a tool to assess the performance of the pavement based on the vibration index level, a new procedure has been set up to determine the relationship of the vibration index with speed, noise and pavement defects by developing three multiple linear regression models using advanced statistical software.",2012,0, 6153,Application DANP with MCDM model to explore smartphone software,"Understanding the behavior of smartphone online application software helps predict whether the software will be adopted by users and guides providers in enhancing its functions. A wide range of criteria are used to assess smartphone software quality, but most of these criteria have interdependent or interactive characteristics, which can make it difficult to effectively analyze and improve smartphone use intention. The purpose of this study is to address this issue using a hybrid MCDM (multiple criteria decision-making) approach that includes the DEMATEL (decision-making trial and evaluation laboratory) and DANP (DEMATEL-based analytic network process) methods to achieve an optimal solution. By exploring the influential interrelationships between criteria, the results serve as a reference for the mobile communication industry and related value-added service content providers with respect to their operations. 
This approach can be used to solve interdependence and feedback problems, allowing for greater satisfaction of the actual needs of the mobile communication industry.",2012,0, 6154,A process for clouds services procurement based on model and QoS,"A relevant challenge for cloud computing is related to quality control of the services available. Cloud providers sometimes just deliver services, but do not clearly define quality of service guarantees. In addition, each provider uses a particular process to provide services. Aiming to define a service procurement process for clouds, this paper proposes an approach based on a cloud environment model, considering service quality preservation. The proposed process will use an environment model containing all relevant information to create a virtual workspace, taking into account hardware and software requirements and quality parameters, all of them specified by users. From this model, it will be possible to automatically provide platform and infrastructure as a service. Agreement negotiation happens during the service acquisition process through automated agents that create the services and monitor their quality attributes, generating a less error-prone environment and increasing customer satisfaction.",2012,0, 6155,Research on risk control system in regional power grid,"Today's society places increasingly high demands on the level and quality of electricity supply service, which depend on the safe and stable operation of power grids. A regional power grid faces the majority of users directly, and since the active power it can modulate is relatively small, operational security risk is always present. To effectively control this risk during operation and improve the ability to withstand it, it is very important to build a risk control system for the regional power grid. Based on the characteristics of the regional power grid itself, this paper describes the design principles of the regional power grid risk management and control system and its software and hardware architecture. It mainly introduces five functions of the system: historical data reading, grid risk identification and assessment, comprehensive grid risk assessment, grid risk control and decision-making, and intelligent early warning, together with the research content and technical policy needed to realize these five aspects. Through the establishment of the risk management and control system, the regional power grid can effectively analyse, summarize and recognise the various potential or inherent risk factors affecting regional grid security and stability. At the same time, the probability of occurrence of the various risk factors and the severity of their impact on the regional power grid are quantified; different risk indicators are formed and unified to ultimately determine the level of risk. Power companies can then establish appropriate contingency plans and emergency response programs according to the level of risk, which reduces the risk of grid operation and improves supply service levels and enterprise economic efficiency.",2012,0, 6156,Intelligent online monitoring of power capacitor complete equipment in substation,"The general structure of the capacitor complete equipment inside the substation and the internal structure of a single capacitor are introduced and analyzed in this paper. 
Based on this, a mathematical model is established to simulate several possible types of damage to the internal components of a single capacitor by using computing software, and then some applicable conclusions are drawn. In accordance with China's overall trend of developing the smart grid, a system of intelligent online monitoring applicable to the substation capacitor complete equipment is designed, so that fault symptoms of the power capacitor can be detected in advance and the fault can be analyzed and located.",2012,0, 6157,Modular based multiple test case prioritization,"A cost- and time-effective, reliable test case prioritization technique is a pressing need for today's software industry. Prioritizing test cases for the entire program consumes considerable time, and selecting test cases over the whole software also affects test performance. In order to alleviate the above problem, a new methodology using modular based test case prioritization is proposed for regression testing. In this method the program is divided into multiple modules. The test cases corresponding to each module are prioritized first. In the second stage, the individual modular based prioritized test suites are combined together and further prioritized for the whole program. This method is verified for fault coverage and compared with the overall program test case prioritization method. The proposed method is assessed using three standard applications, namely a University Students Monitoring System, a Hospital Management System, and an Industrial Process Operation System. The empirical studies show that the proposed algorithm performs significantly well. The superiority of the proposed method is also highlighted.",2012,0, 6158,Predicting fault-prone software modules using feature selection and classification through data mining algorithms,"Software defect detection has been an important topic of research in the field of software engineering for more than a decade. This research work aims to evaluate the performance of supervised machine learning techniques on predicting defective software through data mining algorithms. This paper places emphasis on the performance of classification algorithms in categorizing seven datasets (CM1, JM1, MW1, KC3, PC1, PC2, PC3 and PC4) under two classes namely Defective and Normal. In this study, publicly available data sets from different organizations are used. This permitted us to explore the impact of data from different sources on different processes for finding appropriate classification models. We propose a computational framework using data mining techniques to detect the existence of defects in software components. The framework comprises data pre-processing, data classification and classifier evaluation. In this paper, we report the performance of twenty classification algorithms on seven publicly available datasets from the NASA MDP Repository. The Random Tree classification algorithm produced 100 percent accuracy in classifying the datasets and hence the features selected by this technique were considered to be the most significant features. The results were validated with suitable test data.",2012,0, 6159,"Quality Metrics in optical modulation analysis: EVM and its relation to Q-factor, OSNR, and BER","The quality of optical signals is a very important parameter in optical communications. Several metrics are in common use, like optical signal-to-noise power ratio (OSNR), Q-factor, error vector magnitude (EVM) and bit error ratio (BER).
A measured raw BER is not necessarily useful to predict the final BER after soft-decision forward error correction (FEC), if the statistics of the noise leading to errors is unknown. In this respect the EVM is superior, as it allows an estimation of the error statistics. We compare various metrics analytically, by simulation, and through experiments. We employ six quadrature amplitude modulation (QAM) formats at symbol rates of 20 GBd and 25 GBd. The signals were generated by a software-defined transmitter. We conclude that for optical channels with additive Gaussian noise the EVM metric is a reliable quality measure. For nondata-aided QAM reception, BER in the range 10-6...10-2 can be reliably estimated from measured EVM.",2012,0, 6160,Finding focused itemsets from software defect data,"Software product measures have been widely used to predict software defects. Though these measures help develop good classification models, studies propose that relationship between software measures and defects still needs to be investigated. This paper investigates the relationship between software measures and the defect prone modules by studying associations between the two. The paper identifies the critical ranges of the software measures that are strongly associated with defects across five datasets of PROMISE repository. The paper also identifies the ranges of the measures that do not necessarily contribute towards defects. These results are supported by information gain based ranking of software measures.",2012,0, 6161,Lessons Learnt in the Implementation of CMMI Maturity Level 5,"CMMI has proven benefits in software process improvement. Typically, organisations that achieve a CMMI level rating improve their performance. However, CMMI implementation is not trivial, in particular for high maturity levels, and not all organisations achieve the expected results. Certain CMMI implementation problems may remain undetected by SCAMPISM since only a sample of the organisation is analysed during the appraisal and assessing the quality of implementation of some practices may be difficult. In this paper we present the case of three CMMI level 5 organisations. From the lessons learnt and based on an extensive bibliographic research, we identify a set of problems and difficulties that organisations willing to implement CMMI should be aware of and provide a set of recommendations to help avoid them. As future research we will develop a framework to help to evaluate the quality of implementation of CMMI practices.",2012,0, 6162,A Metamodel-Based Approach for Customizing and Assessing Agile Methods,"In today's dynamic market environments, producing high quality software rapidly and efficiently is crucial. In order to allow fast and reliable development processes, several agile methodologies have been designed and are now quite popular. Although existing agile methodologies are abundant, companies are increasingly interested in the construction of their own customized methods to fit their specific environment. In this paper, we investigate how agile methods can be constructed in-house to address specific software process needs. First, we examine a case study focusing on the tailoring of two agile methodologies, XP and Scrum. Then, we focus on the high-level scope of any agile method customization and investigate an approach based on the Situational Method Engineering (SME) paradigm that includes measurement concepts for constructing context specific agile methodologies. 
We also examine several existing metamodels proposed for use in SME. Finally, we introduce an agile metamodel designed to support the construction of agile methods and relying on measurements to provide guidance to agile methodologists during the construction phase and throughout the development process itself.",2012,0, 6163,Using Association Rules to Identify Similarities between Software Datasets,"A number of V&V datasets are publicly available. These datasets have software measurements and defectiveness information regarding the software modules. To facilitate V&V, numerous defect prediction studies have used these datasets and have detected defective modules effectively. Software developers and managers can benefit from the existing studies to avoid analogous defects and mistakes if they are able to find similarity between their software and the software represented by the public datasets. This paper identifies the similar datasets by comparing association patterns in the datasets. The proposed approach finds association rules from each dataset and identifies the overlapping rules from the 100 strongest rules from each of the two datasets being compared. Afterwards, average support and average confidence of the overlap is calculated to determine the strength of the similarity between the datasets. This study compares eight public datasets and results show that KC2 and PC2 have the highest similarity 83% with 97% support and 100% confidence. Datasets with similar attributes and almost same number of attributes have shown higher similarity than the other datasets.",2012,0, 6164,Developing a Process Assessment Model for Technological and Business Competencies on Software Development,"This article describes the design, development, validation and results of a Process Assessment Model for assessing Technological and Business Competencies on Software Development. The model follows the ISO/IEC 15504 (SPICE) requirements for Process Assessment Models. The development of this model follows the PRO2PI Method Framework for Engineering Process Models as a methodology. In the first phases the concept to be assessed was defined in terms of technological and business competencies on software development. In order to identify these competencies the processes used to develop the software systems are identified and analyzed. Model's versions have been used in thirteen software intensive organizations.",2012,0, 6165,Formally Specifying Requirements with RSL-IL,"To mediate and properly support the interplay between the domains of business stakeholders and the development team, requirements must be documented and maintained in a rigorous manner. Furthermore, to effectively communicate the viewpoints of different stakeholders, it is of the utmost importance to provide complementary views that support a better understanding of the intended software system's requirements. However, the quality of requirements specifications and related artifacts strongly depends on the expertise of whoever performs these human-intensive and error-prone activities. This paper introduces RSL-IL, a domain-specific language that can be used to formally specify the requirements of software systems. 
The formal semantics of RSL-IL constructs enable further computations on its requirements representations, such as the automatic verification and generation of complementary views that support stakeholders during requirements validation.",2012,0, 6166,A Quality Model for Spreadsheets,"In this paper we present a quality model for spreadsheets based on the ISO/IEC 9126 standard that defines a generic quality model for software. To each of the software characteristics defined in the ISO/IEC 9126, we associate an equivalent spreadsheet characteristic. Then, we propose a set of spreadsheet specific metrics to assess the quality of a spreadsheet in each of the defined characteristics. To obtain the normal distribution of expected values for a spreadsheet in each of the proposed metrics, we have executed them in the widely used EUSES spreadsheet corpus. Then, we quantify each characteristic of our quality model after computing the values of our metrics, and we define quality scores for the different ranges of values. Finally, to automate the quality assessment of a given spreadsheet, according to our quality model, we have integrated the computation of the metrics it includes in both a batch and a web-based tool.",2012,0, 6167,Structuring and Verifying Requirement Specifications through Activity Diagrams to Support the Semi-automated Generation of Functional Test Procedures,"The higher the quality of the specification document, the lower the effort of its translation into design models and testing plans. Besides, an adequate level of abstraction to promote such translations must be described. Therefore, to ensure the quality of requirements specifications it is strategic to develop high quality software applications. So, in this paper a model-based approach to support the correctness, structuring, and translation of functional requirements specifications is described. This approach consists of facilities to build and inspect requirements specifications based on activity diagrams (capturing use cases), and derive functional tests from them. A tool to model and check the activity diagram, a checklist-based inspection technique and a test procedure generation tool form it. This approach was assessed in experimental studies that indicated its feasibility in specification time and a significant reduction of defects in the specified use cases when compared to ad-hoc approaches.",2012,0, 6168,Usability Evaluation of Domain-Specific Languages,"Domain-Specific Languages (DSLs) are claimed to bring important productivity improvements to developers, when compared to General-Purpose Languages (GPLs). The increased Usability is regarded as one of the key benefits of DSLs when compared to GPLs, and has an important impact on the achieved productivity of the DSL users. So, it is essential to build in good usability while developing the DSL. The purpose of this proposal is to contribute to the systematic activity of Software Language Engineering by focusing on the issue of the Usability evaluation of DSLs. Usability evaluation is often skipped, relaxed, or at least omitted from papers reporting development of DSLs. We argue that a systematic approach based on User Interface experimental validation techniques should be used to assess the impact of new DSLs. For that purpose, we propose to merge common Usability evaluation processes with the DSL development process. 
In order to provide reliable metrics and tools we should reuse and identify good practices that exist in Human-Computer Interaction community.",2012,0, 6169,A Software Framework for Supporting Ubiquitous Business Processes: An ANSI/ISA-95 Approach,"Nowadays, organizations to survive competitively they need to be, innovative and efficient. The way the Internet has been expanding along with other technological changes is leading us to a future in which all the objects that surround us will be seamlessly integrated into information networks. The possibility to implement concepts related with the ubiquitous computing in the business process-level will influence how they are designed, structured, monitored, and managed. One of the most remarkable possibilities of ubiquitous computing can be the real-time monitoring of a particular business process: it should be possible to analyze the flow of materials and information, identify possible points of failure or improve energetic efficiency with a small delay on they occur in reality. Currently, there is no direct and automated link between ubiquitous business processes descriptions and their physical executions which, frequently, promotes the occurrence of a discrepancy between the planned modes of operation and the executed ones. The ubiquitous business processes will enable a narrowing between the real (objects) and virtual (models) world and the possibility to create adaptive business processes that can predict failures, adapting themselves to changes in the environment is an attractive challenge. In this PhD thesis, we will propose a new software framework to monitor real-time executions of ubiquitous industrial business processes.",2012,0, 6170,"Modeling Organizational Information System Architecture Using """"Complex Networks"""" Concepts","Organizations live in a world where interdependence, self-organization and emergence are factors for agility, adaptability and flexibility plunged into networks. Software-based information systems go into a service oriented architecture direction and the same goes to Infrastructures where services are become structures available in networks. Inspired into empirical studies of networked systems such as Internet, social networks, and biological networks, researchers have in recent years developed a variety of techniques and models to help us structurally understand or predict the behavior of these systems. Those findings are characterized by been supported on the """"complex networks"""" concepts. On this PhD research we present the use of the concepts of complex networks from physics to develop organizational information system architectural models, as requirements modeling technique. The research is about the structure and function of networks and its use for modeling organizational information systems architectures by using a combination of empirical methods, analysis, and computer simulations.",2012,0, 6171,Human perceived quality-of-service for multimedia applications,"Usage of multimedia applications is increasing day by day in human social life, medical, military and businesses. Providing the expected quality of service (QoS) to end users is a challenging work for multimedia experts and companies in high technology environment. Delivering the good quality of services (QoS) to end users in distributed systems is based on multiple reliable components of software and hardware. 
Furthermore, human-perceived quality of service also depends on multiple factors such as the multimedia application platform, middleware components and quality of service component architectures. This work addresses the quality of service component architecture, particularly the assessment of the quality of service as perceived by end users. This paper discusses the major components of the QoS architecture from the perspective of the quality perceived by end users and describes the role of objective and subjective methods. For testing the quality of service of multimedia applications, this paper evaluates subjective and objective methods and selects the subjective method for assessing video quality by end users. Furthermore, it describes the experimental results and the data collected for assessing the perceived quality of multimedia by end users.",2012,0, 6172,Aspect-oriented software for testability,"AOP has recently become popular as an effective technique for modularizing crosscutting concerns such as exception handling, fault tolerance, error handling and reusability. Modularizing crosscutting concerns has a great impact on the testability of software. Testability of software is the degree to which it facilitates testing in a given test context and eases the revealing of faults. Controllability and observability are the important measures for testing the non-functional requirements of software. Testing software requires controlling the input and observing the output. Controllability captures the possibility of handling the software's input (the internal state), while observability concerns observing the output for a certain input. This paper presents an overview of the use of aspect-oriented programming (AOP) for facilitating controllability to ease the testability of object-oriented software, and the simulation of well-mixed biochemical systems.",2012,0, 6173,A Study on the Validation of Histogram Equalization as a Contrast Enhancement Technique,"Our study uncovers that histogram equalization (HE) - in striking contrast to its claim - is not related to enhancement of contrast. To understand this view, we start with real world images which have varying degrees of image quality that almost invariably require processing to improve image contrast. For this purpose, histogram equalization including its variants is a frequently relied upon technique. HE processes an image by calculating the pixel density of its constituent gray levels. This mathematical model, described by HE, is neither linked to contrast nor is contrast directly included in HE equations. Therefore, the study aims to find out the factual nature of the transformation functions used by HE. To understand these mathematical calculations thoroughly, the paper dismantles HE into its building blocks. These blocks are then critically analyzed to understand the true relationship between HE fundamentals and contrast. This analysis determines that HE manipulates density - not contrast - which, in turn, achieves density changes but no contrast enhancement. Hence the study concludes that HE is not a valid contrast enhancement technique.",2012,0, 6174,Automatic Generation of On-Line Test Programs through a Cooperation Scheme,"Test programs for Software-based Self-Test (SBST) can be exploited during the mission phase of microprocessor-based systems to periodically assess hardware integrity. However, several additional constraints must be imposed due to the coexistence of test programs with the mission application.
This paper proposes a method for the generation of SBST on-line test programs for embedded RISC processors, systems where the impact of on-line constraints is significant. The proposed strategy exploits an evolutionary optimizer that is able to create a complete test set of programs relying on a new cooperative scheme. Experimental results showed high fault coverage values on two different modules of a MIPS-like processor core. These two case studies demonstrate the effectiveness of the technique and the low human effort required for its implementation.",2012,0, 6175,Software testing suite prioritization using multi-criteria fitness function,"Regression testing is the process of validating modifications introduced in a system during software maintenance. It is an expensive, yet an important process. As the test suite size is very large, system retesting consumes large amount of time and computing resources. Unfortunately, there may be insufficient resources to allow for the re-execution of all test cases during regression testing. Testcase prioritization techniques aim to improve the effectiveness of regression testing, by ordering the testcases so that the most beneficial are executed first with higher priority. The objective of test case prioritization is to detect faults as early as possible. An approach for automating the test case prioritization process using genetic algorithm with Multi-Criteria Fitness function is presented. It uses multiple control flow coverage metrics. These metrics measure the degree of coverage of conditions, multiple conditions and statements that the test case covers. Theses metrics are weighted by the number of faults revealed and their severity. The proposed Multi-criteria technique showed superior results compared to similar work.",2012,0, 6176,Bug fix-time prediction model using naive Bayes classifier,"Predicting bug fix-time is an important issue in order to assess the software quality or to estimate the time and effort needed during the bug triaging. Previous work has proposed several bug fix-time prediction models that had taken into consideration various bug report attributes (e.g. severity, number of developers, dependencies) in order to know which bug to fix first and how long it will take to fix it. Our aim is to distinguish the very fast and the very slow bugs in order to prioritize which bugs to start with and which to exclude at the mean time respectively. We used the data of four systems taken from three large open source projects Mozilla, Eclipse, Gnome. We used naive Bayes classifier to compute our prediction model.",2012,0, 6177,Updated Schneidewind model with single change-point by geometrical method,"Many software reliability growth models assume that all faults have equal probability of being detected during the software testing process and the rate remains constant over the intervals between faults occurrences. But in fact, the fault detection rate may depend on the fault discovery efficiency, the fault detection density, the testing-effort, the inspection rate and so on, and may change with the software requirement and testing team. Change due to variations in resource allocation, defect density, running environment and testing strategy is called change-point. In this paper, we propose a updated Schneidewind model with a single change-point by geometrical method, and the experiment has been done to prove the method can be improve the reliability precision to some degree. 
At the same time, in this paper, we elaborate on the merits and limitations.",2012,0, 6178,CloudAssoc: A pipeline for imputation based genome wide association study on cloud,"Genome wide association study (GWAS) has been proved to be an efficient approach to identify susceptibility genes for complex diseases. In order to increase the power for detecting the disease causal variants, imputation has been used to predict genotype dosages of untyped variants on the basis of linkage disequilibrium evaluated by public data. However, as the volume of data grows, time-consuming of imputation based association study becomes extremely large. We developed a cloud based pipeline to implement data format conversion, imputation, quality control and association study based on Map/Reduce framework which can aid biologists to accelerate the identification and evaluation of susceptibility genes for complex diseases and make it easier to combine GWAS data from worldwide for meta analysis.",2012,0, 6179,A solution for detecting the congestion in networks,"Congestion in networks is one of the main factors adversely affect the quality of communication among networks, detecting congestion is the prerequisite to solve it. In this paper, a type of software sensor which has been designed and implemented can detect the levels of congestion, monitor the current network conditions. Meanwhile, it's very flexible because the key algorithm can be transplanted in a bid to provide a basis for wide application. Finally, simulation results support the conclusion that the software sensor has precise detection performance and future prospects.",2012,0, 6180,Bayesian network-based exception handling for web service composition,"Nowadays, there are more and more web services published on the internet and the new application systems can be constructed by the composition of such web services. It's a trend and novel way for enterprises to build dynamic needs in modern markets, which has won a lot of increasing attentions in all fields. However, the application system composed by web services is easy to cause various failures due to the uncontrollable web services and the network instability. In order to guarantee the quality of the web service composition, this paper presents a method of exception handling based on the improved Bayesian network. The relationships between web services and Bayesian network is constructed as the topological structure during the process of web services composition. For getting better performance, the traditional Bayesian network algorithm is improved by modifying parameter setting for determining the prior probability of service nodes and the conditional probability of service output node. The detail of Bayesian Network-based exception handling for web service composition is given and analyzed. An experiment is also performed and the results show the proposed method is feasible and effective.",2012,0, 6181,Applicability and benefits of mutation analysis as an aid for unit testing,"Unit testing is a highly popular and widely practiced measure for assuring software quality. Nevertheless, writing good unit tests requires experience in test design and in applying the testing frameworks. Hence, existing unit test suites often suffer from issues that limit their defect detection capability or that impact the understandability and maintainability of the implemented tests. Several methods and techniques have been proposed to aid the developer in creating good unit tests. 
Mutation analysis is one of the most promising techniques to assess the quality of a test suite. Over the last years it has caught increasing attention by researchers and practitioners and a variety of tools have been developed to support this technique. In this work, mutation analysis is studied for its practical applicability and the potential benefits in providing guidance for unit testing. Five mutation analysis tools are investigated on four test suites representing different levels of test quality. The results show that the applied tools have reached an acceptable level of maturity although practical application still uncovers technical limitations. Furthermore, the study results indicate that the implemented mutation operators allow a good approximation of the actual quality of a test suite and an advantage over conventional code coverage measures when a comprehensive set of mutation operators has been implemented.",2012,0, 6182,An approach to predict software project success by cascading clustering and classification,"Generation of successful project is the core challenge of the day. Prediction of software project success is therefore one of the vital activities of software engineering community. Data mining techniques enable one to predict the success of the company by estimating the degree of success of their projects. This paper presents an empirical study of several projects developed at various software industries in order to comprehend the effectiveness of data mining technique for efficient project management. The paper provides K-means clustering approach for grouping of projects based on project success as one of the parameters. Subsequently, different classification algorithms are trained on the result set to build the classifier model based on K-fold cross validation. The best accuracy for the given dataset is achieved in Random Forest algorithm compared to other classifiers. This mode of project management using effective data mining techniques on empirical projects ensures accurate prediction of project success rate of the company. It further reflects process maturity leading towards implementation of strategies for improved productivity and sustainability of the company in the industrial market.",2012,0, 6183,CMMI for educational institutions,"We all know that Education constitutes one of the important foundations of any society. There are various levels of teaching institutes from the Ivy League to ordinary colleges. This gradation is a very informal and there are no formal mechanisms to classify (assess) and improve the standards in an institution. There are bodies like Washington Accord (International Engineering Alliance, www.washingtonaccord.org) which consists of six international agreements governing mutual recognition of engineering qualifications and professional competence. There are other National bodies like NBA (National Board of Accreditation - India) set up in September 1994, for the purpose of assessment of Quality and Accreditation of Technical programmes in India. But these bodies are more like certification agencies. What we want to propose in this paper is an assessment and improvement framework for all Professional institutions like the CMMI for SW industry. In fact we propose to use the same levels, with appropriate process areas, relevant specific and common goals, work products etc. 
We propose to produce a complete, consistent framework along with the needed software which will be useful for any academic institution to assess themselves where they are and produce a roadmap (project plan) to go to the subsequent levels. We used some of these processes during our preparation for NBA accreditation.",2012,0, 6184,Design and implementation of scalable DAQ software for a high-resolution PET camera,"This paper describes a new data acquisition (DAQ) program for a breast dedicated high-resolution positron emission tomography (PET) camera employing 4608 position-sensitive avalanche photodiodes (PSAPDs). The DAQ program is designed to be highly scalable to match the needs of evolving implementation as the system is built up from 1 to 4608 PSAPDs. The program features real time data quality monitoring capabilities. Energy and flood histograms of each PSAPD are monitored in real time, as well as the charge histogram for each channel. This allows the user to detect any hardware problems and configuration errors prior to the completion of experiment. In order to view flood histograms collected data needs to be corrected for pedestals. The program employs a pedestal estimation algorithm by fitting a Landau function to the histograms. It is also possible to use this algorithm to detect changes in pedestal values during data acquisition. A circular buffer supports real time data monitoring. Changes in the display settings, like photopeak windowing, require refilling the histograms. In order to do this as fast as possible, a number of events are stored in a circular buffer with data split into separate groups, one for each PSAPD. When a histogram of a certain PSAPD needs to be updated, the buffer pulls out the data of that specific PSAPD. This approach reduces the time it takes to update a histogram as opposed to reading data from disk or waiting for new events. It also prevents unnecessary data transfer between different parts of the program. Using one of the data acquisition boards developed, we tested the throughput capabilities of the program. The maximum throughput is limited by the hardware to 64Mbits/s. This throughput corresponds to roughly 228000 events per second. Due to processing overhead, real time online monitoring ability of the program is limited to 120000 events per second. During the test none of the events were dropped due to processing and plotting tasks.",2012,0, 6185,"Optimization of energy window and multiple event acceptance policy for PETbox4, a high sensitivity preclinical imaging tomograph","PETBox4 is a preclinical system dedicated to imaging mice. This system is composed by four detector panels, each made by a 24 × 50 array of 1.825 × 1.825 × 7 mm BGO crystals. The face to face crystal separation of the detectors is 5 cm, generating a 4.5 × 4.5 × 9.4 cm field of view (FOV). The result is a tomograph with a large detection solid angle, which in combination with a wide open energy window achieves high peak detection efficiency (~17%). Despite the small size of the typical imaged subjects, these design features also increase the fraction of accepted crystal and object scattered events, which reduce the overall image signal to noise ratio. In this work, we investigated the system acquisition configuration settings necessary to optimize the NEC (Noise equivalent Counts) and image quality.
A Monte Carlo simulation software package was used (GATE) to investigate the different types of events detected as a function of energy window settings and multiple event acceptance policy. This was done for a realistic distribution of activity and attenuation coefficients in the PETBox4 FOV, based on emission data from an in-vivo preclinical PET image from an average sized mouse (30g). Based on the simulations, the NEC rate was optimized for a dual energy window that accepts both low (50-175 keV) and high (410-650 keV) energy events. This indicates that low energy events that are composed mostly of single crystal scatter contribute useful image information, while events in the middle of the energy spectrum (175 keV-410 keV) tend to include large fractions of crystal backscatter as well as object scatter and do not contribute significantly to the data signal to noise ratio. As a result of the same simulations, a new policy for the acceptance of multiple events was introduced, implementing a """"KillAll"""" multiple policy, further improving the NEC. Overall, these two optimization parameters improved the NEC rate by 56%, providing a significant advantage in image signal to noise ratio.",2012,0, 6186,Polyenergetic CT sinogram generator,"Energy-sensitive X-ray detection devices operating in photon counting mode have been attracting growing interest over the last decade. By offering a promise of lower dosage requirements and spectroscopic analysis capabilities, they might redefine the paradigm of clinical X-ray measurements in the near future. A simulation software reproducing the data collection scheme of such detection devices was implemented. Without trying to reproduce the in-depth electronic mechanism of those devices, it is rather oriented toward the overall quality of the measured data in terms of simple detection characteristics. Built upon rugged and proven software components, the generator includes realistic material definitions with respect to energy-dependent attenuation. Projections are measured and features such as energy resolution, number of detected energy levels, counting noise statistics and data weighting schemes are taken into account. Using the proposed method, iterative image reconstructions show that classic beam-hardening related artefacts can successfully be reproduced. The proposed method is intended to be used as a tool aimed at predicting the imaging capabilities of these forthcoming energy-sensitive detection devices and to help in the design of their specifications. Being an easily parameterizable analytical tool, it will also be useful to validate the behavior of new dedicated polyenergetic reconstruction algorithms.",2012,0, 6187,Evaluation of a novel wafer-scale CMOS APS X-ray detector for use in mammography,"The most important factors that affect the image quality are contrast, spatial resolution and noise. These factors and their relationship are quantitatively described by the Contrast-to-Noise Ratio (CNR), Signal-to-Noise Ratio (SNR), Modulation Transfer Function (MTF), Noise Power Spectrum (NPS) and Detective Quantum Efficiency (DQE) parameters. The combination of SNR, MTF and NPS determines the DQE, which represents the ability to visualize object details of a certain size and contrast at a given dose.
In this study the performance of a novel large area Complementary Metal-Oxide-Semiconductor (CMOS) Active Pixel Sensor (APS) X-ray detector, called DynAMITe (Dynamic range Adjustable for Medical Imaging Technology), was investigated and compared to three other digital mammography systems (namely a) Large Area Sensor (LAS), b) Hamamatsu C9732DK, and c) Anrad SMAM), in terms of physical characteristics and evaluation of the image quality. The DynAMITe detector consists of two geometrically superimposed grids: a) 2560 × 2624 pixels at 50 μm pitch, named Sub-Pixels (SP camera) and b) 1280 × 1312 pixels at 100 μm pitch, named Pixels (P camera). The X-ray performance evaluation of the DynAMITe SP detector demonstrated high DQE results (0.58 to 0.64 at 0.5 lp/mm). Image simulation based on the X-ray performance of the detectors was used to predict and compare the mammographic image quality using ideal software phantoms: a) one representing two three dimensional (3-D) breasts of various thickness and glandularity to estimate the CNR between simulated microcalcifications and the background, and b) the CDMAM 3.4 test tool for a contrast-detail analysis of small thickness and low contrast objects. The results show that the DynAMITe SP detector results in high CNR and contrast-detail performance.",2012,0, 6188,Development and assessment of statistical iterative image reconstruction for CT on a small animal SPECT/CT dual-modality system,"We developed statistical iterative CT image reconstruction software for a newly constructed high-resolution small animal SPECT/CT dual-modality system, and assessed its performance at different radiation exposure levels. The objective of this work was to preserve or improve reconstructed image quality at either the same or reduced animal x-ray radiation exposure. The SPECT/CT system used a single detector for both the CT and SPECT modalities that consists of a micro-columnar CsI(Tl) phosphor, a light image intensifier (LII) and a CCD sensor. The CT reconstruction software was based on the ordered-subset-convex (OSC) algorithm, and the system matrix was calculated through a ray-driven approach. A self-calibration method was implemented to calculate the offset of the axis of rotation (AOR), an important geometry parameter of the system. An endovascular stent was imaged to evaluate the high resolution performance of the statistically reconstructed image. A sacrificed mouse was scanned at different exposure levels to assess the effect of statistical noise on the image. The mouse studies were reconstructed with both the statistical reconstruction software and a filtered back-projection (FBP) program. The images were assessed and compared by contrast to noise ratio (CNR) in the region of interest. The images yielded by the statistical reconstruction software were artifact free and show superior noise performance to those from FBP reconstruction at different radiation exposure levels. The statistically reconstructed images with reduced exposure showed obviously higher image quality than those from FBP reconstruction at full exposure.",2012,0, 6189,Quality metrics for optical transmission,"The quality of optical signals is a very important parameter in optical communications. Several metrics are in common use, like optical signal-to-noise power ratio (OSNR), Q-factor, error vector magnitude (EVM) and bit error ratio (BER).
While the BER is the final determinant for a system, a measured raw BER is not necessarily useful to predict the final BER after soft-decision forward error correction (FEC), if the statistics of the noise leading to errors remains unknown. In this respect the EVM is superior, as it allows an estimation of the error statistics. The accuracy of the BER estimate from a measured Q-factor is impaired by the basic requirement that the relevant noise must be Gaussian. Already for plain on-off keying (OOK) signals this assumption is violated if an optical pre-amplifier is employed, and the estimated BER becomes even worse if phase modulation formats are involved. We compare various metrics analytically, by simulation, and through experiments. We employ six quadrature amplitude modulation (QAM) formats at symbol rates of 20 GBd and 25 GBd. The signals were generated by a software-defined transmitter. We conclude that for optical channels with additive Gaussian noise the EVM metric is a reliable quality measure. For nondata-aided QAM reception, BER in the range 10-6...10-2 can be reliably estimated from measured EVM.",2012,0, 6190,New method on the relays protective coordination due to presence of distributed generation,"The main purpose of this paper is to serve as a guideline for assessing the impact of distributed generation (DG) on the protection coordination of the distribution network. In particular, the paper details and presents a generalized assessment procedure for determining the impact of the integration of DG on the protection practices of distribution systems. To ensure the relevancy and the compatibility with the existing utility practices, demonstrations are carried out using ETAP, which are the commonly used software for distribution system protection planning.",2012,0, 6191,The application of fault location system for 220kV overhead power line in China,"Reliable power supply is considered more and more important for today's infrastructure. If power line faults occur, they could give damage to the facility that fail the power supply quality, so that the fault point must be located, investigated and repaired quickly as possible. Since the length of overhead transmission line is long, say several decades to hundreds kilometers, fault location system has been employed and utilized. J-Power Systems (JPS) has provided the fault location system for overhead power transmission line since early 1980s, which locates the fault point based on the distribution pattern of the induced current detected by the current sensors on the ground wire. By applying the combination of EMTP (Electro Magnetic Transients Program) simulation and the neural network pattern recognizing technology, the located fault point is achieved narrower than the sensor to sensor section. Typical fault location system consists of one Master Station (MS) which has a personal computer with fault location software and communication device to communicate with Local Stations (LS), several LS installed on tower which are driven by solar panel and battery, incorporate processing circuit boards and communication device to communicate with MS, and several current sensors which are installed on ground wire and whose signal cables are connected to LS. Except for few cases JPS has not provided the fault location system in overseas. 
Recently the authors succeeded in applying the fault location system to the Chinese power supply network, by studying how to adapt the system to the local materials, the local telecommunication network, the local installation procedures, and so on. This paper describes the fault location system for a 220kV overhead power line in China, designed, manufactured and implemented by the authors.",2012,0, 6192,Multi-sensor information fusion for unmanned cars using radar map,"Safety is the foremost quality of unmanned cars. In order to ensure safety, unmanned cars must perceive the surrounding environment precisely and exhaustively. To achieve this, unmanned cars are equipped with various sensors including cameras, lidar, and radar. While the environment perception algorithms are relatively mature, there is no general solution for multi-sensor information fusion for unmanned cars. In this article, we present a solution for multi-sensor information fusion for unmanned cars using a radar map. With this solution, different environment information detected by various sensors can be fused naturally in the radar map. Besides, the radar map is essentially a matrix and can be easily stored in memory. Experiment results show that the radar map works well in all road conditions. Software development practices for unmanned cars also show that the radar map can provide good support to decision-making, path planning and other subsequent stages.",2012,0, 6193,T2M and M2T Transformations between Software Processes and Software Process Models,"Formalizing software processes allows process engineers to analyze, simulate, evolve and manipulate them for different purposes. Eclipse Process Framework Composer is a free tool for formalizing SPEM 2.0 processes; it provides primitives for process definition and allows processes to be visualized graphically. This user friendliness is crucial for practitioners to both follow the designed process and to validate it. For the last two years we have been working on tailoring software process models, i.e. configuring general processes to particular contexts in order to make them more productive. However, it is hard for end users to validate the results and follow the process if it is in its model format, i.e., xmi. Moreover, manual translation is hard and error prone. In this paper we present a set of projectors: an injector that transforms the xml description of the process into xmi, and an extractor that transforms the xmi description of the configured process into the xml representation needed for its visualization.",2012,0, 6194,Evaluating a Methodology to Establish Usability Heuristics,"Assessing usability in any software product may be a key factor for predicting its success or failure. Heuristic evaluation is the most commonly used usability evaluation method. It uses a set of recognized usability design principles (heuristics). Until now, Nielsen's ten usability heuristics have been widely used. However, such heuristics are too general and currently it is necessary to provide new sets of heuristics for evaluating specific kinds of applications. Through a survey, applied to a group of researchers, it was possible to analyze the pertinence of formalizing a methodology for establishing specific usability heuristics that could improve usability assessments.",2012,0, 6195,An effective error concealment method based on abrupt scene change detection algorithm,"Compressed video bit-streams are extremely sensitive to packet loss over error-prone channels.
Error concealment (EC) has been considered as one of error control techniques to improve the reconstructed picture quality against transmission errors. However, EC methods show poor image reconstruction performance due to unavailable scene change information of the video sequence, especially when an abrupt scene change occurs. In this paper, we propose an effective EC method based on scene change detection algorithm (SCDA), which provides information to decide whether spatial or temporal EC is better to be used for intra and inter frames respectively. The simulation results show that the proposed method highly improves the subjective quality of incorrectly decoded frames and obtains an average gain of 0.7dB compared with the H.264/AVC Joint Model reference software.",2012,0, 6196,Integrative software design for reliability: Beyond models and defect prediction,"The prevalence of software as an integral part of almost anything is commonplace today among the products, systems, applications, services and/or solutions with which we all interact. Software reliability directly impacts both the user experience and field operating costs. A key challenge is for the product developer to accurately predict and improve software field reliability in terms of attributes such as outage frequency and duration, prior to software delivery. Unfortunately, the actual field performance data does not often correlate with the predictions. Markov analysis and software reliability growth models are necessary key steps, but are insufficient to fully address customer-perceived reliability issues. Can we make better use of field performance data to improve our predictive models and tune our test strategies from one release to the next? In this paper, we describe an integrative software reliability framework that attempts to close the gap between expectations, predictions, and the actual behavior of systems by leveraging key best-in-class practices, tools, and techniques for software reliability analysis, modeling and predicting software field performance in terms of outage frequency/duration; tracking and monitoring test defect data and defects not fixed in subsequent releases over the software lifecycle, and validating prediction results with the actual field data. The proposed approach is designed to achieve customer expectations for software reliability and help assure the delivery of highly reliable software products. 2012 Alcatel-Lucent.",2012,0, 6197,Intelligent systems based in hospital database malfunction scenarios,"Databases are indispensable for everyday tasks in organizations, particularly in healthcare units. Databases archive important and confidential information about patient's clinical status. Therefore, they must always be available, reliable and at high performance level. In healthcare units, fault tolerant systems ensure the availability, reliability and disaster recovery of data. However, these mechanisms do not allow taking preventive actions in order to avoid fault occurrence. In this context, it emerges the necessity of developing a fault prevention system. It can predict database malfunction in advance and provides early decision taken to solve problems. The objectives of this paper are: monitoring database performance and adapt a forecasting model used in medicine (MEWS) to the database context. Based on mathematical tools it was created a scale that assesses the severity of abnormal situations. 
In this way, it is possible to define the scenarios where database symptoms must trigger alerts and assistance request.",2012,0, 6198,Poster abstract: Direct multi-hop time synchronization with constructive interference,"Multi-hop time synchronization in wireless sensor networks (WSNs) is often time-consuming and error-prone due to random time-stamp delays for MAC layer access and unstable clocks of intermediate nodes. Constructive interference (CI), a recently discovered physical layer phenomenon, allows multiple nodes transmit and forward an identical packet simultaneously. By leveraging CI, we propose direct multihop (DMH) time synchronization by directly utilizing the time-stamps from the sink node instead of intermediate n-odes, which avoids the error caused by the unstable clock of intermediate nodes. DMH doesn't need decode the flooding time synchronization beacons. Moreover, DMH explores the linear regression technique in CI based time synchronization to counterbalance the clock drifts due to clock skews.",2012,0, 6199,Modeling software failures using neural network,"Failure-free software is a major concern for delivering high-quality system. High reliable software system requires robust modeling techniques to estimate the probability of the software failures over a period of time. In this paper, we have proposed a neural network based approach for predicting software failures. This paper presents the use of Feedforward neural network, mostly adopted by many researchers for reliability prediction [12][13][14] and Elman neural network. This experiment was conducted on real software failure dataset for three different applications. We have used various error metrics such as MSE, RMSE, NRMSE, MAE, MAPE, MMRE and BRE for the performance analysis of Feedforward (Universally adopted) and Elman neural network. Experimental results show that Elman neural network has good predictive capability. Mean Square Error (MSE), Mean absolute Error (MAE) and Mean Absolute Percentage Error (MAPE) are both reduced significantly for Elman Neural Network in comparison to Feedforward network.",2012,0, 6200,High Speed Format Converter with Intelligent Quality Checker for File-Based System,"NHK broadcasting is shifting to file-based systems for its TV production and playout systems including VTRs and editing machines. A variety of codecs and Material eXchange Format (MXF) formats for broadcast equipment have been adopted. These include MPEG-2/AVC and OP1a/OP-Atom. Video files need to be converted into the selected codec and format to operate efficiently. The quality of video and audio must be checked during this conversion process because degradation and noise may occur. This paper describes equipment that can quickly convert files to multiple formats as well as intelligently check the quality of video and audio during the conversion. The equipment automatically adjusts thresholds to detect anomalies in the video quality check, depending on the type of codec and the spatial frequency of each area. This can be done in less time than the actual video duration by optimizing the processing software.",2012,0, 6201,Planning and Managing the Cost of Compromise for AV Retention and Access,"Long-term retention and access to audiovisual (AV) assets as part of a preservation strategy inevitably involve some form of compromise in order to achieve acceptable levels of cost, throughput, quality, and many other parameters. 
Examples include quality control and throughput in media transfer chains; data safety and accessibility in digital storage systems; and service levels for ingest and access for archive functions delivered as services. We present new software tools and frameworks developed in the PrestoPRIME project that allow these compromises to be quantitatively assessed, planned, and managed for file-based AV assets. Our focus is how to give an archive an assurance that when they design and operate a preservation strategy as a set of services, it will function as expected and will cope with the inevitable and often unpredictable variations that happen in operation. This includes being able to do cost projections, sensitivity analysis, simulation of disaster scenarios, and to govern preservation services using service-level agreements and policies.",2012,0, 6202,On the optimized generation of Software-Based Self-Test programs for VLIW processors,"Software-Based Self-Test (SBST) approaches have shown to be an effective solution to detect permanent faults, both at the end of the production process, and during the operational phase. However, when Very Long Instruction Word (VLIW) processors are addressed these techniques require some optimization steps in order to properly exploit the parallelism intrinsic in these architectures. In this paper we present a new method that, starting from previously known algorithms, automatically generates an effective test program able to still reach high fault coverage on the VLIW processor under test, while reducing the test duration and the test code size. The method consists of three parametric phases and can deal with different VLIW processor models. The main goal of the proposed method is to automatically obtain a test program able to effectively reduce the test time and the required resources. Experimental results gathered on a case study show the effectiveness of the proposed approach.",2012,0, 6203,Case Study in Computer Design,This chapter contains sections titled:
Design Principles
Design Decisions
Identification of System Elements
Architectural Design
Test Strategies
Fault Detection and Correction
CRC
Sequence Analysis
Sequence Probability and Sequence Response Time Predictions and Analysis
Sequence Failure Rate
Reliability
Detailed Design
Summary
References,2012,0, 6204,Network Reliability and Availability Metrics,This chapter contains sections titled:
Introduction
Model Development
Probability of Failure Analysis Results
Fault and Failure Correction Analysis Results
Remaining Failures Analysis Results
Reliability Analysis Results
Availability Analysis Results
Another Perspective on Probability of Failure
Measuring Prediction Accuracy
Methods for Improving Reliability
Summary of Results
Summary
References,2012,0, 6205,Reflections on the NASA MDP data sets,"The NASA metrics data program (MDP) data sets have been heavily used in software defect prediction research. Aim: To highlight the data quality issues present in these data sets, and the problems that can arise when they are used in a binary classification context. Method: A thorough exploration of all 13 original NASA data sets, followed by various experiments demonstrating the potential impact of duplicate data points when data mining. Conclusions: Firstly researchers need to analyse the data that forms the basis of their findings in the context of how it will be used. Secondly, the bulk of defect prediction experiments based on the NASA MDP data sets may have led to erroneous findings. This is mainly because of repeated/duplicate data points potentially causing substantial amounts of training and testing data to be identical.",2012,1, 6206,Transfer learning for cross-company software defect prediction,"Context: Software defect prediction studies usually built models using within-company data, but very few focused on the prediction models trained with cross-company data. It is difficult to employ these models which are built on the within-company data in practice, because of the lack of these local data repositories. Recently, transfer learning has attracted more and more attention for building classifier in target domain using the data from related source domain. It is very useful in cases when distributions of training and test instances differ, but is it appropriate for cross-company software defect prediction? Objective: In this paper, we consider the cross-company defect prediction scenario where source and target data are drawn from different companies. In order to harness cross company data, we try to exploit the transfer learning method to build faster and highly effective prediction model. Method: Unlike the prior works selecting training data which are similar from the test data, we proposed a novel algorithm called Transfer Naive Bayes (TNB), by using the information of all the proper features in training data. Our solution estimates the distribution of the test data, and transfers cross-company data information into the weights of the training data. On these weighted data, the defect prediction model is built. Results: This article presents a theoretical analysis for the comparative methods, and shows the experiment results on the data sets from different organizations. It indicates that TNB is more accurate in terms of AUC (The area under the receiver operating characteristic curve), within less runtime than the state of the art methods. Conclusion: It is concluded that when there are too few local training data to train good classifiers, the useful knowledge from different-distribution training data on feature level may help. We are optimistic that our transfer learning method can guide optimal resource allocation strategies, which may reduce software testing cost and increase effectiveness of software testing process.",2012,1, 6207,User preferences based software defect detection algorithms selection using MCDM,"A variety of classification algorithms for software defect detection have been developed over the years. How to select an appropriate classifier for a given task is an important issue in Data mining and knowledge discovery (DMKD). 
Many studies have compared different types of classification algorithms and the performances of these algorithms may vary using different performance measures and under different circumstances. Since the algorithm selection task needs to examine several criteria, such as accuracy, computational time, and misclassification rate, it can be modeled as a multiple criteria decision making (MCDM) problem. The goal of this paper is to use a set of MCDM methods to rank classification algorithms, with empirical results based on the software defect detection datasets. Since the preferences of the decision maker (DM) play an important role in algorithm evaluation and selection, this paper involved the DM during the ranking procedure by assigning user weights to the performance measures. Four MCDM methods are examined using 38 classification algorithms and 13 evaluation criteria over 10 public-domain software defect datasets. The results indicate that the boosting of CART and the boosting of C4.5 decision tree are ranked as the most appropriate algorithms for software defect datasets. Though the MCDM methods provide some conflicting results for the selected software defect datasets, they agree on most top-ranked classification algorithms.",2012,1, 6208,Effective Software Fault Localization Using an RBF Neural Network,"We propose the application of a modified radial basis function neural network in the context of software fault localization, to assist programmers in locating bugs effectively. This neural network is trained to learn the relationship between the statement coverage information of a test case and its corresponding execution result, success or failure. The trained network is then given as input a set of virtual test cases, each covering a single statement. The output of the network, for each virtual test case, is considered to be the suspiciousness of the corresponding covered statement. A statement with a higher suspiciousness has a higher likelihood of containing a bug, and thus statements can be ranked in descending order of their suspiciousness. The ranking can then be examined one by one, starting from the top, until a bug is located. Case studies on 15 different programs were conducted, and the results clearly show that our proposed technique is more effective than several other popular, state of the art fault localization techniques. Further studies investigate the robustness of the proposed technique, and illustrate how it can easily be applied to programs with multiple bugs as well.",2012,1, 6209,Software fault prediction based on grey neural network,"Considering that determining the number of software faults is an uncertain non-linear problem with only a small sample, a novel software fault prediction method based on grey neural network is put forward. Firstly, constructing the grey neural network topological structure according to the small sample sequence is necessary, and then the network learning algorithm is discussed. Finally, the grey neural network prediction model based on the grey theory and artificial neural network is proposed. The sample fault sequences of some software project are used to verify the precision of this method. Compared with GM(1,1), the proposed model can reduce the prediction relative error effectively.",2012,1, 6210,Antecedence Graph Approach to Checkpointing for Fault Tolerance in Mobile Agent Systems,"The flexibility offered by mobile agents is quite noticeable in distributed computing environments.
However, the greater flexibility of the mobile agent paradigm compared to the client/server computing paradigm comes with additional threats since agent systems are prone to failures originating from bad communication, security attacks, agent server crashes, system resources unavailability, network congestion, or even deadlock situations. In such events, mobile agents either get lost or damaged (partially or totally) during execution. In this paper, we propose a parallel checkpointing approach based on the use of antecedence graphs for providing fault tolerance in mobile agent systems. During normal computation message transmission, the dependency information among mobile agents is recorded in the form of antecedence graphs by participating mobile agents of mobile agent group. When a checkpointing procedure begins, the initiator concurrently informs relevant mobile agents, which minimizes the identifying time. The proposed scheme utilizes the checkpointed information for fault tolerance which is stored in the form of antecedence graphs. In case of failures, using checkpointed information, the antecedence graphs and message logs are regenerated for recovery and then normal operation continued. Moreover, compared with the existing schemes, our algorithm involves the minimum number of mobile agents during the identifying and checkpointing procedure, which leads to the improvement of the system performance. In addition, the proposed algorithm is a domino-free checkpointing algorithm, which is especially desirable for mobile agent systems. Quantitative analysis and experimental simulation show that our algorithm outperforms other coordinated checkpointing schemes in terms of the identifying time and the number of blocked mobile agents and then can provide a better system performance. The main contribution of the proposed checkpointing scheme is the enhancement of the graph-based approach in terms of considerable improvement by reducing message overhead, execution, and recovery times.",2013,0, 6211,Empirical Principles and an Industrial Case Study in Retrieving Equivalent Requirements via Natural Language Processing Techniques,"Though very important in software engineering, linking artifacts of the same type (clone detection) or different types (traceability recovery) is extremely tedious, error-prone, and effort-intensive. Past research focused on supporting analysts with techniques based on Natural Language Processing (NLP) to identify candidate links. Because many NLP techniques exist and their performance varies according to context, it is crucial to define and use reliable evaluation procedures. The aim of this paper is to propose a set of seven principles for evaluating the performance of NLP techniques in identifying equivalent requirements. In this paper, we conjecture, and verify, that NLP techniques perform on a given dataset according to both ability and the odds of identifying equivalent requirements correctly. For instance, when the odds of identifying equivalent requirements are very high, then it is reasonable to expect that NLP techniques will result in good performance. Our key idea is to measure this random factor of the specific dataset(s) in use and then adjust the observed performance accordingly. To support the application of the principles we report their practical application to a case study that evaluates the performance of a large number of NLP techniques for identifying equivalent requirements in the context of an Italian company in the defense and aerospace domain.
The current application context is the evaluation of NLP techniques to identify equivalent requirements. However, most of the proposed principles seem applicable to evaluating any estimation technique aimed at supporting a binary decision (e.g., equivalent/nonequivalent), with the estimate in the range [0,1] (e.g., the similarity provided by the NLP), when the dataset(s) is used as a benchmark (i.e., testbed), independently of the type of estimator (i.e., requirements text) and of the estimation method (e.g., NLP).",2013,0, 6212,On Fault Representativeness of Software Fault Injection,"The injection of software faults in software components to assess the impact of these faults on other components or on the system as a whole, allowing the evaluation of fault tolerance, is relatively new compared to decades of research on hardware fault injection. This paper presents an extensive experimental study (more than 3.8 million individual experiments in three real systems) to evaluate the representativeness of faults injected by a state-of-the-art approach (G-SWFIT). Results show that a significant share (up to 72 percent) of injected faults cannot be considered representative of residual software faults as they are consistently detected by regression tests, and that the representativeness of injected faults is affected by the fault location within the system, resulting in different distributions of representative/nonrepresentative faults across files and functions. Therefore, we propose a new approach to refine the faultload by removing faults that are not representative of residual software faults. This filtering is essential to assure meaningful results and to reduce the cost (in terms of number of faults) of software fault injection campaigns in complex software. The proposed approach is based on classification algorithms, is fully automatic, and can be used for improving fault representativeness of existing software fault injection approaches.",2013,0, 6213,Multilevel Diskless Checkpointing,"Extreme scale systems available before the end of this decade are expected to have 100 million to 1 billion CPU cores. The probability that a failure occurs during an application execution is expected to be much higher than today's systems. Counteracting this higher failure rate may require a combination of disk-based checkpointing, diskless checkpointing, and algorithmic fault tolerance. Diskless checkpointing is an efficient technique to tolerate a small number of process failures in large parallel and distributed systems. In the literature, a simultaneous failure of no more than N processes is often tolerated by using a one-level Reed-Solomon checkpointing scheme for N simultaneous process failures, whose overhead often increases quickly as N increases. We introduce an N-level diskless checkpointing scheme that reduces the overhead for tolerating a simultaneous failure of up to N processes. Each level is a diskless checkpointing scheme for a simultaneous failure of i processes, where i = 1, 2,..., N. Simulation results indicate the proposed N-level diskless checkpointing scheme achieves lower fault tolerance overhead than the one-level Reed-Solomon checkpointing scheme for N simultaneous processor failures.",2013,0, 6214,Systematic Elaboration of Scalability Requirements through Goal-Obstacle Analysis,"Scalability is a critical concern for many software systems. 
Despite the recognized importance of considering scalability from the earliest stages of development, there is currently little support for reasoning about scalability at the requirements level. This paper presents a goal-oriented approach for eliciting, modeling, and reasoning about scalability requirements. The approach consists of systematically identifying scalability-related obstacles to the satisfaction of goals, assessing the likelihood and severity of these obstacles, and generating new goals to deal with them. The result is a consolidated set of requirements in which important scalability concerns are anticipated through the precise, quantified specification of scaling assumptions and scalability goals. The paper presents results from applying the approach to a complex, large-scale financial fraud detection system.",2013,0, 6215,Automated Behavioral Testing of Refactoring Engines,"Refactoring is a transformation that preserves the external behavior of a program and improves its internal quality. Usually, compilation errors and behavioral changes are avoided by preconditions determined for each refactoring transformation. However, to formally define these preconditions and transfer them to program checks is a rather complex task. In practice, refactoring engine developers commonly implement refactorings in an ad hoc manner since no guidelines are available for evaluating the correctness of refactoring implementations. As a result, even mainstream refactoring engines contain critical bugs. We present a technique to test Java refactoring engines. It automates test input generation by using a Java program generator that exhaustively generates programs for a given scope of Java declarations. The refactoring under test is applied to each generated program. The technique uses SafeRefactor, a tool for detecting behavioral changes, as an oracle to evaluate the correctness of these transformations. Finally, the technique classifies the failing transformations by the kind of behavioral change or compilation error introduced by them. We have evaluated this technique by testing 29 refactorings in Eclipse JDT, NetBeans, and the JastAdd Refactoring Tools. We analyzed 153,444 transformations, and identified 57 bugs related to compilation errors, and 63 bugs related to behavioral changes.",2013,0, 6216,Toward Comprehensible Software Fault Prediction Models Using Bayesian Network Classifiers,"Software testing is a crucial activity during software development and fault prediction models assist practitioners herein by providing an upfront identification of faulty software code by drawing upon the machine learning literature. While especially the Naive Bayes classifier is often applied in this regard, citing predictive performance and comprehensibility as its major strengths, a number of alternative Bayesian algorithms that boost the possibility of constructing simpler networks with fewer nodes and arcs remain unexplored. This study contributes to the literature by considering 15 different Bayesian Network (BN) classifiers and comparing them to other popular machine learning techniques. Furthermore, the applicability of the Markov blanket principle for feature selection, which is a natural extension to BN theory, is investigated. The results, both in terms of the AUC and the recently introduced H-measure, are rigorously tested using the statistical framework of Demsar. 
It is concluded that simple and comprehensible networks with less nodes can be constructed using BN classifiers other than the Naive Bayes classifier. Furthermore, it is found that the aspects of comprehensibility and predictive performance need to be balanced out, and also the development context is an item which should be taken into account during model selection.",2013,1, 6217,Simulating Service-Oriented Systems: A Survey and the Services-Aware Simulation Framework,"The service-oriented architecture style supports desirable qualities, including distributed, loosely coupled systems spanning organizational boundaries. Such systems and their configurations are challenging to understand, reason about, and test. Improved understanding of these systems will support activities such as autonomic runtime configuration, application deployment, and development/testing. Simulation is one way to understand and test service systems. This paper describes a literature survey of simulation frameworks for service-oriented systems, examining simulation software, systems, approaches, and frameworks used to simulate service-oriented systems. We identify a set of dimensions for describing the various approaches, considering their modeling methodology, their functionalities, their underlying infrastructure, and their evaluation. We then introduce the services-aware simulation framework (SASF), a simulation framework for predicting the behavior of service-oriented systems under different configurations and loads, and discuss the unique features that distinguish it from other systems in the literature. We demonstrate its use in simulating two service-oriented systems.",2013,0, 6218,Locating Need-to-Externalize Constant Strings for Software Internationalization with Generalized String-Taint Analysis,"Nowadays, a software product usually faces a global market. To meet the requirements of different local users, the software product must be internationalized. In an internationalized software product, user-visible hard-coded constant strings are externalized to resource files so that local versions can be generated by translating the resource files. In many cases, a software product is not internationalized at the beginning of the software development process. To internationalize an existing product, the developers must locate the user-visible constant strings that should be externalized. This locating process is tedious and error-prone due to 1) the large number of both user-visible and non-user-visible constant strings and 2) the complex data flows from constant strings to the Graphical User Interface (GUI). In this paper, we propose an automatic approach to locating need-to-externalize constant strings in the source code of a software product. Given a list of precollected API methods that output values of their string argument variables to the GUI and the source code of the software product under analysis, our approach traces from the invocation sites (within the source code) of these methods back to the need-to-externalize constant strings using generalized string-taint analysis. In our empirical evaluation, we used our approach to locate need-to-externalize constant strings in the uninternationalized versions of seven real-world open source software products. The results of our evaluation demonstrate that our approach is able to effectively locate need-to-externalize constant strings in uninternationalized software products. 
Furthermore, to help developers understand why a constant string requires translation and properly translate the need-to-externalize strings, we provide visual representation of the string dependencies related to the need-to-externalize strings.",2013,0, 6219,A Second Replicated Quantitative Analysis of Fault Distributions in Complex Software Systems,"Background: Software engineering is searching for general principles that apply across contexts, for example, to help guide software quality assurance. Fenton and Ohlsson presented such observations on fault distributions, which have been replicated once. Objectives: We aimed to replicate their study again to assess the robustness of the findings in a new environment, five years later. Method: We conducted a literal replication, collecting defect data from five consecutive releases of a large software system in the telecommunications domain, and conducted the same analysis as in the original study. Results: The replication confirms results on unevenly distributed faults over modules, and that fault proneness distributions persist over test phases. Size measures are not useful as predictors of fault proneness, while fault densities are of the same order of magnitude across releases and contexts. Conclusions: This replication confirms that the uneven distribution of defects motivates uneven distribution of quality assurance efforts, although predictors for such distribution of efforts are not sufficiently precise.",2013,0, 6220,Defect-Density Assessment in Evolutionary Product Development: A Case Study in Medical Imaging,"Defect density is the ratio between the number of defects and software size. Properly assessing defect density in evolutionary product development requires a strong tool and rigid process support that enables defects to be traced to the offending source code. In addition, it requires waiting for field defects after the product is deployed. To ease the calculation in practice, a proposed method approximates the lifetime number of defects against the software by the number of defects reported in a development period even if the defects are reported against previous product releases. The method uses aggregated code churn to measure the software size. It was applied to two development projects in medical imaging that involved three geographical locations (sites) with about 30 software engineers and 1.354 million lines of code in the released products. The results suggest the approach has some merits and validity, which the authors discuss in the distributed development context. The method is simple and operable and can be used by others with situations similar to ours.",2013,0, 6221,Parametric Design and Performance Analysis of a Decoupled Service-Oriented Prediction Framework Based on Embedded Numerical Software,"In modern utility computing infrastructures, like grids and clouds, one of the significant actions of a service provider is to predict the resources needed by the services included in its platform in an automated fashion for service provisioning optimization. Furthermore, a variety of software toolkits exist that implement an extended set of algorithms applicable to workload forecasting. However, their automated use as services in the distributed computing paradigm includes a number of design and implementation challenges. In this paper, a decoupled framework is presented, for taking advantage of software like GNU Octave in the process of creating and using prediction models during the service life cycle of a SOI. 
A performance analysis of the framework is also conducted. In this context, a methodology for creating parametric or gearbox services with multiple modes of operations based on the execution conditions is portrayed and is applied to transform the aforementioned service framework to optimize service performance. A new estimation algorithm is introduced, that creates performance rules of applications as black boxes, through the creation and usage of genetically optimized artificial neural networks. Through this combination, the critical parameters of the networks are decided through an evolutionary iterative process.",2013,0, 6222,GPS Near-Real-Time Coseismic Displacements for the Great Tohoku-oki Earthquake,"Here, we present the application to the great Tohoku-oki (Japan) earthquake (United States Geological Survey M = 9.0, March 11, 2011, 05:46:24 Coordinated Universal Time) of a novel approach, named Variometric Approach for Displacements Analysis Stand-Alone Engine, able to estimate accurate coseismic displacements and waveforms in real time, in the global reference frame, just using the standard broadcast products (orbits and clocks) and the high-rate (1 Hz or more) carrier phase observations continuously collected by a stand-alone global-positioning-system receiver. We processed separately the data collected at MIZU (Mizusawa, 140 km from the epicenter) and USUD (Usuda, 430 km from the epicenter) International Global Navigation Satellite System Service sites. A total horizontal displacement of about 2.4 m east-southeast was estimated for the MIZU, with a maximum horizontal oscillation amplitude of about 3.4 m along the same direction. Generally, an overall accuracy better than 10 cm for all the components (east, north, and up) and an average accuracy around 5 cm were assessed over an interval shorter than 5 min, with respect to independent solutions obtained with two different scientific software. The threshold of 5-cm accuracy has been recently indicated as sufficient for real-time fault determination for near-field tsunami forecasting for a major earthquake, like the 2011 Tohoku-oki one.",2013,0, 6223,Master Failure Detection Protocol in Internal Synchronization Environment,"During the last decades, the wide advance in the networking technologies has allowed the development of distributed monitoring and control systems. These systems show advantages compared with centralized solutions: heterogeneous nodes can be easily integrated, new nodes can be easily added to the system, and no single point of failure. For these reasons, distributed systems have been adopted in different fields, such as industrial automation and telecommunication systems. Recently, due to technology improvements, distributed systems are also adopted in the control of power-grid and transport systems, i.e., the so-called large-scale complex critical infrastructures. Given the strict safety, security, reliability, and real-time requirements, using distributed systems for controlling such critical infrastructure demands that adequate mechanisms have to be established to share the same notion of time among the nodes. For this class of systems, a synchronization protocol, such as the IEEE 1588 standard, can be adopted. This type of synchronization protocol was designed to achieve very precise clock synchronization, but it may not be sufficient to ensure safety of the entire system. 
For example, instability of the local oscillator of a reference node, due to a failure of the node itself or to malicious attacks, could influence the quality of synchronization of all nodes. In recent years, a new software clock, the reliable and self-aware clock (R&SAClock), which is designed to estimate the quality of synchronization through statistical analysis, was developed and tested. This statistical instrument can be used to identify any anomalous conditions with respect to normal behavior. A careful analysis and classification of the main points of failure of IEEE 1588 standard suggests that the reference node, which is called master, is the weak point of the system. For this reason, this paper deals with the detection of faults of the reference node(s) of an IEEE 1588 setup. This paper describes and evaluates the design of a protocol for timing failure detection for internal synchronization based on a revised version of the R&SAClock software suitably modified to cross-exploit the information on the quality of synchronization among all the nodes of the system. The experimental evaluation of this approach confirms the capability of the synchronization uncertainty, which is provided by R&SAClock, to reveal the anomalous behaviors either of the local node or of the reference node. In fact, it is shown that, through a proper configuration of the parameters of the protocol, the system is able to detect all the failures injected on the master in different experimental conditions and to correctly identify failures on slaves with a probability of 87%.",2013,0, 6224,Self-Organized Cooperation Policy Setting in P2P Systems Based on Reinforcement Learning,"In this paper, we have developed a self-organized approach to cooperation policy setting in a system of rational peers that have only partial views of the whole system in order to improve the overall welfare as a system-wide performance metric. The proposed approach is based on distributed reinforcement learning and sets cooperation policies of the peers through their self-organized interactions. We have analyzed this approach to demonstrate that it results in Pareto optimality in the system by disseminating the local value functions of the peers among the neighbors. We have also experimentally verified that this approach outperforms the other commonly used approaches in the literature, in terms of the performance of the system.",2013,0, 6225,Low-Cost Electronic Tongue System and Its Application to Explosive Detection,"The use of biomimetic measurement systems as electronic tongues and noses has considerably increased in the last years. This paper presents the design of a low-cost electronic tongue system that includes a software application that runs on a personal computer and electronic equipment based on a 16-b microcontroller. The designed system is able to implement voltammetry and impedance spectroscopy measurements with different-electrode configurations. The data obtained from the electrochemical measurements can be used to build statistical models able to assess physicochemical properties of the samples. The designed system has been applied to the detection and quantification of trinitrotoluene (TNT), which is one of the most common explosive materials. Pulse voltammetry measurements were carried out on TNT samples with different concentration levels. The principal component analysis of the obtained results shows that the electronic tongue is able to detect TNT in acetonitrile samples.
Prediction models were built with partial least squares regression, and a good correlation was observed between the pulse voltammetry measurements and the TNT concentration levels. In this experience, a new voltammetric data compression algorithm based on polynomial approximations has been tested with good results. The electronic tongue has also been applied to the prediction of water quality parameters in wastewater and to the evaluation of different-pulse array designs for pulse voltammetry experiences.",2013,0, 6226,Robust Dynamic Service Composition in Sensor Networks,"Service modeling and service composition are software architecture paradigms that have been used extensively in web services where there is an abundance of resources. They mainly capture the idea that advanced functionality can be realized by combining a set of primitive services provided by the system. Many efforts in web services domain focused on detecting the initial composition, which is then followed for the rest of service operation. In sensor networks, however, communication among nodes is error-prone and unreliable, while sensor nodes have constrained resources. This dynamic environment requires a continuous adaptation of the composition of a complex service. In this paper, we first propose a graph-based formulation for modeling sensor services that maps to the operational model of sensor networks and is amenable to analysis. Based on this model, we formulate the process of sensor service composition as a cost-optimization problem and show that it is NP-complete. Two heuristic methods are proposed to solve the composition problem: the top-down and the bottom-up approaches. We discuss centralized and distributed implementations of these methods. Finally, using ns-2 simulations, we evaluate the performance and overhead of our proposed methods.",2013,0, 6227,A Remainder-Based Contention-Avoidance Scheme for Saturated Wireless CSMA Networks,"The traditional truncated-binary-exponential-backoff contention-resolution scheme can address the collision problem of carrier-sense multiple-access networks when the number of users is limited. However, its performance significantly worsens when many users join the contention or when the network is saturated. To address this issue, we propose a demand-assigned multiaccess (DAMA) time-division multiple-access (TDMA) system with a remainder-based contention-avoidance scheme. In our scheme, the wireless users are divided into several groups according to the available transmission opportunities in each time frame, and each user sends its bandwidth request during the time window for its group. According to the average collision probability and the utilization of the transmission opportunities in the last two rounds of contention resolution, the size of available transmission opportunities and the number of wireless users who issue bandwidth requests are adaptively controlled by the base station and the wireless users, respectively. Our analysis and simulation show that the proposed contention-avoidance scheme can significantly improve the performance of the entire network.",2013,0, 6228,A large-scale empirical study of just-in-time quality assurance,"Defect prediction models are a well-known technique for identifying defect-prone files or packages such that practitioners can allocate their quality assurance efforts (e.g., testing and code reviews). 
However, once the critical files or packages have been identified, developers still need to spend considerable time drilling down to the functions or even code snippets that should be reviewed or tested. This makes the approach too time consuming and impractical for large software systems. Instead, we consider defect prediction models that focus on identifying defect-prone (risky) software changes instead of files or packages. We refer to this type of quality assurance activity as Just-In-Time Quality Assurance, because developers can review and test these risky changes while they are still fresh in their minds (i.e., at check-in time). To build a change risk model, we use a wide range of factors based on the characteristics of a software change, such as the number of added lines, and developer experience. A large-scale study of six open source and five commercial projects from multiple domains shows that our models can predict whether or not a change will lead to a defect with an average accuracy of 68 percent and an average recall of 64 percent. Furthermore, when considering the effort needed to review changes, we find that using only 20 percent of the effort it would take to inspect all changes, we can identify 35 percent of all defect-inducing changes. Our findings indicate that Just-In-Time Quality Assurance may provide an effort-reducing way to focus on the most risky changes and thus reduce the costs of developing high-quality software.",2013,0, 6229,Making a Completely Blind Image Quality Analyzer,"An important aim of research on the blind image quality assessment (IQA) problem is to devise perceptual models that can predict the quality of distorted images with as little prior knowledge of the images or their distortions as possible. Current state-of-the-art general purpose no reference (NR) IQA algorithms require knowledge about anticipated distortions in the form of training examples and corresponding human opinion scores. However we have recently derived a blind IQA model that only makes use of measurable deviations from statistical regularities observed in natural images, without training on human-rated distorted images, and, indeed without any exposure to distorted images. Thus, it is completely blind. The new IQA model, which we call the Natural Image Quality Evaluator (NIQE) is based on the construction of a quality aware collection of statistical features based on a simple and successful space domain natural scene statistic (NSS) model. These features are derived from a corpus of natural, undistorted images. Experimental results show that the new index delivers performance comparable to top performing NR IQA models that require training on large databases of human opinions of distorted images. A software release is available at http://live.ece.utexas.edu/research/quality/niqe_release.zip.",2013,0, 6230,Overcurrent and Overload Protection of Directly Voltage-Controlled Distributed Resources in a Microgrid,"This paper presents two add-on features for the voltage control scheme of directly voltage-controlled distributed energy resource units (VC-DERs) of an islanded microgrid to provide overcurrent and overload protection. The overcurrent protection scheme detects the fault, limits the output current magnitude of the DER unit, and restores the microgrid to its normal operating conditions subsequent to fault clearance. The overload protection scheme limits the output power of the VC-DER unit. 
Off-line digital time-domain simulation studies, in the EMTDC/PSCAD software environment, demonstrate the feasibility and desirable performance of the proposed features. Real-time case studies based on an RTDS system verifies performance of the hardware-implemented overload and overcurrent protection schemes in a hardware-in-the-loop environment.",2013,0, 6231,Predicting Architectural Vulnerability on Multithreaded Processors under Resource Contention and Sharing,"Architectural vulnerability factor (AVF) characterizes a processor's vulnerability to soft errors. Interthread resource contention and sharing on a multithreaded processor (e.g., SMT, CMP) shows nonuniform impact on a program's AVF when it is co-scheduled with different programs. However, measuring the AVF is extremely expensive in terms of hardware and computation. This paper proposes a scalable two-level predictive mechanism capable of predicting a program's AVF on a SMT/CMP architecture from easily measured metrics. Essentially, the first-level model correlates the AVF in a contention-free environment with important performance metrics and the processor configuration, while the second-level model captures the interthread resource contention and sharing via processor structures' occupancies. By utilizing the proposed scheme, we can accurately estimate any unseen program's soft error vulnerability under resource contention and sharing with any other program(s), on an arbitrarily configured multithreaded processor. In practice, the proposed model can be used to find soft error resilient thread-to-core scheduling for multithreaded processors.",2013,0, 6232,Trends in the Quality of Human-Centric Software Engineering Experiments--A Quasi-Experiment,"Context: Several text books and papers published between 2000 and 2002 have attempted to introduce experimental design and statistical methods to software engineers undertaking empirical studies. Objective: This paper investigates whether there has been an increase in the quality of human-centric experimental and quasi-experimental journal papers over the time period 1993 to 2010. Method: Seventy experimental and quasi-experimental papers published in four general software engineering journals in the years 1992-2002 and 2006-2010 were each assessed for quality by three empirical software engineering researchers using two quality assessment methods (a questionnaire-based method and a subjective overall assessment). Regression analysis was used to assess the relationship between paper quality and the year of publication, publication date group (before 2003 and after 2005), source journal, average coauthor experience, citation of statistical text books and papers, and paper length. The results were validated both by removing papers for which the quality score appeared unreliable and using an alternative quality measure. Results: Paper quality was significantly associated with year, citing general statistical texts, and paper length (p <; 0.05). Paper length did not reach significance when quality was measured using an overall subjective assessment. Conclusions: The quality of experimental and quasi-experimental software engineering papers appears to have improved gradually since 1993.",2013,0, 6233,Proactive and Reactive Runtime Service Discovery: A Framework and Its Evaluation,"The identification of services during the execution of service-based applications to replace services in them that are no longer available and/or fail to satisfy certain requirements is an important issue. 
In this paper, we present a framework to support runtime service discovery. This framework can execute service discovery queries in pull and push mode. In pull mode, it executes queries when a need for finding a replacement service arises. In push mode, queries are subscribed to the framework to be executed proactively and, in parallel with the operation of the application, to identify adequate services that could be used if the need for replacing a service arises. Hence, the proactive (push) mode of query execution makes it more likely to avoid interruptions in the operation of service-based applications when a service in them needs to be replaced at runtime. In both modes of query execution, the identification of services relies on distance-based matching of structural, behavioral, quality, and contextual characteristics of services and applications. A prototype implementation of the framework has been developed and an evaluation was carried out to assess the performance of the framework. This evaluation has shown positive results, which are discussed in the paper.",2013,0, 6234,C-MART: Benchmarking the Cloud,"Cloud computing environments provide on-demand resource provisioning, allowing applications to elastically scale. However, application benchmarks currently being used to test cloud management systems are not designed for this purpose. This results in resource underprovisioning and quality-of-service (QoS) violations when systems tested using these benchmarks are deployed in production environments. We present C-MART, a benchmark designed to emulate a modern web application running in a cloud computing environment. It is designed using the cloud computing paradigm of elastic scalability at every application tier and utilizes modern web-based technologies such as HTML5, AJAX, jQuery, and SQLite. C-MART consists of a web application, client emulator, deployment server, and scaling API. The deployment server automatically deploys and configures the test environment in orders of magnitude less time than current benchmarks. The scaling API allows users to define and provision their own customized datacenter. The client emulator generates the web workload for the application by emulating complex and varied client behaviors, including decisions based on page content and prior history. We show that C-MART can detect problems in management systems that previous benchmarks fail to identify, such as an increase from 4.4 to 50 percent error in predicting server CPU utilization and resource underprovisioning in 22 percent of QoS measurements.",2013,0, 6235,Design and Implementation of an Approximate Communication System for Wireless Media Applications,"All practical wireless communication systems are prone to errors. At the symbol level, such wireless errors have a well-defined structure: When a receiver decodes a symbol erroneously, it is more likely that the decoded symbol is a good approximation of the transmitted symbol than a randomly chosen symbol among all possible transmitted symbols. Based on this property, we define approximate communication, a method that exploits this error structure to natively provide unequal error protection to data bits. 
Unlike traditional [forward error correction (FEC)-based] mechanisms of unequal error protection that consume additional network and spectrum resources to encode redundant data, the approximate communication technique achieves this property at the PHY layer without consuming any additional network or spectrum resources (apart from a minimal signaling overhead). Approximate communication is particularly useful to media delivery applications that can benefit significantly from unequal error protection of data bits. We show the usefulness of this method to such applications by designing and implementing an end-to-end media delivery system, called Apex. Our Software Defined Radio (SDR)-based experiments reveal that Apex can improve video quality by 5-20 dB [peak signal-to-noise ratio (PSNR)] across a diverse set of wireless conditions when compared to traditional approaches. We believe that mechanisms such as Apex can be a cornerstone in designing future wireless media delivery systems under any error-prone channel condition.",2013,0, 6236,Modeling of Second Generation HTS Cables for Grid Fault Analysis Applied to Power System Simulation,"HTS power cable systems are an emerging technology aimed at competing with XLPE cable systems. Knowledge on the thermal operating conditions of HTS power devices is needed to estimate their availability when connected to a power system, because the HTS material must remain below its critical temperature to transport current. In this work, a simple finite difference method is used to assess the temperature distribution at certain cross-section of a second-generation coaxial HTS cable. This method has been implemented in MATLAB and its proper functioning has been verified with the software package FLUX. This method is a tool to establish temperature distributions among HTS cable layers under normal operating conditions. Additionally, the aim of this work is to serve as basis for future simulations including heat generation changes within the cable layers typically caused by grid fault events.",2013,0, 6237,Monitor-Based Instant Software Refactoring,"Software refactoring is an effective method for improvement of software quality while software external behavior remains unchanged. To facilitate software refactoring, a number of tools have been proposed for code smell detection and/or for automatic or semi-automatic refactoring. However, these tools are passive and human driven, thus making software refactoring dependent on developers' spontaneity. As a result, software engineers with little experience in software refactoring might miss a number of potential refactorings or may conduct refactorings later than expected. Few refactorings might result in poor software quality, and delayed refactorings may incur higher refactoring cost. To this end, we propose a monitor-based instant refactoring framework to drive inexperienced software engineers to conduct more refactorings promptly. Changes in the source code are instantly analyzed by a monitor running in the background. If these changes have the potential to introduce code smells, i.e., signs of potential problems in the code that might require refactorings, the monitor invokes corresponding smell detection tools and warns developers to resolve detected smells promptly. Feedback from developers, i.e., whether detected smells have been acknowledged and resolved, is consequently used to optimize smell detection algorithms. 
The proposed framework has been implemented, evaluated, and compared with the traditional human-driven refactoring tools. Evaluation results suggest that the proposed framework could drive inexperienced engineers to resolve more code smells (by an increase of 140 percent) promptly. The average lifespan of resolved smells was reduced by 92 percent. Results also suggest that the proposed framework could help developers to avoid similar code smells through timely warnings at the early stages of software development, thus reducing the total number of code smells by 51 percent.",2013,0, 6238,Modular Modeling for the Diagnostic of Complex Discrete-Event Systems,"For the complex systems, the development of a methodology of fault diagnosis is important. Indeed, for such systems, an efficient diagnosis contributes to the improvement of the availability, the growth of production, and, of course, the reduction of maintenance costs. It is a key action in the improvement of performance of industrial feature. This paper proposes a new approach to diagnose complex systems modeled by communicating timed automata. Each component has been modeled separately by a timed automaton integrating various operating modes while the communication between the various components is carried out by the control module. Starting from each module of the complex system, a single deterministic automaton, called a diagnoser, is constructed that uses observable events to detect the occurrence of a failure. This modeling formalism provides means for formal verification of the complex system model and its diagnoser. The model-checking methods are used to check correctness properties. The steps of the method are described by an algorithm and illustrated through a batch neutralization process. The implementation of the algorithm is also discussed.",2013,0, 6239,Automated Fault Diagnosis for an Autonomous Underwater Vehicle,"This paper reports our results in using a discrete fault diagnosis system Livingstone 2 (L2), onboard an autonomous underwater vehicle (AUV) Autosub 6000. Due to the difficulty of communicating between an AUV and its operators, AUVs can benefit particularly from increased autonomy, of which fault diagnosis is a part. However, they are also restricted in their power consumption. We show that a discrete diagnosis system can detect and identify a number of faults that would threaten the health of an AUV, while also being sufficiently lightweight computationally to be deployed onboard the vehicle. Since AUVs also often have their missions designed just before deployment in response to data from previous missions, a diagnosis system that monitors the software as well as the hardware of the system is also very useful. We show how a software diagnosis model can be built automatically that can be integrated with the hardware model to diagnose the complete system. We show empirically that on Autosub 6000 this allows us to diagnose real vehicle faults that could potentially lead to the loss of the vehicle.",2013,0, 6240,Code Coverage of Adaptive Random Testing,"Random testing is a basic software testing technique that can be used to assess the software reliability as well as to detect software failures. Adaptive random testing has been proposed to enhance the failure-detection capability of random testing. Previous studies have shown that adaptive random testing can use fewer test cases than random testing to detect the first software failure. 
In this paper, we evaluate and compare the performance of adaptive random testing and random testing from another perspective, that of code coverage. As shown in various investigations, a higher code coverage not only brings a higher failure-detection capability, but also improves the effectiveness of software reliability estimation. We conduct a series of experiments based on two categories of code coverage criteria: structure-based coverage, and fault-based coverage. Adaptive random testing can achieve higher code coverage than random testing with the same number of test cases. Our experimental results imply that, in addition to having a better failure-detection capability than random testing, adaptive random testing also delivers a higher effectiveness in assessing software reliability, and a higher confidence in the reliability of the software under test even when no failure is detected.",2013,0, 6241,Languages for software-defined networks,"Modern computer networks perform a bewildering array of tasks, from routing and traffic monitoring, to access control and server load balancing. However, managing these networks is unnecessarily complicated and error-prone, due to a heterogeneous mix of devices (e.g., routers, switches, firewalls, and middleboxes) with closed and proprietary configuration interfaces. Software-defined networks are poised to change this by offering a clean and open interface between networking devices and the software that controls them. In particular, many commercial switches support the OpenFlow protocol, and a number of campus, data center, and backbone networks have deployed the new technology. However, while SDNs make it possible to program the network, they do not make it easy. Today's OpenFlow controllers offer low-level APIs that mimic the underlying switch hardware. To reach SDN's full potential, we need to identify the right higher-level abstractions for creating (and composing) applications. In the Frenetic project, we are designing simple and intuitive abstractions for programming the three main stages of network management: monitoring network traffic, specifying and composing packet forwarding policies, and updating policies in a consistent way. Overall, these abstractions make it dramatically easier for programmers to write and reason about SDN applications.",2013,0, 6242,"ECG signal processing: Lossless compression, transmission via GSM network and feature extraction using Hilbert transform","Software based new, efficient and reliable lossless ECG data compression, transmission and feature extraction scheme is proposed here. The compression and reconstruction algorithm is implemented on C-platform. The compression scheme is such that the compressed file contains only ASCII characters. These characters are transmitted using internet based Short Message Service (SMS) system and at the receiving end, original ECG signal is brought back using just the reverse logic of compression. Reconstructed ECG signal is de-noised and R peaks are detected using Lagrange Five Point Interpolation formula and Hilbert transform. ECG baseline modulation correction is done and Q, S, QRS onset-offset points are identified. The whole module has been applied to various ECG data of all the 12 leads taken from PTB diagnostic ECG database (PTB-DB). It is observed that the compression module gives a moderate to high compression ratio (CR=7.18), an excellent Quality Score (QS=312.17) and the difference between original and reconstructed ECG signal is negligible (PRD=0.023%).
Also the feature extraction module offers a good level of Sensitivity and Positive Predictivity (99.91%) of R peak detection. Measurement errors in extracted ECG features are also calculated.",2013,0, 6243,Data Quality: Some Comments on the NASA Software Defect Datasets,"Background--Self-evidently empirical analyses rely upon the quality of their data. Likewise, replications rely upon accurate reporting and using the same rather than similar versions of datasets. In recent years, there has been much interest in using machine learners to classify software modules into defect-prone and not defect-prone categories. The publicly available NASA datasets have been extensively used as part of this research. Objective--This short note investigates the extent to which published analyses based on the NASA defect datasets are meaningful and comparable. Method--We analyze the five studies published in the IEEE Transactions on Software Engineering since 2007 that have utilized these datasets and compare the two versions of the datasets currently in use. Results--We find important differences between the two versions of the datasets, implausible values in one dataset and generally insufficient detail documented on dataset preprocessing. Conclusions--It is recommended that researchers 1) indicate the provenance of the datasets they use, 2) report any preprocessing in sufficient detail to enable meaningful replication, and 3) invest effort in understanding the data prior to applying machine learners.",2013,1, 6244,Software cost estimation using Particle Swarm Optimization in the light of Quality Function Deployment technique,"Although the software industry has seen a tremendous growth and expansion since its birth, it is continuously facing problems in its evolution. The major challenge for this industry is to produce quality software which is timely designed and built with proper cost estimates. Thus the techniques for controlling the quality and predicting cost of software are in the center of attention for many software firms. In this paper, we have tried to propose a cost estimation model based on Multi-objective Particle Swarm Optimization (MPSO) to tune the parameters of the famous COnstructive COst MOdel (COCOMO). This cost estimation model is integrated with Quality Function Deployment (QFD) methodology to assist decision making in software designing and development processes for improving the quality. This unique combination will help the project managers to efficiently plan the overall software development life cycle of the software product.",2013,0, 6245,Validating Software Reliability Early through Statistical Model Checking,"Conventional software reliability assessment validates a system's reliability only at the end of development, resulting in costly defect correction. A proposed framework employs statistical model checking (SMC) to validate reliability at an early stage. SMC computes the probability that a target system will satisfy functional-safety requirements. The framework compares the allocated reliability goal with the calculated reliability using the probabilities and relative weight values for the functional-safety requirements.
Early validation can prevent the propagation of reliability allocation errors and design errors at later stages, thereby achieving safer, cheaper, and faster development of safety-critical systems.",2013,0, 6246,Tailoring a large-sized software process using process slicing and case-based reasoning technique,"As the process tailoring is an inevitable and costly activity in software development projects, it is important to reduce the effort for process tailoring. It is critical to a large-sized software process. A large-sized software process usually contains hundreds of elements and relationships between the elements. Manually identifying the elements to be tailored is laborious and error-prone. To overcome this problem, the authors proposed an approach to process tailoring using process slicing (PS) in a large-sized software process. PS operates on a software process that includes various sub-processes and utilises past experience by case-based reasoning (CBR) technique to increase its effectiveness. The authors validated that PS can help a project manager to identify the elements to be tailored with less effort. It has also been illustrated that the CBR technique is helpful in reducing errors and increasing the performance of PS.",2013,0, 6247,Patterns of Knowledge in API Reference Documentation,"Reading reference documentation is an important part of programming with application programming interfaces (APIs). Reference documentation complements the API by providing information not obvious from the API syntax. To improve the quality of reference documentation and the efficiency with which the relevant information it contains can be accessed, we must first understand its content. We report on a study of the nature and organization of knowledge contained in the reference documentation of the hundreds of APIs provided as a part of two major technology platforms: Java SDK 6 and .NET 4.0. Our study involved the development of a taxonomy of knowledge types based on grounded methods and independent empirical validation. Seventeen trained coders used the taxonomy to rate a total of 5,574 randomly sampled documentation units to assess the knowledge they contain. Our results provide a comprehensive perspective on the patterns of knowledge in API documentation: observations about the types of knowledge it contains and how this knowledge is distributed throughout the documentation. The taxonomy and patterns of knowledge we present in this paper can be used to help practitioners evaluate the content of their API documentation, better organize their documentation, and limit the amount of low-value content. They also provide a vocabulary that can help structure and facilitate discussions about the content of APIs.",2013,0, 6248,Tool Use within NASA Software Quality Assurance,"As space mission software systems become larger and more complex, it is increasingly important for the software assurance effort to have the ability to effectively assess both the artifacts produced during software system development and the development process itself. Conceptually, assurance is a straightforward idea -- it is the result of activities carried out by an organization independent of the software developers to better inform project management of potential technical and programmatic risks, and thus increase management's confidence in the decisions they ultimately make. 
In practice, effective assurance for large, complex systems often entails assessing large, complex software artifacts (e.g., requirements specifications, architectural descriptions) as well as substantial amounts of unstructured information (e.g., anomaly reports resulting from testing activities during development). In such an environment, assurance engineers can benefit greatly from appropriate tool support. In order to do so, an assurance organization will need accurate and timely information on the tool support available for various types of assurance activities. In this paper, we investigate the current use of tool support for assurance organizations within NASA, and describe on-going work at JPL for providing assurance organizations with the information about tools they need to use them effectively.",2013,0, 6249,HETA: Hybrid Error-Detection Technique Using Assertions,"This paper presents HETA, a hybrid technique based on assertions and a non-intrusive enhanced watchdog module to detect SEE faults in microprocessors. These types of faults have a major influence on the microprocessor's control flow, causing incorrect jumps in the program's execution flow. In order to protect the system, a non-intrusive hardware module is implemented to monitor the data exchanged between the microprocessor and its memory. Since the hardware itself is not capable of detecting all control flow errors, it is enhanced to support a new software-based technique. Also, previous techniques are used to reach higher detection rates. A fault injection campaign is performed using a MIPS microprocessor. Simulation results show high detection rates with a small amount of performance degradation and area overhead.",2013,0, 6250,Automatic Arabic pronunciation scoring for computer aided language learning,"Automatic articulation scoring enables the computer to give feedback on the quality of pronunciation and to detect mispronounced phonemes. Computer-assisted language learning has evolved from simple interactive software that assesses the learner's knowledge of grammar and vocabulary to more advanced systems that accept speech input as a result of the recent development of speech recognition. Therefore many computer-based self-teaching systems have been developed for several languages such as English, German and Chinese; however, for Arabic the research is still in its infancy. This study is part of the Arabic Pronunciation improvement system for Malaysian Teachers of the Arabic language project, which aims at developing computer-based systems for standard Arabic language learning for Malaysian teachers of the Arabic language. The system aims to help teachers to learn the Arabic language quickly by focusing on the listening and speaking comprehension (receptive skills) to improve their pronunciation. In this paper we addressed the problem of giving marks for Arabic pronunciation by using an Automatic Speech Recognizer (ASR) based on Hidden Markov Models (HMM). Therefore, our methodology for pronunciation assessment is based on the HMM log-likelihood probability; however, our main contribution was to train the system using both native and non-native speakers. This resulted in improving the system's accuracy from 87.61% to 89.69%.",2013,0, 6251,Relyzer: Application Resiliency Analyzer for Transient Faults,"Future microprocessors need low-cost solutions for reliable operation in the presence of failure-prone devices. 
A promising approach is to detect hardware faults by deploying low-cost software-level symptom monitors. However, there remains a nonnegligible risk that several faults might escape these detectors to produce silent data corruptions (SDCs). Evaluating and bounding SDCs is, therefore, crucial for low-cost resiliency solutions. The authors present Relyzer, an approach that can systematically analyze all application fault sites and identify virtually all SDC-causing program locations. Instead of performing fault injections on all possible application-level fault sites, which is impractical, Relyzer carefully picks a small subset. It employs novel fault-pruning techniques that reduce the number of fault sites by either predicting their outcomes or showing them equivalent to others. Results show that 99.78 percent of faults are pruned across 12 studied workloads, reducing the complete application resiliency evaluation time by 2 to 6 orders of magnitude. Relyzer, for the first time, achieves the capability to list virtually all SDC-vulnerable program locations, which is critical in designing low-cost application-centric resiliency solutions. Relyzer also opens new avenues of research in designing error-resilient programming models as well as even faster (and simpler) evaluation methodologies.",2013,0, 6252,Fault Detection in Nonlinear Stable Systems Over Lossy Networks,"This paper addresses the problem of fault detection (FD) in nonlinear stable systems, which are monitored via communications networks. An FD based on the system data provided by the communications network is called networked fault detection (NFD) or over-network FD in the literature. A residual signal is generated, which gives a satisfactory estimation of the fault. A sufficient condition is derived, which minimizes the estimation error in the presence of packet drops, quantization error, and unwanted exogenous inputs such as disturbance and noise. A linear matrix inequality is obtained for the design of the FD filter parameters. In order to produce appropriate fault alarms, two widely used residual signal evaluation methodologies, based on the signals' peak and average values, are presented and compared. Finally, the effectiveness of the proposed NFD technique is extensively assessed by using an experimental testbed that was built for performance evaluation of such systems with the use of IEEE 802.15.4 wireless sensor networks (WSNs) technology. In particular, this paper describes the issue of floating point calculus when connecting the WSNs to engineering design software, such as MATLAB, and a possible solution is presented.",2013,0, 6253,Web based testing: An optimal solution to handle peak load,"Software Testing is a difficult task and testing web applications may be even more difficult due to peculiarities of such applications. One way to assess IT infrastructure performance is through load testing, which lets you assess how your Web site supports its expected workload by running a specified set of scripts that emulate customer behavior at different load levels. This paper describes the QoS factors load testing addresses, how to conduct load testing, and how it addresses business needs at several requirement levels, and presents the efficiency of web based applications in terms of QoS, throughput and Response Time.",2013,0, 6254,Reliable code coverage technique in software testing,"E-Learning has become a major field of interest in recent years, and multiple approaches and solutions have been developed. 
Testing in E-Learning software is the most important way of assuring the quality of the application. E-Learning software development suffers from miscommunication or no communication, software complexity, programming errors, time pressures and changing requirements, and too much unrealistic software, all of which result in bugs. In order to remove or defuse the bugs that cause many project failures at the final stage of delivery, this paper focuses on proposing a reliable code coverage technique in software testing, which will help ensure a bug-free delivery of the software. Software testing aims at detecting error-prone areas. This helps in the detection and correction of errors. It can be applied at the unit, integration and system levels of the software testing process, and it is usually done at the unit level. This method of test design uncovered many errors or problems. Experimental results show that the increase in software performance rating and software quality assurance increases the testing performance level.",2013,0, 6255,HELIOLIB/MIDL: An example of code reuse over mission lifetime,"This paper presents an overview of using a single semantic data model (HELIOLIB) and science data analysis code base (MIDL) throughout all phases of a spacecraft mission from integration and testing to archive submission. By using a single code base to create both ground support software and analysis software, we reduce costs and increase the integrity of the final science data product. The detailed, error-prone task of decoding telemetry is coded only once, and is tested early in the instrument development cycle. Not only does this result in better science telemetry processing later on, but can also reveal instrument or data design issues while the instrument is still on the ground. The telemetry processing code is then encapsulated within a data model that allows the code to be used as a pluggable reader module within science analysis tools. The daily use of these tools (MIDL) by instrument scientists helps validate the code. Final archive products can then be created with the same code base (same jar file even), ensuring that the quality of the archive products is as good as the data used routinely by the instrument team.",2013,0, 6256,Comprehensive visual field test & diagnosis system in support of astronaut health and performance,"Long duration spaceflight, permanent human presence on the Moon, and future human missions to Mars will require autonomous medical care to address both expected and unexpected risks. An integrated non-invasive visual field test & diagnosis system is presented for the identification, characterization, and automated classification of visual field defects caused by the spaceflight environment. This system will support the onboard medical provider and astronauts on space missions with an innovative, non-invasive, accurate, sensitive, and fast visual field test. It includes a database for examination data, and a software package for automated visual field analysis and diagnosis. The system will be used to detect and diagnose conditions affecting the visual field, while in space and on Earth, permitting the timely application of therapeutic countermeasures before astronaut health or performance is impaired. State-of-the-art perimetry devices are bulky, thereby precluding application in a spaceflight setting. 
In contrast, the visual field test & diagnosis system requires only a touchscreen-equipped computer or touchpad device, which may already be in use for other purposes (i.e., no additional payload), and custom software. The system has application in routine astronaut assessment (Clinical Status Exam), pre-, in-, and post-flight monitoring, and astronaut selection. It is deployable in operational space environments, such as aboard the International Space Station or during future missions to or permanent presence on the Moon and Mars.",2013,0, 6257,Assessing the Cost Effectiveness of Fault Prediction in Acceptance Testing,"Until now, various techniques for predicting fault-prone modules have been proposed and evaluated in terms of their prediction performance; however, their actual contribution to business objectives such as quality improvement and cost reduction has rarely been assessed. This paper proposes using a simulation model of software testing to assess the cost effectiveness of test effort allocation strategies based on fault prediction results. The simulation model estimates the number of discoverable faults with respect to the given test resources, the resource allocation strategy, a set of modules to be tested, and the fault prediction results. In a case study applying fault prediction of a small system to acceptance testing in the telecommunication industry, results from our simulation model showed that the best strategy was to let the test effort be proportional to """"the number of expected faults in a module log(module size)."""" By using this strategy with our best fault prediction model, the test effort could be reduced by 25 percent while still detecting as many faults as were normally discovered in testing, although the company required about 6 percent of the test effort for metrics collection, data cleansing, and modeling. The simulation results also indicate that the lower bound of acceptable prediction accuracy is around 0.78 in terms of an effort-aware measure, Norm(Popt). The results indicate that reduction of the test effort can be achieved by fault prediction only if the appropriate test strategy is employed with high enough fault prediction accuracy. Based on these preliminary results, we expect further research to assess their general validity with larger systems.",2013,0, 6258,Impacts of information and communication failures on optimal power system operation,"This paper focuses on recognizing the ways in which information and communication network failures cause a loss of control over a power system's operation. Using numerical evidence, it also assesses the specific impacts of such failures on the optimal operation of a power system. Optimal power flow (OPF) is the most prominent method for implementing optimal operation. In OPF, it is assumed that all power appliances are accessible through the communication and information network, and all power devices are set as the output of OPF; nevertheless, the loss of control and operation of the power system's apparatuses may seriously impact the real-time operation of the bulk power system. The control and operation of the power system is dedicated to a modern communication network, in that intelligent electronic devices (IEDs) are connected to apparatuses of the power network. Data communication among IEDs enables both automatic and remote manual control of the power system. 
Although such a network offers new advantages and possibilities not previously achievable, it intrinsically has its own source of failures, such as the failure of physical components, loss of integrity, software failures and data communication faults.",2013,0, 6259,On the Relationship between Program Evolution and Fault-Proneness: An Empirical Study,"Over the years, many researchers have studied the evolution and maintenance of object-oriented source code in order to understand the possibly costly erosion of the software. However, many studies thus far did not link the evolution of classes to faults. Indeed, (1) some classes evolve independently, while other classes have to evolve together with others (co-evolution), and (2) not all classes are meant to last forever; some are meant for experimentation or to try out an idea that is then dropped or modified. In this paper, we group classes based on their evolution to infer their lifetime models and coevolution trends. Then, we link each group's evolution to faults. We create phylogenetic trees showing the evolutionary history of programs and we use such trees to facilitate spotting program code decay. We perform an empirical study, on three open-source programs: ArgoUML, JFreechart, and XercesJ, to examine the relation between the evolution of object-oriented source code at class level and fault-proneness. Our results indicate that (1) classes having a specific lifetime model are significantly less fault-prone than other classes and (2) faults fixed by maintaining co-evolved classes are significantly more frequent than faults fixed in non-co-evolved classes.",2013,0, 6260,An Empirical Analysis of Bug Reports and Bug Fixing in Open Source Android Apps,"Smartphone platforms and applications (apps) have gained tremendous popularity recently. Due to the novelty of the smartphone platform and tools, and the low barrier to entry for app distribution, apps are prone to errors, which affects user experience and requires frequent bug fixes. An essential step towards correcting this situation is understanding the nature of the bugs and bug-fixing processes associated with smartphone platforms and apps. However, prior empirical bug studies have focused mostly on desktop and server applications. Therefore, in this paper, we perform an in-depth empirical study on bugs in the Google Android smartphone platform and 24 widely-used open-source Android apps from diverse categories such as communication, tools, and media. Our analysis has three main thrusts. First, we define several metrics to understand the quality of bug reports and analyze the bug-fix process, including developer involvement. Second, we show how differences in bug life-cycles can affect the bug-fix process. Third, as Android devices carry significant amounts of security-sensitive information, we perform a study of Android security bugs. We found that, although contributor activity in these projects is generally high, developer involvement decreases in some projects; similarly, while bug-report quality is high, bug triaging is still a problem. Finally, we observe that in Android apps, security bug reports are of higher quality but get fixed more slowly than non-security bugs. 
We believe that the findings of our study could potentially benefit both developers and users of Android apps.",2013,0, 6261,A Study on the Relation between Antipatterns and the Cost of Class Unit Testing,"Antipatterns are known as recurring, poor design choices; recent and past studies have indicated that they negatively affect software systems in terms of understandability and maintainability, also increasing change- and defect-proneness. For this reason, refactoring actions are often suggested. In this paper, we investigate a different side-effect of antipatterns, which is their effect on testability and on testing cost in particular. We consider as an (upper bound) indicator of testing cost the number of test cases that satisfy the minimal data member usage matrix (MaDUM) criterion proposed by Bashir and Goel. A study, carried out on four Java programs (Ant 1.8.3, ArgoUML 0.20, Check Style 4.0, and JFreeChart 1.0.13), supports the evidence that, on the one hand, unit testing of antipattern classes requires, on average, a substantially higher number of test cases than unit testing of non-antipattern classes. On the other hand, antipattern classes must be carefully tested because they are more defect-prone than other classes. Finally, we illustrate how specific refactoring actions, applied to classes participating in antipatterns, could reduce testing cost.",2013,0, 6262,Relating Clusterization Measures and Software Quality,"Empirical studies have shown that dependence clusters are both prevalent in source code and detrimental to many activities related to software, including maintenance, testing and comprehension. Based on such observations, it would be worthwhile to try to give a more precise characterization of the connection between dependence clusters and software quality. Such attempts are hindered by a number of difficulties: there are problems in assessing the quality of software, measuring the degree of clusterization of software and finding the means to exhibit the connection (or lack of it) between the two. In this paper we present our approach to establish a connection between software quality and clusterization. Software quality models comprise low- and high-level quality attributes; in addition, we defined new clusterization metrics that give a concise characterization of the clusters contained in programs. Apart from calculating correlation coefficients, we used mutual information to quantify the relationship between clusterization and quality. Results show that a connection can be demonstrated between the two, and that mutual information combined with correlation can be a better indicator to conduct deeper examinations in the area.",2013,0, 6263,"Adoption of Software Testing in Open Source Projects--A Preliminary Study on 50,000 Projects","In software engineering, testing is a crucial activity that is designed to ensure the quality of program code. For this activity, development teams spend substantial resources constructing test cases to thoroughly assess the correctness of software functionality. What, however, is the proportion of open source projects that include test cases? What kind of projects are more likely to include test cases? 
In this study, we explore 50,000 projects and investigate the correlation between the presence of test cases and various project development characteristics, including the lines of code and the size of development teams.",2013,0, 6264,Supplying Compiler's Static Compatibility Checks by the Analysis of Third-Party Libraries,"Statically typed languages and their compile-time checks prevent many runtime errors thanks to type mismatch detection, namely calls to incompatible methods. Since current applications typically include tens of already compiled third-party libraries, the compile-time checks are powerless to check their mutual dependencies. However, calls among third-party library methods are no less error-prone, due to bugs or wrong library usage, and these all lead to runtime failures. In this paper, we describe a partial solution to this problem based on the static analysis of third-party libraries and verification of their dependencies. This verification is invoked as the application is compiled and assembled, essentially supplementing the compiler by detecting errors before the application runs. This approach promises improved error detection of complex applications at the static type checking level.",2013,0, 6265,A Pilot Study on Software Quality Practices in Belgian Industry,"In the context of an ERDF-funded project portfolio, we have carried out a survey to assess the state-of-the-practice in software quality in Belgian companies. With this survey, we wish to find out what are the most common industry practices (processes, techniques and tools) with respect to software quality, and how these practices vary across companies. Companies could use the results of this study to improve upon their current software quality practices compared to other companies. Researchers could use it to develop better techniques and tools for aspects that have not found sufficient take-up by industry. Teachers may use it to adapt their courses to become more directly relevant to industry practices.",2013,0, 6266,Facts and Fallacies of Reuse in Practice,"Despite the positive effects of reuse claimed in a significant amount of research, anecdotal evidence indicates that industry is not yet experiencing the expected benefit. This dissertation proposal aims to investigate these indicators and therefore addresses reuse from an industrial point of view. As a first step, it empirically assesses the general state of reuse in practice. This is achieved via a large-scale online questionnaire distributed to multiple companies. Complementing the questionnaire, extensive interviews are being scheduled with developers and project managers of the respective companies. The goal is to interview approximately 10 employees per company. Three companies have already committed to the interview phase and contact with seven further companies is currently being established. In a second step, the findings of the study will be used to extract the context and type of reuse, as well as success factors and hindrances. This information forms the basis for an analytical assessment model for internal code reuse, which is developed in a third step. It will capture a range of different aspects of reuse in practice and will be combined with a process to evaluate the adequacy of reuse. The planned result is a larger assessment framework for evaluating the reuse (management) process within a project as well as a multi-project context. 
As a result, guidelines for code organization should be developed and tested for their effects in improving reuse in one or more of the industrial partners' projects.",2013,0, 6267,Quality Assessment in the Cloud: Is It Worthwhile?,"As software systems become increasingly complex, the aspects of quality that we want to assess become increasingly diverse, requiring the usage of a significant number of tools. Therefore, the installation and proper configuration of these various analysis tools, as well as running them on local computers for large-scale systems, becomes more and more of a significant investment, both in terms of time and computing power. In this paper we present how the infrastructure and services that are developed within the HOST project could be employed to facilitate the extensive use of quality assessment tools. We present the HOST service functionality infrastructure by showing how INFUSION, a popular tool for detecting design flaws, has been integrated and used there. The paper presents the performance improvements due to running INFUSION within a cloud infrastructure and discusses the trade-offs of moving software analysis tools into the cloud.",2013,0, 6268,Vulnerability Scrying Method for Software Vulnerability Discovery Prediction Without a Vulnerability Database,"Predicting software vulnerability discovery trends can help improve secure deployment of software applications and facilitate backup provisioning, disaster recovery, diversity planning, and maintenance scheduling. Vulnerability discovery models (VDMs) have been studied in the literature as a means to capture the underlying stochastic process. Based on the VDMs, a few vulnerability prediction schemes have been proposed. Unfortunately, all these schemes suffer from the same weaknesses: they require a large amount of historical vulnerability data from a database (hence they are not applicable to a newly released software application), their precision depends on the amount of training data, and they have a significant amount of error in their estimates. In this work, we propose vulnerability scrying, a new paradigm for vulnerability discovery prediction based on code properties. Using compiler-based static analysis of a codebase, we extract code properties such as code complexity (cyclomatic complexity), and more importantly code quality (compliance with secure coding rules), from the source code of a software application. Then we propose a stochastic model which uses code properties as its parameters to predict vulnerability discovery. We have studied the impact of code properties on the vulnerability discovery trends by performing static analysis on the source code of four real-world software applications. We have used our scheme to predict vulnerability discovery in three other software applications. The results show that even though we use no historical data in our prediction, vulnerability scrying can predict vulnerability discovery with better precision and less divergence over time.",2013,0, 6269,Evaluation of Stretcher Alignment in Radiotherapy using Computed Tomography,"Currently, Computed Tomography (CT) volumes are more widely and frequently used for planning the radiotherapy treatment of oncological patients. These image volumes are instrumental in positioning the patient and ensuring the precise incidence of radiation fields. Any error in such a process will adversely affect the result of the treatment and, consequently, the health of the patient. 
Therefore, quality control is mandatory throughout this process. As regards image acquisition, the controls are typically executed manually, implying a tedious repetitive task. This problem spurred the need to develop an algorithm that allows eliminating this issue while it evaluates and corrects the main concern as well. Particularly, an algorithm for automatic control of the entire positioning process is proposed here, which prevents errors due to deviations on both the longitudinal and transversal axes of the arrangement. The alignment will vary according to the anatomical part to be treated and the type of stretcher support, among other aspects. The application was developed by resorting to free software tools (GPL) and the libraries for the processing, segmentation and registration of medical images (ITK). The algorithm detects these deviations on both axes by applying threshold methods, morphologic operations and the Hough Transform. This algorithm is currently operative in the Medical Image Processing Server of the Nuclear Medicine School Foundation (FUESMEN) of Mendoza, Argentina.",2013,0, 6270,Educational Software for Power Quality Analysis,"This paper presents educational software that allows users to generate, detect and classify electrical power disturbance signals using a Wavelet Transform and Neural Networks based algorithm. This software includes four main modules: a) Signal Acquisition Module that allows the incorporation of waveforms stored in a database; b) Generation Module which permits the generation of diverse disturbed waveforms; c) Detection Module that provides tools to analyze different disturbance detection algorithms; and d) Classification Module that determines the disturbance type using different pattern classification methods.",2013,0, 6271,An integrated health management process for automotive cyber-physical systems,"The automobile is one of the most widely distributed cyber-physical systems. Over the last few years, the electronic explosion in automotive vehicles has significantly increased the complexity, heterogeneity and interconnectedness of embedded systems. Although designed to sustain long life, systems degrade in performance due to the gradual development of anomalies, eventually leading to faults. In addition, system usage and operating conditions (e.g., weather, road surfaces, and environment) may lead to different failure modes that can affect the performance of vehicles. Advanced diagnosis and prognosis technologies are needed to quickly detect and isolate faults in network-embedded automotive systems so that proactive corrective maintenance actions can be taken to avoid failures and improve vehicle availability. This paper discusses an integrated diagnostic and prognostic framework, and applies it to two automotive systems, viz., a Regenerative Braking System (RBS) in hybrid electric vehicles and an Electric Power Generation and Storage (EPGS) system.",2013,0, 6272,Multiple fault diagnosis on a synchronous 2 pole generator using shaft and flux probe signals,A method for diagnosis of multiple incipient faults on a 2-pole synchronous generator is presented. Simulation of the generator on a finite element analysis (FEA) software package is used to predict the effects of these faults. Experimental analysis of the generator under fault conditions is then conducted and confirms the predicted behaviour. The investigation utilises shaft brushes as a non-invasive condition monitoring tool and search coils are used to validate findings from the shaft signal analysis. 
Results of the investigation indicate definitive relationships between the faults and specific harmonics of the output signals from the condition monitoring tools.,2013,0, 6273,Fault diagnosis of voltage sensor in grid-connected 3-phase voltage source converters,"This paper proposes a fault diagnosis method for the line-to-line voltage sensors in grid-connected 3-phase voltage source converters. The line-to-line voltage sensors are essential devices for obtaining information on the grid-side voltages for controlling the converters. If there are problems in the voltage sensors due to faults, the controller obtains wrong information about the grid voltage. This causes unbalanced 3-phase currents and pulsation of the DC-link voltage even though the power grid is healthy. Therefore fault diagnosis methods are required to detect the failure and to avoid abnormal operation. The proposed diagnosis method identifies whether the fault is at the voltage sensor or in the power grid when abnormal values are measured from the line-to-line voltage sensors. Fault-tolerant control is then possible in the case of a voltage sensor fault. The proposed method can improve the system reliability by just adding a software algorithm, without additional hardware circuits. The usefulness of this paper is verified through computer simulation and experiment.",2013,0, 6274,Robust and integrated diagnostics for safety systems in the industrial domain,"The development of robust, safety critical systems with effective diagnostics is increasingly difficult, since hardware is getting more complex, code size is constantly increasing and soft-errors (transient errors) are becoming a dominating factor. It is difficult to reach the required safety integrity in future systems without improving the way diagnostic functions are handled today. Diagnostics are an integral part of both hardware and software and it is crucial to design architectures with cross-connected and smart functions being able to detect dangerous errors in the system. While adequate safety is required by EU directives, the end customers also require high availability (uptime). This paper introduces a robust architecture that covers the requirements in order to build fault-tolerant and highly available systems for industrial devices.",2013,0, 6275,MedMon: Securing Medical Devices Through Wireless Monitoring and Anomaly Detection,"Rapid advances in personal healthcare systems based on implantable and wearable medical devices promise to greatly improve the quality of diagnosis and treatment for a range of medical conditions. However, the increasing programmability and wireless connectivity of medical devices also open up opportunities for malicious attackers. Unfortunately, implantable/wearable medical devices come with extreme size and power constraints, and unique usage models, making it infeasible to simply borrow conventional security solutions such as cryptography. We propose a general framework for securing medical devices based on wireless channel monitoring and anomaly detection. Our proposal is based on a medical security monitor (MedMon) that snoops on all the radio-frequency wireless communications to/from medical devices and uses multi-layered anomaly detection to identify potentially malicious transactions. Upon detection of a malicious transaction, MedMon takes appropriate response actions, which could range from passive (notifying the user) to active (jamming the packets so that they do not reach the medical device). 
A key benefit of MedMon is that it is applicable to existing medical devices that are in use by patients, with no hardware or software modifications to them. Consequently, it also leads to zero power overheads on these devices. We demonstrate the feasibility of our proposal by developing a prototype implementation for an insulin delivery system using off-the-shelf components (USRP software-defined radio). We evaluate its effectiveness under several attack scenarios. Our results show that MedMon can detect virtually all naive attacks and a large fraction of more sophisticated attacks, suggesting that it is an effective approach to enhancing the security of medical devices.",2013,0, 6276,Implementation of an integrated FPGA based automatic test equipment and test generation for digital circuits,"The VLSI circuit manufacturers cannot guarantee defect-free integrated circuits (ICs). Circuit complexity, IC defect anomalies, and economic considerations prevent complete validation of VLSI circuits. The aim is to present an integrated automatic test equipment/generation system for digital circuits. The test generation is developed using device behavior and is based on the Behavior-Based Automatic Test Generation (BBATG) technique. The behavior of a device is a set of functions with timing relations on its in/out pins. Automatic test equipment, which is a vital part of the electronics test scene today, provides the complete set of test-executing software and test-supporting hardware; the ATE can use the BBATG-generated test data directly to detect behavior faults and diagnose faults at the device level for digital circuits. A low-cost, versatile and reconfigurable FPGA-based ATE, called FATE, is implemented to support the ASIC development phase. This provides the ideal solution for engineers to develop test programs and perform device tests and yield analysis on their desktop and then transfer the test program directly to production. Thus it is able to execute a preliminary digital test using just a laptop and an FPGA board.",2013,0, 6277,Software defect prediction using software metrics - A survey,"Traditionally, software metrics have been used to define the complexity of a program and to estimate programming time. Extensive research has also been carried out to predict the number of defects in a module using software metrics. If the metric values are to be used in mathematical equations designed to represent a model of the software process, metrics associated with a ratio scale may be preferred, since ratio scale data allow most mathematical operations to meaningfully apply. There has also been work on the mechanics of implementing metrics programs. The goal of this research is to help developers identify defects based on existing software metrics using data mining techniques and thereby improve software quality, which ultimately leads to reducing the software cost in the development and maintenance phases. This research focuses on identifying defective modules so that the scope of software that needs to be examined for defects can be prioritized. This allows the developer to run test cases on the predicted modules. The proposed methodology helps in identifying modules that require immediate attention and hence the reliability of the software can be improved faster as higher priority defects can be handled first. Our goal in this research is to improve the classification accuracy of the data mining algorithm. 
To initiate this process, we first propose to evaluate the existing classification algorithms and, based on their weaknesses, we propose a novel neural network algorithm with a degree of fuzziness in the hidden layer to improve the classification accuracy.",2013,0, 6278,Developer Dashboards: The Need for Qualitative Analytics,"Prominent technology companies including IBM, Microsoft, and Google have embraced an analytics-driven culture to help improve their decision making. Analytics aim to help practitioners answer questions critical to their projects, such as """"Are we on track to deliver the next release on schedule?"""" and """"Of the recent features added, which are the most prone to defects?"""" by providing fact-based views about projects. Analytic results are often quantitative in nature, presenting data as graphical dashboards with reports and charts. Although current dashboards are often geared toward project managers, they aren't well suited to help individual developers. Mozilla developer interviews show that developers face challenges maintaining a global understanding of the tasks they're working on and that they desire improved support for situational awareness, a form of qualitative analytics that's difficult to achieve with current quantitative tools. This article motivates the need for qualitative dashboards designed to improve developers' situational awareness by providing task tracking and prioritizing capabilities, presenting insights on the workloads of others, listing individual actions, and providing custom views to help manage workload while performing day-to-day development tasks.",2013,0, 6279,Using Class Imbalance Learning for Software Defect Prediction,"To facilitate software testing, and save testing costs, a wide range of machine learning methods have been studied to predict defects in software modules. Unfortunately, the imbalanced nature of this type of data increases the learning difficulty of such a task. Class imbalance learning specializes in tackling classification problems with imbalanced distributions, which could be helpful for defect prediction, but has not been investigated in depth so far. In this paper, we study the question of whether and how class imbalance learning methods can benefit software defect prediction with the aim of finding better solutions. We investigate different types of class imbalance learning methods, including resampling techniques, threshold moving, and ensemble algorithms. Among those methods we studied, AdaBoost.NC shows the best overall performance in terms of measures including balance, G-mean, and Area Under the Curve (AUC). To further improve the performance of the algorithm, and facilitate its use in software defect prediction, we propose a dynamic version of AdaBoost.NC, which adjusts its parameter automatically during training. Without the need to pre-define any parameters, it is shown to be more effective and efficient than the original AdaBoost.NC.",2013,1, 6280,Design and verification tools for continuous fluid flow-based microfluidic devices,"This paper describes an integrated design, verification, and simulation environment for programmable microfluidic devices called laboratories-on-chip (LoCs). Today's LoCs are architected and laid out by hand, which is time-consuming, tedious, and error-prone. 
To increase designer productivity, this paper introduces a Microfluidic Hardware Design Language (MHDL) for LoC specification, along with software tools to help LoC designers verify the correctness of their specifications and estimate their performance.",2013,0, 6281,VISA synthesis: Variation-aware Instruction Set Architecture synthesis,"We present VISA: a novel Variation-aware Instruction Set Architecture synthesis approach that makes effective use of process variation from both software and hardware points of view. To achieve an efficient speedup, VISA selects custom instructions based on statistical static timing analysis (SSTA) for aggressive clocking. Furthermore, with minimum performance overhead, VISA dynamically detects and corrects timing faults resulting from aggressive clocking of the underlying processor. This hybrid software/hardware approach generates significant speedup without degrading the yield. Our experimental results on commonly used ISA synthesis benchmarks demonstrate that VISA achieves significant performance improvement compared with a traditional deterministic worst case-based approach (up to 78.0%) and an existing SSTA-based approach (up to 49.4%).",2013,0, 6282,An Efficient and Experimentally Tuned Software-Based Hardening Strategy for Matrix Multiplication on GPUs,"Neutron radiation experiment results on matrix multiplication on graphics processing units (GPUs) show that multiple errors are detected at the output in more than 50% of the cases. In the presence of multiple errors, the available hardening strategies may become ineffective or inefficient. Analyzing radiation-induced error distributions, we developed an optimized and experimentally tuned software-based hardening strategy for GPUs. With fault-injection simulations, we compare the performance and correcting capabilities of the proposed technique with the available ones.",2013,0, 6283,Scalable fault localization for SystemC TLM designs,"SystemC and Transaction Level Modeling (TLM) have become the de-facto standard for Electronic System Level (ESL) design. For the costly task of verification at ESL, simulation is the most widely used and scalable approach. Besides the Design Under Test (DUT), the TLM verification environment typically consists of stimuli generators and checkers, where the latter are responsible for detecting errors. However, in case of an error, the subsequent debugging process is still very time-consuming.",2013,0, 6284,Extracting useful computation from error-prone processors for streaming applications,"As semiconductor fabrics scale closer to fundamental physical limits, their reliability is decreasing due to process variation, noise margin effects, aging effects, and increased susceptibility to soft errors. Reliability can be regained through redundancy, error checking with recovery, voltage scaling and other means, but these techniques impose area/energy costs. Since some applications (e.g. media) can tolerate limited computation errors and still provide useful results, error-tolerant computation models have been explored, with both the application and computation fabric having stochastic characteristics. Stochastic computation has, however, largely focused on application-specific hardware solutions, and is not general enough to handle arbitrary bit errors that impact memory addressing or control in processors. 
In response, this paper addresses requirements for error-tolerant execution by proposing and evaluating techniques for running error-tolerant software on a general-purpose processor built from an unreliable fabric. We study the minimum error-protection required, from a microarchitecture perspective, to still produce useful results at the application output. Even with random errors as frequent as every 250s, our proposed design allows JPEG and MP3 benchmarks to sustain good output quality (14dB and 7dB, respectively). Overall, this work establishes the potential for error-tolerant single-threaded execution, and details its required hardware/system support.",2013,0, 6285,Reliability analysis reloaded: How will we survive?,"In safety-related applications and in products with long lifetimes, reliability is a must. Moreover, facing future integrated circuit technology nodes, device-level reliability may decrease, i.e., counter-measures have to be taken to ensure product-level reliability. But assessing the reliability of a large system is not a trivial task. This paper revisits the state-of-the-art in reliability evaluation starting from the physical device level, to the software system level, all the way up to the product level. Relevant standards and future trends are discussed.",2013,0, 6286,"Fault detection, real-time error recovery, and experimental demonstration for digital microfluidic biochips","Advances in digital microfluidics and integrated sensing hold promise for a new generation of droplet-based biochips that can perform multiplexed assays to determine the identity of target molecules. Despite these benefits, defects and erroneous fluidic operations remain a major barrier to the adoption and deployment of these devices. We describe the first integrated demonstration of cyberphysical coupling in digital microfluidics, whereby errors in droplet transportation on the digital microfluidic platform are detected using capacitive sensors, the test outcome is interpreted by control hardware, and software-based error recovery is accomplished using dynamic reconfiguration. The hardware/software interface is realized through seamless interaction between control software, an off-the-shelf microcontroller and a frequency divider implemented on an FPGA. Experimental results are reported for a fabricated silicon device and links to videos are provided for the first-ever experimental demonstration of cyberphysical coupling and dynamic error recovery in digital microfluidic biochips.",2013,0, 6287,On-line testing of permanent radiation effects in reconfigurable systems,"Partially reconfigurable systems are increasingly employed in many application fields, including aerospace. SRAM-based FPGAs represent an extremely interesting hardware platform for this kind of system, because they offer flexibility as well as processing power. In this paper we report on the ongoing development of a software flow for the generation of hard macros for on-line testing and diagnosing of permanent faults due to radiation in SRAM-FPGAs used in space missions. 
Once faults have been detected and diagnosed, the flow allows the generation of fine-grained patch hard macros that can be used to mask out the discovered faulty resources, allowing partially faulty regions of the FPGA to remain available for further use.",2013,0, 6288,Data mining MPSoC simulation traces to identify concurrent memory access patterns,"Due to a growing need for flexibility, massively parallel Multiprocessor SoC (MPSoC) architectures are currently being developed. This leads to the need for parallel software, but poses the problem of the efficient deployment of the software on these architectures. To address this problem, the usual practice is to execute the parallel program on the platform with software traces enabled and to visualize these traces to detect irregular timing behavior. This is error prone as it relies on software logs and human analysis, and requires an existing platform. To overcome these issues and automate the process, we propose the joint use of a virtual platform that logs memory accesses at the hardware level and a data-mining approach that automatically reports unexpected instruction timings and the context in which these instructions occur. We demonstrate the approach on a multiprocessor platform running a video decoding application.",2013,0, 6289,Efficient software-based fault tolerance approach on multicore platforms,This paper describes a low overhead software-based fault tolerance approach for shared memory multicore systems. The scheme is implemented at user-space level and requires almost no changes to the original application. Redundant multithreaded processes are used to detect soft errors and recover from them. Our scheme makes sure that the execution of the redundant processes is identical even in the presence of non-determinism due to shared memory accesses. It provides a very low overhead mechanism to achieve this. Moreover it implements a fast error detection and recovery mechanism. The overhead incurred by our approach ranges from 0% to 18% for selected benchmarks. This is lower than that of comparable systems published in the literature.,2013,0, 6290,Low cost permanent fault detection using ultra-reduced instruction set co-processors,"In this paper, we propose a new, low hardware overhead solution for permanent fault detection at the micro-architecture/instruction level. The proposed technique is based on an ultra-reduced instruction set co-processor (URISC) that, in its simplest form, executes only one Turing-complete instruction: the subleq instruction. Thus, any instruction on the main core can be redundantly executed on the URISC using a sequence of subleq instructions, and the results can be compared, also on the URISC, to detect faults. A number of novel software and hardware techniques are proposed to decrease the performance overhead of online fault detection while keeping the error detection latency bounded, including: (i) URISC routines and hardware support to check both control and data flow instructions; (ii) checking only a subset of instructions in the code based on a novel check window criterion; and (iii) URISC instruction set extensions. Our experimental results, based on FPGA synthesis and RTL simulations, illustrate the benefits of the proposed techniques.",2013,0, 6291,Improving fault tolerance utilizing hardware-software-co-synthesis,"Embedded systems consist of hardware and software and are ubiquitous in safety-critical and mission-critical fields. 
The increasing integration density of modern digital circuits causes an increasing vulnerability of embedded systems to transient faults. Techniques to improve fault tolerance are often implemented either in hardware or in software. In this paper, we focus on synthesis techniques to improve the fault tolerance of embedded systems considering hardware and software. A greedy algorithm is presented which iteratively assesses the fault tolerance of a processor-based system and decides which components of the system have to be hardened, choosing from a set of existing techniques. We evaluate the algorithm in a simple case study using a Traffic Collision Avoidance System (TCAS).",2013,0, 6292,Object oriented approach for building extraction from high resolution satellite images,"In this paper, an object oriented approach for automatic building extraction from high resolution satellite images is developed. Firstly, Single Feature Classification is applied on the high resolution satellite image. After that, the high resolution image is segmented by using the split and merge segmentation so that the pixels that are grouped as raster objects have probability attributes associated with them. Then different filters are applied to the image to remove the objects that are not of interest. After filtering the segments, the output raster image is converted into a vector image. After converting the raster image into a vector image, the building objects are extracted on the basis of area. Cleanup methods are applied to smooth the extracted buildings and also to increase the accuracy of building extraction. The Imagine Objective tool of ERDAS 2011 has been used. The approach is applied to three different satellite images. The extracted buildings are compared with the manually digitized buildings. For one satellite image it has picked up all the buildings with a slight change in the area of footprints of buildings. Only one patch of road is extracted as a building. For the other two satellite images, the overall accuracy is low as compared to the first satellite image. Some patches of road and ground are also extracted as buildings. The branching factor, miss factor, building detection percentage and quality percentage were also calculated for accuracy assessment. Nonetheless, the overall accuracy of building extraction with respect to area was found to be 85.38% in a set of 66 buildings, 73.81% in a set of 94 buildings and 70.64% in a set of 102 buildings.",2013,0, 6293,Predicting Design Quality of Object-Oriented Software using UML diagrams,"Assessment of Object Oriented Software Design Quality has been an important issue among researchers in the Software Engineering discipline. In this paper, we propose an approach for determining the design quality of an Object Oriented Software System. The approach makes use of a set of UML diagrams created during the design phase of the development process. Design metrics are fetched from the UML diagrams using a parser developed by us and design quality is assessed using a Hierarchical Model of Software Design Quality. To validate the design quality, we compute the product quality for the same software that corresponds to the UML design diagrams using available tools METRIC 1.3.4, JHAWK and Team In a Box. The objective is to establish a correspondence between design quality and product quality of Object Oriented Software. For this purpose, we have chosen three software systems known a priori to be of Low, Medium and High quality. 
This is work in progress, though; the substantial task has already been completed.",2013,0, 6294,Fault Injection for Software Certification,"As software becomes more pervasive and complex, it's increasingly important to ensure that a system will be safe even in the presence of residual software faults (or bugs). Software fault injection consists of the deliberate introduction of software faults for assessing the impact of faulty software on a system and improving its fault tolerance. SFI has been included as a recommended practice in recent safety standards and has therefore gained interest among practitioners, but it's still unclear how it can be effectively used for certification purposes. In this article, the authors discuss the adoption of SFI in the context of safety certification, present a tool for the injection of realistic software faults, and show the usage of that tool in evaluating and improving the robustness of an operating system used in the avionic domain.",2013,0, 6295,Development of a stereo vision measurement architecture for an underwater robot,"Underwater robotics tasks are considered very critical, mainly because of the hazardous environment. The embedded systems for this kind of robots should be robust and fault-tolerant. This paper describes the development of a system for embedded stereo vision in real-time, using a hardware/software co-design approach. The system is capable of detecting an object and measuring the distance between the object and the cameras. The platform uses two CMOS cameras, a development board with a low-cost FPGA, and a display for visualizing images. Each camera provides a pixel-clock, which are used to synchronize the processing architectures inside the FPGA. For each camera a hardware architecture has been implemented for detecting objects, using a background subtraction algorithm. Whenever an object is detected, its center of mass is calculated in both images, using another hardware architecture to do that. The coordinates of the object center in each image are sent to a soft-processor, which computes the disparity and determines the distance from the object to the cameras. A calibration procedure gives the soft-processor the capability of computing both disparities and distances. The synthesis tool used (Altera Quartus II) estimates that the system consumes 115.25mW and achieves a throughput of 26.56 frames per second (800×480 pixels). These synthesis and operation results have shown that the implemented system is useful for real-time distance measurements, achieving a good precision and an adequate throughput, being suitable for real-time critical operation.",2013,0, 6296,Test suite prioritisation using trace events technique,"The size of the test suite and the duration of time determine the time taken by the regression testing. Conversely, the testers can prioritise the test cases by the use of a competent prioritisation technique to obtain an increased rate of fault detection in the system, allowing for earlier corrections, and getting higher overall confidence that the software has been tested suitably. A prioritised test suite is more likely to be more effective during that time period than would have been achieved via a random ordering if execution needs to be suspended after some time. An enhanced test case ordering may be probable if the desired implementation time to run the test cases is proven earlier. This research work's main intention is to prioritise the regression-testing test cases.
In order to prioritise the test cases, some factors are considered here. These factors are employed in the prioritisation algorithm. The trace events are one of the important factors, used to find the most significant test cases in the projects. The requirement factor value is calculated and subsequently a weightage is calculated and assigned to each test case in the software based on these factors by using a thresholding technique. Later, the test cases are prioritised according to the weightage allocated to them. Executing the test cases based on the prioritisation will greatly decrease the computation cost and time. The proposed technique is efficient in prioritising the regression test cases. The new prioritised subsequences of the given unit test suites are executed on Java programs after the completion of prioritisation. Average of the percentage of faults detected is an evaluation metric used for evaluating the 'superiority' of these orderings.",2013,0, 6297,Validating dimension hierarchy metrics for the understandability of multidimensional models for data warehouse,"Structural properties including hierarchies have been recognised as important factors influencing the quality of a software product. Metrics based on structural properties (structural complexity metrics) have been popularly used to assess quality attributes like understandability, maintainability, fault-proneness etc. of a software artefact. Although few researchers have considered metrics based on dimension hierarchies to assess the quality of multidimensional models for data warehouse, there are certain aspects of dimension hierarchies like those related to multiple hierarchies, shared dimension hierarchies among various dimensions etc. which have not been considered in the earlier works. In the authors' previous work, they identified the metrics based on these aspects which may contribute towards the structural complexity and in turn the quality of multidimensional models for data warehouse. However, the work lacks theoretical and empirical validation of the proposed metrics, and any metric proposal is acceptable in practice only if it is theoretically and empirically valid. In this study, the authors provide thorough validation of the metrics considered in their previous work. The metrics have been validated theoretically on the basis of Briand's framework - a property-based framework - and empirically on the basis of a controlled experiment using statistical techniques like correlation and linear regression. The results of these validations indicate that these metrics are either size or length measures and hence contribute significantly towards the structural complexity of multidimensional models and have considerable impact on the understandability of these models.",2013,0, 6298,Early performance assessment in component-based software systems,"Most techniques used to assess the qualitative characteristics of software are applied in the testing phase of software development. Assessment of performance in the early software development process is particularly important to risk management. Software architecture, as the first product, plays an important role in the development of complex software systems. Using software architecture, quality attributes (such as performance, reliability and security) can be evaluated at the early stages of software development. In this study, the authors present a framework for taking advantage of architectural description to evaluate software performance.
To do so, the authors describe the static structure and architectural behaviour of a software system as the sequence diagram and the component diagram of the Unified Modelling Language (UML), respectively; then, the described model is automatically converted into the 'interface automata', which provides the formal foundation for the evaluation. Finally, the evaluation of architectural performance is performed using 'queuing theory'. The proposed framework can help the software architect to choose an appropriate architecture in terms of quality or remind him/her to make necessary changes in the selected architecture. The main difference between the proposed method and other methods is that the proposed method benefits from informal description methods, such as UML, to describe the architecture of software systems; it also enjoys a formal and lightweight language, called 'interface automata', to provide the infrastructure for verification and evaluation.",2013,0, 6299,Classification and diagnosis of broken rotor bar faults in induction motor using spectral analysis and SVM,"In this paper, we propose to detect and localize the broken bar faults in a multi-winding induction motor using Motor Current Signature Analysis (MCSA) combined with Support Vector Machine (SVM). The analysis of stator currents in the frequency domain is the most commonly used method, because induction machine faults often generate particular frequency components in the stator current spectrum. In order to obtain a more robust diagnosis, we propose to classify the feature vectors extracted from the magnitude of the spectral analysis using multi-class SVM to discriminate the state of the motor. Finally, in order to validate our proposed approach, we simulated the multi-winding induction motor under Matlab software. Promising results were obtained, which confirms the validity of the proposed approach.",2013,0, 6300,Cooperative sensor anomaly detection using global information,"Sensor networks are deployed in many application areas nowadays ranging from environment monitoring, industrial monitoring, and agriculture monitoring to military battlefield sensing. The accuracy of sensor readings is without a doubt one of the most important measures to evaluate the quality of a sensor and its network. Therefore, this work is motivated to propose approaches that can detect and repair erroneous (i.e., dirty) data caused by inevitable system problems involving various hardware and software components of sensor networks. As information about a single event of interest in a sensor network is usually reflected in multiple measurement points, the inconsistency among multiple sensor measurements serves as an indicator for a data quality problem. The focus of this paper is thus to study methods that can effectively detect and identify erroneous data among inconsistent observations based on the inherent structure of various sensor measurement series from a group of sensors. Particularly, we present three models to characterize the inherent data structures among sensor measurement traces and then apply these models individually to guide the error detection of a sensor network. First, we propose a multivariate Gaussian model which explores the correlated data changes of a group of sensors. Second, we present a Principal Component Analysis (PCA) model which captures the sparse geometric relationship among sensors in a network. The PCA model is motivated by the fact that not all sensor networks have clustered sensor deployment and clear data correlation structure.
Further, if the sensor data show non-linear characteristics, a traditional PCA model cannot capture the data attributes properly. Therefore, we propose a third model which utilizes kernel functions to map the original data into a high dimensional feature space and then apply the PCA model on the mapped linearized data. All these three models serve the purpose of capturing the underlying phenomenon of a sensor network from its global view, and then guide the error detection to discover any anomaly observations. We conducted simulations for each of the proposed models, and evaluated the performance by deriving the Receiver Operating Characteristic (ROC) curves.",2013,0, 6301,Increasing the security level of analog IPs by using a dedicated vulnerability analysis methodology,"With the increasing diffusion of multi-purpose systems such as smart phones and set-top boxes, security requirements are becoming as important as power consumption and silicon area constraints in SoC and ASIC conception. At the same time, the complexity of IPs and the new technology nodes make the security evaluation more difficult. Indeed, predicting how a circuit behaves when pushed beyond its specification limits is now a harder task. While security concerns in software development and digital hardware design are very well known, analog hardware security issues are not really studied. This paper first introduces the security concerns for analog and mixed circuits and then presents a vulnerability analysis methodology dedicated to them. Using this methodology, the security level of AMS SoC and Analog IP is increased by objectively evaluating its vulnerabilities and selecting appropriate countermeasures in the earliest design steps.",2013,0, 6302,Commissioning and periodic maintenance of microprocessor-based protection relays at industrial facilities,"Microprocessor-based protective relays are being used throughout industrial facilities and offer the benefits of extensive metering and monitoring, which include sequence components and waveform capturing. There are two types of relay testing which are performed on microprocessor-based protective relays: (1) commission testing; and (2) routine or periodic testing. Commission testing is extensive and exhaustive and its role is to completely test the design and installation of the protective system. Routine or periodic testing is used to validate that a protective system will perform its task by verifying the relay is measuring correctly, set correctly and that it will operate its output contacts for a fault or alarm condition. This paper will first review the differences and functions of commission testing and routine/periodic testing. Secondly, the paper will review methods to use the smarts of the microprocessor-based protective relay to detect issues during startup or during normal operation. These methods include protective relay setting comparison, minimal negative sequence current and voltage, verification/recognition of contact inputs, manual operation of contact outputs, complete control circuitry (trip, close, start, stop functions), lack of device self-test alarms, device date & time, and phasor diagrams provided by the protective relay. Examples will be reviewed on the methods including an overview of symmetrical components. The paper will discuss options of installing test switches for AC current & AC voltage isolation and use of a spare relay case or chassis for bench tests/verifications.
In addition, the paper will discuss the periodic tests that should be performed on protective relay spares that are stored in an industrial facility's warehouse.",2013,0, 6303,Dependability Prediction of WS-BPEL Service Compositions Using Petri Net and Time Series Models,"Web services are emerging as a major technology for deploying automated interactions between distributed and heterogeneous applications. To predict the dependability of composite service processes allows service users to decide whether a service process meets quantitative trustworthiness requirements. Existing contributions for dependability prediction simply trust QoS information published in Service-Level-Agreements (SLAs) or assume the QoS of service activities to follow certain assumed distributions. This information and these distributions are used as static model inputs into stochastic process models to obtain analytical results. Instead, we consider the QoS of service activities to be fluctuating and introduce a dynamic framework to predict the runtime dependability of service compositions built on WS-BPEL, employing the Autoregressive-Moving-Average (ARMA) time series model and a general stochastic Petri net model. In the case study of a real-world service composition sample, a comparison between existing approaches and ours is presented and the results suggest that our approach achieves higher prediction accuracy and a better curve-fitting.",2013,0, 6304,A User-Oriented Trust Model for Web Services,"Trust is one of the most critical quality factors for service requestors when they select services from a large pool of Web services. However, existing web service trust models either do not focus on satisfying users' preferences for different quality of service (QoS) attributes, or do not pay enough attention to the impact of malicious ratings on trust evaluation. To address these gaps, a user-oriented trust model considering users' preferences and false ratings is proposed in this paper. The model introduces an approach to automatically mine users' preferences from their requirements; the preferences are used to determine the weights of each QoS attribute when integrating local trust into the multi-dimensional QoS attributes. The local trust on a service for the user is derived by combining the trust on QoS attributes and the trust on users' ratings. In this model, the users are classified into different groups according to their preferences; the honesty of each group is assessed by filtering out dishonest users using a hybrid approach which combines rating consistency clustering and an average method. To calculate the global trustworthiness of a service for the users group, the weight of ratings is dynamically adjusted according to the results of the honesty assessment. The simulation results indicate that the model works well on personalized evaluation of trust, and it can effectively dilute the influence of malicious ratings.",2013,0, 6305,On the design of Trojan tolerant finite field multipliers,"In this paper we analyze the process variation in different multiplier circuits and describe techniques to design error correcting circuits. Integrated circuits have reached such a level of integration that the length of transistors is limited to tens of nanometres.
The increasing difficulty of fabricating millions of transistors with the same parameters specified in the integrated circuit design has led to variation in the performance of the integrated circuit, for instance the thickness of the gate oxide, the length and width of the transistor, the doping concentration in the N well substrate, gate threshold voltage and so on. This process variation can be misused for Trojan attacks. Trojan attacks are based on injecting some fault into the cryptosystem and observing any leak of information by analyzing the erroneous results due to the additional Trojan circuitry. In order to avoid such fault-based attacks, the cryptosystem can be used to detect errors and correct computations, thereby not producing any erroneous results as output. In this paper we further discuss the error correcting finite field multiplier; as on-line error correction is done, it results in more robust hardware modules. The Trojan circuitry can be added even after the error correction stage and hence we have designed a new technique such that error detection and correction is done irrespective of the position of the Trojan in the multiplier.",2013,0, 6306,Simulation Model of IBM-Watson Intelligent System for Early Software Development,"IBM-Watson is among the leading intelligent systems available today. It is extremely complex, employing multiple processors, peripherals, interconnects, along with specialized software and applications. Developing software for this architecture, in the absence of the target platform, is an extremely error-prone affair. Bringup of software once the hardware is available detects a large number of bugs, throwing the project cost and schedule out of control. This paper introduces a methodology based on IBM-Watson system-level simulation, developed using high-level simulation models of all the components of the target architecture. This methodology helps to debug, verify and fine-tune the IBM-Watson software much before the availability of the target hardware. Use of this methodology enables detecting the bugs much earlier in the development cycle. Majority of the defects are removed much earlier and software bringup-time on actual hardware has been reduced from months to days!",2013,0, 6307,Mathematical Function of a Signal Generator for Voltage Dips Analysis,"This paper presents a mathematical model for a complex voltage dip signal generator that can be used to generate waveforms similar to those recorded. The voltage dip signal generator is designed to shape the voltage waveform for normal operating conditions, during the voltage dip stage and the transition between these two situations. The advantage of using a generator instead of real data measurements is that the dip parameters are known, and in this way it can be detected whether the specific algorithms used for voltage dip analysis give the same results. The mathematical model can be implemented in different software used for voltage dip analysis. To demonstrate the complexity of the voltage dip generator, the corresponding mathematical model was implemented in MAPLE and used to generate some voltage dips with particular characteristics.",2013,0, 6308,Novel Automated Fault Isolation System on Low Voltage Distribution System,"This novel automated fault isolation system has been developed and integrated into a new customer side distribution system of 415/240V. The distribution system is based on that of Tenaga Nasional Berhad (TNB), Malaysia's power utility company, especially its distribution system.
Supervisory Control and Data Acquisition (SCADA), Remote Terminal Units (RTUs) and power line communication (PLC) systems have been used and developed for fault detection, fault location, fault isolation, fault segregation and power restoration in terms of hardware and software. An open loop distribution system is the distribution configuration used in the TNB distribution system. It is the first distribution automation system (DAS) based on fault management research work on a customer side substation for operating and controlling between the consumer side system and the substation.",2013,0, 6309,Service Isolation vs. Consolidation: Implications for IaaS Cloud Application Deployment,"Service isolation, achieved by deploying components of multi-tier applications using separate virtual machines (VMs), is a common """"best"""" practice. Various advantages cited include simpler deployment architectures, easier resource scalability for supporting dynamic application throughput requirements, and support for component-level fault tolerance. This paper presents results from an empirical study which investigates the performance implications of component placement for deployments of multi-tier applications to Infrastructure-as-a-Service (IaaS) clouds. Relationships between performance and resource utilization (CPU, disk, network) are investigated to better understand the implications which result from how applications are deployed. All possible deployments for two variants of a multi-tier application were tested, one computationally bound by the model, the other bound by a geospatial database. The best performing deployments required as few as 2 VMs, half the number required for service isolation, demonstrating potential cost savings with service consolidation. Resource use (CPU time, disk I/O, and network I/O) varied based on component placement and VM memory allocation. Using separate VMs to host each application component resulted in performance overhead of ~1-2%. Relationships between resource utilization and performance were harnessed to build a multiple linear regression model to predict performance of component deployments. CPU time, disk sector reads, and disk sector writes are identified as the most powerful performance predictors for component deployments.",2013,0, 6310,A Differential Approach for Configuration Fault Localization in Cloud Environments,"Configuration fault localization is the process of identifying the fault in the configuration of the component(s) that is the source of failure given a set of observed failure conditions. Configuration faults are harder to detect than on/off failures as they involve analysis of the parameters that constitute the configuration. As distributed systems become more complex and interconnected, the requirements on configuration fault localization have changed. In this paper we present a new, simple but effective approach to configuration fault localization, which utilizes the difference in configuration parameters of components that share a resource. We establish a Reference Configuration State (RCS) by determining a set of non-faulty probing components for each faulty component with respect to shared resources. Performing a difference in configuration of the reference state with that of the faulty components localizes the faulty configuration parameter. Experiments through simulations demonstrate that our approach is effective in identifying configuration faults with reduced time and increased accuracy.
Our algorithm gracefully handles the complexity of the problem as the system size grows.",2013,0, 6311,A method for evaluating project management competency acquired from role-play training,"The information technology industry in Japan has required universities to provide project management education. In Tokyo University of Technology, role-play training has been carried out as part of project management education. The role-play scenarios necessary to run role-play exercises have been created in accordance with the ADDIE (Analysis, Design, Development, Implementation, Evaluation) model. This paper describes a method for evaluating project management competency that learners gain through role-play training conducted using the scenarios. Competency in project management is assessed from a learner's behavior characteristics in taking an appropriate action when needed. We first examined the quality of the role-play scenarios by using a design checklist based on Goal-Based Scenarios (GBS). In addition, we analyzed the behavior of each learner during a role-play exercise by using rubrics based on how the user behaved. A high correlation was found between the acquired skill with which learners generally played the role assigned to them in role-play training and the level of quality of the role-play scenario. Based on the analysis results, we will propose a method for helping learners to be able to take effective action by providing appropriate advice from a software agent and feedback from a teacher, along with use of the GBS checklist.",2013,0, 6312,iLight: Device-Free Passive Tracking Using Wireless Sensor Networks,"In this paper, we study the indoor passive tracking problem using wireless sensor networks, in which we assume that a target being tracked is clean, i.e., there is no equipment carried by the target; hence, the tracking procedure is considered to be passive. We first show that received signal strength indicator and link quality indicator are not as effective as expected for passive detection (tracking) through our extensive testbed studies, and further propose to utilize light to track targets using WSNs. To the best of our knowledge, this is the first work which studies the passive tracking problem in WSNs using light sensors and general light sources. We present a number of probability-based algorithms to study the moving patterns (properties) of targets being tracked. We design and implement our tracking system named iLight, consisting of 40 wireless sensor nodes and one base station. Through extensive experimental results, we show that iLight can track both single target and multiple targets efficiently.",2013,0, 6313,Sequoll: A framework for model checking binaries,"Multi-criticality real-time systems require protected-mode operating systems with bounded interrupt latencies and guaranteed isolation between components. A tight WCET analysis of such systems requires trustworthy information about loop bounds and infeasible paths. We propose sequoll, a framework for employing model checking of binary code to determine loop counts and infeasible paths, as well as validating manual infeasible path annotations which are often error-prone. We show that sequoll automatically determines many of the loop counts in the Mälardalen WCET benchmarks.
We also show that sequoll computes loop bounds and validates several infeasible path annotations used to reduce the computed WCET bound of seL4, a high-assurance protected microkernel for multi-criticality systems.",2013,0, 6314,Adaptive luminance coding-based scene-change detection for frame rate up-conversion,"This paper presents a new scene-change detection method that uses adaptive luminance coding for frame rate up-conversion. The proposed scene-change detection method splits a frame into several blocks and converts the gray levels of pixels in each block to bit codes. Then, it computes the difference between the bit codes in previous and current blocks. In addition, directional distribution analysis is applied to correct the areas falsely detected as a scene change. The experimental results show that the average F1 score of the proposed method was up to 0.479 higher than those of the benchmark methods (a 108.53% improvement). The proposed method also reduced the average computation time per pixel by up to 5.572 μs compared to the benchmark methods (a 73.14% reduction).",2013,0, 6315,Verifying Cyber-Physical Interactions in Safety-Critical Systems,"Safety-compromising bugs in software-controlled systems are often hard to detect. In a 2007 DARPA Urban Challenge vehicle, such a defect remained hidden during more than 300 miles of test-driving, manifesting for the first time during the competition. With this incident as an example, the authors discuss formalisms and techniques available for safety analysis of cyber-physical systems.",2013,0, 6316,Efficient Near-Optimal Dynamic Content Adaptation Applied to JPEG Slides Presentations in Mobile Web Conferencing,"In the context of mobile Web conferencing, slide documents are generally transcoded into JPEG format and wrapped into a Web page prior to delivery. Given the diversity of these devices and their networks, dynamically identifying the optimal transcoding parameters is very challenging, as the number of transcoding parameter combinations could be very high. Current solutions use the resolution of the target mobile device and a fixed quality factor as transcoding parameters. However, this technique allows no control over the resulting file size, which, if too large, might increase the delivery time and negatively affect users' experience. Another solution (content selection) which leads to better quality consists in creating several versions and, at delivery time, selecting the best one. However, such a solution is computationally expensive. In this paper, we propose a prediction-based framework which computes near-optimal transcoding parameters dynamically with far less computations. We propose five methods based on this framework. The first predicts near-optimal transcoding parameters, while the others improve their accuracy. From the set of documents tested, two of the proposed methods reach optimality 14% and 30% of the time, respectively. Moreover, the average deviation from optimality for the proposed methods varies from 6% to 3%, with a complexity varying from 1 to 5 transcoding operations.",2013,0, 6317,Internet Metaobject Protocol (IMOP): Weaving the Global Program Grid,"Software applications are increasingly relying on networks to function, but making programs interact over the network is still tedious and error-prone.
We believe the problem lies in the lack of a network protocol that can solely and sufficiently address interoperability needs. In light of this, we developed the Internet Metaobject Protocol (IMOP), a remote method invocation protocol for object-based resource representations. IMOP thoroughly defines the operations required to facilitate interactions, from reflecting a resource's definition to invoking its methods. It also rigorously defines the types of data passed between systems, including primitive types, composite value types, and reference types. All of these are programming language neutral.",2013,0, 6318,Privacy preservation and enhanced utility in search log publishing using improved zealous algorithm,"Search log records can enhance the quality and delivery of internet information services to the end user. Analysing and exploring the search log can reveal the user's behaviour. When these search logs are published, they must ensure privacy of the users and at the same time exhibit good utility. The existing ZEALOUS algorithm uses a two-threshold framework to provide probabilistic differential privacy. In the course of providing this level of privacy the search log loses its utility, as it publishes only frequent items. So an algorithm is proposed to enhance the utility of the search log by qualifying the infrequent items while publishing, at the same time preserving the stronger level of privacy.",2013,0, 6319,Robustness Evaluation of Controllers in Self-Adaptive Software Systems,"An increasingly important requirement for software-intensive systems is the ability to self-manage by adapting their structure and behavior at run-time in an autonomous way as a response to a variety of changes that may occur to the system, its environment, or its goals. In particular, self-adaptive (or autonomic) systems incorporate complex software components that act as controllers of a target system by executing actions through effectors, based on information monitored by probes. However, although these controllers are becoming critical in many application domains, it is still difficult to assess their robustness. The proposed approach for evaluating the robustness of controllers for self-adaptive software systems is aimed at the effective identification of design faults. To achieve this objective, our proposal is based on a set of robustness tests that include the provision of mutated inputs to the interfaces between the controller and the target system (i.e., probes). The feasibility of the approach is evaluated in the context of Znn.com, a case study implemented using the Rainbow framework for architecture-based self-adaptation.",2013,0, 6320,Reliability Analysis of Software Architecture Evolution,"Software engineers and practitioners regard software architecture as an important artifact, providing the means to model the structure and behavior of systems and to support early decisions on dependability and other quality attributes. Since systems are most often subject to evolution, the software architecture can be used as an early indicator on the impact of the planned evolution on quality attributes. We propose an automated approach to evaluate the impact on reliability of architecture evolution. Our approach provides relevant information for architects to predict the impact of component reliabilities, usage profile and system structure on the overall reliability.
We translate a system's architectural description written in an Architecture Description Language (ADL) to a stochastic model suitable for performing a thorough analysis on the possible architectural modifications. We applied our method to a case study widely used in research in which we identified the reliability bottlenecks and performed structural modifications to obtain an improved architecture regarding its reliability.",2013,0, 6321,A Model-Driven Approach for Runtime Reliability Analysis,"Runtime reliability analysis has proven to be a valuable technique to enhance the overall reliability of safety-critical systems. It has the potential to close the dependability gap that has been identified by Laprie. However, existing approaches suffer from either too complex and therefore error-prone input languages or from long execution times due to the state space explosion of the underlying analysis techniques. In this paper, we present an approach for runtime reliability analysis, which handles both problems. It provides a compact metamodel that can be used to describe all necessary information. Moreover, it provides analysis algorithms that can be automatically parameterized by code generation. These algorithms are runtime efficient so that they can be executed even on low-end computers, e.g., safety-critical embedded systems, to adapt the system to changing environmental conditions.",2013,0, 6322,The Time Dimension in Predicting Failures: A Case Study,"Online Failure Prediction is a cutting-edge technique for improving the dependability of software systems. It makes extensive use of machine learning techniques applied to variables monitored from the system at regular intervals of time (e.g. mutexes/s, paged bytes/s, etc.). The goal of this work is to assess the impact of considering the time dimension in failure prediction, through the use of sliding windows. The state-of-the-art SVM (Support Vector Machine) classifier is used to support the study, predicting failure events occurring in a Windows XP machine. An extensive comparative analysis is carried out, in particular using a software fault injection technique to speed up the failure data generation process.",2013,0, 6323,Assessing the Impact of Virtualization on the Generation of Failure Prediction Data,"Fault injection has been successfully used in the past to support the generation of realistic failure data for offline training of failure prediction algorithms. However, runtime computer systems evolution requires the online generation of training data. The problem is that using fault injection in a production environment is unacceptable. Virtualization is a cheap sandboxing solution that may be used to run multiple copies of a system, over which fault injection can be safely applied. Nevertheless, there is no guarantee that the data generated in the virtualized environment can be used for training the algorithms that will run in the original system. In this work we study the similarity of failure data obtained in the two scenarios, considering different virtualized environments. Results show that the data share key characteristics, suggesting virtualization as a viable solution to be further researched.",2013,0, 6324,"Declarative, Temporal, and Practical Programming with Capabilities","New operating systems, such as the Capsicum capability system, allow a programmer to write an application that satisfies strong security properties by invoking security-specific system calls at a few key points in the program.
However, rewriting an application to invoke such system calls correctly is an error-prone process: even the Capsicum developers have reported difficulties in rewriting programs to correctly invoke system calls. This paper describes capweave, a tool that takes as input (i) an LLVM program, and (ii) a declarative policy of the possibly-changing capabilities that a program must hold during its execution, and rewrites the program to use Capsicum system calls to enforce the policy. Our experiments demonstrate that capweave can be applied to rewrite security-critical UNIX utilities to satisfy practical security policies. capweave itself works quickly, and the runtime overhead incurred in the programs that capweave produces is generally low for practical workloads.",2013,0, 6325,Differential proteome of the striatum from A30P α-Synuclein transgenic mouse model of Parkinson's disease,"Parkinson's disease (PD) is a multifactorial, neurodegenerative disease whose etiopathogenesis is not fully understood. Mutations in α-Synuclein (α-Syn) were the first genetic defect linked to PD. They are deposited in Lewy bodies (LBs) characteristic for PD. Some experiments have shown that A30P mutant α-Syn has higher toxicity than wild-type α-Syn. Here we used the A30P α-Syn transgenic mouse model to analyse proteome changes of the striatum 11 months after birth. Striata were removed and, after digesting the proteins, we used an isotope labelling method to mark different groups of peptides. Strong-cation exchange (SCX) liquid chromatography (LC) was integrated with peptide separation as the first dimension of the two-dimensional LC tandem mass spectrometry workflow. In this work, an electrospray ionization (ESI) quadrupole time-of-flight (QTOF) mass spectrometer was explored as a means of detecting the MS/MS spectrogram. Agilent Spectrum Mill software was used to analyse the results. A total of 660 proteins were quantified. 280 proteins were down-regulated and 77 proteins were up-regulated.",2013,0, 6326,Innovative practices session 5C: Cloud atlas - Unreliability through massive connectivity,"The rapid pace of integration, emergence of low power, low cost computing elements, and ubiquitous and ever-increasing bandwidth of connectivity have given rise to data center and cloud infrastructures. These infrastructures are beginning to be used on a massive scale across vast geographic boundaries to provide commercial services to businesses such as banking, enterprise computing, online sales, and data mining and processing for targeted marketing to name a few. Such an infrastructure comprises thousands of compute and storage nodes that are interconnected by massive network fabrics, each of them having their own hardware and firmware stacks, with layers of software stacks for operating systems, network protocols, schedulers and application programs. The scale of such an infrastructure has made possible service that has been unimaginable only a few years ago, but has the downside of severe losses in case of failure. A system of such scale and risk necessitates methods to (a) proactively anticipate and protect against impending failures, (b) efficiently, transparently and quickly detect, diagnose and correct failures in any software or hardware layer, and (c) be able to automatically adapt itself based on prior failures to prevent future occurrences. Addressing the above reliability challenges is inherently different from the traditional reliability techniques.
First, there is a great amount of redundant resources available in the cloud, from networking to computing and storage nodes, which opens up many reliability approaches by harvesting these available redundancies. Second, due to the large scale of the system, techniques with high overheads, especially in power, are not acceptable. Consequently, cross layer approaches to optimize the availability and power have gained traction recently. This session will address these challenges in maintaining reliable service with solutions across the hardware/software stacks. The currently available commercial data-center and cloud infrastructures will be reviewed and the relative occurrences of different causalities of failures, the level to which they are anticipated and diagnosed in practice, and their impact on the quality of service and infrastructure design will be discussed. A study on real-time analytics to proactively address failures in a private, secure cloud engaged in domain-specific computations, with streaming inputs received from embedded computing platforms (such as airborne image sources, data streams, or sensors) will be presented next. The session concludes with a discussion on the increased relevance of resiliency features built inside individual systems and components (private cloud) and how the macro public cloud absorbs innovations from this realm.",2013,0, 6327,Performance modelling and analysis of the delay aware routing metric in Cognitive Radio Ad Hoc networks,Cognitive Radio Networks have been proposed to solve the problem of overcrowded unlicensed spectrum by using the cognitive ability built in software radios to utilise the underutilised licensed channel when the licensed users are not using it. Successful results from the research community have led to its application to wireless technologies like Ad Hoc networks due to their extensive advantages. Cognitive Radio Ad Hoc networks are a novel technology that will provide a solution to many communication challenges. This paper investigates the end-to-end performance modelling of a link using quality of service parameters; delay vs. link capacity while considering the factors of spectrum management and node mobility of two nodes in tandem representing a hop in Cognitive Radio Ad Hoc networks. We modelled spectrum management and node mobility using the pre-emptive resume priority M/G/1 queuing model and the gated node model respectively. We considered delay aware routing schemes; shortest queue and random probability routing and compared them with the analytical link-capacity for analysis. The study shows that already existing mathematical models can be used as close approximations to analyse the queuing models proposed for Cognitive Radio Ad Hoc Networks.,2013,0, 6328,Fault location in combined overhead line and underground cable distribution networks using fault transient based mother wavelets,"This paper presents an optimized fault location approach in combined overhead line and underground cable distribution networks. Continuous wavelet transform (CWT) is employed for analyzing fault originated travelling waves. The transient voltage waveform is recorded at a measuring point and then analyzed using both standard and fault transient inferred mother wavelets. This approach relies on the relationship between typical frequencies of CWT signal energies and certain paths in the network passed by travelling waves produced by faults.
In order to identify characteristic frequencies directly related to the previously mentioned paths, the continuous frequency spectrum of fault transients must be determined. Fault location is then detected using this frequency domain data. The frequency domain data, along with the theoretically obtained characteristic frequencies, specify the fault position. In order to verify this procedure, the IEEE 34-bus test distribution network is modeled by EMTP-RV software and the relevant transient signal analyses are executed in the MATLAB programming environment.",2013,0, 6329,Safety analysis integration in a SysML-based complex system design process,"Model-based system engineering is an efficient approach to specifying, designing, simulating and validating complex systems. This approach allows errors to be detected as soon as possible in the design process, and thus reduces the overall cost of the product. Uniformity in a system engineering project, which is by definition multidisciplinary, is achieved by expressing the models in a common modeling language such as SysML. This paper presents an approach to integrate safety analysis in SysML at early stages in the design process of safety-critical systems. Qualitative analysis is performed through functional as well as behavioral safety analysis and strengthened by a formal verification method. This approach is applied to a real-life avionic system and contributes to the integration of formal models in the overall safety and systems engineering design process of complex systems.",2013,0, 6330,EVM as new quality metric for optical modulation analysis,"The quality of optical signals is a very important parameter in optical communications. Several metrics are in common use, like optical signal-to-noise power ratio (OSNR), Q-factor, error vector magnitude (EVM) and bit error ratio (BER). A measured raw BER is not necessarily useful to predict the final BER after soft-decision forward error correction (FEC), if the statistics of the noise leading to errors is unknown. In this respect the EVM is superior, as it allows an estimation of the error statistics.
This means sophisticated devices simultaneously showing multiple functionalities in the same electrical network. Benefits of this multi-function approach are compact size and more efficient implementations by means of the co-synthesis of different high-frequency components. Within this trend, a considerable effort has been dedicated to integrate the filtering function in other types of RF/microwave circuits, such as power dividers/combiners, antennas, amplifiers or baluns. As a consequence, completely new families of multi-operation filtering devices are being conceived.",2013,0, 6332,Photogrammetric Bundle Adjustment With Self-Calibration of the PrimeSense 3D Camera Technology: Microsoft Kinect,"The Kinect system is arguably the most popular 3-D camera technology currently on the market. Its application domain is vast and has been deployed in scenarios where accurate geometric measurements are needed. Regarding the PrimeSense technology, a limited amount of work has been devoted to calibrating the Kinect, especially the depth data. The Kinect is, however, inevitably prone to distortions, as independently confirmed by numerous users. An effective method for improving the quality of the Kinect system is by modeling the sensor's systematic errors using bundle adjustment. In this paper, a method for modeling the intrinsic and extrinsic parameters of the infrared and colour cameras, and more importantly the distortions in the depth image, is presented. Through an integrated marker-and feature-based self-calibration, two Kinects were calibrated. A novel approach for modeling the depth systematic errors as a function of lens distortion and relative orientation parameters is shown to be effective. The results show improvements in geometric accuracy up to 53% compared with uncalibrated point clouds captured using the popular software RGBDemo. Systematic depth discontinuities were also reduced and in the check-plane analysis the noise of the Kinect point cloud was reduced by 17%.",2013,0, 6333,Distributed Integrated Development Environment for Mobile Platforms,"It is believed that future technologies related to smart devices could add more towards making life easy while saving on time for a person on the go. Already mobile devices have added value to our everyday tasks. However, programmers, so far, seem to be denied the use of such facilities with these smart devices. Distributed Integrated Development Environment for Mobile Platforms (DIMP) is directed towards them with an innovative way to write software programs on the go. Using a mobile device such as a mobile phone or a tablet computer, DIMP is capable of writing source codes and compiling. DIMP consists of a mobile application, a central server and a set of compilation servers, while an administrative web console supports the administrative functions. Together, they comprise DIMP. The mobile application is an android application and provides a rich source code editor integrated to the software. It allows compiling and running of source codes where users can write programs in a selected language. If the source code is error free, a user can expect a worthwhile output whereas an error prone source code would reveal the relevant error message with useful hints for debugging. A further benefit from DIMP is that it allows a user to maintain online work space as well as an offline workspace. 
Source codes can be shared with other interested users.",2013,0, 6334,An empirical study on the importance of quality among offshore outsourced software development firms in Sri Lanka,"Offshore outsourcing of software development has become an increasingly popular trend in recent years. Sri Lanka has emerged as a favourable destination for outsourcing and it is currently catering to many offshore projects globally. However, it is also observed that many other potential destinations are emerging globally. Due to this factor, Sri Lankan software development firms would eventually face global competition in time to come. Therefore, in this research paper, we carry out an empirical study to assess & study the current quality measurements that software development companies in Sri Lanka have taken and further discuss the importance and future benefits that companies would attain by performing these quality practices consistently. Our research adopts both quantitative and qualitative methods to solidify our results. The final outcome of this research would facilitate the software development industry to gain more understanding of the importance of adopting feasible quality measures into their software development life cycle.",2013,0, 6335,Resolving context conflicts using Association Rules (RCCAR) to improve quality of context-aware systems,"Context-aware systems (CASs) face many challenges to keep high quality performance. One challenge facing CASs is conflicting values coming from different sensors for different reasons. These conflicts affect the quality of context (QoC) and as a result the quality of service as a whole. This paper presents a novel approach called RCCAR that resolves context conflicts and so contributes to improving QoC for CASs. The RCCAR approach resolves context conflicts by exploiting the previous context using Association Rules (AR) to predict the valid values among different conflicted ones. RCCAR introduces an equation that evaluates the strength of prediction for different conflicted context element values. The RCCAR approach has been implemented using Weka 3.7.7 and results show the success of the solution for different experiments applied to different scenarios designed to examine the solution according to different possible conditions.",2013,0, 6336,A collaborative filtering recommendation algorithm based on user clustering and Slope One scheme,"Recommendation systems have been widely used in electronic commerce, news, web2.0, E-learning and other fields. Collaborative filtering is one of the most important algorithms. But as the scale of recommendation systems continues to expand, more and more problems appear. Data sparsity and poor prediction are the main problems that recommendation systems have to face. To improve the quality and performance, a new collaborative filtering recommendation algorithm combining user-clustering and the Slope One algorithm is proposed. In our algorithm, users were clustered into several classes based on users' ratings on items; therefore the useless information was filtered. Then the slope-one scheme was applied to predict the object rating.
The experiments were applied to the MovieLens dataset to exploit the benefits of our detector, and the experimental results show that the accuracy of our algorithm is in advance of previous research.",2013,0, 6337,Joint source-channel coding for delay-constrained iterative resource allocation algorithms,"Before achieving convergence, iterative resource allocation algorithms may result in degraded link quality, which presents a challenge for the transmission of delay constrained multimedia traffic because of the impossibility of retransmitting frames received with errors. In this paper, a novel approach to solve this problem is presented by proposing the use of a source and channel coding scheme matched to the performance of the algorithm during iterations. The solution uses a JSCC scheme based on incremental redundancy and single feedback for transmission under strict delay constraints. The presented results, based on three widely representative iterative algorithms, show that the novel approach notably reduces the probability of transmissions with excessive distortion. The results show an increase of 12% in relative values for the probability of links achieving target end-to-end distortion values; the approach also reduces the negative effects of degraded channels when performing iterations and effectively absorbs the effect of channel changes over time. These characteristics are especially useful for cognitive radio learning algorithms to mitigate the distortion increase during the exploratory phase.",2013,0, 6338,QoS optimization in ad hoc wireless networks through adaptive control of marginal utility,"Applications consisting of messaging, voice, and video are used to provide situational awareness to decision makers and emergency responders in high criticality crisis scenarios such as disaster management. Here, ad hoc wireless networks are often quickly provisioned to provide the necessary connectivity to support these applications. Applications ill prepared to deal with the constant fluctuation of available bandwidth will stall or fail and contribute to mission failure. Our algorithm, D-Q-RAM (Distributed Quality of Service (QoS) Resource Allocation Model), allows applications to satisfy their specific QoS expectations in dynamically fluctuating networked environments by incorporating a distributed optimization heuristic that results in near optimal adaptation without the need to know, estimate, or predict available bandwidth at any moment in time. This paper describes our approach for managing that optimization heuristic in a manner that is decentralized, that is, network routers are unaware of the semantics of the applications, and the applications can arbitrate among competing signals from numerous network routers and select an appropriate QoS level which results in an improved overall global utility of available network bandwidth.",2013,0, 6339,An intelligent ophthalmic regional anesthesia training system based on capacitive sensing,"Safe administration of regional anesthesia in the eye involves insertion of a syringe needle into the intra-orbital space at the correct position and angle to avoid injury to ocular structures. A training manikin which emulates human ocular anatomy and provides feedback on the quality of the anesthetic procedure would considerably help to reduce the risks involved in real life procedures. This paper presents an anatomically accurate training manikin that has been developed employing rapid prototyping techniques.
The system detects and alerts the trainee when a needle is in close proximity to the ocular muscles, to avoid injury. Additionally, it apprises the trainee of whether the muscles have been touched by the needle. Needle proximity is detected by a capacitive sensing scheme. A Virtual Instrument was developed to measure the output from the capacitive sensing electrodes and present it through an intuitive graphical user interface. The proposed touch and proximity detection schemes have been validated by tests performed on a prototype training manikin, thus demonstrating its use for practical purposes.",2013,0, 6340,Software reliability prediction model based on ICA algorithm and MLP neural network,"To achieve a high-performance system without any failure, a high level of software reliability must be provided. Soft computing models for software reliability prediction suffer from low accuracy when predicting the number of faults. Moreover, these models have problems such as the lack of a solid mathematical foundation for analysis, being trapped in local minima, and convergence issues. This paper introduces the Imperialist Competitive Algorithm (ICA) to overcome the weaknesses of previous models and improve the efficiency of the training process of a Multi-Layer Perceptron (MLP) neural network. As a result, the network can predict the number of faults precisely. The results show that the proposed prediction model is more efficient than existing techniques in prediction performance.",2013,0, 6341,A statistical machine learning based modeling and exploration framework for run-time cross-stack energy optimization,"As the complexity of many-core processors grows, meeting performance, energy, temperature, reliability, and noise requirements under dynamically changing operating conditions requires run-time optimization of all parts of the computing stack - architecture, system software, and applications. Unfortunately, the combination of design parameters for the entire computing stack results in an operating space of millions of points that must be explored and evaluated at run-time. In this paper, we present a statistical machine learning (SML) based modeling framework that can be used to rapidly explore such vast operating spaces. We construct a multivariate adaptive regression spline (MARS) based model that uses a number of architecture and application parameters as predictor variables to predict performance and power. We then use a Pareto-front exploring evolutionary algorithm to determine operating points for optimal power and performance. The operating points constituting the Pareto front are stored in look-up tables for runtime use. The proposed framework is applied to an H.264 video encoding application executing on a quad-core processor. The microarchitectural predictor variables include core and cache parameters. The application predictor variables include the video resolution and the visual quality determined by the choice of motion estimation algorithm. The model outputs the average frames per second (FPS) and the average power consumption. The MARS model has an R2 of 0.9657 and 0.9467 for FPS and power, respectively. For a video frame resolution of 480x320 and an FPS of 20, a power saving of 55% can be obtained by jointly tuning the microarchitectural parameters and the visual quality.",2013,0, 6342,A new model for software defect prediction using Particle Swarm Optimization and support vector machine,"Software defect prediction could improve the reliability of software and reduce development costs.
Traditional prediction models usually have relatively low prediction accuracy. To solve this problem, a new model for software defect prediction using Particle Swarm Optimization (PSO) and Support Vector Machine (SVM), named the P-SVM model, is proposed in this paper; it takes advantage of the non-linear computing capability of SVM and the parameter optimization capability of PSO. First, the P-SVM model uses the PSO algorithm to calculate the best parameters of the SVM, and then it adopts the optimized SVM model to predict software defects. As an experiment, the P-SVM model and three other prediction models are used to predict the software defects in the JM1 data set; the results show that the P-SVM model has a higher prediction accuracy than the BP Neural Network model, the SVM model, and the GA-SVM model.",2013,0, 6343,Method study on fault-tolerant dispatch of the control system of the aero-engine,"Time-limited dispatch (TLD) allows the dispatch of aircraft with degraded redundancy. An aero-engine fitted with a full authority digital electronic control (FADEC) system with known faults can be dispatched with a deferred fault by applying TLD, and the time of the deferred fault must be determined. Provided the fleet-average loss of thrust control (LOTC) rate is achieved, faults are classified by the instantaneous LOTC rate caused by single faults of the FADEC system's internal redundant components, and the components whose single faults leave the FADEC system in an acceptable dispatch configuration are determined. A reduced-state open-loop Markov calculation model is built and the LOTC rate within an assumed deferred-fault time is computed with MATLAB software, determining the longest time of limited dispatch that does not exceed the LOTC rate meeting the necessary airworthiness requirements. The results indicate that the method is an effective approach for type certification and for developing the master minimum equipment list (MMEL) and maintenance review board report (MRBR) of aircraft.",2013,0, 6344,The research and design of visual fault tree modeling analysis,"Because creating fault trees manually is inefficient and error-prone, and existing visual fault tree modeling software is imperfect and difficult to port, this paper studies visual modeling and analysis of fault trees. It first discusses a visual building method that combines dynamic contribution with stratified tree building, fault event editing functions, and a method for separating the fault tree from the failure information; it then discusses qualitative fault analysis during the visual modeling process, and finally designs a visual fault tree modeling platform that realizes rapid modeling and visual analysis of fault trees, presenting an application example as illustration. The example indicates that the system can meet the needs of visual fault tree modeling.",2013,0, 6345,Improving error detection with selective redundancy in software-based techniques,"This paper presents an analysis of the impact of selective software-based techniques to detect faults in microprocessor systems. A set of algorithms is implemented, compiled to a microprocessor, and selected variables of the code are hardened with software-based techniques. Seven different methods that choose which variables are hardened are introduced and compared. The system is implemented over a miniMIPS microprocessor and a fault injection campaign is performed in order to verify the feasibility and effectiveness of each selective fault tolerance approach.
Results can lead designers to choose more wisely which variables of the code should be hardened, considering detection rates and hardening overheads.",2013,0, 6346,Assessment of diagnostic test for automated bug localization,Statistical-simulation-based design error debug approaches strongly rely on the quality of the diagnostic test. At the same time, there exists no dedicated technique to assess its quality, and engineers are forced to rely on subjective figures such as verification test quality metrics or just the size of the diagnostic test. This paper proposes two new approaches for assessing the diagnostic capability of diagnostic tests for automated bug localization. The first approach relies on probabilistic simulation of diagnostic experiments. The second assessment method is based on calculating Hamming distances of the individual sub-tests in the diagnostic test set. The methods are computationally cheap; they provide a measure of confidence in the localization results and allow estimating the impact of diagnostic test enhancement. The approach is implemented as part of the open-source hardware design and debugging framework zamiaCAD. Experimental results with an industrial processor design and a set of documented bugs demonstrate the feasibility and effectiveness of the proposed approach.,2013,0, 6347,Towards an automatic generation of diagnostic in-field SBST for processor components,"This paper deals with a diagnostic software-based self-test program for multiplexer-based components in a processor. These are, in particular, the read ports of a multi-ported register file and the bypass structures of an instruction pipeline. Based on a detailed analysis of both multiplexer structures, a manually coded diagnostic test program is first presented. This test program can detect all single and multiple stuck-at data and address faults in a multiplexer structure, but it does not fully cover the control logic of the bypass. By further refinements, a 100% fault coverage for single stuck-at faults, including the control logic, is finally obtained. Based on these results, an ATPG-assisted method for the generation of such a diagnostic test program is described for arbitrary processor components. This method is finally applied to the multiplexer structures for which the manually coded test program is available. The test length and test coverage of the generated test program and of the hand-coded test program are compared.",2013,0, 6348,Supporting the adaptation of open-source database applications through extracting data lifecycles,"The adaptation of open-source database applications is common in the industry. Most open-source database applications are incomplete. During adaptation, users usually have to implement additional data maintenance. Hence, the completeness of an application is an important concern for the adaptation, as a key factor indicating how much additional effort is required before using a system. From our study of database applications with complete functionalities, we observe that data in a database has common patterns of lifecycles. Anomalies in data lifecycles provide a good indicator of the completeness of database applications. In this paper, we propose a novel approach to automatically extract data lifecycles from the source code of database applications through inter-procedural static program analysis. This representative information can benefit the adaptation of database applications, specifically for selection, maintenance and extension.
We have developed a tool to implement the proposed approach for PHP (Hypertext Preprocessor)-based database applications. Case studies have shown that the proposed approach is useful in assisting the adaptation of, and detecting faults in, open-source database applications.",2013,0, 6349,About diagnosis of circuit breakers,"On-line monitoring and diagnosis of electrical equipment is a field which receives special attention because it can detect some faults in their incipient phase and thus prevent serious failure of the equipment, as well as the associated financial and material losses. This paper presents a system for monitoring and diagnosis of electrical equipment in medium-voltage installations. The evolution and values of some parameters considered important for knowledge of the technical condition of a circuit breaker are also presented, in the case of abnormal operating conditions, and compared with similar records considered as reference. Advanced processing and data analysis were performed using a software application developed in the LabVIEW programming environment.",2013,0, 6350,ClimaWin: An intelligent window for optimal ventilation and minimum thermal loss,"In this paper the ClimaWin concept is introduced. The ClimaWin project's main goals are to improve both indoor air quality and the energy efficiency of new and refurbished buildings, through the use of novel green smart windows. Generally, in order to improve windows' energy efficiency, better insulation materials are used in window frames and glass. However, this approach leads to a severe deterioration of indoor air quality (IAQ), especially in buildings that are not equipped with heating, ventilation and air conditioning (HVAC) systems. The ClimaWin windows require no wires, either for power or for communications. The window is powered by a battery (for blind operation) and a solar panel, which makes it an ideal solution for retrofitting. In order to achieve the energy efficiency requirements, the ClimaWin system hardware, the microcontroller software architecture and the radio communication strategy were designed for low power consumption. Furthermore, all the information about the system status can be monitored and actuated using intuitive graphical applications developed for PCs and Android OS smartphones. A remote database keeps all the relevant information about the system, making it easy to detect any anomaly or even to adjust the control algorithm parameters from a remote location. A full set of web services is also provided in order to simplify communication with home automation systems.",2013,0, 6351,"Conservative Bounds for the pfd of a 1-out-of-2 Software-Based System Based on an Assessor's Subjective Probability of """"Not Worse Than Independence""""","We consider the problem of assessing the reliability of a 1-out-of-2 software-based system, in which failures of the two channels cannot be assumed to be independent with certainty. An informal approach to this problem assesses the channel probabilities of failure on demand (pfds) conservatively, and then multiplies these together in the hope that the conservatism will be sufficient to overcome any possible dependence between the channel failures. Our intention here is to place this kind of reasoning on a formal footing. We introduce a notion of """"not worse than independence"""" and assume that an assessor has a prior belief about this, expressed as a probability.
We obtain a conservative prior system pfd, and show how a conservative posterior system pfd can be obtained following the observation of a number of demands without system failure. We present some illustrative numerical examples, discuss some of the difficulties involved in this way of reasoning, and suggest some avenues of future research.",2013,0, 6352,Adaptive Mho type distance relaying scheme with fault resistance compensation,"This paper describes an adaptive distance relaying scheme which can eliminate the effect of fault resistance on distance relay zone reach. A distance relay is commonly used as the main protection to protect a transmission line from any type of fault. For a stand-alone distance relay, fault resistance can cause a Mho-type distance relay to under-reach, and thus the fault will be isolated after a longer time. In this scheme, the relay detects the fault location using a two-terminal algorithm. Knowing the fault location, the voltage at the fault point can be calculated using the equivalent sequence network connection as seen from the local terminal. Then, the fault resistance is calculated using a simple equation considering the contribution from the remote terminal current. Finally, the fault resistance compensation is applied to the calculated apparent resistance as seen at the relaying point. The modeling and simulation were carried out using Matlab/Simulink software. Several cases were studied and the results show the validity of the scheme.",2013,0, 6353,Autokite experimental use of a low cost autonomous kite plane for aerial photography and reconnaissance,"An experimental kite-plane capable of autonomous aerial imaging is introduced as a viable low-cost small-scale civilian UAV imaging platform ideal for field use. The AUTOKITE fulfills a need currently unmet by other fully automated Unmanned Aerial Vehicles (UAVs), thanks to its ease of operation, extended flight time, and overall reliability. The AUTOKITE is outfitted with an off-the-shelf autopilot system, and has demonstrated fully autonomous flight in field deployments while collecting high-resolution (~12 cm/pixel) images. The AUTOKITE has been used to map regions historically prone to earthquakes along the Southern San Andreas Fault in California. Comparative image methods enabled by photogrammetric software, like Agisoft's PhotoScan, are then used to discern Structure-from-Motion (SfM) from a multitude of aerial images taken by AUTOKITE [8]. Processing SfM data from overlapping images results in the creation of Digital Elevation Models (DEMs) and Orthophotos for geographic areas of interest. In addition to sample data sets illustrating the SfM process, the AUTOKITE is compared with three alternative UAV systems, and payload integration/automation details are discussed.",2013,0, 6354,A New Modeling Based on Urban Trenches to Improve GNSS Positioning Quality of Service in Cities,"Digital maps with 3D data have been shown to make it possible to determine Non-Line-Of-Sight (NLOS) satellites in real time, whilst moving, and to obtain a significant benefit in terms of navigation accuracy. However, such data are difficult to handle with Geographical Information System (GIS) embedded software in real time. The idea developed in this article consists in proposing a method, light in terms of information content and computation throughput, for taking into account the knowledge of the 3D environment of a vehicle in a city, where multipath phenomena can cause severe errors in the positioning solution.
This method makes use of a digital map where homogeneous sections of streets have been identified and classified among different types of urban trenches. This classification is called the """"Urban Trench Model"""". Not only can NLOS satellites be detected, but also, if needed, the corresponding measurements can be corrected and further used in the positioning solver. The paper presents the method in detail together with its results on several real test sites, with a demonstration of the gain obtained in the final position accuracy. The benefit of the Urban Trench Model, i.e. the reduction of positioning errors as compared to a conventional solver considering all satellites, ranges between 30% and as much as 70%, e.g. in Paris.",2013,0, 6355,Investigation on transient stability of an industrial network and relevant impact on over-current protection performance,"System protection performance and transient stability of the electrical network significantly affect each other. The larger the time delay in which protection detects and clears a fault, the more likely loss of synchronism will be, especially in networks with internal generation. Over-current protection schemes inherently operate with considerable delay. Moreover, system dynamic oscillations discernibly degrade their performance. Therefore, utilizing them as the main protection is controversial and may even fail to maintain system stability. In this paper, the transient stability of a real industrial network is studied. The study uses the critical clearing time (CCT) criterion for different network configurations. Equipment such as generators and motors is modeled and simulated with DIgSILENT software. In addition, the operation of over-current relays adjusted by conventional methods is investigated dynamically, and their performance is examined under different network configurations.",2013,0, 6356,Approaching reliable realtime communications? A novel system design and implementation for roadway safety oriented vehicular communications,"Though there exist ready-made DSRC/WiFi/3G/4G cellular systems for roadway communications, these systems have common defects for roadway safety oriented applications, and the corresponding challenges have remained unsolved for years: WiFi cannot work well in vehicular networks due to the high probability of packet loss caused by burst communications, a common phenomenon in roadway networks; 3G/4G cannot well support real-time communications due to the nature of their designs; and DSRC lacks support for roadway safety oriented applications with hard realtime and reliability requirements [1]. To resolve the conflict between the capability limitations of existing systems and the ever-growing demands of roadway safety oriented communication applications, we propose a novel system design and implementation for realtime reliable roadway communications, aiming at providing safety messages to users in a realtime and reliable manner. In our extensive experimental study, the latency is well controlled within the hard realtime requirement (100ms) for roadway safety applications given by NHTSA [2], and the reliability is shown to improve by two orders of magnitude compared with existing experimental results [1].
Our experiments show that the proposed system for roadway safety communications can provide guaranteed highly reliable packet delivery ratio (PDR) of 99% within the hard realtime requirement 100ms under various scenarios, e.g., highways, city areas, rural areas, tunnels, bridges. Our design can be widely applied for roadway communications and facilitate the current research in both hardware and software design and further provide an opportunity to consolidate the existing work on a practical and easy-configurable low-cost roadway communication platform.",2013,0, 6357,A software-based self test of CUDA Fermi GPUs,"Nowadays, Graphical Processing Units (GPUs) have become increasingly popular due to their high computational power and low prices. This makes them particularly suitable for high-performance computing applications, like data elaboration and financial computation. In these fields, high efficient test methodologies are mandatory. One of the most effective ways to detect and localize hardware faults in GPUs is a Software-Based-Self-Test methodology (SBST). In this paper a fully comprehensive SBST and fault localization methodology for GPUs is presented. This novel approach exploits different custom test strategies for each component inside the GPU architecture. Such strategies guarantee both permanent fault detection and accurate fault localization.",2013,0, 6358,Transitioning Manual System Test Suites to Automated Testing: An Industrial Case Study,"Visual GUI testing (VGT) is an emerging technique that provides software companies with the capability to automate previously time-consuming, tedious, and fault prone manual system and acceptance tests. Previous work on VGT has shown that the technique is industrially applicable, but has not addressed the real-world applicability of the technique when used by practitioners on industrial grade systems. This paper presents a case study performed during an industrial project with the goal to transition from manual to automated system testing using VGT. Results of the study show that the VGT transition was successful and that VGT could be applied in the industrial context when performed by practitioners but that there were several problems that first had to be solved, e.g. testing of a distributed system, tool volatility. These problems and solutions have been presented together with qualitative, and quantitative, data about the benefits of the technique compared to manual testing, e.g. greatly improved execution speed, feasible transition and maintenance costs, improved bug finding ability. The study thereby provides valuable, and previously missing, contributions about VGT to both practitioners and researchers.",2013,0, 6359,Efficient JavaScript Mutation Testing,"Mutation testing is an effective test adequacy assessment technique. However, it suffers from two main issues. First, there is a high computational cost in executing the test suite against a potentially large pool of generated mutants. Second, there is much effort involved in filtering out equivalent mutants, which are syntactically different but semantically identical to the original program. Prior work has mainly focused on detecting equivalent mutants after the mutation generation phase, which is computationally expensive and has limited efficiency. In this paper, we propose a technique that leverages static and dynamic program analysis to guide the mutation generation process a-priori towards parts of the code that are error-prone or likely to influence the program's output. 
Further, we focus on the JavaScript language, and propose a set of mutation operators that are specific to web applications. We implement our approach in a tool called MUTANDIS. We empirically evaluate MUTANDIS on a number of web applications to assess the efficacy of the approach.",2013,0, 6360,CHECK-THEN-ACT Misuse of Java Concurrent Collections,"Concurrent collections provide thread-safe, highly-scalable operations, and are widely used in practice. However, programmers can misuse these concurrent collections when composing two operations where a check on the collection (such as non-emptiness) precedes an action (such as removing an entry). Unless the whole composition is atomic, the program contains an atomicity violation bug. In this paper we present the first empirical study of CHECK-THEN-ACT idioms of Java concurrent collections in a large corpus of open-source applications. We catalog nine commonly misused CHECK-THEN-ACT idioms and show the correct usage. We quantitatively and qualitatively analyze 28 widely-used open source Java projects that use Java concurrency collections - comprising 6.4M lines of code. We classify the commonly used idioms, the ones that are the most error-prone, and the evolution of the programs with respect to misused idioms. We implemented a tool, CTADetector, to detect and correct misused CHECK-THEN-ACT idioms. Using CTADetector we found 282 buggy instances. We reported 155 to the developers, who examined 90 of them. The developers confirmed 60 as new bugs and accepted our patch. This shows that CHECK-THEN-ACT idioms are commonly misused in practice, and correcting them is important.",2013,0, 6361,Assessing Quality and Effort of Applying Aspect State Machines for Robustness Testing: A Controlled Experiment,"Aspect-Oriented Modeling (AOM) has been the subject of intense research over the last decade and aims to provide numerous benefits to modeling, such as enhanced modularization, easier evolution, higher quality as well as reduced modeling effort. However, these benefits can only be obtained at the cost of learning and applying new modeling approaches. Studying their applicability is therefore important to assess whether they are worth using in practice. In this paper, we report a controlled experiment to assess the applicability of AOM, focusing on a recently published UML profile (AspectSM). This profile was originally designed to support model-based robustness testing in an industrial context but is applicable to the behavioral modeling of other crosscutting concerns. This experiment assesses the applicability of AspectSM from two aspects: the quality of derived state machines and the effort required to build them. With AspectSM, a crosscutting behavior is modeled using an aspect state machine. The applicability of aspect state machines is evaluated by comparing them with standard UML state machines that directly model the entire system behavior, including crosscutting concerns. The quality of both aspect and standard UML state machines derived by subjects is measured by comparing them against predefined reference state machines. 
Results show that aspect state machines derived with AspectSM are significantly more complete and correct though AspectSM took significantly more time than the standard approach.",2013,0, 6362,Multi-objective Cross-Project Defect Prediction,"Cross-project defect prediction is very appealing because (i) it allows predicting defects in projects for which the availability of data is limited, and (ii) it allows producing generalizable prediction models. However, existing research suggests that cross-project prediction is particularly challenging and, due to heterogeneity of projects, prediction accuracy is not always very good. This paper proposes a novel, multi-objective approach for cross-project defect prediction, based on a multi-objective logistic regression model built using a genetic algorithm. Instead of providing the software engineer with a single predictive model, the multi-objective approach allows software engineers to choose predictors achieving a compromise between number of likely defect-prone artifacts (effectiveness) and LOC to be analyzed/tested (which can be considered as a proxy of the cost of code inspection). Results of an empirical evaluation on 10 datasets from the Promise repository indicate the superiority and the usefulness of the multi-objective approach with respect to single-objective predictors. Also, the proposed approach outperforms an alternative approach for cross-project prediction, based on local prediction upon clusters of similar classes.",2013,0, 6363,Analysis and Prediction of Mandelbugs in an Industrial Software System,"Mandelbugs are faults that are triggered by complex conditions, such as interaction with hardware and other software, and timing or ordering of events. These faults are considerably difficult to detect with traditional testing techniques, since it can be challenging to control their complex triggering conditions in a testing environment. Therefore, it is necessary to adopt specific verification and/or fault-tolerance strategies for dealing with them in a cost-effective way. In this paper, we investigate how to predict the location of Mandelbugs in complex software systems, in order to focus V&V activities and fault tolerance mechanisms in those modules where Mandelbugs are most likely present. In the context of an industrial complex software system, we empirically analyze Mandelbugs, and investigate an approach for Mandelbug prediction based on a set of novel software complexity metrics. Results show that Mandelbugs account for a noticeable share of faults, and that the proposed approach can predict Mandelbug-prone modules with greater accuracy than the sole adoption of traditional software metrics.",2013,0, 6364,Estimating Fault Numbers Remaining After Testing,"Testing is an essential component of the software development process, but also one which is exceptionally difficult to manage and control. For example, it is well understood that testing techniques are not guaranteed to detect all faults, but more frustrating is that after the application of a testing technique the tester has little or no knowledge of how many faults might still be left undiscovered. This paper investigates the performance of a range of capture-recapture models to determine the accuracy with which they predict the number of defects remaining after testing. The models are evaluated with data from two empirical testing-related studies and from one larger publicly available project and the factors affecting the accuracy of the models are analysed. 
The paper also considers how additional information (such as structural coverage data) may be used to improve the accuracy of the estimates. The results demonstrate that diverse sets of faults resulting from different testers using different techniques tend to produce the most accurate results, and also illustrate the sensitivity of the estimators to the patterns of fault data.",2013,0, 6365,Oracle-based Regression Test Selection,"Regression test selection (RTS) techniques attempt to reduce regression testing costs by selecting a subset of a software system's test cases for use in testing changes made to that system. In practice, RTS techniques may select inordinately large sets of test cases, particularly when applied to industrial systems such as those developed at ABB, where code changes may have far-reaching impact. In this paper, we present a new RTS technique that addresses this problem by focusing on specific classes of faults that can be detected by internal oracles - oracles (rules) that enforce constraints on system states during system execution. Our technique uses program chopping to identify code changes that are relevant to internal oracles, and selects test cases that cover these changes. We present the results of an empirical study that show that our technique is more effective and efficient than other RTS techniques, relative to the classes of faults targeted by the internal oracles.",2013,0, 6366,JAutomate: A Tool for System- and Acceptance-test Automation,"System- and acceptance-testing are primarily performed with manual practices in current software industry. However, these practices have several issues, e.g. they are tedious, error prone and time consuming with costs up towards 40 percent of the total development cost. Automated test techniques have been proposed as a solution to mitigate these issues, but they generally approach testing from a lower level of system abstraction, leaving a gap for a flexible, high system-level test automation technique/tool. In this paper we present JAutomate, a Visual GUI Testing (VGT) tool that fills this gap by combining image recognition with record and replay functionality for high system-level test automation performed through the system under test's graphical user interface. We present the tool, its benefits compared to other similar techniques and manual testing. In addition, we compare JAutomate with two other VGT tools based on their static properties. Finally, we present the results from a survey with industrial practitioners that identifies test-related problems that industry is currently facing and discuss how JAutomate can solve or mitigate these problems.",2013,0, 6367,AURORA: AUtomatic RObustness coveRage Analysis Tool,"Code coverage is usually used as a measurement of testing quality and as adequacy criterion. Unfortunately, code coverage is very sensitive to modifications of the code structure, and, therefore, we can achieve the same degree of coverage with different testing effort by writing the same program in syntactically different ways. For this reason, code coverage can provide the tester with misleading information. In order to understand how a testing criterion is affected by code structure modifications, we have introduced a way to measure the sensitivity of coverage to code changes by means of code-to-code transformations. However the manual execution of the robustness analysis is tedious, time consuming and error prone. 
In order to solve these issues we present AURORA, a tool that automates the robustness analysis process and leverages the capabilities offered from several existing tools. AURORA has an extendible architecture that concretely supports the tester in the execution of the robustness analysis. Due to this extendible architecture, each user can personalize the robustness analysis to his/her needs. AURORA allows the user to add new transformations by using TXL, which is a programming language specifically designed to support source transformation tasks. It performs the coverage evaluation by using existing code coverage tools and is based on the use of the JUnit framework.",2013,0, 6368,A Toolchain for Designing and Testing XACML Policies,"In modern pervasive application domains, such as Service Oriented Architectures (SOAs) and Peer-to-Peer (P2P) systems, security aspects are critical. Justified confidence in the security mechanisms that are implemented for assuring proper data access is a key point. In the last years XACML has become the de facto standard for specifying policies for access control decisions in many application domains. Briefly, an XACML policy defines the constraints and conditions that a subject needs to comply with for accessing a resource and doing an action in a given environment. Due to the complexity of the language, XACML policy specification is a difficult and error prone process that requires specific knowledge and a high effort to be properly managed.",2013,0, 6369,GUIdiff -- A Regression Testing Tool for Graphical User Interfaces,"Due to the rise of tablets and smart phones and their impact on everyday life, robust and high-quality Graphical User Interfaces (GUIs) are becoming more and more important. Unfortunately, testing these GUIs still remains a big challenge with the current industrial tools, which only cater to manual testing practices and provide limited oracle functionalities such as screenshot comparison. These tools often result in large amounts of manual labor and thus increase cost. We propose a new GUI regression testing tool called GUIdiff, which works similar to diff tools for text data. It executes two different versions of a System Under Test (SUT) side by side, compares the GUI states against each other and presents the list of the detected deviations to the tester. The tool is semi-automatic in the sense that it finds the differences completely automatic and that the tester labels them as faults or false positives.",2013,0, 6370,Identification of Anomalies in Processes of Database Alteration,"Data, especially in large item sets, hide a wealth of information on the processes that have created and modified them. Often, a data-field or a set of data-fields are not modified only through well-defined processes, but also through latent processes; without the knowledge of the second type of processes, testing cannot be considered exhaustive. As a matter of fact, changes in the data deriving from unknown processes can cause anomalies not detectable by testing, which focuses on known data variation rules. History of data variations can yield information about the nature of the changes. In my work I focus on the elicitation of an evolution profile of data: the values data may assume, the change frequencies, the temporal variation of a piece of data in relation to other data, or other constraints that are directly connected to the reference domain. The profile of evolution is then used to detect anomalies in the database state evolution. 
Detecting anomalies in the database state evolution could strengthen the quality of a system, since a data anomaly could be the signal of a defect in the applications populating the database.",2013,0, 6371,CDCGM Track Report,"The Convergence of distributed clouds, grids and their management conference track focuses on virtualization and cloud computing as they enjoy wider acceptance. A recent IDC report predicts that by 2016, $1 of every $5 will be spent on cloud-based software and infrastructure. Three papers address key issues in cloud computing such as resource optimization and scaling to address changing workloads and energy management. In addition, the DIME network architecture proposed in WETICE2010 is discussed in two papers in this conference, both showing its usefulness in addressing fault, configuration, accounting, performance and security of service transactions with in the service oriented architecture implementation and also spanning across multiple clouds. While virtualization has brought resource elasticity and application agility to the services infrastructure management, the resulting layers of orchestration and the lack of end-to-end service visibility and control spanning across multiple service provider infrastructure have added an alarming degree of complexity. Hopefully, reducing the complexity in the next generation datacenters will be a major research topic in this conference.",2013,0, 6372,Mutation Operators for the Atlas Transformation Language,"Faults in model transformations will result in defective models, and eventually defective code. Correction of defects at the code level is considered very late and is often expensive. Uncorrected defects in the models will propagate to other artifacts; thus, adversely affecting the quality of the end product. Moreover, defect propagation may result in a system that does not meet the stakeholders' requirements. Therefore, model transformations must be thoroughly tested to maintain product quality, while keeping development cost at reasonable levels. Existing literature on model transformation verification and validation has considered coverage based techniques. Mutation testing is a popular technique that has been extensively studied in the literature, and shown to perform better than coverage based techniques. To support the mutation testing of model transformations, this paper proposes a suite of mutation operators for the Atlas Transformation language (ATL). The effectiveness of the proposed operators is evaluated using a model transformation program, implemented in ATL, to transform Use Case Maps models to UML Activity Diagrams. The results show that the proposed operators can successfully detect inadequacies in an example test suite.",2013,0, 6373,Efficient Mutation Analysis of Relational Database Structure Using Mutant Schemata and Parallelisation,"Mutation analysis is an effective way to assess the quality of input values and test oracles. Yet, since this technique requires the generation and execution of many mutants, it often incurs a substantial computational cost. In the context of program mutation, the use of mutant schemata and parallelisation can reduce the costs of mutation analysis. This paper is the first to apply these approaches to the mutation analysis of a relational database schema, arguably one of the most important artefacts in a database application. 
Using a representative set of case studies that vary in both their purpose and structure, this paper empirically compares an unoptimised method to four database structure mutation techniques that intelligently employ both mutant schemata and parallelisation. The results of the experimental study highlight the performance trade-offs that depend on the type of database management system (DBMS), underscoring the fact that every DBMS does not support all types of efficient mutation analysis. However, the experiments also identify a method that yields a one to ten times reduction in the cost of mutation analysis for relational schemas hosted by both the Postgres and SQLite DBMSs.",2013,0, 6374,Conditional-Based Refactorings and Fault-Proneness: An Empirical Study,"Recent empirical work has shown that some of the most frequently applied Java-based refactorings relate to the manipulation of code conditionals and flags. The logic of such code is often complex and difficult to test regressively. One open research issue thus relates to the fault-proneness profiles of classes where these refactorings have been applied, vis-a-vis refactorings on other classes. In this paper, we explore six releases of three Eclipse projects and the faults in the refactored classes of those releases. We explore four specific conditional-based refactorings and the supposition that: classes where these four refactorings have been applied will tend to have relatively higher fault incidences because of the inherent complexity of the embedded logic given by the constructs they operate on. Results showed that one of the four refactorings in particular had been applied to classes with higher fault profiles - the `Replace Nested Conditional with Guard Clauses' refactoring. Some evidence that the `Remove Control Flag' refactoring had also been applied to relatively highly fault-prone classes was found. Relative to other types of refactoring, the result thus suggests that these two effectively signpost fault-prone classes.",2013,0, 6375,A Process for Assessing Data Quality,"This industrial contribution describes a tool support approach to assessing the quality of relational databases. The approach combines two separate audits - an audit of the database structure as described in the schema and an audit of the database content at a given point in time. The audit of the database schema checks for design weaknesses, data rule violations and deviations from the original data model. It also measures the size, complexity and structural quality of the database. The audit of the database content compares the state of selected data attributes to identify incorrect data and checks for missing and redundant records. The purpose is to initiate a data clean-up process to ensure or restore the quality of the data.",2013,0, 6376,Model-Based Test Suite Generation for Function Block Diagrams Using the UPPAAL Model Checker,"A method for model-based test generation of safety-critical embedded applications using Programmable Logic Controllers and implemented in a programming language such as Function Block Diagram (FBD) is described. The FBD component model is based on the IEC 1131 standard and it is used primarily for embedded systems, in which timeliness is an important property to be tested. Our method involves the transformation of FBD programs with timed annotations into timed automata models which are used to automatically generate test suites. 
Specifically we demonstrate how to use model transformation for formalization and model-checking of FBD programs using the UPPAAL tool. Many benefits emerge from this method, including the ability to automatically generate test suites from a formal model in order to ensure compliance to strict quality requirements including unit testing and specific coverage measurements. The approach is experimentally assessed on a train control system in terms of consumed resources.",2013,0, 6377,Evaluation of t-wise Approach for Testing Logical Expressions in Software,"Pair-wise and, more generally, t-wise testing are the most common and powerful combinatorial testing approaches. This paper investigates the effectiveness of the t-wise approach for testing logical expressions in software in terms of its fault detecting capabilities. Effectiveness is evaluated experimentally using special software tools for generating logical expressions and t-wise test cases, simulating faults in expressions, testing faulty expressions, and evaluating effectiveness of the testing. T-wise testing effectiveness is measured in its totality and for specific types of faults; it is then compared with random testing. A detailed analysis of the experimental results is also provided.",2013,0, 6378,On Use of Coverage Metrics in Assessing Effectiveness of Combinatorial Test Designs,"Combinatorial test suite design is a test generation technique, popular in part due to its ability to achieve coverage and defect finding power approximating that of exhaustive testing while keeping test suite sizes constrained. In recent years, there have been numerous advances in combinatorial test design techniques, in terms of efficiency and usability of methods used to create them as well as in understanding of their benefits and limitations when applied to real world software. Numerous case studies have appeared presenting practical applications of the combinatorial test suite design techniques, often comparing them with manually-created, random, or exhaustive suites. These comparisons are done either in terms of defects found or by applying some code coverage metric. Since many different and valid combinatorial test suites of strength t can be created for a given test domain, the question whether they all have the same coverage properties is a pertinent one. In this paper we explore the stability of size and coverage of combinatorial test suites. We find that in general coverage levels increase and coverage variability decreases with increasing order of combinations t; however we also find exceptions with implications for practitioners. In addition, we explore cases where coverage achieved by combinatorial test suites of order t applied to the same program is not different from test suites of order t-1. Lastly, we discuss these findings in context of the ongoing practice of applying code coverage metrics to measure effectiveness of combinatorial test suites.",2013,0, 6379,Identifying Failure-Inducing Combinations Using Tuple Relationship,"Combinatorial testing (CT) aims at detecting interaction failures between parameters in a system. Identifying the failure-inducing combinations of a failing test configuration can help developers find the cause of this failure. However, most studies in CT focus on detecting the failures rather than identifying failure-inducing combinations. In this paper, we propose the notion of a tuple relationship tree (TRT) to describe the relationships among all the candidate parameter interactions. 
TRT reduces additional test configurations that need to be generated in the fault localization process, and it also provides a clear view of all possible candidate interactions. As a result, our approach will not omit any possible interaction that could be the cause of a failure. In particular, we can identify multiple failure-inducing combinations that overlap with each other. Moreover, we extend our approach to handle the case where additional failure-inducing combinations may be introduced by newly generated test configurations.",2013,0, 6380,Combinatorial Interaction Testing with Multi-perspective Feature Models,"Testing product lines and similar software involves the important task of testing feature interactions. The challenge is to test all those feature interactions that result in testing of all variations across all dimensions of variation. In this context, we propose the use of combinatorial test generation, with Multi-Perspective Feature Models (MPFM) as the input model. MPFMs are a set of feature models created to achieve Separation of Concerns within the model. We believe that the MPFM is useful as an input model for combinatorial testing and it is easy to create and understand. This approach helps achieve a better coverage of variability in the product line. Results from an experiment on a real-life case show that up to 37% of the test effort could be reduced and up to 79% defects from the live system could be detected.",2013,0, 6381,Applying Combinatorial Testing to the Siemens Suite,"Combinatorial testing has attracted a lot of attention from both industry and academia. A number of reports suggest that combinatorial testing can be effective for practical applications. However, there are few systematic, controlled studies on the effectiveness of combinatorial testing. In particular, input parameter modeling is a key step in the combinatorial testing process. But most studies do not report the details of the modeling process. In this paper, we report an experiment that applies combinatorial testing to the Siemens suite. The Siemens suite has been used as a benchmark to evaluate the effectiveness of many testing techniques. Each program in the suite has a number of faulty versions. The effectiveness of combinatorial testing is measured in terms of the number of faulty versions that are detected. The experimental results show that combinatorial testing is effective in terms of detecting most of the faulty versions with a small number of tests. In addition, we report the details of our modeling process, which we hope to shed some lights on this critical, yet often ignored step, in the combinatorial testing process.",2013,0, 6382,On Adequacy of Assertions in Automated Test Suites: An Empirical Investigation,"An integral part of test case is the verification phase (also called `test oracle'), which verifies program's state, output or behavior. In automated testing, the verification phase is often implemented using test assertions which are usually developed manually by testers. More precisely, assertions are used for checking the unit or the system's behavior (or output) which is reflected by the changes in the data fields of the class under test, or the output of the function under test. Originated from human (testers') error, test suites are prone to having inadequate assertions. The paper reports an empirical study on the Inadequate-Assertion (IA) problem in the context of automated test suites developed for open-source projects. 
In this study, test suites of three active open-source projects have been chosen. To investigate IA problem occurrence among the sampled test suites, we performed mutation analysis and coverage analysis. The results indicate that: (1) the IA problem is common among the sampled open-source projects, and the occurrence varies from project to project and from package to package, and (2) the occurrence rate of the IA problem is positively co-related with the complexity of test code.",2013,0, 6383,Search-Based Propagation of Regression Faults in Automated Regression Testing,"Over the lifetime of software programs, developers make changes by adding, removing, enhancing functionality or by refactoring code. These changes can sometimes result in undesired side effects in the original functionality of the software, better known as regression faults. To detect these, developers either have to rely on an existing set of test cases, or have to create new tests that exercise the changes. However, simply executing the changed code does not guarantee that a regression fault manifests in a state change, or that this state change propagates to an observable output where it could be detected by a test case. To address this propagation aspect, we present EVOSUITER, an extension of the EVOSUITE unit test generation tool. Our approach generates tests that propagate regression faults to an observable difference using a search-based approach, and captures this observable difference with test assertions. We illustrate on an example program that EVOSUITER can be effective in revealing regression errors in cases where alternative approaches may fail, and motivate further research in this direction.",2013,0, 6384,The Forth International Workshop on Security Testing (SECTEST 2013),"To improve software security, several techniques, including vulnerability modelling and security testing, have been developed but the problem remains unsolved. On one hand, the SECTEST workshop tries to answer how vulnerability modelling can help users understand the occurrence of vulnerabilities so to avoid them, and what the advantages and drawbacks of the existing models are to represent vulnerabilities. At the same time, the workshop tries to understand how to solve the challenging security testing problem given that testing the mere functionality of a system alone is already a fundamentally critical task, how security testing is different from and related to classical functional testing, and how to assess the quality of security testing. The objective of this workshop is to share ideas, methods, techniques, and tools about vulnerability modelling and security testing to improve the state of the art.",2013,0, 6385,An Empirical Study on Data Retrievability in Decentralized Erasure Code Based Distributed Storage Systems,"Erasure codes are applied in distributed storage systems to provide data robustness against server failures by storing data redundancy among many storage servers. A (n, k) erasure code encodes a data object, which is represented as k elements, into a codeword of n elements such that any k out of these n codeword elements can recover the data object back. Decentralized erasure codes are proposed for distributed storage systems without a central authority. The characteristic of decentralization makes resulting storage systems more scalable and suitable for loosely-organized networking environments. 
However, different from conventional erasure codes, decentralized erasure codes trade some probability of a successful data retrieval for decentralization. Although theoretical lower bounds on the probability are overwhelming from a theoretical aspect, it is essential to know what the data retrievability is in real applications from a practical aspect. We focus on decentralized erasure code based storage systems and investigate data retrievability from both theoretical and practical aspects. We conduct simulation for random processes of storage systems to evaluate data retrievability. Then we compare simulation results and analytical values from theoretical bounds. By our comparison, we find that data retrievability is underestimated by those bounds. Data retrievability is over 99% in most cases in our simulations, where the order of the used finite field is an 8-bit prime. Data retrievability can be enlarged by using a larger finite field. We believe that data retrievability of decentralized erasure code based storage systems is acceptable for real applications.",2013,0, 6386,Improving Service Diagnosis through Increased Monitoring Granularity,"Due to their loose coupling and highly dynamic nature, service-oriented systems offer many benefits for realizing fault tolerance and supporting trustworthy computing. They enable automatic system reconfiguration in case that a faulty service is detected. Spectrum-based fault localization (SFL) is a statistics-based diagnosis technique that can effectively be applied to pinpoint problematic services. It works by monitoring service usage in system transactions and comparing service coverage with pass/fail observations. SFL exhibits poor performance in diagnosing faulty services in cases when services are tightly coupled. In this paper, we study how and to which extent an increase in monitoring granularity can help to improve correct diagnosis of tightly coupled faulty services. We apply SFL in a real service-based system, for which we show that 100% correct identification of faulty services can be achieved through an increase in the monitoring granularity.",2013,0, 6387,The Determination Method for Software Reliability Qualitative Indices,"The determination of software reliability indices is the primary task in the software reliability engineering. The indices are taken as not only the basis for the software reliability design and the constraints during the software development process, but also the foundation of the software's acceptance. Software reliability indices are usually divided into quantitative indices and qualitative indices. Quantitative indices are quantified software reliability parameters' values, such as software reliability is quantitatively defined as the probability of failure-free operation of a software program for a specified time under specified conditions. However, having a number, even with the appropriate accompanying evidence, is not generally sufficient to convince customers or even the system/software suppliers that the software satisfies its requirements. Thus, qualitative indices such as software reliability is also qualitatively defined as a set of attributes that bear on the capability of software to maintain its level of performance under stated conditions for a stated period of time. 
Attributes that relate to implementation of fault tolerance design, use of best engineering practices, application of specialized methods and techniques for ensuring reliability-critical requirements, and procedural methods to ensure mistake-proof loading and/or operation also provide evidence that improves the confidence that the software will not cause a system failure. So the qualitative indices can be regarded as the requirements for software reliability activities throughout the development process. Unfortunately, currently there is no systematic theory and approach for software reliability indices' determination. This paper proposes a method for determining the software reliability qualitative indices based on the two standards of SAE-JA1003 and RTCA DO-178B which are widely used by the airworthiness and industrial sectors, as well as the best practices and management experiences of software reliability engineering. This paper proposes the method's principle, which determines the software reliability qualitative indices according to the profile formed by all stages of the software life cycle and the environment requirements, technique requirements, validation requirements and management requirements. Combined with the software's criticality levels, this paper also proposes a generic framework which recommends a variety of tailoring mechanisms and building guidelines to help users develop their demanded indices.",2013,0, 6388,Quality perception in 3D interactive environments,"In this paper we investigate how configuration and visualization parameters influence the quality of experience in a 3D interactive environment, more specifically in a motion parallax setup. In order to do so, we designed a dedicated testing room and conducted subjective experiments with a team of evaluators. The tests considered parameters such as different disparities, amount of parallax, monitor sizes and visualization angles. Factors such as visual comfort, sense of immersion and the 3D experience as a whole have been assessed. The users were also asked to execute a task in the 3D motion parallax environment assessing the difficulty to complete the task for different parameter configurations. Obtained results suggest that user experience in an immersive environment is not as critically influenced by configuration parameters such as disparity and amount of parallax as initially thought. They also indicate that a better understanding of how this experience is influenced still requires the design and conduction of more comprehensive testing procedures.",2013,0, 6389,TURNUS: A design exploration framework for dataflow system design,"While research on the design of heterogeneous concurrent systems has a long and rich history, a unified design methodology and tool support has not emerged so far, and thus the creation of such systems remains a difficult, time-consuming and error-prone process. The absence of principled support for system evaluation and optimization at high abstraction levels makes the quality of the resulting implementation highly dependent on the experience or prejudices of the designer. This is particularly critical when the combinatorial explosion of design parameters overwhelms available optimization tools. In this work we address these matters by presenting a unified design exploration framework suitable for a wide range of different target platforms.
The design is unified and implemented at high level by using a standard dataflow language, while the target platform is described using the IP-XACT standard. This facilitates different design space heuristics that guide the designer during validation and optimization stages without requiring low-level implementations of parts of the application. Our framework currently yields exploration and optimization results in terms of application throughput and buffer size dimensioning, although other co-exploration and optimization heuristics are available.",2013,0, 6390,Pattern generation for Mutation Analysis using Genetic Algorithms,"Mutation Analysis (MA) is a fault-based simulation technique that is used to measure the quality of testbenches for mutant detections where mutants are simple syntactical changes in the designs. A mutant is said living if its error effect cannot be observed at the primary outputs. Previous works mainly focused on the cost reduction in the process of MA, because the MA is a computation intensive process in the commercial tool. For the living mutants, to the best of our knowledge, the commercial tool has not addressed the pattern generation issue yet. Thus, this paper presents a Genetic Algorithm to generate patterns for detecting living mutants such that the quality of the verification environment is improved. The experimental results show that more living mutants can be detected after adding the generated patterns in the testbench.",2013,0, 6391,A collaborative software development model based on formal concept analysis and stable matching,"As the world shrinks into a global village, software development processes seek cooperation from multiple teams that are spread across the globe and possess their own unique capabilities and skills. Studies indicate that the Collaborative Software Development model has several advantages such as increased productivity and cost efficiency. However, it also poses the challenge of coordination and task assignment between dispersed and heterogeneous teams. In this paper, we propose a Formal Concept Analysis based model for skills oriented mapping between disparate teams and a set of software development tasks within a distributed and collaborative software development environment in a manner that is efficient, economical and fault-tolerant. Concepts extracted in the form of teams exhibiting common skills and software development tasks requiring specific skills are matched by using an extended version of the Stable Marriage Problem. The stable pairs so obtained are subsequently pruned with the objective of either minimizing the cost of allocation or maximizing the continuity of tasks. We also assess the redundancy for each task. Experimental results demonstrate the efficacy of our approach.",2013,0, 6392,Identifying the root cause of failures in IT changes: Novel strategies and trade-offs,"Despite the Change and Problem Management have received significant attention from the academic community in recent years, the developed solutions do not identify the root cause of failures in IT Changes and, in some cases, only detect software failures. To address this, in this paper, we introduce four strategies to identify root cause of problems based on an interactive approach, in which the Diagnosis System questions a human operator. The strategies introduced and evaluated in this paper are built upon a system we have developed previously, but whose root cause identification was more rudimentary. 
A case study that uses the improved solution is conducted for the purpose of analyzing the diagnostics generated. Thus, it was possible to compare the diagnostics generated by each strategy, identifying any trends.",2013,0, 6393,PReSET: A toolset for the evaluation of network resilience strategies,"Computer networks support many of the services that our society relies on. Therefore, ensuring their resilience to faults and challenges, such as attacks, is critical. To do this can require the execution of resilience strategies that perform dynamic reconfiguration of networks, including resilience-specific functionality. It is important that resilience strategies are evaluated prior to their execution, for example, to ensure they will not exacerbate an on-going problem. To facilitate this activity, we have developed a toolset that supports the evaluation of resilience strategies that are specified as event-driven policies. The toolset couples the Ponder2 policy-based management framework and the OMNeT++ simulation environment. In this paper, we discuss the network resilience problem and motivate simulation as a suitable way to evaluate resilience strategies. We describe the toolset we have developed, including its architecture and the implementation of a number of resilience mechanisms, and its application to evaluating strategies that detect and mitigate Internet worm behaviour.",2013,0, 6394,Detecting software aging in a cloud computing framework by comparing development versions,"Software aging, i.e. degradation of software performance or functionality caused by resource depletion is usually discovered only in the production scenario. This incurs large costs and delays of defect removal and requires provisional solutions such as rejuvenation (controlled restarts). We propose a method for detecting aging problems shortly after their introduction by runtime comparisons of different development versions of the same software. Possible aging issues are discovered by analyzing the differences in runtime traces of selected metrics. The required comparisons are workload-independent which minimizes the additional effort of dedicated stress tests. Consequently, the method requires only minimal changes to the traditional development and testing process. This paves the way to detecting such problems before public releases, greatly reducing the cost of defect fixing. Our study focuses on the memory leaks of Eucalyptus, a popular open source framework for managing cloud computing environments.",2013,0, 6395,Universal Script Wrapper An innovative solution to manage endpoints in large and heterogeneous environment,"Endpoint management is a key function for data center management and cloud management. Today's practice to manage endpoints for the enterprise is labor intensive, tedious and error prone. In this paper, we present Universal Script Wrapper, an innovative solution which provides a unique solution to allow users to manage a group of endpoints as if logged in to one endpoint. It harvests proven scripts and makes them available to automate management tasks on endpoints without modification. To reduce risk, it provides a mechanism to guard against intrusive commands and scripts that could cause massive damage to the infrastructure.",2013,0, 6396,Pattern detection in unstructured data: An experience for a virtualized IT infrastructure,"Data-agnostic management of today's virtualized and cloud IT infrastructures motivates statistical inference from unstructured or semi-structured data. 
We introduce a universal approach to the determination of statistically relevant patterns in unstructured data, and then showcase its application to log data of a Virtual Center (VMware's virtualization management software). The premise of this study is that the unstructured data can be converted into events, where an event is defined by time, source, and a series of attributes. Every event can have any number of attributes but all must have a time stamp and optionally a source of origination (be it a server, a location, a business process, etc.) The statistical relevance of the data can then be made clear via determining the joint and prior probabilities of events using a discrete probability computation. From this we construct a Directed Virtual Graph with nodes representing events and the branches representing the conditional probabilities between two events. Employing information-theoretic measures the graphs are reduced to a subset of relevant nodes and connections. Moreover, the information contained in the unstructured data set is extracted from these graphs by detecting particular patterns of interest.",2013,0, 6397,Video quality monitoring based on precomputed frame distortions,"In the past decade, video streaming has taken over a large part of the current Internet traffic and more and more TV broadcasters and network providers extend their portfolio of video streaming services. With the growing expectations of video consumers with respect to the service quality, monitoring is an important aspect for network providers to detect possible performance problems or high network load. In parallel, emerging technologies like software defined networking or network virtualization introduce support for specialized networks which allow enhanced functionality in the network. This development enables more sophisticated monitoring techniques in the specialized networks which use knowledge about the video content to better predict the service quality at consumers. In this work, we present a SSIM-based monitoring technique and compare it with the current state-of-the-art which infers the service quality from the monitored packet loss. We further show how network conditions like packet loss or bursts influence the two different monitoring techniques.",2013,0, 6398,System synthesis from UML/MARTE models: The PHARAON approach,"Model-Driven Engineering (MDE) based on UML is a mature methodology for software development. However, its application to HW/SW embedded system specification and design requires specific features not covered by the language. For this reason, the MARTE profile for Real-Time and Embedded systems was defined. It has proven to be powerful enough to support holistic system modeling under different views. This single-source model is able to capture the required information, enabling the automatic generation of executable and configurable models for fast performance analysis without requiring additional engineering effort. As a result of this performance analysis suitable system architecture can be decided. At this point, the SW stack to be executed by each processing node in the selected heterogeneous platform has to be generated. In the general case this is a tedious and error-prone process with little assistance from available tools. Current practices oblige the SW engineer to develop the code for each node of the heterogeneous multi-core platform by hand. The code has to be written specifically for the selected architecture and architectural mapping, thus reducing reusability. 
In order to overcome this limitation, the FP7 PHARAON project aims to develop tools able to automatically generate the code to be executed in each node from the initial system model. This affects not only the application code, the static and run-time libraries (e.g. OpenMP/OpenCL), the middleware and communication functions, but also the OS and the driver calls in each node.",2013,0, 6399,A Petri-Net-Based Approach to Reliability Determination of Ontology-Based Service Compositions,"Ontology Web Language for Services (OWL-S), one of the most significant semantic Web service ontologies proposed to date, provides a core ontological framework and guidelines for describing the properties and capabilities of services in an unambiguous computer-interpretable form. Analysis of the quality of service of composite service processes specified in OWL-S enables service users to decide whether the process meets nonfunctional requirements. In this paper, we propose a probabilistic approach for reliability analysis of OWL-S processes, employing the non-Markovian stochastic Petri net (NMSPN) as the fundamental model. Based on the NMSPN representations of the OWL-S elements, we introduce an analytical method for the calculation of the process-normal-completion probability as the reliability estimate. This method takes the probabilistic parameters of service invocations and messages as model inputs. To validate the feasibility and accuracy of our approach, we obtain runtime experimental data and conduct a confidence interval analysis in a case study. A sensitivity analysis is also performed to determine the impact of model parameters on reliability and to help identify the reliability bottlenecks.",2013,0, 6400,Improving the Trustworthiness of Medical Device Software with Formal Verification Methods,"Wearable and implantable medical devices are commonly used for diagnosing, monitoring, and treating various medical conditions. Increasingly complex software and wireless connectivity have enabled great improvements in the quality of care and convenience for users of such devices. However, an unfortunate side-effect of these trends has been the emergence of security concerns. In this letter, we propose the use of formal verification techniques to verify temporal safety properties and improve the trustworthiness of medical device software. We demonstrate how to bridge the gap between traditional formal verification and the needs of medical device software. We apply the proposed approach to cardiac pacemaker software and demonstrate its ability to detect a range of software vulnerabilities that compromise security and safety.",2013,0, 6401,A Learning-Based Framework for Engineering Feature-Oriented Self-Adaptive Software Systems,"Self-adaptive software systems are capable of adjusting their behavior at runtime to achieve certain functional or quality-of-service goals. Often a representation that reflects the internal structure of the managed system is used to reason about its characteristics and make the appropriate adaptation decisions. However, runtime conditions can radically change the internal structure in ways that were not accounted for during their design. As a result, unanticipated changes at runtime that violate the assumptions made about the internal structure of the system could degrade the accuracy of the adaptation decisions. 
We present an approach for engineering self-adaptive software systems that brings about two innovations: 1) a feature-oriented approach for representing engineers' knowledge of adaptation choices that are deemed practical, and 2) an online learning-based approach for assessing and reasoning about adaptation decisions that does not require an explicit representation of the internal structure of the managed software system. Engineers' knowledge, represented in feature-models, adds structure to learning, which in turn makes online learning feasible. We present an empirical evaluation of the framework using a real-world self-adaptive software system. Results demonstrate the framework's ability to accurately learn the changing dynamics of the system while achieving efficient analysis and adaptation.",2013,0, 6402,"Conservative Reasoning about the Probability of Failure on Demand of a 1-out-of-2 Software-Based System in Which One Channel Is """"Possibly Perfect""""","In earlier work, [11] (henceforth LR), an analysis was presented of a 1-out-of-2 software-based system in which one channel was possibly perfect. It was shown that, at the aleatory level, the system pfd (probability of failure on demand) could be bounded above by the product of the pfd of channel A and the pnp (probability of nonperfection) of channel B. This result was presented as a way of avoiding the well-known difficulty that for two certainly-fallible channels, failures of the two will be dependent, i.e., the system pfd cannot be expressed simply as a product of the channel pfds. A price paid in this new approach for avoiding the issue of failure dependence is that the result is conservative. Furthermore, a complete analysis requires that account be taken of epistemic uncertainty-here concerning the numeric values of the two parameters pfdA and pnpB. Unfortunately this introduces a different difficult problem of dependence: estimating the dependence between an assessor's beliefs about the parameters. The work reported here avoids this problem by obtaining results that require only an assessor's marginal beliefs about the individual channels, i.e., they do not require knowledge of the dependence between these beliefs. The price paid is further conservatism in the results.",2013,0, 6403,Mars science laboratory frame manager for centralized frame tree database and target pointing,"The FM (Frame Manager) flight software module is responsible for maintaining the frame tree database containing coordinate transforms between frames. The frame tree is a proper tree structure of directed links, consisting of surface and rover subtrees. Actual frame transforms are updated by their owner. FM updates site and saved frames for the surface tree. As the rover drives to a new area, a new site frame with an incremented site index can be created. Several clients including ARM and RSM (Remote Sensing Mast) update their related rover frames that they own. Through the onboard centralized FM frame tree database, client modules can query transforms between any two frames. Important applications include target image pointing for RSM-mounted cameras and frame-referenced arm moves. 
The use of frame tree eliminates cumbersome, error-prone calculations of coordinate entries for commands and thus simplifies flight operations significantly.",2013,0, 6404,Hector: Detecting Resource-Release Omission Faults in error-handling code for systems software,"Omitting resource-release operations in systems error handling code can lead to memory leaks, crashes, and deadlocks. Finding omission faults is challenging due to the difficulty of reproducing system errors, the diversity of system resources, and the lack of appropriate abstractions in the C language. To address these issues, numerous approaches have been proposed that globally scan a code base for common resource-release operations. Such macroscopic approaches are notorious for their many false positives, while also leaving many faults undetected. We propose a novel microscopic approach to finding resource-release omission faults in systems software. Rather than generalizing from the entire source code, our approach focuses on the error-handling code of each function. Using our tool, Hector, we have found over 370 faults in six systems software projects, including Linux, with a 23% false positive rate. Some of these faults allow an unprivileged malicious user to crash the entire system.",2013,0, 6405,An algorithmic approach to error localization and partial recomputation for low-overhead fault tolerance,"The increasing size and complexity of massively parallel systems (e.g. HPC systems) is making it increasingly likely that individual circuits will produce erroneous results. For this reason, novel fault tolerance approaches are increasingly needed. Prior fault tolerance approaches often rely on checkpoint-rollback based schemes. Unfortunately, such schemes are primarily limited to rare error event scenarios as the overheads of such schemes become prohibitive if faults are common. In this paper, we propose a novel approach for algorithmic correction of faulty application outputs. The key insight for this approach is that even under high error scenarios, even if the result of an algorithm is erroneous, most of it is correct. Instead of simply rolling back to the most recent checkpoint and repeating the entire segment of computation, our novel resilience approach uses algorithmic error localization and partial recomputation to efficiently correct the corrupted results. We evaluate our approach in the specific algorithmic scenario of linear algebra operations, focusing on matrix-vector multiplication (MVM) and iterative linear solvers. We develop a novel technique for localizing errors in MVM and show how to achieve partial recomputation within this algorithm, and demonstrate that this approach both improves the performance of the Conjugate Gradient solver in high error scenarios by 3x-4x and increases the probability that it completes successfully by up to 60% with parallel experiments up to 100 nodes.",2013,0, 6406,A view on the past and future of fault injection,"Fault injection is a well-known technology that enables assessing dependability attributes of computer systems. Many works on fault injection have been developed in the past, and fault injection has been used in different application domains. 
This fast abstract briefly revises previous applications of fault injection, especially for embedded systems, and puts forward ideas on its future use, both in terms of application areas and business markets.",2013,0, 6407,Error detector placement for soft computation,"The scaling of Silicon devices has exacerbated the unreliability of modern computer systems, and power constraints have necessitated the involvement of software in hardware error detection. At the same time, emerging workloads in the form of soft computing applications, (e.g., multimedia applications) can tolerate most hardware errors as long as the erroneous outputs do not deviate significantly from error-free outcomes. We term outcomes that deviate significantly from the error-free outcomes as Egregious Data Corruptions (EDCs). In this study, we propose a technique to place detectors for selectively detecting EDC causing errors in an application. We performed an initial study to formulate heuristics that identify EDC causing data. Based on these heuristics, we developed an algorithm that identifies program locations for placing high coverage detectors for EDCs using static analysis. We evaluate our technique on six benchmarks to measure the EDC coverage under given performance overhead bounds. Our technique achieves an average EDC coverage of 82%, under performance overheads of 10%, while detecting 10% of the Non-EDC and benign faults.",2013,0, 6408,Mobile app development and usability research to help dementia and Alzheimer patients,"Caregiver anecdotes attest that music and photographs play an important role for family members diagnosed with Alzheimer's disease (AD), even those with severe AD. Tablets and iPads, which are prevalent, can be utilized with dementia patients in portraying favorite music and family photographs via apps developed in close partnership with geriatric facilities. Anecdotal research has shown that non-verbal late-stage dementia patients have become stimulated when iPods played their beloved tunes. There is an unmet need in geriatric facilities for stimulating dementia patients, as well providing hard-core data for proving increased cognitive abilities with technology. Technology can help bridge the gap between patients and staff to improve the quality of life for the cognitively impaired. This study addresses cognitive functioning and quality of life for people diagnosed with dementia via technology. In recent times, the influx of older adults suffering from Alzheimer's or dementia related illness has impacted the U.S. Healthcare system. Cognition significantly declines in older adults with AD or dementia over the course of the disease, causing most to be dependent on caregivers, thus routinely institutionalized. Caregivers are often required to focus their attention on addressing the agitation or discomfort of the AD or dementia patient. Research has shown that technology instruments such as iPods, help stimulate those with dementia. This study focuses on innovative devices such as iPads and tablets, which are mainstream and easy to use, can not only help determine stage of dementia, but also provide stimulation to improve cognitive functioning. It is hoped that this research will analyze that specially created apps and existing assistive software can be used to decrease the symptoms and improve cognition of older adults suffering from AD or other dementia related diseases.
Via service-learning courses, students developed an easy-to-use application for tablets to help older adults with disabilities more readily use the technology. Student programmers produced apps and performed usability tests with the dementia patients, as well as met with geriatric facility personnel to produce application software that meets the patients, family, and caregiver needs and expectations. For example, a student term project produced an application entitled Candoo that utilizes Google's voice recognition and synthesis engine to navigate the web, provide the weather, and supply pill reminder alerts. Another application example included one that allows families to electronically send photographs, video clips, and favorite music from anywhere to loved ones for enjoyment. Furthermore, older adults were assessed by nursing students for cognitive functioning before, and after the semester's intervention. Such mobile apps could allow dementia persons to become less agitated and stay in their homes longer, while also providing awareness and positive change of attitude by those of another generation towards the elderly. This research will discuss student developed mobile applications in the scope of helping improve the quality of life of patients with AD or dementia.",2013,0, 6409,OCR-independent and segmentation-free word-spotting in handwritten Arabic Archive documents,"In this paper, a word-spotting approach is presented that can help in reading handwritten Arabic Archive Documents. Because of the low quality of these documents, the proposed approach is free segmentation, independent of OCR, using a global transformation of word images. It is a based learning approach which employs Generalized Hough Transform (GHT) technique. It detects words, described by their models, in documents images by finding the model's position in the image. With the GHT, the problem of finding the model's position is transformed to a problem of finding the transformation's parameter that maps the model into the image. Parameters such as Hough threshold and distance between voting points are considered for a better location and recognition of words. We tested our system on registers from the 19th century onwards, held in the National Archives of Tunisia. Our first experiments reach an average of 94% of well-spotted words.",2013,0, 6410,Induction motor mechanical fault identification using Park's vector approach,"In this work we have shown that the extended Park's vector spectrum is rich in harmonics characteristics of mechanical defects (air-gap eccentricity and outer raceway bearing fault). About the use of Park's Lissajou's curves to identify mechanical defects, we have demonstrated that this type of index can only detect the occurrence of a fault, but it cannot identify.",2013,0, 6411,"Current sensors faults detection, isolation and control reconfiguration for PMSM drives","This paper deals with a new method current sensors faults detection isolation (FDI) and reconfiguration of the control loops of a permanent magnet synchronous motor (PMSM) drives. The stator currents are measured as well as observed. During fault free operation, the measured signals are used for the PMSM control. In the case of current sensors faults, the faulty measurements are detected and isolated using the new FDI algorithm. This algorithm uses an augmented PMSM model and a bank of adaptive observers to generate residuals. The resulting residuals are used for sensor fault detection.
A logic algorithm is built in such a way to isolate and identify the faulty sensor for a stator phase current fault after detecting the fault occurrence. After sensor fault detection and isolation, the control is reconfigured using the healthy observer's outputs. The validity of the proposed method is verified by simulation tests.",2013,0, 6412,A real-time open phase faults detection for IPMSM drives based on Discrete Fourier Transform phase analysis,"Permanent Magnet Synchronous Motors (PMSM) are many used to high performance applications. Accurate faults detection can significantly improve system availability and reliability. This paper investigates the experimental implementation and detection of open phase faults in interior permanent magnet synchronous motor (IPMSM). The proposed method of open phase fault detection is based only on stator current measurement. The objective of this paper is to develop a new detection method for the open phase fault in IPMSM drives. The main idea consists in minimizing the number of sensors allowing the open stator phase fault of the system to study. This paper proposes the fault diagnosis for open-phase faults of IPMSM drives using a Discrete Fourier Transform phase. The current waveform patterns for various modes of open phase winding are investigated. Discrete Fourier Transform is used for the phases (, ) calculation. Experimental results show that the method is able to detect the open-phase faults in IPMSM drive. The experimental implementation is carried out on powerful dSpace DS1103 controller board based on the digital signal processor (DSP) TMS320F240. Experimental results obtained confirm the aforementioned study.",2013,0, 6413,Neural SKCS for efficient noise reduction and content preserving,"Images are often corrupted by random variations in intensity, illumination or have poor contrast and can't be used directly. Several studies have expressed the need to reduce noise and to improve the visual quality of the image. For this purpose, several mathematical tools have been developed such as image filtering by a convolution filter, such as the kernel with compact support (KCS) which has been recently proposed by Remaki and Cheriet [1] and its separable version (SKCS) [10]. The effectiveness of the SKCS filter in the smoothing operation depends on the value of the scale parameter. Moreover, if the scale parameter is increased, the image is blurred and details and borders are removed. This disadvantage is related to the static nature of the KCS kernel. In this paper we propose a dynamic and adaptive SKCS filter based on neural networks. The scale parameters involved in the filtering process are calculated in real time and supervised by the neural network. The filter scale varies continuously in order to detect and clean noisy areas of the image. To assess the developed theory, an application of filtering noisy images is presented, including a qualitative comparison between the result obtained by the static SKCS and the adaptive SKCS kernel proposed.",2013,0, 6414,Detection of brushless exciter rotating diodes failures by spectral analysis of main output voltage,"Rotating rectifier is a basic part of synchronous generators. Inappropriate operation of this component can prove costly for the machine's owner. This paper presents theoretical analysis and experimental validation for detecting failure of brushless exciter rotating diodes that can fail either open circuit or short circuit.
Harmonic analysis of the alternator output voltage waveform is performed when machine is unloaded as well as when it runs around its rated load. Apparition of characteristic frequencies can be useful to distinguish the rotating diodes state. By considering the relative amplitudes at specific harmonics, it is possible to discriminate short circuit diode failure case from open circuit diode breakdown.",2013,0, 6415,"Single, multiple and simultaneous current sensors FDI based on an adaptive observer for PMSM drives","This paper deals with a new method single, multiple and simultaneous current sensors faults detection isolation (FDI) and identification for permanent magnet synchronous motor (PMSM) drives. A new state variable is introduced so that an augmented system can be constructed to treat PMSM sensor faults as actuator faults. This method uses the PMSM model and a bank of adaptive observers to generate residuals. The resulting residuals are used for sensor fault detection. A logic algorithm is built in such a way to isolate and identify the faulty sensor for a stator phase current fault after detecting the fault occurrence. The validity of the proposed method is verified by simulation tests.",2013,0, 6416,Assessing the quality of bioforensic signatures,"We present a mathematical framework for assessing the quality of signature systems in terms of fidelity, risk, cost, other attributes, and utility-a method we call Signature Quality Metrics (SQM). We demonstrate the SQM approach by assessing the quality of a signature system designed to predict the culture medium used to grow a microorganism. The system consists of four chemical assays and a Bayesian network that estimates the probabilities the microorganism was grown using one of eleven culture media. We evaluated fifteen combinations of the signature system by removing one or more of the assays from the Bayes net. We show how SQM can be used to compare the various combinations while accounting for the tradeoffs among three attributes of interest: fidelity, cost, and the amount of sample material consumed by the assays.",2013,0, 6417,Applying Scheduling Algorithms with QoS in the Cloud Computing,"Cloud computing is the model to use existing computing resources that are delivered as a form of service over a network. These services can be divided into three parts with software, platform, and infrastructure. It is important to evaluate cloud computing environment to predict valid cost to manage the cloud computing system. SimJava and GridSim is well-known simulation tools but they do not support the virtualization of cloud computing. CloudSim is only tool which can evaluate the performance of this environment and it is based on SimJava and GridSim. It is suitable to simulate the situation with large amount of devices and data in cloud computing. Also, it can simulate the virtualization of computing nodes, network devices, and storage units. Service provider has to guarantee quality of service to provide stable related services. For this, we can use the scheduling algorithms. However, there is no consideration of data priority in CloudSim. It is important to support QoS to keep the service level agreement. Thus, it is needed to research a scheduling algorithm to support QoS. 
In this paper, we propose the way to support various scheduling algorithms in CloudSim.",2013,0, 6418,Machine Learning-Based Software Quality Prediction Models: State of the Art,"Quantification of parameters affecting the software quality is one of the important aspects of research in the field of software engineering. In this paper, we present a comprehensive literature survey of prominent quality molding studies. The survey addresses two views: (1) quantification of parameters affecting the software quality; and (2) using machine learning techniques in predicting the software quality. The paper concludes that, model transparency is a common shortcoming to all the surveyed studies.",2013,0, 6419,Indentifying Fault-Prone Object in the Web Service,"The faults in web services are very variant. They are occurred by software complexity. This paper focuses on identifying the fault-prone objects in the Web service that are very complex and different. At first define software complexity metrics for the web service. The technique is successful in classifying objects with relatively low error rate. This procedure shows very useful method in the detection of objects, which occur the fault of web services with high potential.",2013,0, 6420,Automatic enhanced CDFG generation based on runtime instrumentation,"Control and Data Flow Graph (CDFG) is a universal description of program behavior, which is widely used in the co-design of software and hardware. The derivation of CDFG has been done mostly by manually or automatically analyzing corresponding source code, which makes this process time-consuming, error-prone and incomplete. In this paper, we proposed an automated design flow based on runtime instrumentation to generate Enhanced CDFG (ECDFG) with additional runtime information. Though the approach of runtime instrumentation is widely used in software debugging to analyze the program with the accurate runtime information, it is rarely used in software and hardware co-design due to the huge trace data and processing overhead. To overcome the bottle neck of the runtime instrumentation approach, Parallel Background Event Logger is proposed to compress and save the huge amount of trace data. Hierarchical loop structures are detected by intersecting reachable set and backward reachable set. Precise data dependency information is deducted by an address based analytical method named Shower Line Algorithm. With these algorithms and techniques, a set of automatic design tools are implemented to collect runtime events, identify nested and implicit loops and deduct data dependance between modules. Exemplar results demonstrated that Enhanced CDFGs for various programs can be generated correctly with acceptable overhead.",2013,0, 6421,Semi-Automatic Generation of Device Drivers for Rapid Embedded Platform Development,"IP core integration into an embedded platform implies the implementation of a customized device driver complying with both the IP communication protocol and the CPU organization (single processor, SMP, AMP). Such a close dependence between driver and platform organization makes reuse of already existing device drivers very hard. Designers are forced to manually customize the driver code to any different organization of the target platform. This results in a very time-consuming and error-prone task. In this paper, we propose a methodology to semi-automatically generate customized device drivers, thus allowing a more rapid embedded platform development. 
The methodology exploits the testbench provided with the RTL IP module for extracting the formal model of the IP communication protocol. Then, a taxonomy of device drivers based on the CPU organization allows the system to determine the characteristics of the target platform and to obtain a template of the device driver code. This requires some manual support to identify the target architecture and to generate the desired device driver functionality. The template is used then to automatically generate drivers compliant with 1) the CPU organization, 2) the use in a simulated or in a real platform, 3) the interrupt support, 4) the operating system, 5) the I/O architecture, and 6) possible parallel execution. The proposed methodology has been successfully tested on a family of embedded platforms with different CPU organizations.",2013,0, 6422,Assessing QoS trade-offs for real-time video,"Demand for real-time video in law enforcement, emergency and first responder situations has been in rapid growth. Further, different users will have different requirements which, depending on their needs, may change over time. At some times, a user may require high frame rate to detect motion, while at other times the user may be more concerned with resolution for object recognition. In this paper we describe our model for quantifying Quality of Experience (QoE) and managing Quality of Service (QoS) that incorporates end user or mission needs. We describe our distributed utility-based QoS optimization technique, D-Q-RAM and show how it can be used to make QoS trade-offs in response to mission needs to maximize QoE. We experimentally demonstrate QoS optimized trade-offs as user preferences shift between resolution and frame rate in a live 802.11 ad hoc wireless network. The results show the ability to meet all individual user needs, changing or not, while minimizing the impact on other users.",2013,0, 6423,Optimizing agent placement for flow reconstruction of DDoS attacks,"The Internet today continues to be vulnerable to distributed denial of service (DDoS) attacks. We consider the design of a scalable agent-based system for collecting information about the structure and dynamics of DDoS attacks. Our system requires placement of agents on inter-autonomous system (AS) links in the Internet. The agents implement a self-organizing and totally decentralized mechanism capable of reconstructing topological information about the spatial and temporal structure of attacks. The system is effective at recovering DDoS attack structure, even at moderate levels of deployment. In this paper, we demonstrate how careful placement of agents within the system can improve the system's effectiveness and provide better tradeoffs between system parameters and the quality of structural information the system generates. We introduced two agent placement algorithms for our agent-based DDoS system. The first attempts to maximize the percentage of attack flows detected, while the second tries to maximize the extent to which we are able to trace back detected flows to their sources. We show, somewhat surprisingly, these two objectives are concomitant. Placement of agents in a manner which optimizes in the first criterion tends also to optimize with respect to the second criterion, and vice versa. 
Both placement schemes show a marked improvement over a system in which agents are placed randomly, and thus provide a concrete design process by which to instrument a DDoS flow reconstruction system that is effective at recovering attack structure in large networks at moderate levels of deployment.",2013,0, 6424,New techniques for testing and operational support of AESA radars,"The Active Antennas (AESA) technology has dramatically increased the operational capability of modern radars. Nevertheless, minimize production costs and cost of ownership of these systems is also a major industrial objective. The paper is organized in two parts. The first one deals with improvements achieved so far for reducing the costs and the complexity of industrial testing. In the second part, a data mining method, based on Bayesian Networks, is presented. It aims at processing all data issued from the Built-In-Test (B.I.T.) in order to accurately detect some defects impossible to catch with current methods, such as transient failures or initiation of youth's defects. Originally planned for testing in production, this new performing method could replace, in the future, the current B.I.T. processing, which is used in operational support.",2013,0, 6425,Introducing tool-supported architecture review into software design education,"While modularity is highly regarded as an important quality of software, it poses an educational dilemma: the true value of modularity is realized only as software evolves, but student homework, assignments and labs, once completed, seldom evolve. In addition, students seldom receive feedback regarding the modularity and evolvability of their designs. Prior work has shown that it is extremely easy for students and junior developers to introduce extra dependencies in their programs. In this paper, we report on a first experiment applying a tool-supported architecture review process in a software design class. To scientifically address this education problem, our first objective is to advance our understanding of why students make these modularity mistakes, and how the mistakes can be corrected. We propose tool-guided architecture review so that modularity problems in students' implementation can be revealed and their consequences can be assessed against possible change scenarios. Our pilot study shows that even students who understand the importance of modularity and have excellent programming skills may introduce additional harmful dependencies in their implementations. Furthermore, it is hard for them to detect the existence of these dependencies on their own. Our pilot study also showed that students need more formal training in architectural review to effectively detect and analyze these problems.",2013,0, 6426,An empirical study of the effects of personality on software testing,"The effectiveness of testing is a major determinant of software quality. It is believed that individual testers vary in their effectiveness, but so far the factors contributing to this variation have not been well studied. In this study, we examined whether personality traits, as described by the five-factor model, affect performance on a software testing task. ICT students were given a small software testing task at which their effectiveness was assessed using several different criteria, including bug location rate, weighted fault density, and bug report quality. Their personality was assessed using the NEO PI-3 personality questionnaire. 
We then compared testing performance according to individual and aggregate measures against different five-factor personality traits. Several weak correlations between two of these personality traits, extraversion and conscientiousness, and testing effectiveness were found.",2013,0, 6427,Foreword,"The purpose of this workshop is to study and advance the effective use of models in the engineering of software systems. In particular, we are interested in the exchange of experiences, challenges and promising technologies related to modeling. The goals of the software modeling community are to improve the productivity of software developers and to improve the quality of the resulting software products. Models are useful in all phases and activities surrounding software development and deployment. Thus, workshop topics range from requirements modeling, to runtime models, to models for assessing software quality, and to the pragmatics of how to manage large collections of models. This year, we received 23 submissions. Of these, the program committee accepted 11 papers for long presentations and 3 papers for shorter presentations, for an acceptance rate of 61%. These papers form the basis of workshop sessions, each of which starts with short presentations of 2-3 papers, followed by discussions of issues and research opportunities raised by the papers and by the session topic in general. The program also includes two keynotes, a panel discussion, and a poster/demo session.",2013,0, 6428,Complementing model-driven development for the detection of software architecture erosion,"Detecting software architecture erosion is an important task during the development and maintenance of software systems. Even in model-driven approaches in which consistency between artifacts can partially be established by construction and consistency issues have been intensively investigated, the intended architecture and its realization may diverge with negative effects on software quality. In this article, we describe an approach to flexible architecture erosion detection for model-driven development approaches. Consistency constraints expressed by architectural aspects called architectural rules are specified as formulas on a common ontology, and models are mapped to instances of that ontology. A knowledge representation and reasoning system is then utilized to check whether these architectural rules are satisfied for a given set of models. We describe three case studies in which this approach has been used to detect architecture erosion flexibly and argue that the negative effects of architecture erosion can be minimized effectively.",2013,0, 6429,Do external feedback loops improve the design of self-adaptive systems? A controlled experiment,"Providing high-quality software in the face of uncertainties, such as dealing with new user needs, changing availability of resources, and faults that are difficult to predict, raises fundamental challenges to software engineers. These challenges have motivated the need for self-adaptive systems. One of the primary claimed benefits of self-adaptation is that a design with external feedback loops provide a more effective engineering solution for self-adaptation compared to a design with internal mechanisms. While many efforts indicate the validity of this claim, to the best of our knowledge, no controlled experiments have been performed that provide scientifically founded evidence for it.
Such experiments are crucial for researchers and engineers to underpin their claims and improve research. In this paper, we report the results of a controlled experiment performed with 24 final-year students of a Master in Software Engineering program in which designs based on external feedback loops are compared with designs based on internal mechanisms. The results show that applying external feedback loops can reduce control flow complexity and fault density, and improve productivity. We found no evidence for a reduction of activity complexity.",2013,0, 6430,QoS-aware fully decentralized service assembly,"Large distributed software systems are increasingly common in today geographically distributed IT infrastructures. A key challenge for the software engineering community is how to efficiently and effectively manage such complex systems. Extending software services with autonomic capabilities has been suggested as a possible way to address this challenge. Ideally, self-management capabilities should be based on fully distributed, peer-to-peer (P2P) architectures in order to try to overcome the scalability and robustness problems of centralized solutions. Within this context, we propose an approach for the adaptive self-assembly of distributed services, based on a simple epidemic protocol. Our approach is based on the three-layer reference model for adaptive systems, and is centered on the use of a gossip protocol to achieve decentralized information dissemination and decision making. The goal of our system is to build and maintain an assembly of services that, besides functional requirements, is able to fulfill global quality of service (QoS) and structural requirements. A set of simulation experiments is used to assess the effectiveness of our approach in terms of convergence speed towards the optimal solution, and resilience to failures.",2013,0, 6431,An automated tool selection method based on model transformation: OPNET and NS-3 case study,"Errors in telecom service (TS) design may be expensive to correct by telecommunication enterprises, especially if they are discovered late and after the equipments and software are deployed. Verifying complex architectures of TS designs is a daunting task and subject to human errors. Thus, we aim to provide supportive tools that helps during the TS creation activity. Network simulators play an important role in detecting design errors and predicting performance quality violations in the TS domain due to the measurements that they can produce. The metrics associated with performance requirements are numerous, and it is difficult to find a unique tool that can handle the prediction of their values. In this paper, we tackle the tool selection challenge for non domain-expert designers taking into consideration the differences between tools and the large number of metrics that they can measure. Thus, by applying model transformation techniques, we propose a method to select the proper tool(s) to obtain the measurements needed during verification activity. Therefore, we present our contributions on the modeling language level, and the tool selection algorithm with its implementation. Reusability, complexity, and customized measurements are taken into account. 
We illustrate our approach with a video conference and customized measurement example using OPNET and NS-3 simulators.",2013,0, 6432,Functional SOA testing based on constraints,"In the fierce competition on today's software market, Service-Oriented Architectures (SOAs) are an established design paradigm. Essential concepts like modularization, reuse, and the corresponding IP core business are inherently supported in the development and operation of SOAs that offer flexibility in many aspects and thus optimal conditions also for heterogeneous system developments. The intrinsics of large and complex SOA enterprises, however, require us to adopt and evolve our verification technology, in order to achieve expected software quality levels. In this paper, we contribute to this challenge by proposing a constraint based testing approach for SOAs. In our work, we augment a SOA's BPEL business model with pre- and postcondition contracts defining essential component traits, and derive a suite of feasible test cases to be executed after assessing its quality via corresponding coverage criteria. We illustrate our approach's viability via a running example as well as experimental results, and discuss current and envisioned automation levels in the context of a test and diagnosis workflow.",2013,0, 6433,Did we test our changes? Assessing alignment between tests and development in practice,"Testing and development are increasingly performed by different organizations, often in different countries and time zones. Since their distance complicates communication, close alignment between development and testing becomes increasingly challenging. Unfortunately, poor alignment between the two threatens to decrease test effectiveness or increases costs. In this paper, we propose a conceptually simple approach to assess test alignment by uncovering methods that were changed but never executed during testing. The paper's contribution is a large industrial case study that analyzes development changes, test service activity and field faults of an industrial business information system over 14 months. It demonstrates that the approach is suitable to produce meaningful data and supports test alignment in practice.",2013,0, 6434,Automatic test generation for mutation testing on database applications,"To assure high quality of database applications, testing database applications remains the most popularly used approach. In testing database applications, tests consist of both program inputs and database states. Assessing the adequacy of tests allows targeted generation of new tests for improving their adequacy (e.g., fault-detection capabilities). Comparing to code coverage criteria, mutation testing has been a stronger criterion for assessing the adequacy of tests. Mutation testing would produce a set of mutants (each being the software under test systematically seeded with a small fault) and then measure how high percentage of these mutants are killed (i.e., detected) by the tests under assessment. However, existing test-generation approaches for database applications do not provide sufficient support for killing mutants in database applications (in either program code or its embedded or resulted SQL queries). To address such issues, in this paper, we propose an approach called MutaGen that conducts test generation for mutation testing on database applications. 
In our approach, we first apply an existing approach that correlates various constraints within a database application through constructing synthesized database interactions and transforming the constraints from SQL queries into normal program code. Based on the transformed code, we generate program-code mutants and SQL-query mutants, and then derive and incorporate query-mutant-killing constraints into the transformed code. Then, we generate tests to satisfy query-mutant-killing constraints. Evaluation results show that MutaGen can effectively kill mutants in database applications, and MutaGen outperforms existing test-generation approaches for database applications in terms of strong mutant killing.",2013,0, 6435,An industry proof-of-concept demonstration of automated combinatorial test,"Studies have found that the largest single cost and schedule component of safety-critical, embedded system development is software rework: locating and fixing software defects found during test. In many such systems these defects are the result of interactions among no more than 6 variables, suggesting that 6-way combinatorial testing would be sufficient to trigger and detect them. The National Institute of Standards and Technology developed an approach to automatically generating, executing, and analyzing such tests. This paper describes an industry proof-of-concept demonstration of automated unit and integration testing using this approach. The goal was to see if it might cost-effectively reduce rework by reducing the number of software defects escaping into system test - if it was adequately accurate, scalable, mature, easy to learn, and easy to use and still was able to achieve the required level of structural coverage. Results were positive - e.g., 2775 test input vectors were generated in 6 seconds, expected outputs were generated in 60 minutes, and executing and analyzing them took 8 minutes. Tests detected all seeded defects and in the proof-of-concept demonstration achieved nearly 100% structural coverage.",2013,0, 6436,ReFit: A Fit test maintenance plug-in for the Eclipse refactoring plug-in,"The Fit framework is a widely established tool for automated acceptance test-driven development (ATDD). Fit stores the test specification separate from the test fixture code in an easily human readable and editable tabular form in HTML format. Additional tools like the FitPro plugin or FitNesse support the writing of test specifications and test fixtures from within the Eclipse IDE or the Web. With the increasing popularity of agile test-driven software development, maintenance of the evolving and growing test base has become an important issue. However, there has been no support yet for automated refactoring of Fit test cases. In a recent research project, we developed the Eclipse plugin ReFit for automated refactoring of Fit test cases. Fit test refactoring can occur due to changing requirements or changing Java code, which in either case means a cross-language refactoring to keep test specification and test fixture in sync. In this paper the concept for the development of the ReFit Eclipse Plugin is described, which significantly reduces the effort for Fit test maintenance and makes refactoring less error prone. Besides a tight integration into the existing Eclipse refactoring plugin, major goals of the plugin were to make it easy extensible for additional refactorings, new fixture types and further test specification file formats.
Challenges faced when adding new and modifying existing Eclipse refactoring behavior are described and are due to the strong dependency on the Eclipse JDK and LTK features, and the solutions developed are presented.",2013,0, 6437,A novel fuzzy classification to enhance software regression testing,"An effective system regression testing for consecutive releases of very large software systems, such as modern telecommunications systems, depends considerably on the selection of test cases for execution. Classification models can classify, early in the test planning phase, those test cases that are likely to detect faults in the upcoming regression test. Due to the high uncertainties in regression test, classification models based on fuzzy logic are very useful. Recently, methods have been proposed for automatically generating fuzzy if-then rules by applying complicated rule generation procedures to numerical data. In this research, we introduce and demonstrate a new rule-based fuzzy classification (RBFC) modeling approach as a method for identifying high effective test cases. The modeling approach, based on test case metrics and the proposed rule generation technique, is applied to extracting fuzzy rules from numerical data. In addition, it also provides a convenient way to modify rules according to the costs of different misclassification errors. We illustrate our modeling technique with a case study of large-scale industrial software systems and the results showed that test effectiveness and efficiency was significantly improved.",2013,0, 6438,Discovering signature patterns from event logs,"More and more information about processes is recorded in the form of so-called event logs. High-tech systems such as X-ray machines and high-end copiers provide their manufacturers and services organizations with detailed event data. Larger organizations record relevant business events for process improvement, auditing, and fraud detection. Traces in such event logs can be classified as desirable or undesirable (e.g., faulty or fraudulent behavior). In this paper, we present a comprehensive framework for discovering signatures that can be used to explain or predict the class of seen or unseen traces. These signatures are characteristic patterns that can be used to discriminate between desirable and undesirable behavior. As shown, these patterns can, for example, be used to predict remotely whether a particular component in an X-ray machine is broken or not. Moreover, the signatures also help to improve systems and organizational processes. Our framework for signature discovery is fully implemented in ProM and supports class labeling, feature extraction and selection, pattern discovery, pattern evaluation and cross-validation, reporting, and visualization. A real-life case study is used to demonstrate the applicability and scalability of the approach.",2013,0, 6439,A Programming Language Approach to Fault Tolerance for Fork-Join Parallelism,"When running big parallel computations on thousands of processors, the probability that an individual processor will fail during the execution cannot be ignored. Computations should be replicated, or else failures should be detected at runtime and failed subcomputations reexecuted. We follow the latter approach and propose a high-level operational semantics that detects computation failures, and allows failed computations to be restarted from the point of failure. 
We implement this high-level semantics with a lower-level operational semantics that provides a more accurate account of processor failures, and prove in Coq the correspondence between the high- and low-level semantics.",2013,0, 6440,Requirements-Driven Self-Repairing against Environmental Failures,"Self-repairing approaches have been proposed to alleviate the runtime requirements satisfaction problem by switching to appropriate alternative solutions according to the feedback monitored. However, little has been done formally on analyzing the relations between specific environmental failures and corresponding repairing decisions, making it a challenge to derive a set of alternative solutions to withstand possible environmental failures at runtime. To address these challenges, we propose a requirements-driven self-repairing approach against environmental failures, which combines both development-time and runtime techniques. At the development phase, in a stepwise manner, we formally analyze the issue of self-repairing against environmental failures with the support of the model checking technique, and then design a sufficient and necessary set of alternative solutions to withstand possible environmental failures. The runtime part is a runtime self-repairing mechanism that monitors the operating environment for unsatisfiable situations, and makes self-repairing decisions among alternative solutions in response to the detected environmental failures.",2013,0, 6441,EnHTM: Exploiting Hardware Transaction Memory for Achieving Low-Cost Fault Tolerance,"Fault-tolerance has become an essential concern for processor designers due to increasing transient fault rates, even for the processors used in the mainstream computing. As the mainstream commodity market accepts only low-cost fault tolerance solutions, traditional high-end solutions are unacceptable due to their expensive overheads. This paper presents EnHTM, a hybrid software/hardware implemented low-cost fault tolerance solution for the serial programs running on commodity systems. EnHTM employs light-weight symptom-based mechanism to detect faults and recovers from faults using a minimally-modified Hardware Transactional Memory (HTM) which features lazy conflict detection, lazy data versioning. Compile-time analysis approach is also exploited to support larger transaction size, so that transient faults detected within long latency can be recovered. The evaluation experiment result shows that EnHTM can recover from 89.4% of catastrophic failures caused by transient faults, with a performance overhead of 2.6% in error-free executions on average.",2013,0, 6442,Regression Testing Prioritization Based on Model Checking for Safety-Crucial Embedded Systems,"The order in which test-cases are executed has an influence on the rate at which faults can be detected. In this paper we demonstrate how test-case prioritization can be performed with the use of model-checkers. For this, different well known prioritization techniques are adapted for model-based use. New property based prioritization techniques are introduced. In addition it is shown that prioritization can be done at test-case generation time, thus removing the need for test-suite post-processing.
Several experiments for safety-crucial embedded systems are used to show the validity of these ideas.",2013,0, 6443,A New Multi-threaded Code Synthesis Methodology and Tool for Correct-by-Construction Synthesis from Polychronous Specifications,"Embedded software systems respond to multiple events coming from various sources - some temporally regular (ex: periodic sampling of continuous time signals) and some intermittent (ex: interrupts, exception events etc.). Timely response to such events while executing complex computation, might require multi-threaded implementation. For example, overlapping I/O of various types of events, and computation on such events may be delegated to different threads. However, manual programming of multi-threaded programs is error-prone, and proving correctness is computationally expensive. In order to guarantee safety of such implementations, we believe that a correct-by-construction synthesis of multi-threaded software from formal specification is required. It is also imperative that the multiple threads are capable of making progress asynchronous to each other, only synchronizing when shared data is involved or information requires to be passed from one thread to other. Especially on a multi-core platform, lesser the synchronization between threads, better will be the performance. Also, the ability of the threads to make asynchronous progress, rather than barrier synchronize too often, would allow better real-time schedulability. In this work, we describe our technique for multi-threaded code synthesis from a variant of the polychronous programming language SIGNAL, namely MRICDF. Through a series of experimental benchmarks we show the efficacy of our synthesis technique. Our tool EmCodeSyn which was built originally for sequential code synthesis from MRICDF models has been now extended with multi-threaded code synthesis capability. Our technique first checks the concurrent implementability of the given MRICDF model. For implementable models, we further compute the execution schedule and generate multi-threaded code with appropriate synchronization constructs so that the behavior of the implementation is latency equivalent to that of the original MRICDF model.",2013,0, 6444,Modeling Stock Analysts' Decision Making: An Intelligent Decision Support System,"It is well known that security analysis is a time-consuming and error-prone process. However, it can be improved or enhanced considerably by automated reasoning. Efforts to reduce the inaccuracy and incorrectness of analyses and to enhance the confidence levels of stock selection have led to the development of an intelligent decision support system called Trade Expert, which assists, not replaces, portfolio managers. Trade Expert assumes the role of a hypothetical securities analyst capable of analyzing stocks, calling market turns, and making recommendations. It has a knowledge base of stock trading expertise, and a case base of past episodes and consequences of decisions. By combining knowledge-based problem solving with case-based reasoning and fuzzy inference, Trade Expert demonstrates forms of intelligent behavior not yet observed in traditional decision support systems and expert systems. 
The novelty of this research lies in its application to analogical reasoning, fuzzy reasoning, and knowledge-based decision making.",2013,0, 6445,A Comparison of Some Predictive Models for Modeling Abortion Rate in Russia,"Predictive modeling techniques are popular methods for building models to predict a target of interest. In many modeling problems, however, the focus is to identify possible factors that have significant association with the target. For this type of problem, it is very easy to stretch the interpretation of an association relationship to a causation relationship. Practitioners must pay special attention to such a misinterpretation when data are observational data. In addition, the process of data collection and cleansing are critical in order to produce quality data for modeling. In this article, an observational study is conducted to illustrate the issues about data quality and model building to identify potential important factors associated with abortion rate using data collected in Russia from year 2000 to 2009. Some pitfalls and cautions of applying predictive modeling techniques are discussed.",2013,0, 6446,Water Distribution System Monitoring and Decision Support Using a Wireless Sensor Network,"Water distribution systems comprise labyrinthine networks of pipes, often in poor states of repair, that are buried beneath our city streets and relatively inaccessible. Engineers who manage these systems need reliable data to understand and detect water losses due to leaks or burst events, anomalies in the control of water quality and the impacts of operational activities (such as pipe isolation, maintenance or repair) on water supply to customers. Water Wise is a platform that manages and analyses data from a network of wireless sensor nodes, continuously monitoring hydraulic, acoustic and water quality parameters. Water Wise supports many applications including rolling predictions of water demand and hydraulic state, online detection of events such as pipe bursts, and data mining for identification of longer-term trends. This paper illustrates the advantage of the Water Wise platform in resolving operational decisions.",2013,0, 6447,A new failure analysis approach to predict and localize defects and weakness areas in trough-glass-vias for a multifunctional package level camera,"In this paper, we provide a novel approach to identify failures and defects that occur in the glass interposer of a system-on-package technology-based miniaturized multifunctional camera. First, we use simulations to validate the proposed defect prediction and/or weakness identification techniques. Then, we confirm the predictions using non-destructive failure analysis techniques. Finally, we use the physical analysis techniques to confirm the software failure mode assumptions.",2013,0, 6448,The Web Services Composition Testing Based on Extended Finite State Machine and UML Model,"Web services are designed as software building blocks for Service Oriented Architecture (SOA). It provides an approach to software development that system and application can be constructed by assembling reusable software building blocks, called services. The industries have adopted web services composition to generate new business applications or mission critical services. One of the most popular integration languages for web services composition is Web Services Business Process Execution Language (WS-BPEL). 
Although the individual service is usually functional correctly, however, several unexpected faults may occur during execution of composite web service. It is difficult to detect the original failure service because the faults may propagate, accumulate and spread. In this paper, we present a technique of Model-Based Testing (MBT) to enhance testing of interactions among the web services. The technique combines Extended Finite State Machine (EFSM) and UML sequence diagram to generate a test model, called EFSM-SeTM. We also defined various coverage criteria to generate valid test paths from EFSM-SeTM model for a better test coverage of all possible scenarios.",2013,0, 6449,Exact determination of a winding disk radial deformation location considering tank effect using an analytical method,"Power transformers are one of the most expensive components of the power system. Timely detection of fault arose in the transformer can be used to prevent unwanted outage of transformer and repair costs. Using electromagnetic waves has recently been proposed for on-line monitoring of the transformer. The presence of the tank in the transformer structure causes problem for analyzing the electromagnetic waves. In this paper, a new analytical method based on locus of the objects in the space is proposed to detect the radial deformation location of a disk winding considering tank effect. The proposed experimental setup for this method has been modeled using CST (Computer Simulation Technology) software. In this paper, Vivaldi antennas suitable for measurements in environments with multi-path routing are used and the analysis is performed in the time domain. The simulation results show that exact determination of radial deformation location can be detected with good accuracy using this method.",2013,0, 6450,A new traveling wave fault location algorithm in series compensated transmission line,"Series capacitors (SCs) are installed on long transmission lines to reduce the inductive reactance of lines. This makes it appear electrically shorter and increases the power transfer capability. Series capacitors and their associated over-voltage protection devices (typically Metal Oxide Varistors (MOVs), and/or air gaps) create several problems for protection relays and fault locators including voltage and/or current inversion, sub-harmonic oscillations, transients caused by the air-gap flashover and sudden changes in the operating reach. In this paper, an accurate fault location algorithm for series compensated power transmission lines is presented. With using voltage and current traveling waves and placement of a fault locator in the middle of transmission line near the SCs, location of faults is calculated with high accuracy also proposed algorithm needs no communication link and uses only local signals and because of using of traveling wave polarity have no problem for detecting of reflected waves and therefore it solves problems caused by one end traveling wave based fault location methods. A simple power system containing a compensated transmission line is simulated on PSCAD/EMTDC software and fault location algorithm is implemented on MATLAB environment using wavelet transformer.",2013,0, 6451,Productive Development of Dynamic Program Analysis Tools with DiSL,"Dynamic program analysis tools serve many important software engineering tasks such as profiling, debugging, testing, program comprehension, and reverse engineering. 
Many dynamic analysis tools rely on program instrumentation and are implemented using low-level instrumentation libraries, resulting in tedious and error-prone tool development. The recently released Domain-Specific Language for Instrumentation (DiSL) was designed to boost the productivity of tool developers targeting the Java Virtual Machine, without impairing the performance of the resulting tools. DiSL offers high-level programming abstractions especially designed for development of instrumentation-based dynamic analysis tools. In this paper, we present a controlled experiment aimed at quantifying the impact of the DiSL programming model and high-level abstractions on the development of dynamic program analysis instrumentations. The experiment results show that compared with a prevailing, state-of-the-art instrumentation library, the DiSL users were able to complete instrumentation development tasks faster, and with more correct results.",2013,0, 6452,"Rule-Based Behaviour Engineering: Integrated, Intuitive Formal Rule Modelling","Requirement engineering is a difficult task which has a critical impact on software quality. Errors related to requirements are considered the most expensive types of software errors. They are the major cause of project delays and cost overruns. Software developers need to cooperate with multiple stakeholders with different backgrounds and concerns. The developers need to investigate an unfamiliar problem space and make the transition from the informal problem space to the formal solution space. The requirement engineering process should use systematic methods which are constructive, incremental, and rigorous. The methods also need to be easy to use and understand so that they can be used for communication among different stakeholders. Is it possible to invent a human intuitive modelling methodology which systematically translates the informal requirements into a formally defined model? Behaviour Engineering has arguably solved many problems. However, the size and low level of the final Behavior Tree makes it hard to match with the original requirements. Here, we propose a new requirement modelling approach called Rule-Based Behaviour Engineering. We separate two concerns, rules and procedural behaviours, right at the beginning of the requirement modelling process. We combine the Behavior Tree notation for procedural behaviour modelling with a non-monotonic logic called Clausal Defeasible Logic for rule modelling. In a systematic way, the target model is constructed incrementally in four well-defined steps. Both the representations of rules and procedural flows are humanly readable and intuitive. The result is an effective mechanism for formally modelling requirements, detecting requirement defects, and providing a set of tools for communication among stakeholders.",2013,0, 6453,Development of Robust Traceability Benchmarks,"Traceability benchmarks are essential for the evaluation of traceability recovery techniques. This includes the validation of an individual trace ability technique itself and the objective comparison of the technique with other traceability techniques. However, it is generally acknowledged that it is a real challenge for researchers to obtain or build meaningful and robust benchmarks. This is because of the difficulty of obtaining or creating suitable benchmarks. In this paper, we describe an approach to enable researchers to establish affordable and robust benchmarks. 
We have designed rigorous manual identification and verification strategies to determine whether or not a link is correct. We have developed a formula to calculate the probability of errors in benchmarks. Analysis of error probability results shows that our approach can produce high quality benchmarks, and our strategies significantly reduce error probability in them.",2013,0, 6454,Predicting Fault-Prone Software Modules with Rank Sum Classification,"The detection and correction of defects remains among the most time consuming and expensive aspects of software development. Extensive automated testing and code inspections may mitigate their effect, but some code fragments are necessarily more likely to be faulty than others, and automated identification of fault prone modules helps to focus testing and inspections, thus limiting wasted effort and potentially improving detection rates. However, software metrics data is often extremely noisy, with enormous imbalances in the size of the positive and negative classes. In this work, we present a new approach to predictive modelling of fault proneness in software modules, introducing a new feature representation to overcome some of these issues. This rank sum representation offers improved or at worst comparable performance to earlier approaches for standard data sets, and readily allows the user to choose an appropriate trade-off between precision and recall to optimise inspection effort to suit different testing environments. The method is evaluated using the NASA Metrics Data Program (MDP) data sets, and performance is compared with existing studies based on the Support Vector Machine (SVM) and Naive Bayes (NB) Classifiers, and with our own comprehensive evaluation of these methods.",2013,0, 6455,Fast-Tracking GENI Experiments Using HyperNets,"Although the underlying network resources needed to support virtualized networks are rapidly becoming available, the tools and abstractions needed to effectively make use of these virtual networks is severely lacking. Although networks like GENI are now available to experimenters, creating an experimental network can still be a daunting and error-prone task. While virtual networks enable experimenters to build tailored networks from the ""ground up"", starting from scratch is rarely what an experimenter wants to do. Moreover, the challenges of incorporating real-world users into GENI experiments make it difficult to benefit real users or obtain realistic traffic. In this paper we describe a new service designed to simplify the process of setting up and running GENI experiments while at the same time adding support for real-world users to join GENI experiments. Our approach is based on a network hypervisor service used to deploy ""HyperNets"": pre-defined experimental environments that can be quickly and easily created by experimenters. To illustrate the utility and simplicity of our approach, we describe two example HyperNets, and show how our network hypervisor service is able to automatically deploy them on GENI. We then present some initial performance results from our implementation on GENI.
Because our network hypervisor is itself a client of GENI (i.e., it calls the GENI AM APIs to create HyperNets), we briefly discuss our experience using GENI and the challenges we encountered mapping HyperNets onto the GENI framework.",2013,0, 6456,Automated Analysis of Reliability Architectures,"The development of complex and critical systems calls for a rigorous and thorough evaluation of reliability aspects. Over the years, several methodologies have been introduced in order to aid the verification and analysis of such systems. Despite this fact, current technologies are still limited to specific architectures, without providing a generic evaluation of redundant system definitions. In this paper we present a novel approach able to assess the reliability of an arbitrary combinatorial redundant system. We rely on an expressive modeling language to represent a wide class of architectural solutions to be assessed. On such models, we provide a portfolio of automatic analysis techniques: we can produce a fault tree, that represents the conditions under which the system fails to produce a correct output, based on it, we can provide a function over the components reliability, which represents the failure probability of the system. At its core, the approach relies on the logical formalism of equality and uninterpreted functions, it relies on automated reasoning techniques, in particular Satisfiability Modulo Theories decision procedures, to achieve efficiency. We carried out an extensive experimental evaluation of the proposed approach on a wide class of multi-stage redundant systems. On the one hand, we are able to automatically obtain all the results that are manually obtained in [1], on the other, we provide results for a much wider class of architectures, including the cases of non-uniform probabilities and of two voters per stage.",2013,0, 6457,Game-Based Monitors for Scenario-Based Specification,"Run-time verification techniques based on monitors have become the basic means of detecting software failures in dynamic and open environments. One challenging problem is how the monitor can provide sufficient indications before the real failures, so that the system has enough time to act before the failures cause serious harm. To this end, this paper proposes the main idea on how to generate monitors from a scenario-based specification called property sequence chart based on game theory. The monitors are interpreted in multivalued semantics: satisfied, infinitely controllable, system finitely controllable, system urgently controllable, environment finitely controllable, environment urgently controllable, violated. Through the multi-valued semantics definition, the monitors can provide enough information to help the system to take measures for failure prevention or recovery.",2013,0, 6458,Educational Collaborative Virtual Environments: Evaluation Model,In this paper we propose a model for assessing the quality of educational collaborative virtual environments. The objective is to establish a theoretical model that highlights a number of relevant sets of requirements in relation to the quality in educational collaborative virtual environments. 
It is intended to apply the model during the lifecycle of product development and in the selection of the environment in order to support the learning/teaching process.,2013,0, 6459,Towards Efficient Probabilistic Scheduling Guarantees for Real-Time Systems Subject to Random Errors and Random Bursts of Errors,"Real-time computing and communication systems are often required to operate with prespecified levels of reliability in harsh environments, which may lead to the exposure of the system to random errors and random bursts of errors. The classical fault-tolerant schedulability analysis in such cases assumes a pseudo-periodic arrival of errors, and does not effectively capture any underlying randomness or burst characteristics. More modern approaches employ much richer stochastic error models to capture these behaviors, but this is at the expense of greatly increased complexity. In this paper, we develop a quantile-based approach to probabilistic schedulability analysis in a bid to improve efficiency whilst still retaining a rich stochastic error model capturing random errors and random bursts of errors. Our principal contribution is the derivation of a simple closed-form expression that tightly bounds the number of errors that a system must be able to tolerate at any time subsequent to its critical instant in order to achieve a specified level of reliability. We apply this technique to develop an efficient 'one-shot' schedulability analysis for a simple fault-tolerant EDF scheduler. The paper concludes that the proposed method is capable of giving efficient probabilistic scheduling guarantees, and may easily be coupled with more representative higher-level job failure models, giving rise to efficient analysis procedures for safety-critical fault-tolerant real-time systems.",2013,0, 6460,PIE: A lightweight control scheme to address the bufferbloat problem,"Bufferbloat is a phenomenon where excess buffers in the network cause high latency and jitter. As more and more interactive applications (e.g. voice over IP, real time video conferencing and financial transactions) run in the Internet, high latency and jitter degrade application performance. There is a pressing need to design intelligent queue management schemes that can control latency and jitter; and hence provide desirable quality of service to users. We present here a lightweight design, PIE (Proportional Integral controller Enhanced), that can effectively control the average queueing latency to a reference value. The design does not require per-packet extra processing, so it incurs very small overhead and is simple to implement in both hardware and software. In addition, the design parameters are self-tuning, and hence PIE is robust and optimized for various network scenarios. Simulation results, theoretical analysis and Linux testbed results show that PIE can ensure low latency and achieve high link utilization under various congestion situations.",2013,0, 6461,Transformation operators for easier engineering of medical process models,"The need for high-quality models is increasingly recognized for driving and documenting complex medical processes such as cancer therapies. A medical environment for such processes has to deal with a great multiplicity of dimensions such as different pathologies, different hospital departments, different agents with different concerns and expertise, different resources with a wide spectrum of capabilities, and so forth. 
The variety of needs along those multiple dimensions calls for multiple, complementary and consistent facets of the composite process model, each addressing a specific dimension. Building multi-dimensional process models is in our experience hard and error-prone. The paper describes various operators for composing process model facets in a coherent way or, conversely, for decomposing process models into specific facets that abstract from details irrelevant to a specific dimension. These operators are grounded on the formal trace semantics provided by our process language and its supporting analysis toolset. The paper shows how these operators may help modeling, analyzing, documenting and enacting complex processes. Their use is illustrated on simplified examples taken from real cancer therapies.",2013,0, 6462,RSL-PL: A linguistic pattern language for documenting software requirements,"Software requirements are traditionally documented in natural language (NL). However, despite being easy to understand and having high expressivity, this approach often leads to well-known requirements quality problems. In turn, dealing with these problems warrants a significant amount of human effort, causing requirements development activities to be error-prone and time-consuming. This paper introduces RSL-PL, a language that enables the definition of linguistic patterns typically found in well-formed individual NL requirements, according to the field's best practices. The linguistic features encoded within RSL-PL patterns enable the usage of information extraction techniques to automatically perform the linguistic analysis of NL requirements. Thus, in this paper we argue that RSL-PL can improve the quality of requirements specifications, as well as the productivity of requirements engineers, by mitigating the continuous effort that is often required to ensure requirements quality criteria, such as clearness, consistency, and completeness.",2013,0, 6463,Uncovering product line variability from early requirement documents,"Mass production of customer-specific software application through software product lines has been gaining great attention in the past years. A software product line supports fast production of customized software applications by the composition of variable requirements, namely variability. Practitioners and researchers suggest that the efficient construction of software product lines depends on the ability of domain engineers to early identify potential variability. Controversially, uncovering product line variability from elicited requirements remains one of the main challenges in domain engineering. The current practice is an ad-hoc, tacit and consequently error-prone identification of variable requirements by domain experts while reviewing different versions of specification documents for similar products. Therefore, variability uncovering could represent an adoption barrier for many companies that should otherwise benefit. To cope with this challenge on product line requirement engineering, we propose in this paper a novel technique for uncovering variability from early requirement documents, specially, from existing Language Extended Lexicons (LEL). The technique suggests the analysis of LEL following a set of heuristics, which therefore, supports the precise grouping, identification and relation of potential variable requirements. 
In this paper we also illustrate the proposed technique through examples for the meeting scheduler domain.",2013,0, 6464,Development of a binocular eye tracking system for quality assessment of S3D representations,"It is intended to develop a high-precision binocular eye tracking system to assess the eye behavior whilst watching stereo 3D content. Up to now, available eye tracking systems are not providing necessary high precision for issues concerning stereo 3D perception. Hence, an up to now not available detailed specification of the recordings of the eye movements as well as the interpretation of binocular data is investigated. Therefore, a binocular prototype is developed as well as specification software leading to optimized data interpretation by software modules computing against uncertainties and physical thresholds. This intention including planned software modules as well as the therefore used basic eye tracking system are presented within the QUALINET industry forum.",2013,0, 6465,Parametric classification over multiple samples,"This pattern was originally designed to classify sequences of events in log files by error-proneness. Sequences of events trace application use in real contexts. As such, identifying error-prone sequences helps understand and predict application use. The classification problem we describe is typical in supervised machine learning, but the composite pattern we propose investigates it with several techniques to control for data brittleness. Data pre-processing, feature selection, parametric classification, and cross-validation are the major instruments that enable a good degree of control over this classification problem. In particular, the pattern includes a solution for typical problems that occurs when data comes from several samples of different populations and with different degree of sparcity.",2013,0, 6466,Personalising Multi Video Streams and Camera Views for Live Events,"An innovative Internet streaming video player, we call ePlayer, that supports personalised camera switching for live events has been researched, developed and evaluated. The main novelty of the system is that in dispensing the available bandwidth amongst multiple input video streams it can automatically coordinate and preserve the video quality across multiple live video streams. An additional novelty of the system is that it supports automatic camera switching based upon an individual user's preferences. The experimental results indicate that the system is able to effectively allocate a contested bandwidth resource amongst multiple streams and is able to infer a user's camera switching preferences via dynamically predicting a user's switching intervals amongst multiple cameras.",2013,0, 6467,A Hybrid Method of User Privacy Protection for Location Based Services,"Recently, highly accurate positioning devices enables us to provide various types of location based services (LBS). On the other hand, because such positioning data include deeply personal information, the protection of location privacy is one of the most significant problems in LBS. Lots of different techniques for securing the location privacy have been proposed, for instance the concept of Silent period, the concept of Dummy node, and the concept of Cloaking-region. However, many of these researches have a problem that quality of the LBS (QoS) decreased when anonymity is improved, and anonymity falls down when QoS is improved. 
In this paper, we present a node density-based location privacy scheme which can provide location privacy by utilizing hybrid concept of Dummy node and Cloaking-region. Simulation results show that the probability of tracking of a target node by an adversary is reduced and the QoS of LBS is also improved.",2013,0, 6468,Poster abstract: Formal analysis of fresenius infusion pump (FIP),"Summary form only given. Today's medical devices are based on embedded architecture, with software used to control the underlying hardware. They are highly critical since errors in the software can endanger end users such as patients and medics. Medical devices should be designed and manufactured in such a way that when used, they perform as intended and they ensure a high level of safety. Current industrial practices are based on testing processes to check if the software meets the specifications and if it fulfills its purpose. However, testing does have several disadvantages that limit the reliability of this verification and validation process. Testing cannot guarantee that a device will function properly under all conditions and bugs can never be completely identified within a program. Several attempts have already been made to provide standards for the formal verification of safety properties of medical devices, initiated by the Generic Infusion Pump project [2]. Our work is a collaboration between Objet Direct R&D and Fresenius [1]. Fresenius is a leading international health care group which produces and markets pharmaceuticals and medical devices. We aim to investigate innovative methods for software development, validation and verification. We study existing results provided amongst others by [3, 4] which we intend to extend by analyzing the Fresenius Infusion Pump (FIP) software. FIP automatizes the delivery process of fluid medical solution into patient's body. Its design is based on three layers. The highest level is the user interface and consists of three components, the administration protocol, the application system and the power management. The middle level consists of the pumping control components and the lowest level contains driver components such as Door, Watchdog, Optical Disk, Motor. FIP is modeled in UML (a total of 100 state machines) and the requirements are written in natural language. The implementation of the model is done in C++ with automatic code generation. For the V&V process, software testing checks if the implementation meets the requirements using fault scenarios written in UML. The main objective of this project is to use model-based design for migrating from software testing to formal based solution for verifying the Fresenius Infusion Pump. The goal is to use model checking technologies in order to verify requirements and eliminate bugs during the design process. Several faulty design patterns have already been identified to be caused by deadlocks, lost signal events, stack overflow, violation of real-time properties, incoherent behavior of UML state machines. We present and analyze the case study of the FIP's Motor component, a driver component of the lowest level. Its interest lies on the fact that while the Motor Control is stopped, the Motor Driver is still running.
This faulty behavior was detected during the test checks and bug was partially corrected in code review.",2013,0, 6469,Highly-reliable integer matrix multiplication via numerical packing,"The generic matrix multiply (GEMM) routine comprises the compute and memory-intensive part of many information retrieval, relevance ranking and object recognition systems. Because of the prevalence of GEMM in these applications, ensuring its robustness to transient hardware faults is of paramount importance for highly-efficient/highly-reliable systems. This is currently accomplished via error control coding (ECC) or via dual modular redundancy (DMR) approaches that produce a separate set of parity results to allow for fault detection in GEMM. We introduce a third family of methods for fault detection in integer matrix products based on the concept of numerical packing. The key difference of the new approach against ECC and DMR approaches is the production of redundant results within the numerical representation of the inputs rather than as a separate set of parity results. In this way, high reliability is ensured within integer matrix products while allowing for: (i) in-place storage; (ii) usage of any off-the-shelf 64-bit floating-point GEMM routine; (iii) computational overhead that is independent of the GEMM inner dimension. The only detriment against a conventional (i.e. fault-intolerant) integer matrix multiplication based on 32-bit floating-point GEMM is the sacrifice of approximately 30.6% of the bitwidth of the numerical representation. However, unlike ECC methods that can reliably detect only up to a few faults per GEMM computation (typically two), the proposed method attains more than 12 nines reliability, i.e. it will only fail to detect 1 fault out of more than 1 trillion arbitrary faults in the GEMM operations. As such, it achieves reliability that approaches that of DMR, at a very small fraction of its cost. Specifically, a single-threaded software realization of our proposal on an Intel i7-3632QM 2.2GHz processor (Ivy Bridge architecture with AVX support) incurs, on average, only 19% increase of execution time against an optimized, fault-intolerant, 32-bit GEMM routine over a range of matrix sizes and it remains more than 80% more efficient than a DMR-based GEMM.",2013,0, 6470,Exploiting the debug interface to support on-line test of control flow errors,"Detecting the effects of transient faults is a key point in many safety-critical applications. This paper explores the possibility of using for this purpose the debug interface existing today in several processors/controllers on the market. In this way one can achieve a good detection capability with respect to control flow errors with very small latency, while the cost for adopting the proposed technique is rather limited and does not involve any change either in the processor hardware or in the application software. The method works even if the processor uses caches. Experimental results are reported, showing both the advantages and the costs of the method.",2013,0, 6471,Experimental evaluation of GPUs radiation sensitivity and algorithm-based fault tolerance efficiency,"Experimental results demonstrate that Graphic Processing Units are very prone to be corrupted by neutrons. We have performed several experimental campaigns at ISIS, UK and at LANSCE, Los Alamos, NM, USA assessing the sensitivity of the GPU internal resources as well as the error rate of common parallel algorithms.
Experiments highlight output error patterns and radiation responses that can be fruitfully used to design optimized Algorithm-Based Fault Tolerance strategies and provide pragmatic programming guidelines to increase the code reliability with low computational overhead.",2013,0, 6472,A Feedback-Based Approach to Validate SWRL Rules for Developing Situation-Aware Software,"Recently, the Web Ontology Language (OWL) and Semantic Web Rule Language (SWRL) have been widely used to construct situation-aware environments. However, incorrect situations can be inferred, and these decrease the quality of situation-aware services. SWRL rules are one of the main causes of incorrectly inferred situations. Therefore, in this paper, we propose an approach to validate SWRL rules by applying feedback, a key concept used in software cybernetics research. We propose a feedback-based approach that consists of preparation, structural analysis, contextual analysis, and SWRL rule adaptation. Using the proposed approach, we can systematically detect errors and adapt the SWRL rules accordingly. Furthermore, our method can be used as a base model to validate SWRL rules for situation-aware software.",2013,0, 6473,A Case Study of Adaptive Combinatorial Testing,"The ability of Combinatorial Testing (CT) to detect and locate the interaction triggered failure has been well studied. But CT still suffers from many challenges, such as modeling for CT, sampling mechanisms for test generation, applicability and effectiveness. To overcome these issues of CT, adaptive combinatorial testing (ACT) is proposed in this paper, which improves the traditional CT with a well established adaptive testing method as the counter part of adaptive control and aims to make CT more flexible and practical. ACT can significantly enhance testing quality, software reliability and support testing strategy adjustment dynamically. To support further investigation, a preliminary form of concrete strategy for ACT is given as a heuristic guideline, and a case study is presented to illustrate its operations.",2013,0, 6474,Privacy-Aware Community Sensing Using Randomized Response,"Community sensing is an emerging system which allows the increasing number of mobile phone users to share effectively minute statistical information collected by themselves. This system relies on participants' active contribution including intentional input data through mobile phone's applications, e.g. Facebook, Twitter and Linkdin. However, a number of privacy concerns will hinder the spread of community sensing applications. It is difficult for resource-constrained mobile phones to rely on complicated encryption scheme. We should prepare a privacy-preserving community sensing scheme with less computational-complexity. Moreover, an environment that is reassuring for participants to conduct community sensing is strongly required because the quality of the statistical data is depending on general users' active contribution. In this article, we suggest a privacy-preserving community sensing scheme for human-centric data such as profile information by using the combination of negative surveys and randomized response techniques. By using our method described in this paper, the server can reconstruct the probability distributions of the original distributions of sensed values without violating the privacy of users. Especially, we can protect sensitive information from malicious tracking attacks. 
We evaluated how this scheme can preserve the privacy while keeping the integrity of aggregated information.",2013,0, 6475,Using a Trust Model to Reduce False Positives of SIP Flooding Attack Detection in IMS,"The IP Multimedia Subsystem (IMS) is constantly evolving to meet the growth of mobile services and Internet applications. One major security problem of the IMS is flooding attacks. There are many works that have been proposed to detect such attacks. However, generally, the detection systems trigger many alarms and most of them are false positives. These false alarms impact the quality of the detection. In this paper, we first present a method to improve the detection accuracy of SIP flooding detection in IMS by using a trust model. The trust value is calculated by a communication activity between a caller and a callee. By this algorithm, the trust value of an attacker is lower than a legitimate user because it does not have real human activities. To evaluate the proposed method, we integrate the trust model with three SIP flooding attack detection algorithms: Cumulative sum, Hellinger distance, and Tanimoto distance. The system is evaluated by using a comprehensive traffic dataset that consists of varying legitimate and malicious traffic patterns. The experimental results show that the trust integration method can reduce false alarms and improve the accuracy of the flooding attack detection algorithms.",2013,0, 6476,Analyzing and Predicting Software Quality Trends Using Financial Patterns,"The financial community assesses and analyzes fundamental qualities of stocks to predict their future performance. During the analysis different external and internal factors are considered which can affect the stock price. Financial analysts use indicators and analysis patterns, such as such as Moving Averages, Crossover patterns, and M-Top/W-Bottom patterns to determine stock price trends and potential trading opportunities. Similar to the stock market, also qualities of software systems are part of larger ecosystems which are affected by internal and external factors. Our research provides a cross disciplinary approach which takes advantages of these financial indicators and analysis patterns and re-applies them for the analysis and prediction of evolvability qualities in software system. We conducted several case studies to illustrate the applicability of our approach.",2013,0, 6477,Multi-constrained Routing Algorithm: A Networking Evaluation,"IP networks may face issues to support the offered workload due to the increasing number of Internet users, the steady influx of new Internet applications, which require stringent QoS, and applications needing big data transmission. QoS routing can be viewed as an attractive approach to tackle this issue. However, most of the QoS routing solutions are not evaluated in a realistic framework. In this paper we propose a networking evaluation of multi-constrained routing to assess the potential benefit of QoS routing protocol. To do this, we converted a multi-constrained routing algorithm into a protocol, and implemented it in the simulator NS2. Our results indicate that if the monitoring tool of a network can not sustain frequent link-state announcements, the benefits coming from implementing a QoS routing are quite low. 
On the other hand, if the network is equipped with an adequate measurement tool, then QoS routing can be worth implementing, and the routing based only the available bandwidth at each link arises as the best option (no need to consider the end-to-end delay constraint, nor the loss rate constraint).",2013,0, 6478,A Study on the Efficiency Aspect of Data Race Detection: A Compiler Optimization Level Perspective,"Dynamically detecting data races in multithreaded programs incurs significant slowdown and memory overheads. Many existing techniques have been put forward to improve the performance slowdown through different dimensions such as sampling, detection precision, and data structures to track the happened-before relations among events in execution traces. Compiling the program source code with different compiler optimization options, such as reducing the object code size as the selected optimization objective, may produce different versions of the object code. Does optimizing the object code with a standard optimization option help improve the performance of the precise online race detection? To study this question and a family of related questions, this paper reports a pilot study based on four benchmarks from the PARSEC 3.0 suite compiled with six GCC compiler optimization options. We observe from the empirical data that in terms of performance slowdown, the standard optimization options behave comparably to the optimization options for speed and code size, but behave quite different from the baseline option. Moreover, in terms of memory cost, the standard optimization options incur similar memory costs as the baseline option and the option for speed, and consume less memory than the option for code size.",2013,0, 6479,Bayesian Probabilistic Monitor: A New and Efficient Probabilistic Monitoring Approach Based on Bayesian Statistics,"Modern software systems deal with increasing dependability requirements which specify non-functional aspect of a system correct operation. Usually, probabilistic properties are used to formulate dependability requirements like performance, reliability, safety, and availability. Probabilistic monitoring techniques, as an important assurance measure, has drawn more and more interest. Despite currently several approaches has been proposed to monitor probabilistic properties, it still lacks of a general and efficient monitoring approach for monitoring probabilistic properties. This paper puts forward a novel probabilistic monitoring approach based on Bayesian statistics, called Bayesian Probabilistic Monitor (BaProMon). By calculating Bayesian Factor, the approach can check whether the runtime information can provide sufficient evidences to support the null or alternative hypothesis. We give the corresponding algorithms and validate them via simulated-based experiments. The experimental results show that BaProMon can effectively monitor QoS properties. The results also indicate that our approach is superior to other approaches.",2013,0, 6480,Evaluating Web Service Quality Using Finite State Models,"This paper addresses the problem of evaluating the Web service quality using Finite State Machines. The most popular metrics for estimating such quality and user perception are Quality of Service (QoS) and Quality of Experience (QoE), which represent objective and subjective assessments, correspondingly. In this paper, we show how QoS can be estimated for Web services and their composition using finite state models. 
We also discuss how different machine learning algorithms can be applied for evaluating QoE of Web services based on known QoS parameter values.",2013,0, 6481,A Comparison of Mutation Analysis Tools for Java,"Mutation analysis allows software developers to evaluate the quality of a test suite. The quality is measured as the ability of the test suite to detect faults injected into the program under tests. A fault is detected if at least one test case gives different results on the original program and the fault injected one. Mutation tools aim at automating and speeding both the generation of fault injected variants, called mutants, and the execution of the test suite on those mutants. In this paper, we aim at offering meaningful elements of comparison between mutation tools for Java for different usage profiles.",2013,0, 6482,ColFinder Collaborative Concurrency Bug Detection,"Many concurrency bugs are extremely difficult to be detected by random test due to huge input space and huge interleaving space. The multicore technology trend worsens this problem. We propose an innovative, collaborative approach called ColFinder to detect concurrency bugs effectively and efficiently. ColFinder uses static analysis to identify potential buggy statements. With respect to these statements, ColFinder uses program slicing to cut the original programs into smaller programs. Finally, it uses dynamic active test to verify whether the potential buggy statements will trigger real bugs. We implement a prototype of ColFinder, and evaluate it with several real-world programs. It significantly improves the probability of bug manifestation, from 0.75% to 89%. Additionally, ColFinder makes the time of bug manifestation obviously reduced by program slicing, with an average of 33%.",2013,0, 6483,Similarity-Based Search for Model Checking: A Pilot Study with Java PathFinder,"When a model checker cannot explore the entire state space because of limited resources, model checking becomes a kind of testing with an attempt to find a failure (violation of properties) quickly. We consider two state sequences in model checking: (i) the sequence in which new states are generated, and (ii) the sequence in which the states generated in sequence (i) are checked for property violation. We observe that neighboring states in sequence (i) often have similarities in certain ways. Based on this observation we propose a search strategy, which generates sequence (ii) in such a way that similar states are evenly spread over the sequence. As a result, neighboring states in sequence (ii) can have a higher diversity. A pilot empirical study with Java Path Finder suggests that the proposed strategy can outperform random search in terms of creating equal or smaller number of states to detect a failure.",2013,0, 6484,Leveraging a Constraint Solver for Minimizing Test Suites,"Software (regression) testing is performed to detect errors as early as possible and guarantee that changes did not affect the system negatively. As test suites tend to grow over time, (re-)executing the entire suite becomes prohibitive. We propose an approach, RZoltar, addressing this issue: it encodes the relation between a test case and its testing requirements (code statements in this paper) in a so-called coverage matrix, maps this matrix into a set of constraints, and computes a collection of optimal minimal sets (maintaining the same coverage as the original suite) by leveraging a fast constraint solver. 
We show that RZoltar efficiently (0.95 seconds on average) finds a collection of test suites that significantly reduce the size (64.88% on average) maintaining the same fault detection (as initial test suite), while the well-known greedy approach needs 11.23 seconds on average to find just one solution.",2013,0, 6485,Taming Deadlocks in Multithreaded Programs,"Many real-world multithreaded programs contain deadlock bugs. These bugs should be detected and corrected. Many existing detection strategies are not consistently scalable to handle large-scale applications. Many existing dynamic confirmation strategies may not reveal detectable deadlocks with high probability. And many existing runtime deadlock-tolerant strategies may incur high runtime overhead and may not prevent the same deadlock from re-occurring. This paper presents the current progress of our project on dynamic deadlock detection, confirmation, and resolution. It also describes a test harness framework developed to support our proposed approach.",2013,0, 6486,Adaptive Combinatorial Testing,"Combinatorial Testing (CT) has been proven to be effective in detecting and locating the interaction triggered failure in the last 20 years. But CT still suffers from many challenges, such as modeling for CT, sampling mechanisms for test generation, applicability and effectiveness. To overcome these issues of CT, adaptive combinatorial testing (ACT) is proposed in this paper, which improves the traditional CT with a well established adaptive testing method as the counter part of adaptive control and aims to make CT more flexible and practical. ACT can significantly enhance testing quality, software reliability and support testing strategy adjustment dynamically. To support further investigation, a preliminary form of concrete strategy for ACT is given as a heuristic guideline.",2013,0, 6487,A Theoretical Study: The Impact of Cloning Failed Test Cases on the Effectiveness of Fault Localization,"Statistical fault localization techniques analyze the dynamic program information provided by executing a large number of test cases to predict fault positions in faulty programs. Related studies show that the extent of imbalance between the number of passed test cases and that of failed test cases may reduce the effectiveness of such techniques, while failed test cases can frequently be less than passed test cases in practice. In this study, we propose a strategy to generate balanced test suite by cloning the failed test cases for suitable number of times to catch up with the number of passed test cases. We further give an analysis to show that by carrying out the cloning the effectiveness of two representative fault localization techniques can be improved under certain conditions and impaired at no time.",2013,0, 6488,A Low-Cost Fault Tolerance Technique in Multi-media Applications through Configurability,"As chip densities and clock rates increases, processors are becoming more susceptible to transient faults that affect program correctness. Therefore, fault tolerance becomes increasingly important in computing system. Two major concerns of fault tolerance techniques are: a) improving system reliability by detecting transient errors and b) reducing performance overhead. In this study, we propose a configurable fault tolerance technique targeting both high reliability and low performance overhead for multi-media applications. 
The basic principle is applying different levels of fault tolerance configurability, which means that different degrees of fault tolerance are applied to different parts of the source code in multi-media applications. First, a primary analysis is performed on the source code level to classify the critical statements. Second, a fault injection process combined with a statistical analysis is used to assure the partition with regard to a confidence degree. Finally, checksum-based fault tolerance and instruction duplication are applied to critical statements, while no fault tolerance mechanism is applied to non-critical parts. Performance experiment results demonstrate that our configurable fault tolerance technique can lead to significant performance gains compared with duplicating all instructions. The fault coverage of this scheme is also evaluated. Fault injection results show that about 90% of outputs are correct at the application level with just 20% runtime overhead.",2013,0, 6489,An Approach to Reliable Software Architectures Evolution,"In recent years, reliability has become an increasingly important concern for software architectures. There exist many reliability models to predict software reliability at the architecture level, but few of them give a formal description of the software architecture. Although many formal approaches have been proposed to specify the software architecture, unfortunately, few of them pay attention to the important non-functional characteristic considered here, namely reliability. In this paper, we try to bridge the gap between software reliability models and software architecture description. Our work expands this idea in four directions. First, we propose a reliable hypergraph grammar by extending hyperedges. Then we describe the architecture structure by using our reliable hypergraph grammar. Meanwhile, through this reliable hypergraph grammar, architecture evolution is achieved by applying predefined transformation rules. Finally, we use a case study to illustrate how our approach works.",2013,0, 6490,An Approach for Fault Localization Based on Program Slicing and Bayesian,"The key issue in reducing software cost and improving software reliability is locating defective code precisely and efficiently. In this paper, we propose a fault localization method which combines program slicing and a Bayesian method. First, we perform dynamic program slicing according to the slicing criteria. Then, we calculate the posterior probability according to Bayesian theory. Finally, we take the posterior probability as the suspicion degree of the statement and rank the statements in descending order of suspicion degree. We apply our approach to six open-source programs. The results of the experiments show that the method we propose can improve the precision of fault localization to some extent.",2013,0, 6491,Abstraction Based Domain Ontology Extraction for Idea Creation,"Idea creation is a complicated process that requires plenty of supporting knowledge for a specific domain. Creativity is becoming an increasingly important feature of ideas nowadays, when information in various domains is exploding. Likewise, this raises the requirement for the amount of background domain knowledge. Many previous traditional approaches to domain-specific idea creation are performed by domain experts based on their personal knowledge and manual information research, which are considered to be time-consuming, uncreative processes that are prone to becoming out of date.
As a creative knowledge combination activity, idea creation requires a great amount of knowledge covering areas from research to industry in the application domains. The integration of domain ontology and idea creation is a trend in the creative computing research area. The introduction of a domain ontology based approach into idea creation can bridge the gap between knowledge collection and mental thought, and improve the efficiency and creativity of idea creation. In this paper, we propose an abstraction method to support one of the essential parts in this field - domain ontology extraction. Abstraction techniques are explored, classified, selected and integrated while elements of domain ontology are defined, including concepts and relations. Also, a framework and approach are specified for applying the method to domain ontology extraction, with designed abstraction rules to support its automation. A case study on an idea creation scenario in particular is presented to validate the feasibility and reusability of our proposed method. Furthermore, the mapping rules for the transformation from abstracted results to a domain ontology are discussed as an initial idea and further work.",2013,0, 6492,Supporting Reliability Modeling and Analysis for Component-Based Software Architecture: An XML-Based Approach,"With the recent development of Component-Based Software Engineering (CBSE), the importance of predicting non-functional properties, such as performance and reliability, has been widely acknowledged. A special problem in CBSE stems from its specific development process: software components should be specified and implemented independently from their later context to enable reuse. Thus, non-functional properties of components need to be specified at the abstract level of architecture. In this paper, we explore the possibility of supporting reliability modeling and analysis for component-based software architecture simultaneously by an XML-based approach. The contribution of this paper is twofold: first, we present an extension of xADL 3.0 that enables support for reliability modeling of software architectures; second, based on this extension, we propose a method for the generation of analysis-oriented models for reliability prediction. We demonstrate the applicability of our approach by modeling an example and conducting reliability prediction.",2013,0, 6493,Micro defect detection in solar cell wafer based on hybrid illumination and near-infrared optics,"In this paper, a defect detection system based on hybrid illumination and near-infrared optics is developed for solar cell wafers. It consists of geometrical camera optics, a hybrid illumination device (HID), near-infrared (NIR) camera optics, a machinery and control system, and defect detection algorithms and software. In particular, the illumination conditions in the HID are determined for reliable defect detection. Optimum illumination conditions in the HID are found with contrast analysis of RGB LED images, based on design of experiments. As a result, various surface micro defects are accurately detected. It is shown that the developed defect detection system can accurately detect micro defects of solar cell wafers.",2013,0, 6494,Automatic synthesis of modular connectors via composition of protocol mediation patterns,"Ubiquitous and pervasive computing promotes the creation of an environment where Networked Systems (NSs) eternally provide connectivity and services without requiring explicit awareness of the underlying communications and computing technologies.
In this context, achieving interoperability among heterogeneous NSs represents an important issue. In order to mediate the NSs interaction protocol and solve possible mismatches, connectors are often built. However, connector development is a never-ending and error-prone task and prevents the eternality of NSs. For this reason, in the literature, many approaches propose the automatic synthesis of connectors. However, solving the connector synthesis problem in general is hard and, when possible, it results in a monolithic connector hence preventing its evolution. In this paper, we define a method for the automatic synthesis of modular connectors, each of them expressed as the composition of independent mediators. A modular connector, as synthesized by our method, supports connector evolution and performs correct mediation.",2013,0, 6495,Drag-and-drop refactoring: Intuitive and efficient program transformation,"Refactoring is a disciplined technique for restructuring code to improve its readability and maintainability. Almost all modern integrated development environments (IDEs) offer built-in support for automated refactoring tools. However, the user interface for refactoring tools has remained largely unchanged from the menu and dialog approach introduced in the Smalltalk Refactoring Browser, the first automated refactoring tool, more than a decade ago. As the number of supported refactorings and their options increase, invoking and configuring these tools through the traditional methods have become increasingly unintuitive and inefficient. The contribution of this paper is a novel approach that eliminates the use of menus and dialogs altogether. We streamline the invocation and configuration process through direct manipulation of program elements via drag-and-drop. We implemented and evaluated this approach in our tool, Drag-and-Drop Refactoring (DNDRefactoring), which supports up to 12 of 23 refactorings in the Eclipse IDE. Empirical evaluation through surveys and controlled user studies demonstrates that our approach is intuitive, more efficient, and less error-prone compared to traditional methods available in IDEs today. Our results bolster the need for researchers and tool developers to rethink the design of future refactoring tools.",2013,0, 6496,Observable modified condition/decision coverage,"In many critical systems domains, test suite adequacy is currently measured using structural coverage metrics over the source code. Of particular interest is the modified condition/decision coverage (MC/DC) criterion required for, e.g., critical avionics systems. In previous investigations we have found that the efficacy of such test suites is highly dependent on the structure of the program under test and the choice of variables monitored by the oracle. MC/DC adequate tests would frequently exercise faulty code, but the effects of the faults would not propagate to the monitored oracle variables. In this report, we combine the MC/DC coverage metric with a notion of observability that helps ensure that the result of a fault encountered when covering a structural obligation propagates to a monitored variable; we term this new coverage criterion Observable MC/DC (OMC/DC). We hypothesize this path requirement will make structural coverage metrics 1.) more effective at revealing faults, 2.) more robust to changes in program structure, and 3.) more robust to the choice of variables monitored. 
We assess the efficacy and sensitivity to program structure of OMC/DC as compared to masking MC/DC using four subsystems from the civil avionics domain and the control logic of a microwave. We have found that test suites satisfying OMC/DC are significantly more effective than test suites satisfying MC/DC, revealing up to 88% more faults, and are less sensitive to program structure and the choice of monitored variables.",2013,0, 6497,What good are strong specifications?,"Experience with lightweight formal methods suggests that programmers are willing to write specification if it brings tangible benefits to their usual development activities. This paper considers stronger specifications and studies whether they can be deployed as an incremental practice that brings additional benefits without being unacceptably expensive. We introduce a methodology that extends Design by Contract to write strong specifications of functional properties in the form of preconditions, postconditions, and invariants. The methodology aims at being palatable to developers who are not fluent in formal techniques but are comfortable with writing simple specifications. We evaluate the cost and the benefits of using strong specifications by applying the methodology to testing data structure implementations written in Eiffel and C#. In our extensive experiments, testing against strong specifications detects twice as many bugs as standard contracts, with a reasonable overhead in terms of annotation burden and run-time performance while testing. In the wide spectrum of formal techniques for software quality, testing against strong specifications lies in a sweet spot with a favorable benefit to effort ratio.",2013,0, 6498,Data clone detection and visualization in spreadsheets,"Spreadsheets are widely used in industry: it is estimated that end-user programmers outnumber programmers by a factor 5. However, spreadsheets are error-prone, numerous companies have lost money because of spreadsheet errors. One of the causes for spreadsheet problems is the prevalence of copy-pasting. In this paper, we study this cloning in spreadsheets. Based on existing text-based clone detection algorithms, we have developed an algorithm to detect data clones in spreadsheets: formulas whose values are copied as plain text in a different location. To evaluate the usefulness of the proposed approach, we conducted two evaluations. A quantitative evaluation in which we analyzed the EUSES corpus and a qualitative evaluation consisting of two case studies. The results of the evaluation clearly indicate that 1) data clones are common, 2) data clones pose threats to spreadsheet quality and 3) our approach supports users in finding and resolving data clones.",2013,0, 6499,"How, and why, process metrics are better","Defect prediction techniques could potentially help us to focus quality-assurance efforts on the most defect-prone files. Modern statistical tools make it very easy to quickly build and deploy prediction models. Software metrics are at the heart of prediction models; understanding how and especially why different types of metrics are effective is very important for successful model deployment. In this paper we analyze the applicability and efficacy of process and code metrics from several different perspectives. We build many prediction models across 85 releases of 12 large open source projects to address the performance, stability, portability and stasis of different sets of metrics. 
Our results suggest that code metrics, despite widespread use in the defect prediction literature, are generally less useful than process metrics for prediction. Second, we find that code metrics have high stasis; they don't change very much from release to release. This leads to stagnation in the prediction models, leading to the same files being repeatedly predicted as defective; unfortunately, these recurringly defective files turn out to be comparatively less defect-dense.",2013,0, 6500,Lase: Locating and applying systematic edits by learning from examples,"Adding features and fixing bugs often require systematic edits that make similar, but not identical, changes to many code locations. Finding all the relevant locations and making the correct edits is a tedious and error-prone process for developers. This paper addresses both problems using edit scripts learned from multiple examples. We design and implement a tool called LASE that (1) creates a context-aware edit script from two or more examples, and uses the script to (2) automatically identify edit locations and to (3) transform the code. We evaluate LASE on an oracle test suite of systematic edits from Eclipse JDT and SWT. LASE finds edit locations with 99% precision and 89% recall, and transforms them with 91% accuracy. We also evaluate LASE on 37 example systematic edits from other open source programs and find LASE is accurate and effective. Furthermore, we confirmed with developers that LASE found edit locations which they missed. Our novel algorithm that learns from multiple examples is critical to achieving high precision and recall; edit scripts created from only one example produce too many false positives, false negatives, or both. Our results indicate that LASE should help developers in automating systematic editing. Whereas most prior work either suggests edit locations or performs simple edits, LASE is the first to do both for nontrivial program edits.",2013,0, 6501,Mining SQL injection and cross site scripting vulnerabilities using hybrid program analysis,"In previous work, we proposed a set of static attributes that characterize input validation and input sanitization code patterns. We showed that some of the proposed static attributes are significant predictors of SQL injection and cross site scripting vulnerabilities. Static attributes have the advantage of reflecting general properties of a program. Yet, dynamic attributes collected from execution traces may reflect more specific code characteristics that are complementary to static attributes. Hence, to improve our initial work, in this paper, we propose the use of dynamic attributes to complement static attributes in vulnerability prediction. Furthermore, since existing work relies on supervised learning, it is dependent on the availability of training data labeled with known vulnerabilities. This paper presents prediction models that are based on both classification and clustering in order to predict vulnerabilities, working in the presence or absence of labeled training data, respectively. In our experiments across six applications, our new supervised vulnerability predictors based on hybrid (static and dynamic) attributes achieved, on average, 90% recall and 85% precision, that is a sharp increase in recall when compared to static analysis-based predictions. 
Though not nearly as accurate, our unsupervised predictors based on clustering achieved, on average, 76% recall and 39% precision, thus suggesting they can be useful in the absence of labeled training data.",2013,0, 6502,Exploring the impact of inter-smell relations on software maintainability: An empirical study,"Code smells are indicators of issues with source code quality that may hinder evolution. While previous studies mainly focused on the effects of individual code smells on maintainability, we conjecture that not only the individual code smells but also the interactions between code smells affect maintenance. We empirically investigate the interactions amongst 12 code smells and analyze how those interactions relate to maintenance problems. Professional developers were hired for a period of four weeks to implement change requests on four medium-sized Java systems with known smells. On a daily basis, we recorded what specific problems they faced and which artifacts were associated with them. Code smells were automatically detected in the pre-maintenance versions of the systems and analyzed using Principal Component Analysis (PCA) to identify patterns of co-located code smells. Analysis of these factors with the observed maintenance problems revealed how smells that were co-located in the same artifact interacted with each other, and affected maintainability. Moreover, we found that code smell interactions occurred across coupled artifacts, with comparable negative effects as same-artifact co-location. We argue that future studies into the effects of code smells on maintainability should integrate dependency analysis in their process so that they can obtain a more complete understanding by including such coupled interactions.",2013,0, 6503,X-PERT: Accurate identification of cross-browser issues in web applications,"Due to the increasing popularity of web applications, and the number of browsers and platforms on which such applications can be executed, cross-browser incompatibilities (XBIs) are becoming a serious concern for organizations that develop web-based software. Most of the techniques for XBI detection developed to date are either manual, and thus costly and error-prone, or partial and imprecise, and thus prone to generating both false positives and false negatives. To address these limitations of existing techniques, we developed X-PERT, a new automated, precise, and comprehensive approach for XBI detection. X-PERT combines several new and existing differencing techniques and is based on our findings from an extensive study of XBIs in real-world web applications. The key strength of our approach is that it handles each aspects of a web application using the differencing technique that is best suited to accurately detect XBIs related to that aspect. Our empirical evaluation shows that X-PERT is effective in detecting real-world XBIs, improves on the state of the art, and can provide useful support to developers for the diagnosis and (eventually) elimination of XBIs.",2013,0, 6504,Measuring architecture quality by structure plus history analysis,"This case study combines known software structure and revision history analysis techniques, in known and new ways, to predict bug-related change frequency, and uncover architecture-related risks in an agile industrial software development project. We applied a suite of structure and history measures and statistically analyzed the correlations between them. 
We detected architecture issues by identifying outliers in the distributions of measured values and investigating the architectural significance of the associated classes. We used a clustering method to identify sets of files that often change together without being structurally close together, investigating whether architecture issues were among the root causes. The development team confirmed that the identified clusters reflected significant architectural violations, unstable key interfaces, and important undocumented assumptions shared between modules. The combined structure diagrams and history data justified a refactoring proposal that was accepted by the project manager and implemented.",2013,0, 6505,MIDAS: A design quality assessment method for industrial software,"Siemens Corporate Development Center Asia Australia (CT DC AA) develops and maintains software applications for the Industry, Energy, Healthcare, and Infrastructure & Cities sectors of Siemens. The critical nature of these applications necessitates a high level of software design quality. A survey of software architects indicated a low level of satisfaction with existing design assessment practices in CT DC AA and highlighted several shortcomings of existing practices. To address this, we have developed a design assessment method called MIDAS (Method for Intensive Design ASsessments). MIDAS is an expert-based method wherein manual assessment of design quality by experts is directed by the systematic application of design analysis tools through the use of a three view-model consisting of design principles, project-specific constraints, and an ility-based quality model. In this paper, we describe the motivation for MIDAS, its design, and its application to three projects in CT DC AA. We believe that the insights from our MIDAS experience not only provide useful pointers to other organizations and practitioners looking to assess and improve software design quality but also suggest research questions for the software engineering community to explore.",2013,0, 6506,Evaluating usefulness of software metrics: An industrial experience report,"A wide range of software metrics targeting various abstraction levels and quality attributes have been proposed by the research community. For many of these metrics the evaluation consists of verifying the mathematical properties of the metric, investigating the behavior of the metric for a number of open-source systems or comparing the value of the metric against other metrics quantifying related quality attributes. Unfortunately, a structural analysis of the usefulness of metrics in a real-world evaluation setting is often missing. Such an evaluation is important to understand the situations in which a metric can be applied, to identify areas of possible improvements, to explore general problems detected by the metrics and to define generally applicable solution strategies. In this paper we execute such an analysis for two architecture level metrics, Component Balance and Dependency Profiles, by analyzing the challenges involved in applying these metrics in an industrial setting. In addition, we explore the usefulness of the metrics by conducting semi-structured interviews with experienced assessors. 
We document the lessons learned both for the application of these specific metrics and for the method of evaluating metrics in practice.",2013,0, 6507,User involvement in software evolution practice: A case study,"User involvement in software engineering has been researched over the last three decades. However, existing studies concentrate mainly on early phases of user-centered design projects, while little is known about how professionals work with post-deployment end-user feedback. In this paper we report on an empirical case study that explores the current practice of user involvement during software evolution. We found that user feedback contains important information for developers and helps to improve software quality and to identify missing features. In order to assess its relevance and potential impact, developers need to analyze the gathered feedback, which is mostly accomplished manually and consequently requires high effort. Overall, our results show the need for tool support to consolidate, structure, analyze, and track user feedback, particularly when feedback volume is high. Our findings call for a hypothesis-driven analysis of user feedback to establish the foundations for future user feedback tools.",2013,0, 6508,Predicting bug-fixing time: An empirical study of commercial software projects,"For a large and evolving software system, the project team could receive many bug reports over a long period of time. It is important to achieve a quantitative understanding of bug-fixing time. The ability to predict bug-fixing time can help a project team better estimate software maintenance efforts and better manage software projects. In this paper, we perform an empirical study of bug-fixing time for three CA Technologies projects. We propose a Markov-based method for predicting the number of bugs that will be fixed in the future. For a given number of defects, we propose a method for estimating the total amount of time required to fix them based on the empirical distribution of bug-fixing time derived from historical data. For a given bug report, we can also construct a classification model to predict a slow or quick fix (e.g., below or above a time threshold). We evaluate our methods using real maintenance data from three CA Technologies projects. The results show that the proposed methods are effective.",2013,0, 6509,Towards automated testing and fixing of re-engineered Feature Models,"Mass customization of software products requires their efficient tailoring, performed through the combination of features. Such features and the constraints linking them can be represented by Feature Models (FMs), allowing formal analysis, derivation of specific variants and interactive configuration. Since they are seldom present in existing systems, techniques to re-engineer FMs have been proposed. These are nevertheless error-prone and require human intervention. This paper introduces an automated search-based process to test and fix FMs so that they adequately represent actual products. Preliminary evaluation on the Linux kernel FM reveals erroneous FM constraints and a significant reduction of the inconsistencies.",2013,0, 6510,LambdaFicator: From imperative to functional programming through automated refactoring,"Java 8 introduces two functional features: lambda expressions and functional operations like map or filter that apply a lambda expression over the elements of a Collection.
Refactoring existing code to use these new features enables explicit but unobtrusive parallelism and makes the code more succinct. However, refactoring is tedious (it requires changing many lines of code) and error-prone (the programmer must reason about the control-flow, data-flow, and side-effects). Fortunately, these refactorings can be automated. We present LambdaFicator, a tool which automates two refactorings. The first refactoring converts anonymous inner classes to lambda expressions. The second refactoring converts for loops that iterate over Collections to functional operations that use lambda expressions. In 9 open-source projects we have applied these two refactorings 1263 and 1595 times, respectively. The results show that LambdaFicator is useful. A video highlighting the main features can be found at: http://www.youtube.com/watch?v=EIyAflgHVpU.",2013,0, 6511,Query quality prediction and reformulation for source code search: The Refoqus tool,"Developers search source code frequently during their daily tasks, to find pieces of code to reuse, to find where to implement changes, etc. Code search based on text retrieval (TR) techniques has been widely used in the software engineering community during the past decade. The accuracy of the TR-based search results depends largely on the quality of the query used. We introduce Refoqus, an Eclipse plugin which is able to automatically detect the quality of a text retrieval query and to propose reformulations for it, when needed, in order to improve the results of TR-based code search. A video of Refoqus is found online at http://www.youtube.com/watch?v=UQlWGiauyk4.",2013,0, 6512,LASE: An example-based program transformation tool for locating and applying systematic edits,"Adding features and fixing bugs in software often require systematic edits which are similar, but not identical, changes to many code locations. Finding all edit locations and editing them correctly is tedious and error-prone. In this paper, we demonstrate an Eclipse plug-in called Lase that (1) creates context-aware edit scripts from two or more examples, and uses these scripts to (2) automatically identify edit locations and (3) transform the code. In Lase, users can view syntactic edit operations and corresponding context for each input example. They can also choose a different subset of the examples to adjust the abstraction level of inferred edits. When Lase locates target methods matching the inferred edit context and suggests customized edits, users can review and correct LASE's edit suggestion. These features can reduce developers' burden in repetitively applying similar edits to different methods. The tool's video demonstration is available at https://www.youtube.com/ watch?v=npDqMVP2e9Q.",2013,0, 6513,An observable and controllable testing framework for modern systems,"Modern computer systems are prone to various classes of runtime faults due to their reliance on features such as concurrency and peripheral devices such as sensors. Testing remains a common method for uncovering faults in these systems. However, commonly used testing techniques that execute the program with test inputs and inspect program outputs to detect failures are often ineffective. To test for concurrency and temporal faults, test engineers need to be able to observe faults as they occur instead of relying on observable incorrect outputs. Furthermore, they need to be able to control thread or process interleavings so that they are deterministic. 
This research will provide a framework that allows engineers to effectively test for subtle and intermittent faults in modern systems by providing them with greater observability and controllability.",2013,0, 6514,Fault comprehension for concurrent programs,"Concurrency bugs are difficult to find because they occur with specific memory-access orderings between threads. Traditional bug-finding techniques for concurrent programs have focused on detecting raw-memory accesses representing the bugs, and they do not identify memory accesses that are responsible for the same bug. To address these limitations, we present an approach that uses memory-access patterns and their suspicious-ness scores, which indicate how likely they are to be buggy, and clusters the patterns responsible for the same bug. The evaluation on our prototype shows that our approach is effective in handling multiple concurrency bugs and in clustering patterns for the same bugs, which improves understanding of the bugs.",2013,0, 6515,Studying the effect of co-change dispersion on software quality,"Software change history plays an important role in measuring software quality and predicting defects. Co-change metrics such as number of files changed together has been used as a predictor of bugs. In this study, we further investigate the impact of specific characteristics of co-change dispersion on software quality. Using statistical regression models we show that co-changes that include files from different subsystems result in more bugs than co-changes that include files only from the same subsystem. This can be used to improve bug prediction models based on co-changes.",2013,0, 6516,Changeset based developer communication to detect software failures,"As software systems get more complex, the companies developing them consist of larger teams and therefore results in more complex communication artifacts. As these software systems grow, so does the impact of every action to the product. To prevent software failure created by this growth and complexity, companies need to find more efficient and effective ways to communicate. The method used in this paper presents developer communication in the form of social networks of which have properties that can be used to detect software failures.",2013,0, 6517,Data science for software engineering,"Target audience: Software practitioners and researchers wanting to understand the state of the art in using data science for software engineering (SE). Content: In the age of big data, data science (the knowledge of deriving meaningful outcomes from data) is an essential skill that should be equipped by software engineers. It can be used to predict useful information on new projects based on completed projects. This tutorial offers core insights about the state-of-the-art in this important field. What participants will learn: Before data science: this tutorial discusses the tasks needed to deploy machine-learning algorithms to organizations (Part 1: Organization Issues). During data science: from discretization to clustering to dichotomization and statistical analysis. And the rest: When local data is scarce, we show how to adapt data from other organizations to local problems. When privacy concerns block access, we show how to privatize data while still being able to mine it. When working with data of dubious quality, we show how to prune spurious information. When data or models seem too complex, we show how to simplify data mining results. 
When data is too scarce to support intricate models, we show methods for generating predictions. When the world changes, and old models need to be updated, we show how to handle those updates. When the effect is too complex for one model, we show how to reason across ensembles of models. Pre-requisites: This tutorial makes minimal use of maths of advanced algorithms and would be understandable by developers and technical managers.",2013,0, 6518,4th International workshop on managing technical debt (MTD 2013),"Although now 20 years old, only recently has the concept of technical debt gained some momentum and credibility in the software engineering community. The goal of this fourth workshop on managing technical debt is to engage researchers and practitioners in exchanging ideas on viable research directions and on how to put the concept to actual use, beyond its usage as a rhetorical instrument to discuss the fate and ailments of software development projects. The workshop participants presented and discussed approaches to detect, analyze, visualize, and manage technical debt, in its various forms, on large software-intensive system developments.",2013,0, 6519,Adding automatic dependency processing to Makefile-based build systems with amake,"This paper explains how to improve the quality of an existing Makefile-based build system, using a new variant of Make. Ordinary file-oriented dependencies are detected, recorded, and monitored automatically. Checksums are compared, rather than timestamps. Other important dependencies are also processed automatically. This provides an accurate, compact, and low-maintenance build system. Experiences with the Linux kernel/driver build system are described.",2013,0, 6520,Kanbanize the release engineering process,"Release management process must be adapted when IT organizations scale up to avoid discontinuity at the release flow and to preserve the software quality. This paper reports on means to improve the release process in a large-scale project. It discusses the rationale behind adopting Kanban principles for release management, how to implement these principles within a transitional approach, and what are the benefits. The paper discusses the post-transitional product to assess the status of the outcomes.",2013,0, 6521,An analytical model to evaluate reliability of cloud computing systems in the presence of QoS requirements,"Cloud computing is widely referred as the next generation of computing systems. Reliability is a key metric for assessing performance in such systems. Redundancy and diversity are prevalent approaches to enhance reliability in Cloud Computing Systems (CCS). Proper resource allocation is an alternative approach to reliability improvement in such systems. In contrast to redundancy, appropriate resource allocation can improve system reliability without imposing extra cost. On the other hand, contemplating reliability irrespective of Quality of Service (QoS) requirements may be undesirable in most of CCSs. In this paper, we focus on resource allocation approach and introduce an analytical model in order to analyze system reliability besides considering application and resource constraints. Task precedence structure and QoS are taken into account as the application constraints. Memory and storage limitation of each server as well as maximum communication load on each link are considered as the principle resource constraints. 
In addition, the effect of network topology on system reliability is discussed in detail and the model is extended to cover various network topologies.",2013,0, 6522,Modular AUV system for Sea Water Quality Monitoring and Management,"The sustained and cost-effective monitoring of the water quality within European coastal areas is of growing importance in view of the upcoming European marine and maritime directives, i.e. the increased industrial use of the marine environment. Such monitoring needs mechanisms/systems to detect the water quality in a large sea area at different depths in real time. This paper presents a system for the automated detection and analysis of water quality parameters using an autonomous underwater vehicle. The analysis of the discharge of nitrate into Norwegian fjords near aqua farms is one of the main application fields of this AUV system. As the carrier platform, the AUV CWolf from the Fraunhofer IOSB-AST will be used, which is perfectly suited through its modular payload concept. The mission task and the integration of the payload unit, which includes the sensor module and the scientific and measurement computer, into the AUV carrier platform will be described. Some practice-oriented information about the software and interface concept, the function of the several software modules, and the test platform with its several test levels for testing every module will be discussed.",2013,0, 6523,Statistical fault localization in decision support system based on probability distribution criterion,"Finding the location of a fault in code is an important research and practical problem, which often requires much time and manual effort. To automate this time-consuming task, a class of predicate-based statistical fault localization techniques has been proposed, which test the similarity of dynamic predicate spectra between non-failed runs and failed runs and suggest suspicious predicates to the programmers to facilitate the identification of faults. However, with the existence of coincidental correctness, how to efficiently and effectively compare the difference of predicate spectra distributions has become a crucial problem to be solved. In this paper, we make use of a probability distribution criterion in developing a new statistical fault localization algorithm. Instead of using a geometric distance, it calculates the overlap of dynamic predicate spectra in two communities (non-failed runs and failed runs) to evaluate the difference. Empirical results show that our technique outperforms some representative predicate-based fault localization techniques for localizing faults in most subject programs of the Siemens suite and the space program. To facilitate the debugging process and provide visual help to the debugger, we also designed a system software prototype, which integrates many recent fault localization algorithms, including the one proposed in this paper.",2013,0, 6524,Feature interaction testing of variability intensive systems,Testing variability intensive systems is a formidable task due to the combinatorial explosion of feature interactions that result from all variations. We developed and validated an approach to combinatorial test generation using Multi-Perspective Feature Models (MPFM). MPFMs are a set of feature models created to achieve Separation of Concerns within the model. This approach improves test coverage of variability.
Results from an experiment on a real-life case show that up to 37% of the test effort could be reduced and up to 79% defects from the live system could be detected. We discuss the learning from this experiment and further research potential in testing variability intensive systems.,2013,0, 6525,Managing technical debt: An industrial case study,"Technical debt is the consequence of trade-offs made during software development to ensure speedy releases. The research community lacks rigorously evaluated guidelines to help practitioners characterize, manage and prioritize debt. This paper describes a study conducted with an industrial partner during their implementation of Agile development practices for a large software development division within the company. The report contains our initial findings based on ethnographic observations and semi-structured interviews. The goal is to identify the best practices regarding managing technical debt so that the researchers and the practitioners can further evaluate these practices to extend their knowledge of the technical debt metaphor. We determined that the developers considered their own taxonomy of technical debt based on the type of work they were assigned and their personal understanding of the term. Despite management's high-level categories, the developers mostly considered design debt, testing debt and defect debt. In addition to developers having their own taxonomy, assigning dedicated teams for technical debt reduction and allowing other teams about 20% of time per sprint for debt reduction are good initiatives towards lowering technical debt. While technical debt has become a well-regarded concept in the Agile community, further empirical evaluation is needed to assess how to properly apply the concept for various development organizations.",2013,0, 6526,Mapping architectural decay instances to dependency models,"The architectures of software systems tend to drift or erode as they are maintained and evolved. These systems often develop architectural decay instances, which are instances of design decisions that negatively impact a system's lifecycle properties and are the analog to code-level decay instances that are potential targets for refactoring. While code-level decay instances are based on source-level constructs, architectural decay instances are based on higher levels of abstractions, such as components and connectors, and related concepts, such as concerns. Unlike code-level decay instances, architectural decay usually has more significant consequences. Not being able to detect or address architectural decay in time incurs architecture debt that may result in a higher penalty in terms of quality and maintainability (interest) over time. To facilitate architecture debt detection, in this paper, we demonstrate the possibility of transforming architectural models and concerns into an extended augmented constraint network (EACN), which can uniformly model the constraints among design decisions and environmental conditions. From an ACN, a pairwise-dependency relation (PWDR) can be derived, which, in turn, can be used to automatically and uniformly detect architectural decay instances.",2013,0, 6527,Preliminary results of ON/OFF detection using an integrated system for Parkinson's disease monitoring,"This paper describes the experimental set up of a system composed by a set of wearable sensors devices for the recording of the motion signals and software algorithms for the signal analysis. 
This system is able to automatically detect and assess the severity of bradykinesia, tremor, dyskinesia and akinesia motor symptoms. Based on the assessment of the akinesia, the ON-OFF status of the patient is determined for each moment. The assessment performed through the automatic evaluation of the akinesia is compared with the status reported by the patients in their diaries. Preliminary results with a total recording period of 32 hours with two PD patients are presented, where a good correspondence (88.2 +/- 3.7 %) was observed. Best (93.7%) and worst (87%) correlation results are illustrated, together with the analysis of the automatic assessment of the akinesia symptom leading to the status determination. The results obtained are promising, and if confirmed with further data, this automatic assessment of PD motor symptoms will lead to a better adjustment of medication dosages and timing, cost savings and an improved quality of life of the patients.",2013,0, 6528,A laboratory instrument for characterizing multiple microelectrodes,"The task of chronic monitoring and characterizing a large number of microelectrodes can be tedious and error prone, especially if needed to be done in vivo. This paper presents a lab instrument that automates the measurement and data processing, allowing for large numbers of electrodes to be characterized within a short time period. A version 1.0 of the Electrode Analyser System (EAS 1.0) has already been used in various neural engineering laboratories, as well by one electrode array manufacturer. The goal of the current work is to implement the EAS 2.0 system that provides improved performance beyond that of the 1.0 system, as well as reducing size and cost.",2013,0, 6529,A new device for the care of Congenital Central Hypoventilation Syndrome patients during sleep,"Congenital Central Hypoventilation Syndrome (CCHS) is a genetic disease that causes an autonomous nervous system dysregulation. Patients are unable to have a correct ventilation, especially during sleep, facing risk of death. Therefore, most of them are mechanically ventilated during night and their blood oxygenation is monitored, while a supervisor keeps watch over them. If low oxygen levels are detected by the pulse-oximeter, an alarm fires; the supervisor deals with the situation and, if there is neither a technical problem nor a false alarm, wakes the subject, as CCHS patients usually recover from hypoxia when roused from sleep. During a single night multiple alarms may occur, causing fractioned sleep for the subject and a lasting state of anxiety for supervisors. In this work we introduce a novel device that can: acquire realtime data from a pulse-oximeter; provide a multisensory stimulation (e.g. by means of an air fan, a vibrating pillow, and a buzzer), if saturation falls under a threshold; stop the stimulation if oxygenation recovers; wake up the patient or the supervisor if the suffering state lasts beyond a safe interval. The main aim of this work is to lessen the number of awakenings, improving the quality of sleep and life for patients and their supervisors, and to increase young and adult CCHS patients autonomy. Initial testing of the device on a CCHS patient and his supervisor has provided encouraging preliminary results.",2013,0, 6530,Saccadic Vector Optokinetic Perimetry (SVOP): A novel technique for automated static perimetry in children using eye tracking,"Perimetry is essential for identifying visual field defects due to disorders of the eye and brain. 
However, young children are often unable to reliably perform the preferred method of visual field assessment known as automated static perimetry (ASP). This paper introduces a novel method of ASP specifically developed for children called Saccadic Vector Optokinetic Perimetry (SVOP). SVOP uses eye tracking to detect the natural saccadic eye response of gaze orientation towards visual field stimuli if they are seen. In this paper, the direction and magnitude of a sample of subject gaze responses to visual field stimuli are used to construct a software decision algorithm for use in SVOP. SVOP was clinically evaluated in a group of 24 subjects, comprising children and adults, with and without visual field defects, by comparison with an equivalent test on the Humphrey Field Analyser (HFA). SVOP provides promising visual field test results when compared with the reference HFA test, and has proven extremely useful in detecting visual field defects in children unable to perform traditional ASP.",2013,0, 6531,A fast recognition algorithm for detecting common broadcasting faults,"Before a video is broadcast, it should be checked to find out whether it has broadcasting faults. Broadcasting faults mainly refer to black field, static frame, color bar, mute, and volume excess. In this paper, we propose a fast recognition algorithm based on MATLAB to detect common broadcasting faults. We used the M language to build a software platform which can identify static frames, black fields, and color bars in the video sequence, and mute and volume excess in the audio sequence. Experimental evaluation shows that our approach performs well in automatic broadcasting fault detection.",2013,0, 6532,Avoiding state explosion problem of generated AJAX web application state machine using BDD,"There is a growing tendency of users to use web applications in place of desktop applications because of technological advancements such as AJAX. AJAX is used to build single-page web applications because content and structure can be changed using AJAX features like asynchronous communication and run-time DOM manipulation. To understand and analyze the extreme dynamism of web applications, we implemented a tool which generates a state machine model of the dynamic behavior of a user session. In this research paper, we validated and evaluated the efficiency of the generated model in detecting fault-embedded areas. However, the state machine can be huge and unbounded and may be hit by the state explosion problem for a large number of user session traces and for extreme dynamism. In this paper, to avoid this problem, we used binary decision diagrams, a model checking technique, to reduce the state space at the time of state machine generation. Finally, we are able to control the size of the generated state machine without affecting the fault-embedded areas.",2013,0, 6533,Runtime verification and reflection for wireless sensor networks,"The paper proposes to re-visit a light-weight verification technique called runtime verification in the context of wireless sensor networks. The authors believe that especially an extension of runtime verification which is called runtime reflection and which is not only able to detect faults, but also to diagnose and even repair them, can be an important step towards robust, self-organizing and self-healing WSNs.
They present the basic idea of runtime reflection and possible applications.",2013,0, 6534,Sens4U: Wireless sensor network applications for environment monitoring made easy,"The development of wireless sensor network (WSN) or cyber physical systems (CPS) applications is a complex and error prone task. This is due to the huge number of possible combinations of protocols and other software modules to choose from. Additionally, testing of the chosen configuration and the individual software modules is not a trivial task, especially in the case where they are all implemented from scratch. The aim of the Sens4U methodology we present in this paper is to simplify and possibly automate the process of building a WSN application and to simplify its testing. The main idea of our approach is to exploit the modularity of the available libraries in order to speed up application development done by non-WSN-experts and to solve real-life problems. The proposed abstraction is very powerful: the modules provide specific functionalities via defined interfaces and can be connected through these interfaces according to the application requirements, to create the desired minimum target configuration. The modularity improves the testability and reuse of components and thus their reliability and, as a result, the reliability of the target configurations. Further, the Sens4U approach goes beyond pure software generation and supports creating software and hardware configurations. We are currently focusing on environment monitoring scenarios in order to analyze this problem area in the semi-automatic computer aided application logic generalization process. This paper presents the general concept as well as the tool chain that supports the application development done by non-WSN-experts.",2013,0, 6535,How much really changes? A case study of firefox version evolution using a clone detector,"This paper focuses on the applicability of clone detectors for system evolution understanding. Specifically, it is a case study of Firefox, for which the development release cycle changed from a slow release cycle to a fast release cycle two years ago. Since the transition of the release cycle, three times more versions of the software were deployed. To understand whether or not the changes between the newer versions are as significant as the changes in the older versions, we measured the similarity between consecutive versions. We analyzed 82 MLOC of C/C++ code to compute the overall change distribution between all existing major versions of Firefox. The results indicate a significant decrease in the overall difference between many versions in the fast release cycle. We discuss the results and highlight how differently the versions have evolved in their respective release cycles. We also relate our results to other results assessing potential changes in the quality of Firefox. We conclude the paper by raising questions on the impact of a fast release cycle.",2013,0, 6536,Improving program comprehension by answering questions (keynote),"My Natural Programming Project is working on making software development easier to learn, more effective, and less error prone. An important focus over the last few years has been to discover what are the hard-to-answer questions that developers ask while they are trying to comprehend their programs, and then to develop tools to help answer those questions.
For example, when studying programmers working on everyday bugs, we found that they continuously ask Why and Why Not questions as they try to comprehend what happened. We developed the Whyline debugging tool, which allows programmers to directly ask these questions of their programs and get a visualization of the answers. In a small lab study, Whyline increased productivity by a factor of about two. We studied professional programmers trying to understand unfamiliar code and identified over 100 questions they identified as hard-to-answer. In particular, we saw that programmers frequently had specific questions about the feasible execution paths, so we developed a new visualization tool to directly present this information. When trying to use unfamiliar APIs, such as the Java SDK and the SAP eSOA APIs, we discovered some common patterns that make programmers up to 10 times slower in finding and understanding how to use the appropriate methods, so we developed new tools to assist them. This talk will provide an overview of our studies and resulting tools that address program comprehension issues.",2013,0, 6537,SArF map: Visualizing software architecture from feature and layer viewpoints,"To facilitate understanding the architecture of a software system, we developed SArF Map technique that visualizes software architecture from feature and layer viewpoints using a city metaphor. SArF Map visualizes implicit software features using our previous study, SArF dependency-based software clustering algorithm. Since features are high-level abstraction units of software, a generated map can be directly used for high-level decision making such as reuse and also for communications between developers and non-developer stakeholders. In SArF Map, each feature is visualized as a city block, and classes in the feature are laid out as buildings reflecting their software layer. Relevance between features is represented as streets. Dependency links are visualized lucidly. Through open source and industrial case studies, we show that the architecture of the target systems can be easily overviewed and that the quality of their packaging designs can be quickly assessed.",2013,0, 6538,Quality analysis of source code comments,"A significant amount of source code in software systems consists of comments, i. e., parts of the code which are ignored by the compiler. Comments in code represent a main source for system documentation and are hence key for source code understanding with respect to development and maintenance. Although many software developers consider comments to be crucial for program understanding, existing approaches for software quality analysis ignore system commenting or make only quantitative claims. Hence, current quality analyzes do not take a significant part of the software into account. In this work, we present a first detailed approach for quality analysis and assessment of code comments. The approach provides a model for comment quality which is based on different comment categories. To categorize comments, we use machine learning on Java and C/C++ programs. The model comprises different quality aspects: by providing metrics tailored to suit specific categories, we show how quality aspects of the model can be assessed. 
The validity of the metrics is evaluated with a survey among 16 experienced software developers, and a case study demonstrates the relevance of the metrics in practice.",2013,0, 6539,QoS support in routing protocols for MANET,"This paper deals with the characteristics of mobile ad-hoc networks and the issue of quality of service assurance in these networks. The work is focused mainly on the routing protocol AODV and its ability to provide different processing probabilities for different types of network traffic. Firstly, the development of the MANET simulation model and the implementation of the AODV protocol are described. The Network Simulator 3 was used as the key software tool for this purpose. The analysis of the efficiency of the QoS mechanism implemented into the AODV protocol and the final evaluation are presented at the end of this paper.",2013,0, 6540,Copy-move forgery detection in images via 2D-Fourier Transform,"Digital images have been widely used in many applications. However, digital image forgery has already become a serious problem due to the rapid development of powerful image editing software. One of the most commonly used forgery techniques is copy-move forgery, which copies a region of an image and pastes it onto another region of the same image. In recent years, most techniques aim to detect such tampering. Different feature extraction methods have been used to improve the capability of the detection algorithm. In this work, we used the two dimensional Fourier Transform (2D-FT) to extract features from the blocks. A predetermined number of Fourier coefficients holds information about the blocks. At the final stage, a similarity search between adjacent feature vectors is performed to determine the forgery. Experimental results show that the proposed method can detect duplicated regions with a high accuracy rate even if the image is distorted with a blurring mask or compressed with different JPEG quality factors. The dimension of the feature vector is also lower than that of other methods in the literature. Thus, the method provides a smaller feature vector with high accuracy rates. The proposed method also detects multiple copy-move forgeries, as shown in the results.",2013,0, 6541,Range Analyzer: An Automatic Tool for Arithmetic Overflow Detection in Model-Based Development,"Airborne software is considered safety critical, since a defect in its execution can lead to economic consequences and the loss of human lives. In order to increase the correctness of embedded software implementing system functions, compliance with the guidelines DO-178C and DO-331 is used to demonstrate that the software was developed according to requirements. Software verification is one of the processes to be performed during the software development life cycle, analyzing the files generated during the development process and looking for defects that could have been introduced. The absence of arithmetic overflow in the variables is a condition to be proved by the verification team because, when there is an overflow, the software calculations can no longer be trusted. In order to detect this situation, some tools may be used to check source code or to perform such analysis in model-based software design. The aim of this paper is to present an overview of the airborne software approval process, focusing on model-based development, and to introduce a preliminary version of the Range Analyzer, a tool with the capability to detect arithmetic overflow occurrences in a model within a SCADE Suite project.
This proposed tool is an implementation of a range propagation algorithm, modified for software analysis needs.",2013,0, 6542,The Test Path Generation from State-Based Polymorphic Interaction Graph for Object-Oriented Software,"Successful integration of classes makes functionalities work correctly in software. An individual class usually functions correctly, but when classes are integrated several unexpected faults may occur. In Object-Oriented software it is particularly hard to detect faults when classes are integrated because of inheritance, polymorphism and dynamic binding. Software designers use the Unified Modeling Language (UML) to create an abstract system scenario and to visualize the system's architecture. A lot of research reveals that UML is not only for software design, but also for software testing. More and more researchers have realized that UML models can be a source for Object-Oriented software testing. This paper proposes an intermediate test model called the Polymorphism State SEquence Test Model (PSSETM), which is generated from sequence diagrams, class diagrams and state-charts for integration testing. The example of a Bookstore System shows that the PSSETM test model is able to exhibit the possible states of objects and the polymorphic information of classes. Based on the PSSETM test model, various coverage criteria are defined to generate valid test paths to enhance testing of the interaction among classes and the polymorphism of classes.",2013,0, 6543,Towards Test Focus Selection for Integration Testing Using Method Level Software Metrics,"The aim of integration testing is to uncover errors in the interactions between system modules. However, it is generally impossible to test all the interactions between modules because of time and cost constraints. Thus, it is important to focus the testing on the connections presumed to be more error-prone. The goal of this research is to guide the quality assurance team on where in a software system to focus when they perform integration testing, in order to save time and resources. In this work, we use method level metrics that capture both dependencies and the internal complexity of methods. In addition, we build a tool that calculates the metrics automatically. We also propose an approach to select the test focus in integration testing. The main goal is to reduce the number of test cases needed while still detecting at least 80% of integration errors. We conducted an experimental study on several Java applications taken from different domains. An error seeding technique has been used for evaluation. The experimental results showed that our proposed approach is very effective for selecting the test focus in integration testing. It considerably reduces the number of required test cases while at the same time detecting at least 80% of integration errors.",2013,0, 6544,Automated Test Data Generation for Coupling Based Integration Testing of Object Oriented Programs Using Evolutionary Approaches,"Software testing is one of the most important phases in the development of software. Software testing detects faults in software and ensures quality. Software testing can be performed at the unit, integration, or system level. Integration testing tests the interactions of different components, when they are integrated together in a specific application, for the smooth functionality of the software system. Coupling based testing is an integration testing approach that is based upon coupling relationships that exist among different variables across different call sites in functions.
Different types of coupling exist between variables across different call sites. Up until now, test data generation approaches have dealt only with unit-level testing. There is no work on test data generation for coupling-based integration testing. In this paper, we propose a novel approach for automated test data generation for coupling-based integration testing of object oriented programs using a genetic algorithm. Our approach takes the coupling path, containing different sub-paths, as input and generates the test data using a genetic algorithm. We have implemented a prototype tool, E-Coup, in Java and successfully performed different experiments for the generation of test data. In experiments with this tool, our approach achieves much better results compared to random testing.",2013,0, 6545,Collaborative bug triaging using textual similarities and change set analysis,"Bug triaging assigns a bug report, which is also known as a work item, an issue, a task or simply a bug, to the most appropriate software developer for fixing or implementing it. However, this task is tedious, time-consuming and error-prone if not supported by effective means. Current techniques either use information retrieval and machine learning to find the most similar bugs already fixed and recommend expert developers, or they analyze change information stemming from source code to propose expert bug solvers. Neither technique combines textual similarity with change set analysis and thereby exploits the potential of the interlinking between bug reports and change sets. In this paper, we present our approach to identify potential experts by identifying similar bug reports and analyzing the associated change sets. Studies have shown that effective bug triaging is done collaboratively in a meeting, as it requires the coordination of multiple individuals, the understanding of the project context and the understanding of the specific work practices. Therefore, we implemented our approach on a multi-touch table to allow multiple stakeholders to interact simultaneously in the bug triaging and to foster their collaboration. In the current stage of our experiments we have experienced that the expert recommendations are more specific and useful when the rationale behind the expert selection is also presented to the users.",2013,0, 6546,What is social debt in software engineering?,"Social debt in software engineering informally refers to unforeseen project cost connected to a suboptimal development community. The causes of suboptimal development communities can be many, ranging from global distance to organisational barriers to wrong or uninformed socio-technical decisions (i.e., decisions that influence both social and technical aspects of software development). Much like technical debt, social debt impacts heavily on software development success. We argue that, to ensure quality software engineering, practitioners should be provided with mechanisms to detect and manage the social debt connected to their development communities. This paper defines and elaborates on social debt, pointing out relevant research paths. We illustrate social debt by comparison with technical debt and discuss common real-life scenarios that exhibit sub-optimal development communities.",2013,0, 6547,Meeting intensity as an indicator for project pressure: Exploring meeting profiles,"Meetings are hot spots of communication and collaboration in software development teams.
Both distributed and co-located teams need to meet for coordination, communication, and collaboration. It is difficult to assess the quality of these three crucial aspects, or the social effectiveness and impact of a meeting: personalities, psychological and professional aspects interact. It is, therefore, challenging to identify emerging communication problems or to improve collaboration by studying a wealth of interrelated details of project meetings. However, it is relatively easy to count meetings, and to measure when and how long they took place. This is objective information, it does not violate the privacy of participants, and the data might even be retrieved from project calendars automatically. In an exploratory study, we observed 14 student teams working on comparable four-month projects. Among many other aspects, we counted and measured meetings. In this contribution, we compare the meeting profiles qualitatively, and derive a number of hypotheses relevant for software projects.",2013,0, 6548,Addressing the QoS drift in specification models of self-adaptive service-based systems,"Analysts elaborate precise and verifiable specification models, using as inputs non-functional requirements and assumptions drawn from the current environment studied at design time. As most real world applications exist in dynamic environments, there have recently been research efforts towards the design of software systems that use their specification models during runtime. The main idea is that software systems should endeavor to keep their requirements satisfied by adapting their architectural configurations when appropriate. Unfortunately, such specification models use specific numbers (i.e. values) to specify non-functional constraints (NFCs) and may become rapidly obsolete during runtime given the drastic changes that operational environments can go through. This may create circumstances in which software systems are unaware that their requirements have been violated. To mitigate the obsolescence of specification models we have already proposed to use computing with words (CWW) to represent the NFCs with linguistic values instead of numbers. The numerical meanings of these linguistic values are computed from the measurements of non-functional properties (NFPs) gathered by a monitoring infrastructure. This article introduces the concept of QoS-drift to represent a significant degree of change in the numerical meanings of the linguistic values of the NFPs in the service market. We add to our former proposal a QoS-drift vigilance unit to update linguistic values only when a QoS-drift is detected. Therefore, the new models are proactive and automatically maintained, which results in a more efficient assessment of run-time requirements' compliance under non-stationary environments. We validate the effectiveness of our approach using (1) a service market of 1500 services with two NFPs, (2) a synthetic QoS-drift, and (3) five systems built by different service compositions.
We have detected that four of the five systems experienced requirements violations that would not have been detected without the use of our approach.",2013,0, 6549,The study of urban drainage network information system space framework data standards in kunming based on GIS,"Geographic data are the basis of the urban drainage network management information system. In order to ensure the consistency and validity of the data and to provide reliable data sharing and exchange for the tube network information system, research on data standardization is very important and must be carried out. Based on detected data, this article completes the following standardization work: 1. divide the drainage network into layers to obtain an ordered layer structure in logic; 2. formulate data encoding rules so that all data are provided in a standardized format; 3. develop data attribute tables to elaborate content rules for each type of tube well based on the unified coding; 4. finally, propose data validity requirements to ensure the maintenance of system data and the updating of new data.",2013,0, 6550,Testing Central Processing Unit scheduling algorithms using Metamorphic Testing,"Central Processing Unit (CPU) scheduling is used to allocate the CPU to multiple processes. The CPU is one of the most important resources in a computer system, and its scheduling is vital and influential in operating systems. Thus, it is necessary to ensure the correctness of the CPU scheduling program. However, testing the correctness of a scheduling program is difficult because it is hard to verify the correctness of its output, which is known as the test oracle problem in software testing. Metamorphic Testing (MT), which has been recently proposed to alleviate the test oracle problem, is applied to test the CPU scheduling program. In this paper, we use MT to test the Highest Response Ratio Next (HRRN) scheduling algorithm. Two simulators of the HRRN scheduler are used in the evaluation of our method. Surprisingly, some real life faults in one open source simulator are detected by MT. Further experiments are performed based on mutants, and the experimental results show that MT is an effective strategy to test CPU schedulers.",2013,0, 6551,The model-based service fault diagnosis with probability analysis,"With the development of web services, it is important to localize the faulty activity and explain the reason for the fault. In this paper, we propose a model-based diagnosis method for web service faults. In our method, we first build a service model using Petri nets, then rank the diagnosed activities by computing the fault probability of all activities, and finally use the defined diagnosis rules to find the faulty activity in the given diagnosis sequence. Our method not only improves the diagnostic accuracy, but also raises the diagnostic efficiency. Our experimental results show that our method is effective and outperforms the existing model-based method.",2013,0, 6552,Fault detection method based on file naming algorithm for ocean observing network,"An ocean observing network is a foundational facility for marine disaster prevention. Based on the file naming algorithm of ocean observing data, a fault detection method for ocean observing networks is proposed in this paper. By analyzing the file naming and storing rules of ocean observing data, the file naming algorithm is designed. Further, data receiving exceptions can be detected and located to the data source node by executing the file naming algorithm.
Last, combining with network circuit detecting result, the fault type can be distinguished and the faulty object can be located precisely. For illustration, a fault detection example with minutely data files is utilized to show the effect. Empirical results show that the fault detection method based on file naming algorithm can locate the faulty object in 10 minutes, and can provide operation protecting for ocean observing network.",2013,0, 6553,Reducing service failures by failure and workload aware load balancing in SaaS clouds,"SLA violations are typically viewed as service failures. If service fails once, it will fail again unless remedial action is taken. In a virtualized environment, a common remedial action is to restart or reboot a virtual machine (VM). In this paper we present, a VM live-migration policy that is aware of SLA threshold violations of workload response time, physical machine (PM) and VM utilization as well as availability violations at the PM and VM. In the migration policy we take into account PM failures and VM (software) failures as well as workload features such as burstiness (coefficient of variation or CoV >1) which calls for caution during the selection of target PM when migrating these workloads. The proposed policy also considers migration of a VM when the utilization of the physical machine hosting the VM approaches its utilization threshold. We propose an algorithm that detects proactive triggers for remedial action, selects a VM (for migration) and also suggests a possible target PM. We show the efficacy of our proposed approach by plotting the decrease in the number of SLA violations in a system using our approach over existing approaches that do not trigger migration in response to non-availability related SLA violations, via discrete event simulation of a relevant case study.",2013,0, 6554,Predicting job completion times using system logs in supercomputing clusters,"Most large systems such as HPC/cloud computing clusters and data centers are built from commercial off-the-shelf components. System logs are usually the main source of choice to gain insights into the system issues. Therefore, mining logs to diagnose anomalies has been an active research area. Due to the lack of organization and semantic consistency in commodity PC clusters' logs, what constitutes a fault or an error is subjective and thus building an automatic failure prediction model from log messages is hard. In this paper we sidestep the difficulty by asking a different question: Given the concomitant system log messages of a running job, can we predict the job's remaining time? We adopt Hidden Markov Model (HMM) coupled with frequency analysis to achieve this. Our HMM approach can predict 75% of jobs' remaining times with an error of less than 200 seconds.",2013,0, 6555,Detecting and tolerating data corruptions due to device driver defects,"Critical systems widely depend on operating systems to perform their mission. Device drivers are a critical and defect-prone part of operating systems. Software defects in device drivers often cause corruption of data that may lead to data losses, that are a significant source of costs for large enterprise systems. This paper describes an ongoing research that aims at mitigating the impact of data corruption due to device driver defects on the availability and the integrity of data. 
We discuss a methodology for run-time detection and the tolerance of protocol violations in device drivers and then we present a preliminary activity that we are currently performing.",2013,0, 6556,The KARYON project: Predictable and safe coordination in cooperative vehicular systems,"KARYON, a kernel-based architecture for safety-critical control, is a European project that proposes a new perspective to improve performance of smart vehicle coordination. The key objective of KARYON is to provide system solutions for predictable and safe coordination of smart vehicles that autonomously cooperate and interact in an open and inherently uncertain environment. One of the main challenges is to ensure high performance levels of vehicular functionality in the presence of uncertainties and failures. This paper describes some of the steps being taken in KARYON to address this challenge, from the definition of a suitable architectural pattern to the development of proof-of-concept prototypes intended to show the applicability of the KARYON solutions. The project proposes a safety architecture that exploits the concept of architectural hybridization to define systems in which a small local safety kernel can be built for guaranteeing functional safety along a set of safety rules. KARYON is also developing a fault model and fault semantics for distributed, continuous-valued sensor systems, which allows abstracting specific sensor faults and facilitates the definition of safety rules in terms of quality of perception. Solutions for improved communication predictability are proposed, ranging from network inaccessibility control at lower communication levels to protocols for assessment of cooperation state at the process level. KARYON contributions include improved simulation and fault-injection tools for evaluating safety assurance according to the ISO 26262 safety standard. The results will be assessed using selected use cases in the automotive and avionic domains.",2013,0, 6557,Smart sensing and smart material for smart automotive damping,"Vehicle suspensions represent one of the most interesting applications of dampening, where the introduction of smart damping technologies and control policies lead hardware and software designers to overcome new measurement challenges. Advanced sensing techniques have been proposed and tested for real-time execution by an automotive ECU based on a DSP-microcontroller. Smart sensing is able both to improve noise filtering of the signals provided by low-cost sensors and detect specific motorcycle dynamics which most influence the road holding and comfort. Finally, an ANN has been modeled, which can be effectively adopted as a benchmark (in terms of false alarms and correctly detected faults) in the development of fault detection strategies (i.e., threshold identification) directed to the sensor validation of the rear suspension stroke.",2013,0, 6558,The Time/State-based Software-Intensive Systems Failure Mode Researches,"Nowadays the application status of Software-Intensive Systems(SISs) introduces a category of system failure caused by unforeseen operation or environment change. Generally speaking this kind of failure can be observed as system emergent behavior or degraded running. Because it relates to both the running time and state, it is called Time/State(TS)-based SISs failure. Moreover it is one of the significant sources of SISs failure. However the related researches are few. This paper presents the life cycle of software-related failure of SISs firstly. 
Secondly it analyzes the TS-based SISs failure mechanism and establishes the corresponding model. Moreover it introduces the traditional verification methods of SISs. Furthermore it presents the definition, classification and ontology representation of TS-based SISs failure mode. The instance validation shows the existence of TS-based SISs failure and feasibility of detecting the failure by using combined test method primarily. Finally this paper analyzes the problems and prospects the future researches.",2013,0, 6559,Early diagnostic value of circulating MiRNA-21 in lung cancer: A meta-analysis,"To evaluate the early diagnostic value of circulating miRNA-21 in diagnosis of lung cancer, databases such as Wan Fang, VIP, PubMed, and Elsevier were systematically searched from 2005 to 2013 to collect relevant references in which the diagnostic value had been evaluated. The statistics were consolidated and the qualities of the studies were classified. The data were analyzed using Meta Disc1.4 software. The diagnostic value of circulating miRNA-21 in lung cancer was assessed by pooling sensitivity, specificity, the likelihood ratio, and the Summary Receiver Operating Characteristic (SROC) curve. Publication biases of the studies involved were analyzed using Stata 11.0 software. A total of 143 papers were collected of which 8 were included, which contained 600 cases and 440 controls. A heterogeneity test proved the existence of homogeneity in this study. Upon analysis using random effects models, the weighted sensitivity was 0.68, the specificity 0.77, the positive likelihood ratio 2.84, the negative likelihood ratio 0.40, and the SROC Area Under the Curve (AUC) was 0.8133. Further analysis by subgroup showed that the 5 indicators mentioned above were 0.72, 0.84, 4.50, 0.27, and 0.8987, respectively, for the serum group and 0.63, 0.70, 1.95, 0.53, and 0.7318, respectively, for the plasma group. We conclude that circulating miRNA-21 can be regarded a valuable reference in diagnosis of lung cancer. This research showed that in lung cancer the early diagnostic value of miRNA-21 in serum was better than that in plasma.",2013,0, 6560,Three dimensional multifunctional nanomechanical photonic crystal sensor with high sensitivity by using pillar-inserted aslant nanocavity resonator,"We propose a method to detect nanomechanical variations in the three dimensional space with a shoulder-coupled pillar-inserted aslant photonic crystal nanocavity resonator. FEM and 3D-FDTD simulation software are employed to investigate the sensing characteristics. With high quality factor of the aslant nanocavity and the optimized structure, high sensitivity of nanomechanical sensing can be achieved in three dimensions and the limitation of the smallest detectable variations is ultra small. For its ultra small size and ultra high sensitivity in every dimension, this versatile nanomechanical sensor can be widely used in MEMS.",2013,0, 6561,Methods with low complexity for evaluating cloud service reliability,"The critical significance of cloud computing lies in its ability to deliver high performance and reliable calculation on demand to external customers over the Internet. Because of heterogeneous software/hardware components and complicated interactions among them, the probability of failures improves. The services reliability arouses more attention. Cloud reliability analysis and modeling are very critical but hard because of the complexity and large scale of the system. 
The connectivity of subtasks and data-resources can affect the system reliability. This paper proposes analysis methods for cloud service reliability, in a simple manner, under two conditions, independent failures and correlated failures, using Graph Theory, Bayesian networks and Markov models. Simulation results show that the time complexity of our proposed method is greatly improved compared with traditional algorithms. Our new methods ensure the precision of the reliability calculation.",2013,0, 6562,Jigsaw: Scalable software-defined caches,"Shared last-level caches, widely used in chip-multi-processors (CMPs), face two fundamental limitations. First, the latency and energy of shared caches degrade as the system scales up. Second, when multiple workloads share the CMP, they suffer from interference in shared cache accesses. Unfortunately, prior research addressing one issue either ignores or worsens the other: NUCA techniques reduce access latency but are prone to hotspots and interference, and cache partitioning techniques only provide isolation but do not reduce access latency.",2013,0, 6563,A study of the community structure of a complex software network,"This paper presents a case study of a large software system, Netbeans 6.0, made of independent subsystems, which are analyzed as complex software networks. Starting from the source code we built the associated software graphs, where classes represent graph nodes and inter-class relationships represent graph edges. We computed various metrics for the software systems and found interdependences with various quantities computed by means of complex network analysis. In particular we found that the number of communities in which the software networks can be partitioned and their modularity, average path length and mean degree can be related to the amount of bugs detected in the system. This result can be useful to provide indications about the fault proneness of software clusters in terms of quantities related to the associated graph structure.",2013,0, 6564,Metrics for modularization assessment of Scala and C# systems,"Modularization of a software system leads to software that is more understandable and maintainable. Hence it is important to assess the modularization quality of a given system. In this paper, we define metrics for quantifying the level of modularization in Scala and C# systems. We propose metrics for Scala systems, measuring modularization with respect to concepts like referential transparency, functional purity, first order functions etc., which are present in modern functional programming languages. We also propose modularity metrics for C# systems in addition to the Object Oriented metrics that exist in the literature. We validated our metrics by applying them to popular open-source Scala systems - Lift, Play, Akka - and C# systems - ProcessHacker and Cosmos.",2013,0, 6565,Towards indicators of instabilities in software product lines: An empirical evaluation of metrics,"A Software Product Line (SPL) is a set of software systems (products) that share common functionalities, so-called features. The success of an SPL design is largely dependent on its stability; otherwise, a single implementation change will cause ripple effects in several products. Therefore, there is a growing concern in identifying means to either indicate or predict design instabilities in the SPL source code. However, existing studies up to now rely on conventional metrics as indicators of SPL instability.
These conventional metrics, typically used in standalone systems, are not able to capture the properties of SPL features in the source code, which in turn might neglect frequent causes of SPL instabilities. On the other hand, there is a small set of emerging software metrics that take into account specific properties of SPL features. The problem is that there is a lack of empirical validation of the effectiveness of metrics in indicating quality attributes in the context of SPLs. This paper presents an empirical investigation through two set of metrics regarding their power of indicating instabilities in evolving SPLs. A set of conventional metrics was confronted with a set of metrics we instantiated to capture important properties of SPLs. The software evolution history of two SPLs were analysed in our studies. These SPLs are implemented using two different programming techniques and all together they encompass 30 different versions under analysis. Our analysis confirmed that conventional metrics are not good indicators of instabilities in the context of evolving SPLs. The set of employed feature dependency metrics presented a high correlation with instabilities proving its value as indicator of SPL instabilities.",2013,0, 6566,TDDHQ: Achieving Higher Quality Testing in Test Driven Development,"Test driven development (TDD) appears not to be immune to positive test bias effects, as we observed in several empirical studies. In these studies, developers created a significantly larger set of positive tests, but at the same time the number of defects detected with negative tests is significantly higher than those detected by positive ones. In this paper we propose the concept of TDDHQ which is aimed at achieving higher quality of testing in TDD by augmenting the standard TDD with suitable test design techniques. To exemplify this concept, we present combining equivalence partitioning test design technique together with the TDD, for the purpose of improving design of test cases. Initial evaluation of this approach showed a noticeable improvement in the quality of test cases created by developers utilising TDDHQ approach.",2013,0, 6567,A Toolchain for Home Automation Controller Development,"Home Automation systems provide a large number of devices to control diverse appliances. Taking advantage of this diversity to create efficient and intelligent environments requires well designed, validated, and implemented controllers. However, designing and deploying such controllers is a complex and error prone process. This paper presents a tool chain that transforms a design in the form of communicating state machines to an executable controller that interfaces to appliances through a service oriented middleware. Design and validation is supported by integrated model checking and simulation facilities. This is extendable to controller synthesis. This tool chain is implemented, and we provide different examples to show its usability.",2013,0, 6568,Towards Translational Execution of Action Language for Foundational UML,"Model-driven engineering has prominently gained consideration as effective substitute of error-prone code-centric development approaches especially for its capability of abstracting the problem through models and then manipulating them to automatically generate target code. Nowadays, thanks to powerful modelling languages, a system can be designed by means of well-specified models that capture both structural as well as behavioural aspects. 
From them, target implementation is meant to be automatically generated. An example of well-established general purpose modelling language is the UML, recently enhanced with the introduction of an action language denominated ALF, both proposed by the OMG. In this work we focus on enabling the execution of models defined in UML-ALF and more specifically on the translational execution of ALF towards non-UML target platforms.",2013,0, 6569,Accuracy of Contemporary Parametric Software Estimation Models: A Comparative Analysis,"Predicting the effort, duration and cost required to develop and maintain a software system is crucial in IT project management. Although an accurate estimation is invaluable for the success of an IT development project, it often proves difficult to attain. This paper presents an empirical evaluation of four parametric software estimation models, namely COCOMO II, SEER-SEM, SLIM, and True Planning, in terms of their project effort and duration prediction accuracy. Using real project data from 51 software development projects, we evaluated the capabilities of the models by comparing the predictions with the actual effort and duration values. The study showed that the estimation capabilities of the models investigated are on a par in accuracy, while there is still significant room for improvement in order to better address the prediction challenges faced in practice.",2013,0, 6570,Identifying Implicit Architectural Dependencies Using Measures of Source Code Change Waves,"The principles of Agile software development are increasingly used in large software development projects, e.g. using Scrum of Scrums or combining Agile and Lean development methods. When large software products are developed by self-organized, usually feature-oriented teams, there is a risk that architectural dependencies between software components become uncontrolled. In particular there is a risk that the prescriptive architecture models in form of diagrams are outdated and implicit architectural dependencies may become more frequent than the explicit ones. In this paper we present a method for automated discovery of potential dependencies between software components based on analyzing revision history of software repositories. The result of this method is a map of implicit dependencies which is used by architects in decisions on the evolution of the architecture. The software architects can assess the validity of the dependencies and can prevent unwanted component couplings and design erosion hence minimizing the risk of post-release quality problems. Our method was evaluated in a case study at one large product at Saab Electronic Defense Systems (Saab EDS) and one large software product at Ericsson AB.",2013,0, 6571,LiRCUP: Linear Regression Based CPU Usage Prediction Algorithm for Live Migration of Virtual Machines in Data Centers,"Virtualization is a vital technology of cloud computing which enables the partition of a physical host into several Virtual Machines (VMs). The number of active hosts can be reduced according to the resources requirements using live migration in order to minimize the power consumption in this technology. However, the Service Level Agreement (SLA) is essential for maintaining reliable quality of service between data centers and their users in the cloud environment. Therefore, reduction of the SLA violation level and power costs are considered as two objectives in this paper. We present a CPU usage prediction method based on the linear regression technique. 
The proposed approach approximates the short-time future CPU utilization based on the history of usage in each host. It is employed in the live migration process to predict over-loaded and under-loaded hosts. When a host becomes over-loaded, some VMs migrate to other hosts to avoid SLA violation. Moreover, first all VMs migrate from a host while it becomes under-loaded. Then, the host switches to the sleep mode for reducing power consumption. Experimental results on the real workload traces from more than a thousand Planet Lab VMs show that the proposed technique can significantly reduce the energy consumption and SLA violation rates.",2013,0, 6572,Facilitating Scientific Workflow Configuration with Parameterized Workflow Skeletons,"This paper describes an operator for configuring scientific workflows that facilitates the process of assigning workflow activities to cloud resources. In general, modeling and configuring scientific workflows is complex and error-prone, because workflows are built of highly parallel patterns comprising huge numbers of tasks. Reusing tested patterns as building blocks avoids repeating errors. Workflow skeletons are parametrizable building blocks describing such patterns. Hence, scientists have a means to reuse validated parallel constructs for rapidly defining their in-silico experiments. Often, configurations of data parallel patterns are generated automatically. However, for many task parallel patterns each task needs to be configured manually. In frameworks like MapReduce, scientists have no control of how tasks are assigned to cloud resources. What is the strength of such patterns, may lead to unnecessary data transfers in other patterns. Workflow Skeletons facilitate the configuration by providing an operator that accepts parameters, this allows for scalable configurations saving time and cost by allocating cloud resources just in time. In addition, this configuration operator helps to define configurations that avoid unnecessary data transfers.",2013,0, 6573,Simulation design of duffing system based on singlechip microcomputer,"In order to solve the parameters' adjusting difficulty of the continuous Duffing chaotic system, a method of using single-chip microcomputer to realize Duffing chaotic system is proposed in this paper. It is proven that the discrete Duffing chaotic system as well as the continuous system has the ability of signal detection. The simulation results based on singlechip microcomputer under MATLAB environment show that this method is feasible to adjust the parameters of the discrete Duffing chaotic system by software, and Duffing chaotic system based on single-chip microcomputer can detect the multi-frequency sine signals.",2013,0, 6574,Detection limitation of high frequency signal travelling along underground power cable,"High frequency pulse injection is part of many diagnostics techniques, e.g. cable fault location. An injected pulse along an underground power cable will reflect at an impedance impurity, for instance a cable joint. These joints can be identified based on pulse reflections only if sufficiently high-frequency components can be detected. Signals with higher frequency components can provide more accurate spatial resolution but experience stronger attenuation and dispersion. This paper discusses whether pulse reflection from a cable joint can be better distinguished if detection is focused on high frequency signal content. 
Two aspects, considering effects of noise level and applied equipment, are discussed in detail: averaging and highpass filtering. It is shown that in presence of noise, averaging can improve signal to noise ratio also for signals below the quantization error of digital detection equipment. High-pass filtering is realized in hardware but similar results can be achieved by software implementation. However, simulation study shows that a high-pass filter itself hardly improves reflection recognition.",2013,0, 6575,Towards feature-aware retrieval of refinement traces,"Requirements traceability supports practitioners in reaching higher project maturity and better product quality. To gain this support, traces between various artifacts of the software development process are required. Depending on the number of existing artifacts, establishing traces can be a time-consuming and error-prone task. Additionally, the manual creation of traces frequently interrupts the software development process. In order to overcome those problems, practitioners are asking for techniques that support the creation of traces (see Grand Challenge: Ubiquitous (GC-U)). In this paper, we propose the usage of a graph clustering algorithm to support the retrieval of refinement traces. Refinement traces are traces that exist between artifacts created in different phases of a development project, e.g., between features and use cases. We assessed the effectiveness of our approach in several TraceLab experiments. These experiments employ three standard datasets containing differing types of refinement traces. Results show that graph clustering can improve the retrieval of refinement traces and is a step towards the overall goal of ubiquitous traceability.",2013,0, 6576,GPUburn: A system to test and mitigate GPU hardware failures,"Due to many factors such as, high transistor density, high frequency, and low voltage, today's processors are more than ever subject to hardware failures. These errors have various impacts depending on the location of the error and the type of processor. Because of the hierarchical structure of the compute units and work scheduling, the hardware failure on GPUs affect only part of the application. In this paper we present a new methodology to characterize the hardware failures of Nvidia GPUs based on a software micro-benchmarking platform implemented in OpenCL. We also present which hardware part of TESLA architecture is more sensitive to intermittent errors, which usually appears when the processor is aging. We obtained these results by accelerating the aging process by running the processors at high temperature. We show that on GPUs, intermittent errors impact is limited to a localized architecture tile. Finally, we propose a methodology to detect, record location of defective units in order to avoid them to ensure the program correctness on such architectures, improving the GPU fault-tolerance capability and lifespan.",2013,0, 6577,Prototype test insertion co-processor for agile development in multi-threaded embedded environments,"Agile methodologies have been shown useful in constructing Enterprise applications with a reduced level of defects in the released product. Movement of Agile processes into the embedded world is hindered by the lack of suitable tool support. 
For example, software instrumented test insertion methods to detect race condition in multithreaded programs have the potential to increase code size beyond the limited embedded system memory, and degrade performance to an extent that would impair the real-time characteristics of the system. We propose a FPGA-based, hardware assisted, test insertion co-processor for embedded systems which introduces low additional system overhead and incurs minimal code size increase. In this preliminary study, we compare the ideal characteristics of a FPGA-based test insertion co-processor with our initial prototype and other proposed hardware assisted test insertion approaches.",2013,0, 6578,Security Prognostics: Cyber meets PHM,"In this paper we cast a vision for Security Prognostics (SP) for critical systems, promoting the view that security related protections would be well served to integrate fully with Monitoring and Diagnostics (M&D) systems that assess the health of complex assets and systems. To detect complex Cyber threats we propose combining system parameters already in use by M&D systems for Prognostics and Health Monitoring (PHM) with security parameters. Combining system parameters used by M&D to detect non-malicious faults with the system parameters used by security schemes to detect complex Cyber threats will improve: (a) accuracy of PHM (b) security of M&D, and (c) availability and safety of critical systems. We also introduce the notion of Remaining Secure Life (RSL), assessed based on the propagation of security damage, to create the prospect for Security Prognostics. RSL will assist in the selection of appropriate response(s), based on breach or compromise to security component's and potential impact on system operation. An example of M&D data is provided which is normally associated with non-malicious faults providing input to detect Malware execution through time series monitoring.",2013,0, 6579,Towards systems level prognostics in the Cloud,"Many application systems are transforming from device centric architectures to cloud based systems that leverage shared compute resources to reduce cost and maximize reach. These systems require new paradigms to assure availability and quality of service. In this paper, we discuss the challenges in assuring Availability and Quality of Service in a Cloud Based Application System. We propose machine learning techniques for monitoring systems logs to assess the health of the system. A web services data set is employed to show that variety of services can be clustered to different service classes using a k-means clustering scheme. Reliability, Availability, and Serviceability (RAS) logs and Job logs dataset from high performance computing system is employed to show that impending fatal errors in the system can be predicted from the logs using an SVM classifier. These approaches illustrate the feasibility of methods to monitor the systems health and performance of compute resources and hence can be used to manage these systems for high availability and quality of service for critical tasks such as health care monitoring in the cloud.",2013,0, 6580,Method and RIKDEDIN software package for interpretation of remote sensing data,"The creation of the computer program package RIKDEDIN with the realization of all five preceding items will permit to develop and construct a specialized multi-processor on-board computer, which will automatically decode remote observation data, i.e. 
recognize objects of environment by remote (satellite, aircraft and ground-based) sensing in real time [1, 2].",2013,0, 6581,CI-LQD: A software tool for modeling and decision making with Low Quality Data,"The software tool CI-LQD (Computational Intelligence for Low Quality Data) is introduced in this paper. CI-LQD is an ongoing project that includes a lightweight open source software that has been designed with scientific and teaching purposes in mind. The main usefulness of the software is to automate the calculations involved in the statistical comparisons of different algorithms, with both numerical and graphical techniques, when the available information is interval-valued, fuzzy, incomplete or otherwise vague. A growing catalog of evolutionary algorithms for learning classifiers, models and association rules, along with their corresponding data conditioning and preprocessing techniques, is included. A demonstrative example of the tool is described that illustrates the capabilities of the software.",2013,0, 6582,Hardware Trojan Horses in Cryptographic IP Cores,"Detecting hardware trojans is a difficult task in general. In this article we study hardware trojan horses insertion and detection in cryptographic intellectual property (IP) blocks. The context is that of a fabless design house that sells IP blocks as GDSII hard macros, and wants to check that final products have not been infected by trojans during the foundry stage. First, we show the efficiency of a medium cost hardware trojans detection method if the placement or the routing have been redone by the foundry. It consists in the comparison between optical microscopic pictures of the silicon product and the original view from a GDSII layout database reader. Second, we analyze the ability of an attacker to introduce a hardware trojan horse without changing neither the placement nor the routing of the cryptographic IP logic. On the example of an AES engine, we show that if the placement density is beyond 80%, the insertion is basically impossible. Therefore, this settles a simple design guidance to avoid trojan horses insertion in cryptographic IP blocks: have the design be compact enough, so that any functionally discreet trojan necessarily requires a complete replace and re-route, which is detected by mere optical imaging (and not complete chip reverse-engineering).",2013,0, 6583,Self-healing Performance Anomalies in Web-based Applications,"In this paper, we describe the SHoWA framework and evaluate its ability to recover from performance anomalies in Web-based applications. SHoWA is meant to automatically detect and recover from performance anomalies, without calling for human intervention. It does not require manual changes to the application source code or previous knowledge about its implementation details. The application is monitored at runtime and the anomalies are detected and pinpointed by means of correlation analysis. A recovery procedure is performed every time an anomaly is detected. An experimental study was conducted to evaluate the recovery process included in the SHoWA framework. The experimental environment considers a benchmarking application, installed in a high-availability system. The results show that SHoWA is able to detect and recover from different anomaly scenarios, before any visible error, higher-latency or work-in-progress loss is observed. It proved to be efficient in terms of time of repair. 
The performance impact induced on the managed system was low: the response time penalty per request varied between 0 and 2.21 milliseconds, the throughput was affected in less than 1%.",2013,0, 6584,Hybrid Cloud Management to Comply Efficiently with SLA Availability Guarantees,"SLAs are common means to define specifications and requirements of cloud computing services, where the guaranteed availability is one of the most important parameters. Fulfilling the stipulated availability may be expensive, due to the cost of failure recovery software, and the amount of physical equipment needed to deploy the cloud services. Therefore, a relevant question for cloud providers is: How to guarantee the SLA availability in a cost efficient way? This paper studies different fault tolerance techniques available in the market, and it proposes the use of an hybrid management to have full control over the SLA risk, using only the necessary resources in order to keep a cost efficient operation. This paper shows how to model the probability distribution of the accumulated downtime, and how this can be used in the design of hybrid policies. Using specific case studies, this paper illustrates how to implement the proposed hybrid policies, and it shows the obtained cost saving by using them. This paper takes advantage of the cloud computing flexibility, and it opens the door to the use of dynamic management policies to reach specific performance objectives in ICT systems.",2013,0, 6585,Composition-Safe re-parametrization in Distributed Component-based WSN Applications,"Contemporary Wireless Sensor Networks like Smart Offices and Smart Cities are evolving to become multi-purpose application hosting platforms. These WSN platforms can simultaneously support multiple applications which may be managed by multiple actors. Reconfigurable component models have been shown to be viable solutions to reducing the complexity of managing and developing these applications while promoting software re-use. However, implicit parameter dependencies between components make reconfiguration complex and error-prone. Our approach achieves automatic composition-safe re-parametrization of distributed component compositions. To achieve this, we propose the use of language annotations that allow component developers to make these dependencies explicit and constraint-aware network protocols to ensure constraint propagation and enforcement.",2013,0, 6586,Answering questions about unanswered questions of Stack Overflow,"Community-based question answering services accumulate large volumes of knowledge through the voluntary services of people across the globe. Stack Overflow is an example of such a service that targets developers and software engineers. In general, questions in Stack Overflow are answered in a very short time. However, we found that the number of unanswered questions has increased significantly in the past two years. Understanding why questions remain unanswered can help information seekers improve the quality of their questions, increase their chances of getting answers, and better decide when to use Stack Overflow services. In this paper, we mine data on unanswered questions from Stack Overflow. We then conduct a qualitative study to categorize unanswered questions, which reveals characteristics that would be difficult to find otherwise. 
Finally, we conduct an experiment to determine whether we can predict how long a question will remain unanswered in Stack Overflow.",2013,0, 6587,Search-based duplicate defect detection: An industrial experience,"Duplicate defects put extra overheads on software organizations, as the cost and effort of managing duplicate defects are mainly redundant. Due to the use of natural language and various ways to describe a defect, it is usually hard to investigate duplicate defects automatically. This problem is more severe in large software organizations with huge defect repositories and massive number of defect reporters. Ideally, an efficient tool should prevent duplicate reports from reaching developers by automatically detecting and/or filtering duplicates. It also should be able to offer defect triagers a list of top-N similar bug reports and allow them to compare the similarity of incoming bug reports with the suggested duplicates. This demand has motivated us to design and develop a search-based duplicate bug detection framework at BlackBerry. The approach follows a generalized process model to evaluate and tune the performance of the system in a systematic way. We have applied the framework on software projects at BlackBerry, in addition to the Mozilla defect repository. The experimental results exhibit the performance of the developed framework and highlight the high impact of parameter tuning on its performance.",2013,0, 6588,A contextual approach towards more accurate duplicate bug report detection,"Bug-tracking and issue-tracking systems tend to be populated with bugs, issues, or tickets written by a wide variety of bug reporters, with different levels of training and knowledge about the system being discussed. Many bug reporters lack the skills, vocabulary, knowledge, or time to efficiently search the issue tracker for similar issues. As a result, issue trackers are often full of duplicate issues and bugs, and bug triaging is time consuming and error prone. Many researchers have approached the bug-deduplication problem using off-the-shelf information-retrieval tools, such as BM25F used by Sun et al. In our work, we extend the state of the art by investigating how contextual information, relying on our prior knowledge of software quality, software architecture, and system-development (LDA) topics, can be exploited to improve bug-deduplication. We demonstrate the effectiveness of our contextual bug-deduplication method on the bug repository of the Android ecosystem. Based on this experience, we conclude that researchers should not ignore the context of software engineering when using IR tools for deduplication.",2013,0, 6589,The Eclipse and Mozilla defect tracking dataset: A genuine dataset for mining bug information,"The analysis of bug reports is an important subfield within the mining software repositories community. It explores the rich data available in defect tracking systems to uncover interesting and actionable information about the bug triaging process. While bug data is readily accessible from systems like Bugzilla and JIRA, a common database schema and a curated dataset could significantly enhance future research because it allows for easier replication. Consequently, in this paper we propose the Eclipse and Mozilla Defect Tracking Dataset, a representative database of bug data, filtered to contain only genuine defects (i.e., no feature requests) and designed to cover the whole bug-triage life cycle (i.e., store all intermediate actions). 
We have used this dataset ourselves for predicting bug severity, for studying bug-fixing time and for identifying erroneously assigned components. Sharing these data with the rest of the community will allow for reproducibility, validation and comparison of the results obtained in bug-report analyses and experiments.",2013,0, 6590,Do software categories impact coupling metrics?,"Software metrics is a valuable mechanism to assess the quality of software systems. Metrics can help the automated analysis of the growing data available in software repositories. Coupling metrics is a kind of software metrics that have been extensively used since the seventies to evaluate several software properties related to maintenance, evolution and reuse tasks. For example, several works have shown that we can use coupling metrics to assess the reusability of software artifacts available in repositories. However, thresholds for software metrics to indicate adequate coupling levels are still a matter of discussion. In this paper, we investigate the impact of software categories on the coupling level of software systems. We have found that different categories may have different levels of coupling, suggesting that we need special attention when comparing software systems in different categories and when using predefined thresholds already available in the literature.",2013,0, 6591,"Discovering, reporting, and fixing performance bugs","Software performance is critical for how users perceive the quality of software products. Performance bugs - programming errors that cause significant performance degradation - lead to poor user experience and low system throughput. Designing effective techniques to address performance bugs requires a deep understanding of how performance bugs are discovered, reported, and fixed. In this paper, we study how performance bugs are discovered, reported to developers, and fixed by developers, and compare the results with those for non-performance bugs. We study performance and non-performance bugs from three popular code bases: Eclipse JDT, Eclipse SWT, and Mozilla. First, we find little evidence that fixing performance bugs has a higher chance to introduce new functional bugs than fixing non-performance bugs, which implies that developers may not need to be over-concerned about fixing performance bugs. Second, although fixing performance bugs is about as error-prone as fixing nonperformance bugs, fixing performance bugs is more difficult than fixing non-performance bugs, indicating that developers need better tool support for fixing performance bugs and testing performance bug patches. Third, unlike many non-performance bugs, a large percentage of performance bugs are discovered through code reasoning, not through users observing the negative effects of the bugs (e.g., performance degradation) or through profiling. The result suggests that techniques to help developers reason about performance, better test oracles, and better profiling techniques are needed for discovering performance bugs.",2013,0, 6592,Better cross company defect prediction,"How can we find data for quality prediction? Early in the life cycle, projects may lack the data needed to build such predictors. Prior work assumed that relevant training data was found nearest to the local project. But is this the best approach? This paper introduces the Peters filter which is based on the following conjecture: When local data is scarce, more information exists in other projects. 
Accordingly, this filter selects training data via the structure of other projects. To assess the performance of the Peters filter, we compare it with two other approaches for quality prediction: within-company learning and cross-company learning with the Burak filter (the state-of-the-art relevancy filter). This paper finds that: 1) within-company predictors are weak for small data-sets; 2) the Peters filter+cross-company builds better predictors than both within-company and the Burak filter+cross-company; and 3) the Peters filter builds 64% more useful predictors than both within-company and the Burak filter+cross-company approaches. Hence, we recommend the Peters filter for cross-company learning.",2013,0, 6593,Using citation influence to predict software defects,"The software dependency network reflects structure and the developer contribution network reflects process. Previous studies have used social network properties over these networks to predict whether a software component is defect-prone. However, these studies do not consider the strengths of the dependencies in the networks. In our approach, we use a citation influence topic model to determine dependency strengths among components and developers, analyze weak and strong dependencies separately, and apply social network properties to predict defect-prone components. In experiments on Eclipse and NetBeans, our approach has higher accuracy than prior work.",2013,0, 6594,Error model and the reliability of arithmetic operations,"Error detecting and correcting codes are widely used in data transmission, storage systems and also for data processing. In logical circuits like arithmetic operations, arbitrary faults can cause errors in the result. However, in safety-critical applications, it is important to avoid those errors which would lead to system failures. Several approaches are known to protect the result of operations during software processing. In the same way as transmission systems, coded processing uses codes for fault detection. But in contrast to transmission systems, there is no adequate channel model available which makes it possible to evaluate the residue error probability of an arithmetic operation in an analytical way. This paper tries to close the gap in arithmetic error models by developing a model for an ordinary addition in a computer system. Thus, the reliability of an addition's result can be analytically evaluated.",2013,0, 6595,Modeling of 25 kV electric railway system for power quality studies,"25 kV, 50 Hz single-phase AC supply has been widely adopted in the long-distance electrified railway systems in many countries. Electrical locomotives generate harmonic currents in railway power supply systems. Single-phase traction loads also inject large unbalanced currents into the transmission system and cause voltage unbalance subsequently. As the amount of rail traffic increases, the issue of power quality distortion becomes more critical. Harmonic currents and unbalanced voltages may cause negative effects on the components of the power system such as overheating, vibration and torque reduction of rotating machines, additional losses of lines and transformers, interference with communication systems, malfunctions of protection relays, measuring instrument error, etc. Therefore, the harmonic current flow must be assessed accurately in the design and planning stage of the electric railway system (ERS). 
Harmonic current flow through the contact line system has to be accurately modeled to analyze and assess the harmonic effect on the transmission system. This paper describes the influence of electric railway system on power quality in 110 kV transmission system. Locomotives with diode rectifiers were analyzed. Electric railway system was modeled using EMTP-RV software. Currents and voltages were calculated in 110 kV and 25 kV network. Power quality measurements were performed on 110 kV level in 110/35/25 kV substation and analyzed according to IEC 61000-3-6.",2013,0, 6596,Fail-safe and fail-operational systems safeguarded with coded processing,"Safety has the highest priority because it helps contribute to customer confidence and thereby ensures further growth of the new markets, like electromobility. Therefore in series production redundant hardware concepts like dual core microcontrollers running in lock-step-mode are used to reach for example ASIL D safety requirements given from the ISO 26262. Coded processing is capable of reducing redundancy in hardware by adding diverse redundancy in software, e.g. by specific coding of data and instructions. A system with two coded processing channels is considered. Both channels are active. When one channel fails, the service can be continued with the other channel. It is imaginable that the two channels with implemented coded processing are running with time redundancy on a single core or on a multi core system where for example different ASIL levels are partitioned on different cores. In this paper a redundancy concept based on coded processing will be taken into account. The improvement of the Mean Time To Failure by safeguarding the system with coded processing will be computed for fail-safe as well as for fail-operational systems. The use of the coded processing approach in safeguarding failsafe systems is proved.",2013,0, 6597,An enhanced Jacobi method for lattice-reduction-aided MIMO detection,"Lattice reduction aided decoding has been successfully used for signal detection in multiinput and multioutput (MIMO) systems and many other wireless communication applications. In this paper, we propose a novel enhanced Jacobi (short as EJacobi) method for lattice basis reduction. To assess the performance of the new EJacobi method, we compared it with the LLL algorithm, a widely used algorithm in wireless communications. Our experimental results show that the EJacobi method is more efficient and produces better results measured by both orthogonality defect and condition number than the LLL algorithm.",2013,0, 6598,Forensics of blurred images based on no-reference image quality assessment,"The inexpensive hardware and sophisticated image editing software tools have been widely used, which makes it easy to create and manipulate digital images. The detection of forgery images has attracted academic researches in recent years. In this paper, we proposed a forensic method to detect globally or locally blurred images using no-reference image quality assessment. The features are extracted from mean subtracted contrast normalized (MSCN) coefficients and fed to SVM, which can distinguish the tampered regions from the original ones and can quantify the tampered regions. 
Experimental results show that this method can detect the edges of tampered regions efficiently.",2013,0, 6599,Optimal threshold characteristics of call admission control by considering cooperative behavior of users (loss model),"Call admission control (CAC) is an important technology for maintaining Quality of Service (QoS) in Software-Defined Networking (SDN). The most popular type of CAC is trunk reservation (TR) control, and many TR control methods have been proposed for improving evaluation values such as call blocking probability or the resource utilization rate. In general, some users under these TR control methods may behave cooperatively. However, conventional TR methods do not take into account a user's cooperative behavior when they begin to communicate. In this paper, we propose a novel TR-type CAC method by considering the cooperative behavior of some users. The proposed method is presented using the loss model of queueing theory for the call-level analysis of a single link. We also analyze the characteristics of the optimal control parameter.",2013,0, 6600,Performance reliability simulation analysis for the complex mechanical system,"The large-sample failure data required by traditional probability statistics methods usually cannot be obtained for over-sized and complex mechanisms, which makes their reliability prediction very difficult. In this paper, physics-of-failure technology is considered thoroughly for product failures. With computer simulation, the kinematic performance reliability of an aircraft cabin-door-lock mechanism is studied in this article. The example shows that the method combining physics of failure and software simulation can solve the reliability problem of complex mechanical products effectively, which provides guidance for engineering practice.",2013,0, 6601,Implementing Nataf transformation in a spreadsheet environment and application in reliability analysis,"This paper presents a methodology for implementing the Nataf transformation, a widely used tool for generating correlated samples in reliability analysis, in a spreadsheet environment. While the practicability of the Nataf transformation has been demonstrated by numerous studies, this work emphasizes the development of a systematic and user-friendly EXCEL Add-In. Combined with the recently developed subset simulation Add-In, it can be easily applied to reliability analysis problems with correlated random variables. A simple example is used to illustrate the performance of the proposed methodology.",2013,0, 6602,Notice of Retraction
Analysis of multi-body systems with revolute joint wear,"Notice of Retraction

After careful and considered review of the content of this paper by a duly constituted expert committee, this paper has been found to be in violation of IEEE's Publication Principles.

We hereby retract the content of this paper. Reasonable effort should be made to remove all past references to this paper.

The presenting author of this paper has the option to appeal this decision by contacting TPII@ieee.org.

Clearances exist in all joints in multi-body systems, but traditional dynamic analysis methods often treat joints as ideal or perfect, ignoring joint clearance, which leads to large errors in the results. Joint clearance can cause structural vibration and introduce peaks into the dynamic response, which is detrimental to the reliability of the system. Clearance also leads to joint wear, which makes the clearance grow larger with increasing working time. In short, joint clearance will degrade the performance of the multi-body system or even cause system failure. In this paper, a slider-crank mechanism is used as an example: multi-body dynamics software is applied to establish the mechanism simulation model, and the joint with clearance is replaced by a contact unit; after calculating the contact force and the relative sliding velocity of the contact unit, Archard's wear model is applied to calculate the wear depth of each segment along the bushing circumference, and the geometry profile of the bushing in the multi-body model is then updated. By repeating the above process, the joint wear prediction and the dynamic response of the multi-body system after a predetermined number of cycles can be obtained.",2013,0, 6603,Fatigue life prediction for wind turbine main shaft bearings,"Taking the wind turbine main shaft bearing as the research object, the actual working status of the main shaft bearing under radial load and axial load is considered. Using the ANSYS software, the contact stresses under different working conditions are analyzed, and the dangerous position of the main shaft bearing and the stress analysis results for that position are determined. Based on the results of the stress analysis, the main shaft bearing's S-N curve is obtained by changing the material's S-N curve. Considering the influence of average stress on fatigue damage, the average stress is corrected using the Goodman formula. According to the nominal stress approach and the cumulative fatigue damage rule, the fatigue life of the wind turbine main shaft bearing is predicted under the combined action of different working conditions. The prediction result is 24.07 years, which meets the 20-year design life requirement of wind turbines.",2013,0, 6604,Design of safety-critical software reliability demonstration test based on Bayesian theory,"The original software reliability demonstration test (SRDT) does not adequately take prior knowledge and the prior distribution into consideration, which costs a lot of time and resources. A new improved Bayesian-based SRDT method was proposed. First, a framework for the SRDT scheme was constructed. According to the framework, a decreasing function was employed to construct the prior distribution density functions for discrete and continuous safety-critical software respectively, and then a discrete Bayesian software demonstration function (DBSDF) scheme and a continuous Bayesian software demonstration function (CBSDF) scheme were presented. A set of comparative experiments has been carried out with the classic demonstration testing scheme on several published data sets. The experimental results reveal that both the DBSDF and CBSDF schemes are more efficient and applicable, especially for safety-critical software with high reliability requirements.",2013,0, 6605,Malicious circuitry detection using transient power analysis for IC security,"Malicious modification of integrated circuits (ICs) in an untrusted foundry, referred to as a Hardware Trojan, has emerged as a serious security threat. 
Since it is extremely difficult to detect the presence of such Trojan circuits using conventional testing strategies, side-channel analysis has been considered as an alternative. In this paper, we propose a non-destructive side-channel approach that characterizes and compares transient power signatures using principal component analysis to achieve hardware Trojan detection. The approach is validated with hardware measurement results using an FPGA-based test setup for a large design including a 128-bit AES cipher. Experimental results show that this approach can discover small (<1.1% area) Trojans under large noise and variations.",2013,0, 6606,A method for software reliability test case design based on Markov chain usage model,"Test case design is a key factor in improving the software reliability level. A new method for software reliability test case design based on the Markov chain usage model is presented. The construction steps of a software Markov chain usage model based on UML are introduced. The Markov chain usage model is described with a directed graph. An automatic generation algorithm for reliability test cases is proposed to ease software reliability testing in practice. Based on the method, an ATM software reliability test case is designed and demonstrated. The result shows that the method in this paper is practical and efficient in engineering practice.",2013,0, 6607,Research on safety assessment of gas environment in ammunition warehouse,"Because of the accumulation of harmful gas in the ammunition warehouse, the health of soldiers and the long-term safety of ammunition storage would be affected. In order to assess the safety of the gas environment in the ammunition warehouse, the software Fluent was applied to simulate the distribution of the harmful gas, and the area where the concentration of harmful gas is the highest in the warehouse was found. Then, a gas analysis system composed of a sensor array and a BP neural network was established to detect the harmful gas in the area where the concentration of harmful gas is the highest in the warehouse. If the concentration of harmful gas is greater than the limit, measures are taken to guarantee the safety of the warehouse.",2013,0, 6608,Notice of Retraction
Assessment model for battle damage location process based on diagram entropy method,"Notice of Retraction

After careful and considered review of the content of this paper by a duly constituted expert committee, this paper has been found to be in violation of IEEE's Publication Principles.

We hereby retract the content of this paper. Reasonable effort should be made to remove all past references to this paper.

The presenting author of this paper has the option to appeal this decision by contacting TPII@ieee.org.

The quality of the battle damage location process (BDLP) is very important for the efficiency of damage location. A person's location ability can also be reflected by it. However, measuring the complexity of the BDLP is a rather new area of research with only a small number of contributions. Thus, the complexity of the BDLP is put forward to assess the BDLP's quality. The diagram entropy model, which has been applied to program maintenance complexity in software engineering, is introduced to quantify the complexity of the BDLP. The methods of generating the information structure diagram (ISG) and the action control diagram (ACG) are both given. To validate the relationship between the complexity value and the locating time, which is an important factor in the BDLP's quality, a non-linear curve fitting is introduced, and the result proves that they are consistently correlated. An example is given to explain this method.",2013,0, 6609,The application of the MODHMS in contaminant transport simulation,"With the development of the economy and rapid population growth, the extent of contaminated groundwater is growing larger and larger. Groundwater pollution is a slow process, and it is not easy to detect and difficult to control. Once polluted, groundwater takes more than ten years, sometimes even decades, to recover. Groundwater pollution has a close relationship with people's health. This paper mainly discusses the range of contaminant transport in groundwater at different times in the MODHMS model. MODHMS is a fully integrated surface/groundwater flow code. The Guishui River is selected as the study area, and the major research methods and key technologies involved are discussed briefly. Analysis shows that MODHMS is an effective tool to simulate the contaminant transport of groundwater; it provides a feasible approach to predicting and controlling groundwater quality.",2013,0, 6610,DC technology for stable and reliable grids,"The expansion of cities often leads to enlarged load concentration and carries extra challenges to managing the electrical grid while ensuring power availability to critical loads. The developing Shanghai power grid faces many challenges in the areas of generation, transmission, and distribution of the rapidly growing amounts of electrical energy in demand. Its development should address issues of congestion, voltage-stability, power restriction, permits, and scarcity of land or right of way. In this paper, an equivalent model of the Shanghai power grid is implemented using the ABB power transient software analyzer (SIMPOW) where detailed modeling of the generators including exciters and governors is performed. Solutions including Fault Current Limiters (FCL), Static Var Compensators Light (SVC-Light) and High Voltage Direct Current (HVDC) systems were analyzed to assess their effectiveness in addressing the grid issues stated above. The best solution was obtained when the individual AC subsystems were decoupled so a fault in a given subsystem is not propagated to another subsystem and short-circuit currents are limited only by local generation capacity. This solution was obtained using a DC ring where power sources and loads are connected to a common DC bus through Voltage Source Converters (VSC).",2013,0, 6611,Software Modification Aided Transient Error Tolerance for Embedded Systems,"Commercial off-the-shelf (COTS) components are increasingly being employed in embedded systems due to their high performance at low cost. 
With emerging reliability requirements, designing these components using traditional hardware redundancy incurs large overheads and time-demanding re-design and validation. To reduce the design time under shorter time-to-market requirements, software-only reliable design techniques can provide an effective and low-cost alternative. This paper presents a novel, architecture-independent software modification tool, SMART (Software Modification Aided transient eRror Tolerance) for effective error detection and tolerance. To detect transient errors in processor data path, control flow and memory at reasonable system overheads, the tool incorporates selective and non-intrusive data duplication and dynamic signature comparison. Also, to mitigate the impact of the detected errors, it facilitates further software modification implementing software-based check-pointing. Due to automatic software-based source-to-source modification tailored to a given reliability requirement, the tool requires no re-design effort and no hardware- or compiler-level intervention. We evaluate the effectiveness of the tool using a Xentium processor based system as a case study of COTS based systems. Using various benchmark applications with a single-event upset (SEU) based error model, we show that up to 91% of the errors can be detected or masked with reasonable performance, energy and memory footprint overheads.",2013,0, 6612,Data Flow Analysis of Software Executed by Unreliable Hardware,"The data flow is a crucial part of software execution in recent applications. It depends on the concrete implementation of the realized algorithm and it influences the correctness of a result in case of hardware faults during the calculation. In logical circuits, like arithmetic operations in a processor system, arbitrary faults will become an increasingly serious issue in the future. With modern manufacturing processes, the probability of such faults will increase and the result of a software's data flow will be more vulnerable. This paper presents a principled evaluation method for the reliability of a software's data flow under arbitrary soft errors, together with the concept of fault compensation. This evaluation is discussed by means of a simple example based on an addition.",2013,0, 6613,Automatic Hard Block Inference on FPGAs,"Modern FPGAs often provide a number of highly optimized hard IP blocks with certain functionalities. However, manually instantiating these blocks is both time-consuming and error-prone, in particular, if only a part of the functionality of the IP block is used. To solve this problem, we developed an algorithm to automatically replace a selected combinational subset of a hardware design with a correct instantiation of a given IP block. Both the IP block and the part of the hardware circuit to be replaced are specified using arithmetic and Boolean operators. Our method is based on higher-order E-unification with an equational theory of arithmetic and Boolean laws. To demonstrate the effectiveness and efficiency of our approach, we present preliminary experiments with various circuits.",2013,0, 6614,Power and Thermal Fault Effect Exploration Framework for Reader/Smart Card Designs,"Power consumption and thermal behavior are important characteristics that need to be explored and evaluated during a product's development cycle. If not handled properly, the consequences are, for example, increased mean-time-to-failure and fatal timing variations of the critical path. 
In the field of contactlessly powered reader/smart card systems, a magnetic field strength exceeding the allowed maximum threshold may harm the smart card's hardware. Thus, secure smart cards must be designed to cope with faults provoked by power oversupply and thermal stress. Proper fault detection and fault handling are imperative tasks to protect internal secrets. However, state-of-the-art design exploration tools cover these smart-card-specific power and thermal stress issues only to some extent. Here we present an innovative high level simulation approach used for exploring and simulating secure reader/smart card systems, focusing on magnetic field oversupply and thermal stress evaluations. Gate-level-based power models are used besides RF-channel models, thermal models, and thermal effect models. Furthermore, fault injection techniques are featured to evaluate the fault resistance of a smart card system's software implementation. This framework grants software and hardware designers a novel opportunity to detect functional, power, thermal, and security issues during design time. We demonstrate the usage of our exploration framework and show an innovative hardware design approach to prolong the lifetime of smart card electronics, which are exposed to high magnetic field strengths.",2013,0, 6615,Real Time Camera Phone Guidance for Compliant Document Image Acquisition without Sight,"Here we present an evaluation of an ideal document acquisition guidance system. Guidance is provided to help someone take a picture of a document that is suitable for Optical Character Recognition (OCR). Our method infers the pose of the camera by detecting a pattern of fiduciary markers on a printed page. The guidance system offers a corrective trajectory based on the current pose, by optimizing the requirements for complete OCR. We evaluate the effectiveness of our software by measuring the quality of the image captured when we vary the experimental setting. After completing a user study with eight participants, we found that our guidance system is effective at helping the user position the phone in such a way that a compliant image is captured. This is based on an evaluation of a one-way analysis of variance comparing the percentage of successful trials in each experimental setting. Negative Helmert Contrast is applied in order to tolerate only one ordering of experimental settings: no guidance (control), just confirmation, and full guidance.",2013,0, 6616,Document Authentication Using Printing Technique Features and Unsupervised Anomaly Detection,"Automatically identifying that a certain page in a set of documents is printed with a different printer than the rest of the documents can give an important clue for a possible forgery attempt. Different printers vary in their produced printing quality, which is especially noticeable at the edges of printed characters. In this paper, a system using the difference in edge roughness to distinguish laser printed pages from inkjet printed pages is presented. Several feature extraction methods have been developed and evaluated for that purpose. In contrast to previous work, this system uses unsupervised anomaly detection to detect documents printed by a different printing technique than the majority of the documents among a set. This approach has the advantage that no prior training using genuine documents has to be done. 
Furthermore, we created a dataset featuring 1200 document images from different domains (invoices, contracts, scientific papers) printed by 7 different inkjet and 13 laser printers. Results show that the presented feature extraction method achieves the best outlier rank score in comparison to state-of-the-art features.",2013,0, 6617,Detecting OOV Names in Arabic Handwritten Data,"This paper presents a novel approach to detect Arabic OOV names from OCR'ed handwritten documents. In our approach, OOV names are searched for using approximate string match on character consensus networks (cnets). The retrieved regions are re-ranked using novel features representing the quality of the match and the likelihood of the detected region to be an OOV name. Our features that encode word boundary information into the approximate match algorithm significantly improve mean average precision (MAP) by 12.2% (absolute gains) for rank cut-off 100 (48.2% vs. 36.0%) and 11.9% for cut-off 1000 (47.0% vs. 35.1%) over the baseline system. Discriminative reranking based on maximum entropy classification using novel features, such as the probability of a retrieved region being an OOV name (called OOV name probability) from a conditional random field model, further improves MAP by 2.3% (absolute gains) for cut-off 100 and 3.0% for cut-off 1000. The improvements are consistent in DET (Detection Error Tradeoff) curves. Our results show that character cnet based OOV name search benefits clearly from the approximate match using word boundary information and the reranking algorithm. Our experiments also show that OOV name probability is very useful for reranking.",2013,0, 6618,A New Method for Discriminating Printers Based on Contours Qualities of Printed Characters Using Wavelet Decomposition,"This article describes a new method for discriminating models of laser printers by means of their printed characters, in particular details in the contours of characters. The method we propose is based on evaluating the quality of the contours of printed characters using wavelet decomposition. Recently, most characters printed by laser printers are originally stored in printers or computers as vector outlines such as Bezier or spline curves. A Raster Image Processor (RIP), implemented as hardware or software in printers or computers, rasterizes the outline into a pattern composed of a vast number of subtle dots. There is a variety of contour types among the printed characters produced by each printer model. In Japan, stalkers have typed their threatening letters using commonly used fonts such as MS Mincho in Japanese and Times or Century in English. Even though the kinds of fonts were known, there was no evidence, since these fonts are installed on almost all computers in Japan. Therefore a new method to discriminate models of printers was desired. Even when the same common fonts were used in threatening letters, subtle differences among the contours of printed characters were observed, since each maker adopts a variety of methods for the rendering and screening that convert outlines into dot patterns. In order to detect the subtle differences among the contours of printed documents, the article utilized wavelet decomposition and a high-resolution (5400 dpi) flatbed image scanner. The article also used a simple method to analyze the results of wavelet decomposition: counting the number of zero-crossing points at each scale of decomposition. 
The results of the experiment showed that the method we proposed was able to detect differences among models of laser printers even when the same common fonts were used, in both Japanese and English.",2013,0, 6619,Fault detection for vehicular ad-hoc wireless networks,"An increasing number of intelligent transportation applications require robust and reliable wireless communication. To achieve the required quality of service it is necessary to implement redundancy in the critical path which includes the radio software and hardware. In a real-world application there are many things that can cause the communication between two vehicles to degrade or stop completely. This paper describes a novel technique for detecting degradation or failure of communication links by comparing the performance of the radios to a probabilistic model built using data collected in the field. The results show that this technique can successfully detect when there is partial or complete failure to communicate due to damage to external components such as antennas, connectors and cables.",2013,0, 6620,"An FPGA-based coded excitation system for ultrasonic imaging using a second-order, one-bit sigma-delta modulator","Coded excitation and pulse compression techniques have been used to improve the echo signal-to-noise ratio (eSNR) in ultrasonic imaging. However, most hardware uses phase modulated codes rather than frequency modulated codes because of ease of implementation. In this study, a technique that converts non-binary frequency modulated codes into binary frequency modulated codes is evaluated. To convert from a non-binary to a binary code, a second-order, one-bit sigma delta modulator is used. This sigma-delta modulated code is generated in MATLAB and then stored in a double data rate synchronous dynamic random access memory. A field programmable gate array, which has access to the memory device, transmits the binary waveform, which is recorded using an oscilloscope. The recorded data is then filtered using the pulse-echo transducer model of a linear array with a center frequency of 8.4 MHz and a fractional bandwidth of 100% at -6 dB. Pulse compression was then performed using a matched filter, a mismatched filter, and a Wiener filter. Image quality metrics, such as modulation transfer function and sidelobe-to-mainlobe ratio, were used to assess compression performance. Overall, echoes compressed when the excitation was the sigma-delta modulated coded waveform showed no measurable difference in axial resolution.",2013,0, 6621,An application of improved synchronous reference frame-based voltage sag detection in voltage sag compensation system,"This paper proposes an application of improved synchronous reference frame (ISRF)-based voltage sag detection in a voltage sag compensation system. The ISRF-based voltage sag detection provides a fast detection time that is suitable for use in a voltage sag compensation system. The proposed voltage sag compensation system consists of an ISRF-based voltage sag detector, static transfer switches (STSs), and an alternative voltage source using an inverter. In normal operation, the load is fed power from the main voltage source via the main STS. When a voltage sag occurs and is detected by the voltage sag detector, the alternative STS connects the load to an alternative voltage source instead of the main STS. Computer software simulations and experiments were performed to investigate and verify the operation of the proposed voltage sag compensation system. 
It can be seen from the results that a short voltage sag detection time is obtained and voltage sags can be compensated with the proposed system.",2013,0, 6622,RECOG: A Sensing-Based Cognitive Radio System with Real-Time Application Support,"While conventional cognitive radio (CR) systems strive to provide the best possible protection for the usage of primary users (PU), little attention has been given to ensuring the quality of service (QoS) of applications of secondary users (SU). When loading real-time applications over such a CR system, we have found that existing spectrum sensing schemes create a major hurdle for real-time traffic delivery of SUs. For example, energy detection based sensing, a widely used technique, requires possibly more than 100 ms to detect a PU with weak signals. The delay is intolerable for real-time applications with stringent QoS requirements, such as voice over internet protocol (VoIP) or live video chat. This delay, along with other delays caused by backup channel searching, channel switching, and possible buffer overflow due to the insertion of sensing periods, makes supporting real-time applications over a CR system very difficult if not impossible. In this paper, we present the design and implementation of a sensing-based CR system - RECOG, which is able to support real-time communications among SUs. We first redesign the conventional sensing scheme. Without increasing the complexity or trading off the detection performance, we break down a long sensing period into a series of shorter blocks, turning a disruptive long delay into negligible short delays. To enhance the sensing capability as well as better protect the QoS of SU traffic, we also incorporate an on-demand sensing scheme based on MAC layer information. In addition, to ensure a fast and reliable switch when a PU returns, we integrate an efficient backup channel scanning and searching component in our system. Finally, to overcome a potential buffer overflow, we propose a CR-aware QoS manager. Our extensive experimental evaluations validate that RECOG can not only support real-time traffic among SUs with high quality, but also improve protections for PUs.",2013,0, 6623,Simulation on quantitative analysis of crack inspection by using eddy current stimulated thermography,"Eddy current (EC) stimulated thermography has been proven to be an emerging integrative nondestructive approach for detecting and characterizing surface and subsurface cracks. In this paper, a numerical simulation study has been conducted to understand EC stimulated thermography for defect inspection on a metallic sample. The transient EC distribution and heating propagation for cracks with different lengths and depths have been investigated. The simulations are carried out by using the AC/DC module of COMSOL Multiphysics software. An image processing technique is proposed to analyze the thermal images obtained during the heating and cooling periods of the inspection process. The proposed approach is shown to be capable of tracking the heat diffusion by processing the images sequentially. 
Understanding the transient EC distribution and heating propagation is fundamental to the quantitative nondestructive evaluation of crack inspection with EC stimulated thermography.",2013,0, 6624,Plate components ultrasonic guide wave detection based on the transducer array,"Aluminum plate components have been widely used in practical industrial applications; because of plate defects, many major accidents have occurred and caused significant economic losses. Therefore, based on the fundamental theory of the guide wave propagating in the aluminum plate, and by solving and analyzing the guide wave's dispersion, the guide wave's excitation method has been obtained from the wave-mode conversion. By using the wavelet to process the testing signal, and introducing the ellipse localization imaging algorithm to identify the defect's orientation, the defect's location and orientation can be detected accurately. Based on the theory and techniques stated above, a multi-channel ultrasonic transducer array detection system including hardware and software has been established, and a series of experiments have been done. The results show that the established multi-channel ultrasonic detection system has a high detection precision in defect localization and orientation detection, up to 98%.",2013,0, 6625,Discrimination of stator winding turn fault and unbalanced supply voltage in permanent magnet synchronous motor using ANN,"The permanent magnet synchronous motor (PMSM) is currently the most attractive electric machine for several industrial applications. It has obtained widespread application in motor drives in recent times. However, different types of faults are unavoidable in such motors. This paper focuses on stator winding fault diagnosis. This paper proposes the ratio of the third harmonic to the fundamental FFT magnitude component of the three-phase stator line current and supply voltage as a parameter for detecting stator winding turn faults under different load conditions, using an artificial neural network (ANN). Discrimination between unbalanced supply voltage conditions and stator turn short circuits poses a challenge that is addressed in this paper. The presented approach yields a high degree of accuracy in fault detection and in discriminating between the effects of stator winding turn faults and those due to unbalanced supply voltages using an artificial neural network. All simulations in this paper are conducted using finite element analysis software.",2013,0, 6626,Towards QoS prediction based on composition structure analysis and probabilistic environment models,"Complex software systems are usually built by composing numerous components, including external services. The quality of service (QoS) is essential for determining the usability of such systems, and depends both on the structure of the composition and on the QoS of its components. Since the QoS of each component is usually determined with uncertainty and varies from one invocation to another, the composite system also exhibits stochastic QoS behavior. We propose an approach for computing probability distributions of the composite system QoS attributes based on known probability distributions of the component QoS attributes and the composition structure. 
The approach is experimentally evaluated using a prototype analyzer tool and a real-world service-based example, by comparing the predicted probability distributions for the composition QoS with the actual distribution of QoS values from repeated executions.",2013,0, 6627,Framework for evaluating reusability of Component-as-a-Service (CaaS),"As a form of service, Component-as-a-Service (CaaS) provides reusable functionality which is subscribed to by and integrated into service-based applications. Hence, the reusability of CaaS is a key factor for its value. This paper proposes a comprehensive reusability evaluation framework for CaaS. We derive a set of CaaS reusability attributes by applying a logical and objective process, and define metrics for key attributes with a focus on theoretical soundness and practical applicability. The proposed reusability evaluation suite is assessed with a case study.",2013,0, 6628,Research on QoS Reliability Prediction in Web Service Composition,"Because of the flexibility of Web services and the dynamic nature of the network, the reliability of QoS is difficult to assure, often causing the Web services selected and invoked by users not to work properly and failing to result in high-performance Web service composition. In service selection for service composition, improving the reliability and performance of composite services requires considering the non-functional factors of services. This paper applies knowledge of probability and statistics to predict Web services' dynamic QoS property values, and proposes an objective evaluation of the credibility of Web services and an improved K-MCSP QoS global optimization algorithm for service composition, in order to improve the reliability of service composition.",2013,0, 6629,A Conceptual Approach for Assessing SOA Design Defects' Impact on Quality Attributes,"This research proposes an approach for assessing the impacts of SOA design defects on SOA quality attributes. Eleven items were selected to measure SOA Design Defects, fourteen items were selected to measure SOA Design Attributes, seventeen items were selected to measure SOA Quality Attributes and eleven items were selected to measure SOA Quality Metrics. This work is an integral part of previous studies in the field.",2013,0, 6630,Comparison of different materials for manufacturing of antialiasing LP filter,"This paper deals with the design, simulation, manufacturing and experimental testing of an antialiasing low pass (LP) filter for an I - Q demodulator that is a part of an Ultra Wide-Band (UWB) sensor system. It focuses on various technological possibilities (mechanical by CNC drilling and chemical by etching) for producing printed circuit boards (PCBs) for antialiasing filters that are used in the UWB area. The paper demonstrates the realization of the LP filter designed in the Filter Solution 2011 software from Nuhertz as well as its electromagnetic (EM) simulation performed with the Ansoft Designer software from ANSYS. It assesses the suitability of the hydrocarbon ceramic laminate RO4003C and the epoxy-glass laminate FR4 for the production of an LP filter suitable for the high frequency (HF) range. The paper refers to the need to focus on the design and construction of passive filters used in I - Q demodulator systems. Emphasis is laid on issues of the quality of transmitted signals in the HF range. It examines the possibility of manufacturing such a filter based on Low Temperature Co-fired Ceramics (LTCC) technology. 
Simulated and measured results of the insertion loss (S21) and return loss (S11) of the LP filter for the I - Q demodulator made from RO4003C and FR4 substrates are presented. The presented filters are intended to be used as an antialiasing LP filter for the I - Q demodulator presented in [1], which is a part of the evaluated UWB sensor system.",2013,0, 6631,Can requirements dependency network be used as early indicator of software integration bugs?,"Complexity, cohesion and coupling have been recognized as prominent indicators of software quality. One characterization of software complexity is the existence of dependency relationships. Moreover, the degree of dependency reflects the cohesion and coupling between software elements. Dependencies in the design and implementation phases have been proven to be important predictors of software bugs. We empirically investigated how requirements dependencies correlate with and predict software integration bugs, which can provide an early estimate of software quality and therefore facilitate decision making early in the software lifecycle. We conducted network analysis on requirements dependency networks of two commercial software projects. We then performed correlation analysis between network measures (e.g., degree, closeness) and the number of bugs. Afterwards, bug prediction models were built using these network measures. Significant correlation is observed between most of our network measures and the number of bugs. These network measures can predict the number of bugs with high accuracy and sensitivity. We further identified the significant predictors for bug prediction. In addition, the indication effect of network measures on the number of bugs varies among different types of requirements dependencies. These observations show that the requirements dependency network can be used as an early indicator of software integration bugs.",2013,0, 6632,An empirical study on project-specific traceability strategies,"Effective requirements traceability supports practitioners in reaching higher project maturity and better product quality. Researchers argue that effective traceability barely happens by chance or through ad-hoc efforts and that traceability should be explicitly defined upfront. However, in a previous study we found that practitioners rarely follow explicit traceability strategies. We were interested in the reason for this discrepancy. Are practitioners able to reach effective traceability without an explicit definition? More specifically, how suitable is requirements traceability that is not strategically planned in supporting a project's development process? Our interview study involved practitioners from 17 companies. These practitioners were familiar with the development process, the existing traceability and the goals of the project they reported about. For each project, we first modeled a traceability strategy based on the gathered information. Second, we examined and modeled the applied software engineering processes of each project. Thereby, we focused on executed tasks, involved actors, and pursued goals. Finally, we analyzed the quality and suitability of a project's traceability strategy. We report common problems across the analyzed traceability strategies and their possible causes. The overall quality and mismatch of analyzed traceability suggests that an upfront-defined traceability strategy is indeed required. 
Furthermore, we show that the decision for or against traceability relations between artifacts requires a detailed understanding of the project's engineering process and goals, emphasizing the need for a goal-oriented procedure to assess existing and define new traceability strategies.",2013,0, 6633,Improving recovery probability of mobile hosts using secure checkpointing,"In this work, we propose a mobility-based secure checkpointing and log-based rollback recovery technique to provide fault tolerance to mobile hosts in an infrastructured wireless/mobile computing system, such as a wireless cellular network. Mobility-based checkpointing limits the number of scattered checkpoints or logs that remain in different mobile support stations due to the movement of mobile hosts. Secure checkpointing using low-overhead elliptic curve cryptography protects checkpoints against security attacks on both nodes and links and restricts access to checkpoint content to the mobile host that owns the checkpoint. Log-based rollback recovery ensures optimized recovery from the last event using determinants saved in logs. A security attack on checkpoints leads to unsuccessful recovery. In the case of a security attack on a checkpoint, the recovery probability of a failed mobile host using the secure checkpointing technique is 1, whereas that of checkpointing without security is less than 1.",2013,0, 6634,Performance evaluation of ip wireless networks using two way active measurement protocol,"With the advent of different kinds of wireless networks and smart phones, cellular network users are provided with various data connectivity options by Network Service Providers (ISPs) abiding by Service Level Agreements, i.e. commitments regarding the Quality of Service (QoS) of the deployed network. Network Performance Metrics (NPMs) are needed to measure network performance and guarantee QoS parameters such as availability, delivery, latency and bandwidth. The Two Way Active Measurement Protocol (TWAMP) is a widely prevalent active measurement approach for measuring two-way network metrics. In this work, a software tool is developed that enables network users to assess network performance. There is a dearth of tools that can measure the performance of wireless networks such as Wi-Fi and 3G. Therefore, a proprietary TWAMP implementation for IPv6 wireless networks on the Android platform, together with indigenous driver development to obtain send/receive timestamps of packets, is proposed to obtain metrics such as round-trip delay, two-way packet loss, jitter, packet reordering, packet duplication and loss patterns. Analysis of the aforementioned metrics indicates the QoS of the wireless network under concern and gives hints as to whether applications with varying QoS profiles, such as VoIP and video streaming, should be run at that instant of time.",2013,0, 6635,An investigation into aliasing in images recaptured from an LCD monitor using a digital camera,"With current technology, high quality recaptured images can be created from soft displays, such as an LCD monitor, using a digital still camera and professional image editing software. The task of verifying the ownership and past history of an image is, consequently, more difficult. One approach to detecting an image that has been recaptured from an LCD monitor is to search for the presence of aliasing due to the sampling of the monitor pixel grid. To validate this approach, an investigation into the aliasing introduced in a digitally recaptured image is conducted. 
An anti-forensic method for recapturing images that are free from aliasing is developed using a model of the image acquisition process. This is supported by a simulation of the acquisition process and illustrated with examples of recaptured images that are free from aliasing.",2013,0, 6636,Feedwater heater system fault diagnosis during dynamic transient process based on two-stage neural networks,"At present, researches on power plant fault diagnosis are mostly for steady-state work conditions and can not well adapt to the load-changing dynamic process, which greatly limits the practical application of a fault diagnosis system. Thus, a transient fault diagnosis approach based on two-stage neural networks was put forward for power plant thermal system fault diagnosis. An Elman recurrent neural network with time-delay inputs was applied to predict the expected normal values of the fault feature variables, and a BP neural network was used to identify the fault types. To improve the diagnostic effect for faults of varying severity under transient conditions, fault symptom zoom optimization technique was also used. Taking the high-pressure feedwater heater system of a 600MW supercritical power unit as the object investigated, the predictive model was built, trained and validated with large amount of historical operating data. The BP network fault diagnosis model was trained with the fault fuzzy knowledge library including typical fault samples. The real-time fault diagnosis program was then developed with MATLAB software. By communicating with the power plant simulator, intensive fault diagnosis tests were carried out. It was shown the suggested method can achieve good diagnosis results for the power plant thermal system under load-varying transient process.",2013,0, 6637,An automated ontology generation technique for an emergent behavior detection system,"Due to the lack of central control in distributed systems, design and implementation of such systems is a challenging task. Interaction of multiple autonomous components can easily result in unwanted behavior in the system. Therefore it is vital to carefully review the design of distributed systems. Manual review of software documents is too inefficient and error prone. It would therefore be beneficial to have a systematic methodology to automatically analyze software requirements and design documents. However automating the process of software analysis is a challenging task because besides the design know-how, each software system requires its own domain knowledge. Existing approaches often require a great deal of input from system engineers familiar with the domain. Such information needs to be interpreted by the designer which is a time-consuming and error prone process. This research suggests the use of a scenario-based approach to represent system requirements. Scenarios are often depicted using message sequence charts (MSCs). Due to their formal notation, MSCs can be used to analyze software requirements in a systematic manner. In an earlier paper, it was demonstrated that ontologies can be used to effectively automate the construction of domain knowledge for the system. However the construction of ontologies remained a challenging task. This paper describes a process which infers ontology from the provided message sequence charts. Furthermore this paper introduces a software tool which automates the process of domain ontology construction. 
This methodology is demonstrated using a case study of a fleet-management software system.",2013,0, 6638,Multi-operator Image Retargeting Based on Automatic Quality Assessment,"Image retargeting aims to avoid visual distortion while retaining important image content in resizing. However, no single image retargeting method can handle all cases. In this paper, we propose a novel multi-operator image retargeting approach, which utilizes an efficient and human perception based automatic quality assessment in operator selection. First, we calculate the importance map and distortion map for quality assessment. Then, we construct the resizing space and assess the performance of each operator in iterative width and/or height reduction. Finally, we select the optimal operator sequence by dynamic programming and generate the target image. Experiments demonstrate the effectiveness of the proposed approach.",2013,0, 6639,Ontology of architectural decisions supporting ATAM based assessment of SOA architectures,"Nowadays, Service Oriented Architecture (SOA) might be treated as a state of the art approach to the design and implementation of enterprise software. Contemporary software developed according to SOA paradigm is a complex structure, often integrating various platforms, technologies, products and design patterns. Hence, it arises a problem of early evaluation of a software architecture to detect design flaws that might compromise expected system qualities. Such assessment requires extensive knowledge gathering information on various types of architectural decisions, their relations and influences on quality attributes. In this paper we describe SOAROAD (SOA Related Ontology of Architectural Decisions), which was developed to support the evaluation of architectures of information systems using SOA technologies. The main goal of the ontology is to provide constructs for documenting SOA. However, it is designed to support future reasoning about architecture quality and for building a common knowledge base. When building the ontology we focused on the requirements of Architecture Tradeoff Analysis Method (ATAM) which was chosen as a reference methodology of architecture evaluation.",2013,0, 6640,Object-oriented approach to Timed Colored Petri Net simulation,This paper presents object-oriented design of library meant for modeling and simulating Timed Colored Petri Net models. The approach is prepared to integrate TCPN models with crucial parts of larger applications implemented in object-oriented languages. The formal models can be tightly joined with applications allowing the latter to interpret states of the formal model in their domain of responsibility. This approach allows less error-prone and more pervasive use of formal methods to improve quality of software created with imperative languages.,2013,0, 6641,Safety analysis of Autonomous Ground Vehicle optical systems: Bayesian belief networks approach,"Autonomous Ground Vehicles (AGV) require diverse sensor systems to support the navigation and sense-and-avoid tasks. Two of these systems are discussed in the paper: dual camera-based computer vision (CV) and laser-based detection and ranging (LIDAR). Reliable operation of these optical systems is critical to safety since potential faults or failures could result in mishaps leading to loss of life and property. The paper identifies basic hazards and, using fault tree analysis, the causes and effects of these hazards as related to LIDAR and CV systems. 
A Bayesian Belief Network approach (BN) supported by automated tool is subsequently used to obtain quantitative probabilistic estimation of system safety.",2013,0, 6642,Design and implementation of a frequency-aware wireless video communication system,"In an orthogonal frequency division multiplexing (OFDM) communication system, data bits carried by each subcarrier are not delivered at an equal error probability due to the effect of multipath fading. The effect can be exploited to provide unequal error protections (UEP) to wireless data by carefully mapping bits into subcarriers. Previous works have shown that this frequency-aware approach can improve the throughput of wireless data delivery significantly over conventional frequency-oblivious approaches. We are inspired to explore the frequency-aware approach to improve the quality of wireless streaming, where video frames are naturally not of equal importance. In this work, we present FAVICS, a Frequency-Aware Video Communication System. In particular, we propose three techniques in FAVICS to harvest the frequency-diversity gain. First, FAVICS employs a searching algorithm to identify and provide reliable subcarrier information from a receiver to the transmitter. It effectively reduces the channel feedback overhead and decreases the network latency. Second, FAVICS uses a series of special bit manipulations at the MAC layer to counter the effects that alter the bits-to-subcarrier mapping at the PHY layer. In this way, FAVICS does not require any modifications to wireless PHY and can benefit existing wireless systems immediately. Third, FAVICS adopts a greedy algorithm to jointly deal with channel dynamics and frequency diversity, and thus can further improve the system performance. We prototype an end-to-end system on a software defined radio (SDR) platform that can stream video real-time over wireless medium. Our extensive experiments across a range of wireless scenarios demonstrate that FAVICS can improve the PSNR of video streaming by 5~10 dB.",2013,0, 6643,Sub-carrier Switch Off in OFDM-based wireless local area networks,"OFDM based wireless communication systems split the available frequency band into so-called sub-carriers, and data is transmitted on each of these sub-carriers in parallel. With frequency selective fading, sub-carriers may experience different channel qualities. Thus, choosing a different modulation and coding scheme (MCS) per sub-carrier improves performance. However, this comes at an increase in transceiver complexity and no current wireless system adapts the MCS at such a fine granularity. Some OFDMA based systems such as LTE allow to adapt the MCS per user, whereas wireless local area networks as specified by IEEE 802.11 use the same MCS on every sub-carrier. The performance of such wireless systems that use a single MCS in a frequency selective fading channel can be significantly improved through Sub-Carrier Switch Off (SSO), a simple but powerful alternative to adaptive MCS. SSO deactivates weak sub-carriers that excessively raise the error probability to improve the overall throughput. In this paper, we implement and test SSO in a software-defined radio testbed based on the Wireless Open Access Research Platform (WARP). We present a novel light-weight method for selecting the sub-carriers to be switched off based on the per-sub-carrier channel quality. 
The results we obtain from our measurements indicate that throughput increases of up to 250% are possible and thus SSO is a highly promising and very low complexity mechanism for future wireless local area networks.",2013,0, 6644,Using cloud computing to enhance automatic test equipment testing and maintenance capabilities,"The purpose of this paper is to present a conceptual approach and to make practical recommendations on how to improve the current Automatic Test Equipment (ATE) testing and maintenance capabilities by utilizing the existing cloud computing model to build a globally linked ATE maintenance system. The basic tenet of the ATE community is to support a multi-tiered maintenance concept which, in general, is a three tiered system that is composed of organizational maintenance (O-level), intermediate maintenance (I-level), and depot maintenance (D-level) organizations. The goal of the ATE is to (1) quickly and accurately detect and isolate each fault, (2) provide software tools for analyzing historical data, and (3) gather, manage, and distribute accurate and reliable maintenance information for the failed Unit Under Test (UUT). The ATE system should provide services that will (1) maintain a repository of information that will improve fault detection and isolation, allow for off-platform assessments, document failures, and help quantify corrective actions, (2) reduce false UUT pulls, and (3) reduce repair time by prompting repair procedures. Furthermore, the ATE system should provide additional services that will help optimize the time to diagnose problems by using collected failure information and by recommending entry points into the Test Program Set (TPS) software. It should also present information to the ATE maintainer to aid in informed repair decisions which could be in the form of pilot debrief results, platform Built In Test (BIT) results, O-level test outcomes and corrective actions, and maintenance and usage history of the platform and UUT. So, based on this definition of ATE maintenance the use of cloud computing can be used to provide services to improve the overall ATE testing throughput which will result in bottom line improvements to ATE life cycle costs. By using cloud computing, which is defined to be a model for enabling ubiquitous, convenient, - n-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction, users can develop cloud computing models that will provide access to application software and databases that can be used to build a globally linked ATE maintenance system. This paper will discuss the essential characteristics of the cloud computing models and define the various flavors of cloud offerings available to designers today. This paper will also analyze the cloud computing model to arrive at a conceptual approach that can be used to enhance the current ATE Testing and Maintenance capabilities. 
Practical recommendations will be discussed on how to transform the current ATE Testing and Maintenance capabilities into the specific cloud computing model offerings in order to help configure a globally linked ATE maintenance system.",2013,0, 6645,What are we able to do with test data or using LabVIEW to hone in on actual cause of failures,"As systems/circuits degrade or high failure performance trends occur over time, there is an increased probability of predicting with reasonable confidence when a given assembly or component is likely to experience an incipient fault or cause a mission failure. Also, performance trends can be applied to algorithms to enhance the testing process. However, profound predictions can only be made accurately when they are based on true test data. Using LabVIEW is an excellent way to process actual failure history and to display the results.",2013,0, 6646,Testability verification based on sequential probability ratio test method,"Testability plays an important role in the readiness of equipment, as a good design for testability (DFT) can greatly decrease the fault detection and isolation time, which accelerates maintenance actions. Testability verification is a procedure to check whether testability indexes such as the fault detection rate (FDR) and fault isolation rate (FIR) meet the requirements in the contract. Currently, the standards and statistical methods used in testability verification suffer from problems such as large sample sizes and long test periods. The sequential probability ratio test (SPRT) method can decrease the test sample size while maintaining almost the same operating characteristic as the classical method based on the binomial distribution. The SPRT method and its truncation rules are introduced, and the spectrum of the expected number of tests is proposed. Then, the sample size allocation method and the failure-rate-based failure mode selection method used in sequential testability verification are illustrated. Testability verification of a control system is implemented with the given method and steps. Software named the testability demonstration and evaluation system (TDES), which can calculate the decision criteria, plot the decision chart, select failure modes and make judgments, is used to assist the test. The result shows that the test sample size is remarkably decreased compared with the classical method.",2013,0, 6647,Reducing test program costs through ATML-based requirements conversion and code generation,"Most military and aerospace organizations maintain their test requirements as paper-like forms stored electronically. When test programs need to be created or modified, these documents are often manually referenced, which can be an inefficient and error-prone process. Additionally, because modifications to test program code are sometimes made without updating the corresponding requirements, implementation and documentation tend to diverge as projects evolve, which has an adverse effect on the long-term maintainability of Test Program Sets (TPSs). In the past, the lack of an industry-standard data format for test requirements has imposed limitations on the traceability between test results and test specifications. Previous attempts at automating the conversion of analog and mixed-signal test requirements into test programs produced proprietary solutions with limited adoption. 
In this paper, we describe an innovative process in which multiple software applications interact through a standard XML format that conforms to IEEE Std 1671.1 Automatic Test Markup Language (ATML) Test Description. The process uses automated test data conversion and code generation to facilitate the initial creation and long-term maintenance of test programs.",2013,0, 6648,Multi-stratum resource integration for OpenFlow-based data center interconnect [invited],"Nowadays, most service providers offer their services and support their applications through federated sets of data centers that need to be interconnected using high-capacity telecom transport networks. To provide such high-capacity network channels, data center interconnection is typically based on IP and optical transport networks that ensure certain end-to-end connectivity performance guarantees. However, in the current mode of operation, the control of IP networks, optical networks, and data centers is separately deployed. Enabling even a limited interworking among these separated control systems requires the adoption of complex and inelastic interfaces among the various networks, and this solution is not efficient enough to provide the required quality of service. In this paper, we propose a multi-stratum resource integration (MSRI) architecture for OpenFlow-based data center interconnection using IP and optical transport networks. The control of the architecture is implemented through multiple OpenFlow controllers' cooperation. By exchanging information among multiple controllers, the MSRI can effectively overcome the interworking limitations of a multi-stratum architecture, enable joint optimization of data center and network resources, and enhance the data center responsiveness to end-to-end service demands. Additionally, a service-aware flow estimation strategy for MSRI is introduced based on the proposed architecture. The overall feasibility and efficiency of the proposed architecture are experimentally demonstrated on our optical as a service testbed in terms of blocking probability, resource occupation rate, and path provisioning latency.",2013,0, 6649,A generic framework for executable gestural interaction models,"Integrating new input devices and their associated interaction techniques into interactive applications has always been challenging and time-consuming, due to the learning curve and technical complexity involved. Modeling devices, interactions and applications helps reducing the accidental complexity. Visual modeling languages can hide an important part of the technical aspects involved in the development process, thus allowing a faster and less error-prone development process. However, even with the help of modeling, a gap remains to be bridged in order to go from models to the actual implementation of the interactive application. In this paper we use ICO, a visual formalism based on high-level Petri nets, to develop a generic layered framework for specifying executable models of interaction using gestural input devices. By way of the CASE tool Petshop we demonstrate the framework's feasibility to handle the Kinect and gesture-based interaction techniques. We validate the approach through two case studies that illustrate how to use executable, reusable and extensible ICO models to develop gesture-based applications.",2013,0, 6650,Visualization of fine-grained code change history,"Conventional version control systems save code changes at each check-in. 
Recently, some development environments retain more fine-grain changes. However, providing tools for developers to use those histories is not a trivial task, due to the difficulties in visualizing the history. We present two visualizations of fine-grained code change history, which actively interact with the code editor: a timeline visualization, and a code history diff view. Our timeline and filtering options allow developers to navigate through the history and easily focus on the information they need. The code history diff view shows the history of any particular code fragment, allowing developers to move through the history simply by dragging the marker back and forth through the timeline to instantly see the code that was in the snippet at any point in the past. We augment the usefulness of these visualizations with richer editor commands including selective undo and search, which are all implemented in an Eclipse plug-in called Azurite. Azurite helps developers with answering common questions developers ask about the code change history that have been identified by prior research. In addition, many of users' backtracking tasks can be achieved using Azurite, which would be tedious or error-prone otherwise.",2013,0, 6651,A resource-efficient probabilistic fault simulator,"The reduction of CMOS structures into the nanometer regime, as well as the high demand for low-power applications, animating to further reduce the supply voltages towards the threshold, results in an increased susceptibility of integrated circuits to soft errors. Hence, circuit reliability has become a major concern in today's VLSI design process. A new approach to further support these trends is to relax the reliability requirements of a circuit, while ensuring that the functionality of the circuit remains unaffected, or effects remain unnoticed by the user. To realize such an approach it is necessary to determine the probability of an error at the output of a circuit, given an error probability distribution at the circuits' elements. Purely software-based simulation approaches are unsuitable due to the large simulation times. Hardware-accelerated approaches exist, but lack the ability to inject errors based on probabilities, are slow or have a large area overhead. In this paper we propose a novel approach for FPGA-based, probabilistic, circuit fault simulation. The proposed system is a mainly hardware-based, which makes the simulation fast, but also keeps the hardware overhead on the FPGA low by exploiting FPGA specific features.",2013,0, 6652,A probabilistic verification framework of SysML activity diagrams,"SysML activity diagrams are OMG/INCOSE standard used for modeling and analyzing probabilistic systems. In this paper, we propose a formal verification framework that is based on PRISM probabilistic symbolic model checker to verify the correctness of these diagrams. To this end, we present an efficient algorithm that transforms a composition of SysML activity diagrams to an equivalent probabilistic automata encoded in PRISM input language. To clarify the quality of our verification framework, we formalize both SysML activity diagrams and PRISM input language. 
Finally, we demonstrate the effectiveness of our approach by presenting a case study.",2013,0, 6653,A persistent data storage design for real-time interactive applications,"Real-time Online Interactive Applications (ROIA) like multiplayer online games usually work in a persistent environment (also called virtual world) which continues to exist and evolve also while the user is offline and away from the application. This paper deals with storing persistent data of real-time interactive applications in modern relational databases. We describe a preliminary design of the Entity Persistence Module (EPM) middleware which liberates the application developer from writing and maintaining complex and error-prone, application-specific code for persistent data management.",2013,0, 6654,Inter-domain QoS in dynamic circuit network,"A dynamic circuit network (DCN) provides an advance bandwidth reservation (ABR) service across multiple administrative domains. The On-demand Secure Circuits and Advance Reservation System (OSCARS) is a DCN controller deployed in many networks, for example, the Energy Sciences Network (ESnet) and Internet2 in USA, Rede Nacional de Ensino e Pesquisa (RNP) in Brazil, and JGN-X (a new generation network testbed) in Japan. This paper proposes inter-domain QoS scenarios and an extension of a new path computation element (PCE) with Attribute-Mapping for quality of services (QoS) differentiation in OSCARS. Our modified OSCARS performs a QoS mechanism in which network resources are reserved for high priority requests to ensure low request blocking probabilities (RBPs). Our proposal differentiates among requests for both intra- and inter-domain communications.",2013,0, 6655,Analysis and control of DC voltage ripple for modular multilevel converters under single line to ground fault,"This paper deals with DC voltage ripple suppression of the modular multilevel converter (MMC) under single-line-to-ground (SLG) fault condition. First, the instantaneous power of a phase unit is derived theoretically according to the equivalent circuit model of the MMC under unbalanced condition, providing a mathematical explanation of the double-line frequency ripple contained in the dc voltage. Moreover, different characteristics of phase current during three possible SLG faults are analyzed and compared. Based on the derivation and analysis, a quasi-PR controller is proposed to suppress the dc voltage ripple. The proposed controller, combining with the negative and/or zero sequence current controllers, could enhance the overall fault-tolerant capability of the MMC under different types of SLG faults. In addition, no extra cost will be introduced given that only DC voltage is required to be detected. Simulation results from a three-phase MMC based rectifier system generated with the Matlab/Simulink software are provided to support the theoretical considerations.",2013,0, 6656,Criticality of defects in cyclic dependent components,"(Background) Software defects that most likely will turn into system and/or business failures are termed critical by most stakeholders. Thus, having some warnings of the most probable location of such critical defects in a software system is crucial. Software complexity (e.g. coupling) has long been established to be associated with the number of defects. However, what is really challenging is not in the number but identifying the most severe defects that impact reliability. 
(Research Goal) Do cyclically related components account for a clear majority of the critical defects in software systems? (Approach) We empirically evaluated two non-trivial systems: a commercial Smart Grid system developed in C# and an open source messaging and integration pattern server developed in Java. Using cycle metrics, we divided the components into cyclic-related and non-cyclic-related groups. Lastly, we evaluated the statistical significance of critical defects and severe defect-prone components (SDCs) in both groups. (Results) In these two systems, the results demonstrated convincingly that components in cyclic relationships account for a significant share of defects and for the most critical defects and SDCs. (Discussion and Conclusion) We further identified a segment of a system with cyclic complexity that contains almost all of the critical defects and SDCs that impact the system's reliability. Such critical defects and the affected components should be the focus of increased testing and refactoring.",2013,0, 6657,Proteum/FL: A tool for localizing faults using mutation analysis,"Fault diagnosis is the process of analyzing programs with the aim of identifying the code fragments that are faulty. It has been identified as one of the most expensive and time consuming tasks of software development. Even worse, this activity is usually accomplished through manual analysis. To this end, automatic or semi-automatic fault diagnosis approaches are useful in assisting software developers. Hence, they can play an essential role in decreasing the overall development cost. This paper presents Proteum/FL, a mutation analysis tool for diagnosing previously detected faults. Given an ANSI-C program and a set of test cases, Proteum/FL returns a list of program statements ranked according to their likelihood of being faulty. The tool differs from other mutation analysis and fault diagnosis tools by employing mutation analysis as a means of diagnosing program faults. It therefore demonstrates the effective use of mutation in supporting both testing and debugging activities.",2013,0, 6658,Fix-it: An extensible code auto-fix component in Review Bot,"Coding standard violations, defect patterns and non-conformance to best practices are abundant in checked-in source code. This often leads to unmaintainable code and potential bugs in later stages of the software life cycle. It is important to detect and correct these issues early in the development cycle, when they are less expensive to fix. Even though static analysis techniques such as tool-assisted code review are effective in addressing this problem, a significant amount of human effort is involved in identifying and fixing source code issues. Review Bot is a tool designed to reduce the human effort and improve the quality of code reviews by generating automatic reviews using static analysis output. In this paper, we propose an extension to Review Bot: the addition of a component called Fix-it for the auto-correction of various source code issues using Abstract Syntax Tree (AST) transformations. Fix-it uses built-in fixes to automatically fix various issues reported by the auto-reviewer component in Review Bot, thereby further reducing the human effort. Fix-it is designed to be highly extensible: users can add support for the detection of new defect patterns using XPath or XQuery and provide fixes for them based on AST transformations written in a high-level programming language. 
It allows the user to treat the AST as a DOM tree and run XQuery UPDATE expressions to perform AST transformations as part of a fix. Fix-it also includes a designer application which enables Review Bot administrators to design new defect patterns and fixes. The developer feedback on a stand-alone prototype indicates the possibility of significant human effort reduction in code reviews using Fix-it.",2013,0, 6659,Differential Debugging,"Phillip G. Armour responds to """"Differential Debugging"""" in the Tools of the Trade column September/October issue of IEEE Software to discuss the process of predicting defects.",2013,0, 6660,Service Matching under Consideration of Explicitly Specified Service Variants,"One of the main ideas of Service-Oriented Computing (SOC) is the delivery of flexibly composable services provided on world-wide markets. For a successful service discovery, service requests have to be matched with the available service offers. However, in a situation in which no service that completely matches the request can be discovered, the customer may tolerate slight discrepancies between request and offer. Some existing fuzzy matching approaches are able to detect such service variants, but they do not allow to explicitly specify which parts of a request are not mandatory. In this paper, we improve an existing service matching approach based on Visual Contracts leveraging our preliminary work of design pattern detection. Thereby, we support explicit specifications of service variants and realize gradual matching results that can be ranked in order to discover the service offer that matches a customer's request best.",2013,0, 6661,Reliable Service Composition via Automatic QoS Prediction,"Service composition has received considerable attention nowadays as a key technology to deliver desired business logics by directly aggregating existing Web services. Considering the dynamic and autonomous nature of Web services, building high-quality software systems by composing third-party services faces novel challenges. As a solution, new techniques have been recently developed to automatically predict the QoS of services in a future time and the prediction result will facilitate in selecting individual services. Nonetheless, limited effort has been devoted to QoS prediction for service composition. To fill out this technical gap, we propose a novel model in this paper that integrates QoS prediction with service composition. The integrated model will lead to a composition result that is not only able to fulfill user requirement during the composition time but also expected to maintain the desired QoS in future. As user requirement is expected to be satisfied by the composition result for a long period of time, significant effort can be reduced for re-composing newly selected services, which usually incurs high cost. We conduct experiments on both real and synthetic QoS datasets to demonstrate the effectiveness of the proposed approach.",2013,0, 6662,AESON: A Model-Driven and Fault Tolerant Composite Deployment Runtime for IaaS Clouds,"Infrastructure-as-a-Service (IaaS) cloud environments expose to users the infrastructure of a data center while relieving them from the burden and costs associated with its management and maintenance. IaaS clouds provide an interface by means of which users can create, configure, and control a set of virtual machines that will typically host a composite software service. 
Given the increasing popularity of this computing paradigm, previous work has focused on modeling composite software services to automate their deployment in IaaS clouds. This work is concerned with the runtime state of composite services during and after deployment. We propose AESON, a deployment runtime that automatically detects node (virtual machine) failures and eventually brings the composite service to the desired deployment state by using information describing relationships between the service components. We have designed AESON as a decentralized peer-to-peer publish/subscribe system leveraging IBM's Bulletin Board (BB), a topic-based distributed shared memory service built on top of an overlay network.",2013,0, 6663,Generalized Logit Regression-Based Software Reliability Modeling with Metrics Data,"It is well known that multifactor software reliability modeling with software metrics data is useful to predict the software reliability with higher accuracy, because it utilizes not only software fault count data but also software testing metrics data observed in the development process. In this paper we extend the existing logit regression-based software reliability model by introducing more generalized logistic type functions and improve the goodness-of-fit and predictive performances. In numerical examples with real software development project data, it is shown that our generalized models can outperform the existing logit regression-based model and the Cox regression-based model significantly.",2013,0, 6664,Expose: Discovering Potential Binary Code Re-use,"The use of third-party libraries in deployed applications can potentially put an organization's intellectual property at risk due to licensing restrictions requiring disclosure or distribution of the resulting software. Binary applications that are statically linked to buggy version(s) of a library can also provide malware with entry points into an organization. While many organizations have policies to restrict the use of third-party software in applications, determining whether an application uses a restricted library can be difficult when it is distributed as binary code. Compiler optimizations, function inlining, and lack of symbols in binary code make the task challenging for automated techniques. On the other hand, semantic analysis techniques are relatively slow. Given a library and a set of binary applications, we propose Expose, a tool that combines symbolic execution using a theorem prover, and function-level syntactic matching techniques to achieve both performance and high quality rankings of applications. Higher rankings indicate a higher likelihood of re-using the library's code. Expose ranked applications that used two libraries at or near the top, out of 2,927 and 128 applications respectively. Expose detected one application that was not detected by another scanner to use some functions in one of the libraries. In addition, Expose ranked applications correctly for different versions of a library, and when different compiler options were used. Expose analyzed 97.68% and 99.48% of the applications within five and 10 minutes respectively.",2013,0, 6665,On the Gain of Measuring Test Case Prioritization,"Test case prioritization (TCP) techniques aim to schedule the order of regression test suite to maximize some properties, such as early fault detection. 
In order to measure the abilities of different TCP techniques for early fault detection, a metric named average percentage of faults detected (APFD) is widely adopted. In this paper, we analyze the APFD metric and explore the gain of measuring TCP techniques from a control theory viewpoint. Based on that, we propose a generalized metric for TCP. This new metric focuses on the gain of defining early fault detection and measuring TCP techniques for various needs in different evaluation scenarios. By adopting this new metric, not only can flexibility be guaranteed, but explicit physical significance for the metric is also provided before evaluation.",2013,0, 6666,Isolating and Understanding Program Errors Using Probabilistic Dispute Model,"Automated software debugging can have a significant impact on the cost and quality of software development and maintenance. In recent years, researchers have invested a considerable amount of effort in developing automated techniques, and have demonstrated their effectiveness in helping developers in certain debugging tasks by pinpointing faulty statements. But there is still a gap between examining a faulty statement and understanding the root causes of the corresponding bug. As a step in this direction, we believe good developers have defensive programming in mind and that software debugging is a process in search of arguments about why a statement is faulty. Therefore, the fault localization problem is rephrased as a dispute game between statements involved in successful runs and failing runs. A statement is OK if it can always provide arguments against others' blames, whereas a less defensive statement is thought to be faulty. To do so, we propose a probabilistic dispute graph which is built upon dynamic dependencies between statements and statistics of program runs. Using such a graph, we put executed statements in dispute, compute acceptable statements, and thus figure out faulty statements if they do not have strong arguments for their correctness. For empirical purposes, we carry out experiments on the well-known Siemens benchmark, and conclude that our approach not only casts new light on the causes of bugs in various cases, but is also statistically more effective in fault localization than competitors like Tarantula, SOBER, CT and PPDG.",2013,0, 6667,SQAF-DS: A Software Quality Assessment Framework for Dependable Systems,"This paper proposes a software quality assessment framework for dependable systems (SQAF-DS), providing a systematic way to assess software quality indirectly through test cases. SQAF-DS intends to reduce the time and cost of dependability assessment through the use of test cases as a means of assessment. Test cases are developed in the process of software development and used to test the target system, while dependability requirements are derived from dependability analysis, such as FTA (Fault Tree Analysis). SQAF-DS formally checks the inclusion relation between dependability requirements and test cases. If the formal checking succeeds, then we can assure that the dependability requirements are well implemented in the software system.",2013,0, 6668,Empirical Effectiveness Evaluation of Spectra-Based Fault Localization on Automated Program Repair,"Researchers have proposed many spectra-based fault localization (SBFL) techniques in the past decades. 
Existing studies evaluate the effectiveness of these techniques from the viewpoint of developers, and have drawn some important conclusions through either empirical study or theoretical analysis. In this paper, we present the first study on the effectiveness of SBFL techniques from the viewpoint of fully automated debugging including the program repair of automation, for which the activity of automated fault localization is necessary. We assess the accuracy of fault localization according to the repair effectiveness in the automated repair process guided by the localization technique. Our experiment on 14 popular SBFL techniques with 11 subject programs shipping with real-life field failures presents the evidence that some conclusions drawn in prior studies do not hold in our experiment. Based on experimental results, we suggest that Jaccard should be used with high priority before some more effective SBFL techniques specially proposed for automated program repair occur in the future.",2013,0, 6669,Using HTML5 visualizations in software fault localization,"Testing and debugging is the most expensive, error-prone phase in the software development life cycle. Automated software fault localization can drastically improve the efficiency of this phase, thus improving the overall quality of the software. Amongst the most well-known techniques, due to its efficiency and effectiveness, is spectrum-based fault localization. In this paper, we propose three dynamic graphical forms using HTML5 to display the diagnostic reports yielded by spectrum-based fault localization. The visualizations proposed, namely Sunburst, Vertical Partition, and Bubble Hierarchy, have been implemented within the GZOLTAR toolset, replacing previous and less-intuitive OpenGL-based visualizations. The GZOLTAR toolset is a plug-and-play plugin for the Eclipse IDE to ease world-wide adoption. Finally, we performed an user study with GZOLTAR and confirmed that the visualizations help to drastically reduce the time needed in debugging (e.g., all participants using the visualizations were able to pinpoint the fault, whereas of those using traditional methods only 35% found the fault). The group that used the visualizations took on average 9 minutes and 17 seconds less than the group that did not use them.",2013,0, 6670,Model Checking Stencil Computations Written in a Partitioned Global Address Space Language,"This paper proposes an approach to software model checking of stencil computations written in partitioned global address space (PGAS)languages. Although a stencil computation offers a simple and powerful programming style, it becomes error prone when considering optimization and parallelization. In the proposed approach, the state explosion problem associated with model checking (that is, where the number of states to be explored increases dramatically) is avoided by introducing abstractions suitable for stencil computation. In addition, this paper also describes XMP-SPIN, our model checker for XcalableMP (XMP), a PGAS language that provides support for implementing parallelized stencil computations. One distinguishing feature of XMP-SPIN is that users are able to define their own abstractions in a simple and flexible way. The proposed abstractions are implemented as user-defined abstractions. This paper also presents experimental results for model checking stencil computations using XMP-SPIN. 
The results demonstrate the effectiveness and practicality of the proposed approach and XMP-SPIN.",2013,0, 6671,Dauphin: A new statistical signal processing language,"Many software packages support scientific research by means of numerical calculations and specialised library calls, but very few support specific application domains such as signal processing at the symbolic level or at problem formulation. Translation from the natural domain-specific structure of problem description to the computer formulation is often a time consuming and error-prone exercise. As signal processing becomes more sophisticated, there is a need to codify its basic tools, thus allowing the researcher to spend more time on the challenges specific to a particular application. In this paper, we describe the design of Dauphin, a domain-specific programming language. Dauphin ultimately aims to extend the power of signal processing researchers by allowing them to focus on their research problems while simplifying the process of implementing their ideas. In Dauphin, the basic algorithms of signal processing become the standard function calls and are expressed naturally in terms of predefined signal processing primitives such as random variables and probability distributions.",2013,0, 6672,An efficient and intelligent model to control driving offenses by using cloud computing concepts based on road transportation situation in Malaysia,"Information Technology (IT) has had undeniable effects on various industries in the recent years whereby road transportation industry and control services over vehicles and drivers are not apart from these effects. One of the most potential and new IT technologies that has not been considered and focused significantly in road transportation industry is cloud computing. This paper proposes an efficient and intelligence model for control driving offenses by using three main technologies in IT industry: Image Processing, Artificial Intelligence, and Cloud Computing. In the proposed model, Vertical-Edge Detection Algorithm (VEDA) was used for car license plate detection process in highways to provide an efficient image processing process with low quality images that were taken from installed cameras. Furthermore, two intelligence cloud-based Software-as-a-Service applications were used for car license plate detection, matching violations detected numbers with entrance detected numbers, and identification of possible exit routes for further processes. In addition, the suggested model contains a cloud server for storing databases and violation records which make them always accessible according to cloud computing concepts. The theoretical analysis of the proposed model was done according to three main parameters: efficiency, intelligence, and compatibility, and showed that Cloud-based Driving Offenses Control (CDOC) algorithm might be effective for providing an efficient method to control driving offenses and decreasing the rate of violations at highways.",2013,0, 6673,CFEDR: Control-flow error detection and recovery using encoded signatures monitoring,"The incorporation of error detection and recovery mechanisms becomes mandatory as the probability of the occurrence of transient faults increases. The detection of control flow errors has been extensively investigated in literature. However, only few works have been conducted towards recovery from control-flow errors. Generally, a program is re-executed after error detection. 
Although re-execution prevents faults from corrupting data, it does not allow the application to run to completion correctly in the presence of an error. Moreover, the overhead of re-execution increases prominently. The current study presents a pure-software method based on encoded signatures to recover from control-flow errors. Unlike general signature monitoring techniques, the proposed method targets not only interblock transitions, but also intrablock and inter-function transitions. After detecting the illegal transition, the program flow transfers back to the block where the error occurred, and the data errors caused by the error propagation are recovered. Fault injection and performance overhead experiments are performed to evaluate the proposed method. The experimental results show that most control flow errors can be recovered with relatively low performance overhead.",2013,0, 6674,DaemonGuard: O/S-assisted selective software-based Self-Testing for multi-core systems,"As technology scales deep into the sub-micron regime, transistors become less reliable. Future systems are widely predicted to suffer from considerable aging and wear-out effects. This ominous threat has urged system designers to develop effective run-time testing methodologies that can monitor and assess the system's health. In this work, we investigate the potential of online software-based functional testing at the granularity of individual microprocessor core components in multi-core systems. While existing techniques monolithically test the entire core, our approach aims to reduce testing time by avoiding the over-testing of under-utilized units. To facilitate fine-grained testing, we introduce DaemonGuard, a framework that enables the real-time observation of individual sub-core modules and performs on-demand selective testing of only the modules that have recently been stressed. The monitoring and test-initiation process is orchestrated by a transparent, minimally-intrusive, and lightweight operating system process that observes the utilization of individual datapath components at run-time. We perform a series of experiments using a full-system, execution-driven simulation framework running a commodity operating system, real multi-threaded workloads, and test programs. Our results indicate that operating-system-assisted selective testing at the sub-core level leads to substantial savings in testing time and very low impact on system performance.",2013,0, 6675,Approximate simulation of circuits with probabilistic behavior,"Various emerging technologies promise advantages with respect to integration density, performance or power consumption, at the cost of approximate or probabilistic behavior. Approximate computing, where limited computational inaccuracies are tolerated at the system or application level is therefore of increasing interest. This paper investigates the use of stochastic computing (SC) as a tool for approximate simulation of probabilistic behavior. SC has the advantage of processing probabilities directly at very low hardware cost. It also allows accuracy to be traded for run-time in a natural way (progressive precision). AS a target technology to be simulated, we choose quantum computing circuits, whose behavior is inherently probabilistic and cannot be efficiently simulated by conventional (classical) means. We show how complex operations such as superposition and entanglement can be handled by SC. 
Finally, we report experimental results on software-based simulation of representative quantum circuits, both stand-alone and FPGA-supported. The results show that the SC implementations are orders of magnitude more compact than those based on classical circuits. Accurate results may require very long simulation runs, but run-times can be reduced by exploiting SC's progressive precision property.",2013,0, 6676,Shielding heterogeneous MPSoCs from untrustworthy 3PIPs through security-driven task scheduling,"Outsourcing of the various aspects of IC design and fabrication flow strongly questions the classic assumption that hardware is trustworthy. Multiprocessor System-on-Chip (MPSoC) platforms face some of the most demanding security concerns, as they process, store, and communicate sensitive information using third-party intellectual property (3PIP) cores that may be untrustworthy. The complexity of an MPSoC makes it expensive and time consuming to fully analyze and test it during the design stage. Consequently, the trustworthiness of the 3PIP components cannot be ensured. To protect MPSoCs against malicious modifications, we propose to incorporate trojan toleration into MPSoC platforms by revising the task scheduling step of the MPSoC design process. We impose a set of security-driven diversity constraints into the scheduling process, enabling the system to detect the presence of malicious modifications or to mute their effects during application execution. Furthermore, we pose the security-constrained MPSoC task scheduling as a multi-dimensional optimization problem, and propose a set of heuristics to ensure that the introduced security constraints can be fulfilled with minimum performance and hardware overhead.",2013,0, 6677,Exploiting error control approaches for Hardware Trojans on Network-on-Chip links,"We exploit transient and permanent error control methods to address Hardware Trojan (HT) issues in Network-on-Chip (NoC) links. The use of hardware-efficient error control methods on NoC links has the potential to reduce the overall hardware cost for security protection, with respect to cryptographic-based rerouting algorithms. An error control coding method for transient errors is used to detect the HT-induced link errors. Regarding the faulty links as permanently failed interconnects, we propose to reshuffle the links and isolate the HT-controlled link wires. Rather than rerouting packets via alternative paths, the proposed method resumes the utilization of partially failed links to improve the bandwidth and the average latency of NoCs. Simulation results show that our method improves the average latency by up to 44.7% over the rerouting approach. The reduction on latency varies from 20% to 41% for three traffic patterns on a 55 mesh NoC. The impact of different HT locations on NoC links was examined, as well. Our method is not sensitive to HT locations and can improve the effective bandwidth by up to 29 bits per cycle with minor overhead.",2013,0, 6678,A smart Trojan circuit and smart attack method in AES encryption circuits,"The increased utilization of outsourcing services for designing and manufacturing LSIs can reduce the reliability of LSIs. Trojan circuits are malicious circuits that can leak secret information. In this paper, we propose a Trojan circuit whose detection is difficult in AES circuits. To make it difficult to detect the proposed Trojan circuit, we propose two methods. 
In one method, one of test mode signal lines not used in normal operation is included in the activation conditions on the trigger unit. In the other, the payload unit does not directly leak the cipher key of an AES circuit but instead leaks information related to the cipher key. We also propose a procedure to obtain the secret key from the information. We demonstrate that it is difficult to detect the proposed Trojan circuit by using existing approaches. We show results to implement and to estimate the area and power of AES circuits with and without the proposed Trojan circuit.",2013,0, 6679,A novel approach to effective detection and analysis of code clones,"Code clones are found in most of the software systems. They play a major role in the field of software engineering. The presence of clones in a particular module will either improve or degrade the quality of the overall software system. Poor quality software indirectly leads to strenuous software maintenance. Detecting the code clones will also pave way for analyzing them. The existing approaches detect the clones efficiently. Though some of the tools analyze the clones, accuracy is still missing. In this paper, a novel method is proposed, which exhibits the use of an efficient data mining technique in the phase of analysis. Based on the outcome of the analysis, the clones are either removed or retained in the software system.",2013,0, 6680,Sensor data quality assessment for building simulation model calibration based on automatic differentiation,"Building simulation models play a vital role in optimal building climate control, energy audit, fault detection and diagnosis, continuous commissioning, and planning. Real system parameters are often unknown or partially unknown and need to be identified through historical data, which are currently acquired by heuristically designed experiments. Without quality sensor data, model calibration is prone to fail, even if the calibration algorithm is appropriate. In this paper, we propose a Fisher-information-matrix (FIM)-based metric to examine the sensor data measurements and how their quality is related to the model calibration quality. It aims to provide quantitative guidance in the calibration cycle of a whole building model that takes as many variables as possible into consideration for the sake of accuracy. Our concerned model is based on well-known physical laws and tries to avoid simplification, thereby leading to a highly discontinuous system with model switches due to the seasonal or daily variation and other reasons. Such a model is implemented in the form of a software package. Hence, no explicit mathematical expression can be given. A key technical challenge is that the complexity of the model prohibits the analytical derivation of FIM, while the numeric calculation is sensitive to sensor noise and model switches. We, hence, propose to adopt an automatic differentiation method, which exploits the operator overload feature of object oriented programming language, for robust numerical FIM calculation.",2013,0, 6681,A comprehensive QoS determination model for Infrastructure-as-a-Service clouds,"Cloud computing is a recently developed new technology for complex systems with massive service sharing, which is different from the resource sharing of the grid computing systems. In a cloud environment, service requests from users go through numerous provider specific steps from the instant it is submitted to when the service is fully delivered. 
Quality modeling and analysis of clouds are not easy tasks because of the complexity of the provisioning mechanism and the dynamic cloud environment. This study proposes an analytical model-based approach for quality evaluation of Infrastructure-as-a-Service cloud and consider expected request completion time, rejection probability, and system overhead rate as key QoS metrics. It also features with the modeling of different warming and cooling strategies of machines and the ability to identify the optimal balance between system overhead and performance.",2013,0, 6682,Sensor health state estimation for target tracking with binary sensor networks,"We consider the problem of target (event source) tracking using a binary Wireless Sensor Network (WSN). For this problem, a WSN consisting of sensors that can detect the presence of a target in an area around them, should fuse the information received by the individual sensors in order to localize and track the target. This is a challenging problem particularly when sensors may fail either due to hardware and/or software malfunctions, energy depletion or adversary attacks. Using information from failed sensors during target tracking may lead to high estimation errors. Since failure of individual sensors is unavoidable, there is a need to estimate the health state of each sensor in order to ignore those sensors that are considered as faulty. The contribution of this work is the investigation of three different algorithms for estimating the sensors' health state simultaneously with target tracking.",2013,0, 6683,DSVM: A buffer management strategy for video transmission in opportunistic networks,"In recent years, more and more people have begun to focus on data delivery among mobile users through opportunistic networks, and in most cases, such as emergency, traffic accident and disaster, we need to transmit video data to other users by the means of opportunistic contacts between mobile users. Meanwhile, the buffers of mobile wireless devices are usually limited, then some messages which have different importance on video recovery, will be dropped during transmission. Therefore, it's imperative to design efficient buffer management strategy to improve the video delivery quality. Several policies have been presented, but they are just designed for general data and not suitable for video transmission. In this paper, we comprehensively take the temporal correlation of video data and the diffusivity of messages into account, and propose DSVM, a novel buffer management strategy. Extensive simulations validate its performance, and up to about 3dB Peak Signal-to-Noise Ratio (PSNR) gain can be achieved over the state-of-the-art buffer management policies.",2013,0, 6684,Efficient Formal Verification in Banking Processes,"Model checking is a very useful method to verify concurrent and distributed systems which is traditionally applied to computer system design. We examine the applicability of model checking to validation of Business Processes that are mapped through the systems of Workflow Management. The use of model checking in business domain is affected by the state explosion problem, which says that the state space grows exponentially in the number of concurrent processes. In this paper we consider a property-based methodology developed to combat the state explosion problem. 
Our focus is two fold; firstly we show how model checking can be applied in the context of business modelling and analysis and secondly we evaluate and test the methodology using as a case study a real-world banking workflow of a loan origination process. Our investigations suggest that the business community, especially in the banking field, can benefit from this efficient methodology developed in formal methods since it can detect errors that were missed by traditional verification techniques, and being cost-efficient, it can be adopted as a standard quality assurance procedure. We show and discuss the experimental results obtained.",2013,0, 6685,"Improving frequency and ROCOF accuracy during faults, for P class Phasor Measurement Units","Many aspects of Phasor Measurement Unit (PMU) performance are tested using the existing (and evolving) IEEE C37.118 standard. However, at present the reaction of PMUs to power network faults is not assessed under C37.118. Nevertheless, the behaviour of PMUs under such conditions may be important when the entire closed loop of power system measurement, control and response is considered. This paper presents ways in which P class PMU algorithms may be augmented with software which reduces peak frequency excursions during unbalanced faults by factors of typically between 2.5 and 6 with no additional effect on response time, delay or latency. Peak ROCOF excursions are also reduced. In addition, extra filtering which still allows P class response requirements to be met can further reduce excursions, in particular ROCOF. Further improvement of triggering by using midpoint taps of the P class filter, and adaptive filtering, allows peak excursions to be reduced by total factors of between 8 and 40 (or up to 180 for ROCOF), compared to the C37.118 reference device. Steady-state frequency and ROCOF errors during sustained faults or unbalanced operation, particularly under unbalanced conditions, can be reduced by factors of hundreds or thousands compared to the C37.118 reference device.",2013,0, 6686,Rigorous Performance Evaluation of Self-Stabilization Using Probabilistic Model Checking,"We propose a new metric for effectively and accurately evaluating the performance of self-stabilizing algorithms. Self-stabilization is a versatile category of fault-tolerance that guarantees system recovery to normal behavior within a finite number of steps, when the state of the system is perturbed by transient faults (or equally, the initial state of the system can be some arbitrary state). The performance of self-stabilizing algorithms is conventionally characterized in the literature by asymptotic computation complexity. We argue that such characterization of performance is too abstract and does not reflect accurately the realities of deploying a distributed algorithm in practice. Our new metric for characterizing the performance of self-stabilizing algorithms is the expected mean value of recovery time. Our metric has several crucial features. Firstly, it encodes accurate average case speed of recovery. Secondly, we show that our evaluation method can effectively incorporate several other parameters that are of importance in practice and have no place in asymptotic computation complexity. Examples include the type of distributed scheduler, likelihood of occurrence of faults, the impact of faults on speed of recovery, and network topology. We utilize a deep analysis technique, namely, probabilistic model checking to rigorously compute our proposed metric. 
All our claims are backed by detailed case studies and experiments.",2013,0, 6687,Adaptive Anomaly Identification by Exploring Metric Subspace in Cloud Computing Infrastructures,"Cloud computing has become increasingly popular by obviating the need for users to own and maintain complex computing infrastructures. However, due to their inherent complexity and large scale, production cloud computing systems are prone to various runtime problems caused by hardware and software faults and environmental factors. Autonomic anomaly detection is a crucial technique for understanding emergent, cloud-wide phenomena and self-managing cloud resources for system-level dependability assurance. To detect anomalous cloud behaviors, we need to monitor the cloud execution and collect runtime cloud performance data. These data consist of values of performance metrics for different types of failures, which display different correlations with the performance metrics. In this paper, we present an adaptive anomaly identification mechanism that explores the most relevant principal components of different failure types in cloud computing infrastructures. It integrates the cloud performance metric analysis with filtering techniques to achieve automated, efficient, and accurate anomaly identification. The proposed mechanism adapts itself by recursively learning from the newly verified detection results to refine future detections. We have implemented a prototype of the anomaly identification system and conducted experiments in an on-campus cloud computing environment and by using the Google data center traces. Our experimental results show that our mechanism can achieve more efficient and accurate anomaly detection than other existing schemes.",2013,0, 6688,Cross Domain Assessment of Document to HTML Conversion Tools to Quantify Text and Structural Loss during Document Analysis,"During forensic text analysis, the automation of the process is key when working with large quantities of documents. As documents often come in a wide variety of different file types, this creates the need for tailored tools to be developed to analyze each document type to correctly identify and extract text elements for analysis without loss. These text extraction tools often omit sections of text that are unreadable from documents leaving drastic inconsistencies during the forensic text analysis process. As a solution to this a single output format, HTML, was chosen as a unified analysis format. Document to HTML/CSS extraction tools each with varying techniques to convert common document formats to rich HTML/CSS counterparts were tested. This approach can reduce the amount of analysis tools needed during forensic text analysis by utilizing a single document format. Two tests were designed, a 10 point document overview test and a 48 point detailed document analysis test to assess and quantify the level of loss, rate of error and overall quality of outputted HTML structures. This study concluded that tools that utilize a number of different approaches and have an understanding of the document structure yield the best results with the least amount of loss.",2013,0, 6689,A Classifier of Malicious Android Applications,"Malware for smart phones is rapidly spreading out. This paper proposes a method for detecting malware based on three metrics, which evaluate: the occurrences of a specific subset of system calls, a weighted sum of a subset of permissions that the application required, and a set of combinations of permissions. 
The experimentation carried out suggests that these metrics are promising in detecting malware, but further improvements are needed to increase the quality of detection.",2013,0, 6690,Measuring the Portability of Executable Service-Oriented Processes,"A key promise of process languages based on open standards, such as the Web Services Business Process Execution Language, is the avoidance of vendor lock-in through the portability of process definitions among runtime environments. Despite the fact that today, various runtimes claim to support this language, every runtime implements a different subset, thus hampering portability and locking in their users. In this paper, we intend to improve this situation by enabling the measurement of the degree of portability of process definitions. This helps developers to assess their process definitions and to decide if it is feasible to invest in the effort of porting a process definition to another runtime. We define several software quality metrics that quantify the degree of portability a process definition provides from different viewpoints. We validate these metrics theoretically with two validation frameworks and empirically with a large set of process definitions coming from several process libraries.",2013,0, 6691,Detecting Software Aging in safety-critical infrastructures,"In this paper we investigate the application of Software Aging and Rejuvenation in the context of Critical Infrastructures and Systems-of-Systems. Explained are the characteristics of Systems-of-Systems and classes of Critical Systems, attributes which define their dependability as a high priority requirement. In addition, we survey Software Aging and Rejuvenation, covering both founding and recent research in this field. The work presented is on-going; it discusses the challenges pertinent to the field of critical infrastructures, how we intend to investigate and propose new methods for applying context-sensitive fault-forecasting in a variety of complex systems associated with both domains, and why context is so important.",2013,0, 6692,Non-destructive testing of wood defects for Korean pine in northeast China based on ultrasonic technology,"The wood samples were tested using the ultrasonic technique, and the testing results were analyzed using the statistical software SPSS. The results showed that the length, density and knots of wood, the sizes of holes and numbers of holes have a significant influence on propagation parameters and dynamic modulus of elasticity. If there are holes in the propagation path, the propagation time will be longer, and the propagation velocity and wood modulus will decrease accordingly. The results of this study will provide a sound background for the application of the ultrasonic technique in detecting the inner defects of wood products and other wooden structures, and also offer an important reference for testing the inner defects of old trees and ancient buildings.",2013,0, 6693,Method for visualizing information from large-scale carrier networks,"With the increase in services, such as telephone, video on demand, and internet connection, networks now consist of various elements, such as routers, switches, and a wide variety of servers. The structure of a network has become more complicated. Therefore, diagnosing failures and determining the affected area from many alarms tends to be more difficult, and the time required to detect the causal point of failure also becomes longer.
However, to improve the quality of services, reducing diagnosis time is essential. Alarm browsers and graphs are used to display the collected data from a network to determine the network's status. An operator manages a network by envisioning the network structure. However, the larger the network becomes, the more difficult it is for operators to do this. Therefore, a topology view with geographical information and a topology view with hierarchical information of equipment are used. However, these views degrade if the scale of the network is even larger and more complex. We propose a method for visualizing network information on space and time axes. This method can support network operators in recognizing causal points of failure and affected areas. We also explain a prototype software implementation of this visualization method.",2013,0, 6694,A Mechanism of Maintaining the Survivability of Streaming Media Service in Overlay Networks,"With the quick development of service-oriented networks, the services of streaming media and their traffic have become a rather important part in the circumstance of virtual networks. However, there hasn't been a unified control mechanism to maintain the qualities of these services. The QoS of streaming media is a great concern in current networks, and it will be vital in future networks because of the rapid growth of internet users and traffic. This paper describes a mechanism of maintaining the survivability of streaming media service in the circumstance of overlay networks. We propose a method to calculate the health value of each path and service. Through this mechanism we can evaluate the current quality of services and provide information for further decisions. As a proof of concept, we implement an experimental scenario to assess the functionality and the availability of this mechanism. The evaluation shows that it manages streaming services effectively.",2013,0, 6695,Substation grounding transfer of potential case studies,"Industrial substation grounding studies usually assume a simple isolated substation, follow the routine IEEE 80 design and analysis procedures using the software tools bundled with the power systems analysis suite of choice, and carry on with the rest of the project. However, there can be complications to substation grounding designs. This paper discusses two case studies where a ground fault in one substation generates a voltage which is transferred to other areas. The issues are assessed using common software tools supplemented with simple spreadsheet calculations. Some common limitations of the standard software tools, when applied to these more complex problems, are discussed in this paper, along with appropriate work-arounds. Mitigation methods are discussed.",2013,0, 6696,"Design and implementation of an intelligent system to detect quality state of temperature defects in hot rolled strips: At Siderurgica del Orinoco """"Alfredo Maneiro""""","This work shows the design and implementation of an on-line intelligent system to determine the quality status of temperature defects in the hot rolled strips manufactured by SIDOR. The proposed system is based on a combination of expert systems, the standard automation platform of the company, and signal processing techniques. The rules of the expert system were proposed by quality assurance experts, who hold extensive expertise in identifying temperature defects in the hot rolled strips, which indirectly determine the mechanical properties of the material.
The entire architecture of the system was designed according to software engineering practices. The results show that the system successfully identifies and applies the quality status of each strip manufactured by the mill, with an initial performance of 34.5% of retained coils and 12% of released coils.",2013,0, 6697,Diagnosability Behaviour over faulty concurrent systems,"Complex systems often exhibit unexpected faults that are difficult to handle. It is desirable that such systems are diagnosable, i.e. faults are automatically detected as they occur (or shortly afterwards), enabling the system to handle the fault or recover. Formally, a system is diagnosable if it is possible to detect every fault, in a finite time after it occurred, by only observing available information from the system. Complex systems are usually built from simpler subsystems running concurrently. In order to model different communication and synchronization methods, the interactions between subsystems may be specified in various ways. In this work we present an analysis of the diagnosability problem in concurrent systems under such different interaction strategies, with arbitrary faults occurring freely in subsystems. We rigorously define diagnosability in this setting, and formally prove in which cases diagnosability is preserved under composition. We illustrate our approach with several examples, and present a tool that implements our analysis.",2013,0, 6698,"The first decade of GUI ripping: Extensions, applications, and broader impacts","This paper provides a retrospective examination of GUI Ripping - reverse engineering a workflow model of the graphical user interface of a software application - born a decade ago out of recognition of the severe need for improving the then largely manual state-of-the-practice of functional GUI testing. In these last 10 years, GUI ripping has turned out to be an enabler for much research, both within our group at Maryland and other groups. Researchers have found new and unique applications of GUI ripping, ranging from measuring human performance to re-engineering legacy user interfaces. GUI ripping has also enabled large-scale experimentation involving millions of test cases, thereby helping to understand the nature of GUI faults and characteristics of test cases to detect them. It has resulted in large multi-institutional Government-sponsored research projects on test automation and benchmarking. GUI ripping tools have been ported to many platforms, including Java AWT and Swing, iOS, Android, UNO, Microsoft Windows, and web. In essence, the technology has transformed the way researchers and practitioners think about the nature of GUI testing, no longer considered a manual activity; rather, thanks largely to GUI Ripping, automation has become the primary focus of current GUI testing techniques.",2013,0, 6699,Distilling useful clones by contextual differencing,"Clone detectors find similar code fragments and report large numbers of them for large systems. Textually similar clones may perform different computations, depending on the program context in which clones occur. Understanding these contextual differences is essential to distill useful clones for a specific maintenance task, such as refactoring. Manual analysis of contextual differences is time consuming and error-prone. To mitigate this problem, we present an automated approach to helping developers find and analyze contextual differences of clones.
Our approach represents context of clones as program dependence graphs, and applies a graph differencing technique to identify required contextual differences of clones. We implemented a tool called CloneDifferentiator that identifies contextual differences of clones and allows developers to formulate queries to distill candidate clones that are useful for a given refactoring task. Two empirical studies show that CloneDifferentiator can reduce the efforts of post-detection analysis of clones for refactorings.",2013,0, 6700,The influence of non-technical factors on code review,"When submitting a patch, the primary concerns of individual developers are How can I maximize the chances of my patch being approved, and minimize the time it takes for this to happen? In principle, code review is a transparent process that aims to assess qualities of the patch by their technical merits and in a timely manner; however, in practice the execution of this process can be affected by a variety of factors, some of which are external to the technical content of the patch itself. In this paper, we describe an empirical study of the code review process for WebKit, a large, open source project; we replicate the impact of previously studied factors - such as patch size, priority, and component and extend these studies by investigating organizational (the company) and personal dimensions (reviewer load and activity, patch writer experience) on code review response time and outcome. Our approach uses a reverse engineered model of the patch submission process and extracts key information from the issue tracking and code review systems. Our findings suggest that these nontechnical factors can significantly impact code review outcomes.",2013,0, 6701,A model-driven graph-matching approach for design pattern detection,"In this paper an approach to automatically detect Design Patterns (DPs) in Object Oriented systems is presented. It allows to link system's source code components to the roles they play in each pattern. DPs are modelled by high level structural properties (e.g. inheritance, dependency, invocation, delegation, type nesting and membership relationships) that are checked against the system structure and components. The proposed metamodel also allows to define DP variants, overriding the structural properties of existing DP models, to improve detection quality. The approach was validated on an open benchmark containing several open-source systems of increasing sizes. Moreover, for other two systems, the results have been compared with the ones from a similar approach existing in literature. The results obtained on the analyzed systems, the identified variants and the efficiency and effectiveness of the approach are thoroughly presented and discussed.",2013,0, 6702,Automatic discovery of function mappings between similar libraries,"Library migration is the process of replacing a third-party library in favor of a competing one during software maintenance. The process of transforming a software source code to become compliant with a new library is cumbersome and error-prone. Indeed, developers have to understand a new Application Programming Interface (API) and search for the right replacements for the functions they use from the old library. As the two libraries are independent, the functions may have totally different structures and names, making the search of mappings very difficult. 
To assist the developers in this difficult task, we introduce an approach that analyzes source code changes from software projects that already underwent a given library migration to extract mappings between functions. We demonstrate the applicability of our approach on several library migrations performed on the Java open source software projects.",2013,0, 6703,Heuristics for discovering architectural violations,"Software architecture conformance is a key software quality control activity that aims to reveal the progressive gap normally observed between concrete and planned software architectures. In this paper, we present ArchLint, a lightweight approach for architecture conformance based on a combination of static and historical source code analysis. For this purpose, ArchLint relies on four heuristics for detecting both absences and divergences in source code based architectures. We applied ArchLint in an industrial-strength system and as a result we detected 119 architectural violations, with an overall precision of 46.7% and a recall of 96.2%, for divergences. We also evaluated ArchLint with four open-source systems, used in an independent study on reflexion models. In this second study, ArchLint achieved precision results ranging from 57.1% to 89.4%.",2013,0, 6704,Circe: A grammar-based oracle for testing Cross-site scripting in web applications,"Security is a crucial concern, especially for those applications, like web-based programs, that are constantly exposed to potentially malicious environments. Security testing aims at verifying the presence of security related defects. Security tests consist of two major parts, input values to run the application and the decision if the actual output matches the expected output, the latter is known as the oracle. In this paper, we present a process to build a security oracle for testing Cross-site scripting vulnerabilities in web applications. In the learning phase, we analyze web pages generated in safe conditions to learn a model of their syntactic structure. Then, in the testing phase, the model is used to classify new test cases either as safe tests or as successful attacks. This approach has been implemented in a tool, called Circe, and empirically assessed in classifying security test cases for two real world open source web applications.",2013,0, 6705,Improving SOA antipatterns detection in Service Based Systems by mining execution traces,"Service Based Systems (SBSs), like other software systems, evolve due to changes in both user requirements and execution contexts. Continuous evolution could easily deteriorate the design and reduce the Quality of Service (QoS) of SBSs and may result in poor design solutions, commonly known as SOA antipatterns. SOA antipatterns lead to a reduced maintainability and reusability of SBSs. It is therefore important to first detect and then remove them. However, techniques for SOA antipattern detection are still in their infancy, and there are hardly any tools for their automatic detection. In this paper, we propose a new and innovative approach for SOA antipattern detection called SOMAD (Service Oriented Mining for Antipattern Detection) which is an evolution of the previously published SODA (Service Oriented Detection For Antpatterns) tool. SOMAD improves SOA antipattern detection by mining execution traces: It detects strong associations between sequences of service/method calls and further filters them using a suite of dedicated metrics. 
We first present the underlying association mining model and introduce the SBS-oriented rule metrics. We then describe a validating application of SOMAD to two independently developed SBSs. A comparison of our new tool with SODA reveals the superiority of the former: its precision is better by a margin ranging from 2.6% to 16.67%, the recall remains optimal at 100%, and the running time is significantly reduced (by a factor of 2.5+ on the same test subjects).",2013,0, 6706,Mining system specific rules from change patterns,"A significant percentage of warnings reported by tools to detect coding standard violations are false positives. Thus, there are some works dedicated to providing better rules by mining them from source code history, analyzing bug-fixes or changes between system releases. However, software evolves over time, and during development not only bugs are fixed, but also features are added, and code is refactored. In such cases, changes must be consistently applied in source code to avoid maintenance problems. In this paper, we propose to extract system specific rules by mining systematic changes over source code history, i.e., not just from bug-fixes or system releases, to ensure that changes are consistently applied over source code. We focus on structural changes done to support API modification or evolution with the goal of providing better rules to developers. Also, rules are mined from predefined rule patterns that ensure their quality. In order to assess the precision of such specific rules to detect real violations, we compare them with generic rules provided by tools to detect coding standard violations on four real world systems covering two programming languages. The results show that specific rules are more precise in identifying real violations in source code than generic ones, and thus can complement them.",2013,0, 6707,Mining the relationship between anti-patterns dependencies and fault-proneness,"Anti-patterns describe poor solutions to design and implementation problems which are claimed to make object oriented systems hard to maintain. Anti-patterns indicate weaknesses in design that may slow down development or increase the risk of faults or failures in the future. Classes in anti-patterns have some dependencies, such as static relationships, that may propagate potential problems to other classes. To the best of our knowledge, the relationship between anti-patterns dependencies (with non anti-patterns classes) and faults has yet to be studied in detail. This paper presents the results of an empirical study aimed at analysing anti-patterns dependencies in three open source software systems, namely ArgoUML, JFreeChart, and XerecesJ. We show that, in almost all releases of the three systems, classes having dependencies with anti-patterns are more fault-prone than others. We also report other observations about these dependencies such as their impact on fault prediction. Software organizations could make use of this knowledge about anti-patterns dependencies to better focus their testing and review activities toward the most risky classes, e.g., classes with fault-prone dependencies with anti-patterns.",2013,0, 6708,Assessing the complexity of upgrading software modules,"Modern software development frequently involves developing multiple codelines simultaneously. Improvements to one codeline should often be applied to other codelines as well, which is typically a time consuming and error-prone process.
In order to reduce this (manual) effort, changes are applied to the system's modules and those affected modules are upgraded on the target system. This is a more coarse-grained approach than upgrading the affected files only. However, when a module is upgraded, one must make sure that all its dependencies are still satisfied. This paper proposes an approach to assess the ease of upgrading a software system. An algorithm was developed to compute the smallest set of upgrade dependencies, given the current version of a module and the version it has to be upgraded to. Furthermore, a visualization has been designed to explain why upgrading one module requires upgrading many additional modules. A case study has been performed at ASML to study the ease of upgrading the TwinScan software. The analysis shows that removing elements from interfaces leads to many additional upgrade dependencies. Moreover, based on our analysis we have formulated a number of improvement suggestions, such as a clear separation between the test code and the production code as well as the introduction of a structured process of symbol deprecation and removal.",2013,0, 6709,Analyzing PL/1 legacy ecosystems: An experience report,This paper presents a case study of analyzing a legacy PL/1 ecosystem that has grown for 40 years to support the business needs of a large banking company. In order to support the stakeholders in analyzing it we developed St1-PL/1 - a tool that parses the code for association data and computes structural metrics which it then visualizes using top-down interactive exploration. Before building the tool and after demonstrating it to stakeholders we conducted several interviews to learn about legacy ecosystem analysis requirements. We briefly introduce the tool and then present results of analysing the case study. We show that although the vision for the future is to have an ecosystem architecture in which systems are as decoupled as possible the current state of the ecosystem is still removed from this. We also present some of the lessons learned from our experience and discussions with stakeholders which include their interest in automatically assessing the quality of the legacy code.,2013,0, 6710,Detecting dependencies in Enterprise JavaBeans with SQuAVisiT,"We present recent extensions to SQuAVisiT, Software Quality Assessment and Visualization Toolset. While SQuAVisiT has been designed with traditional software and traditional caller-callee dependencies in mind, the recent popularity of Enterprise JavaBeans (EJB) required extensions that enable analysis of additional forms of dependencies: EJB dependency injections, object-relational (persistence) mappings and Web service mappings. In this paper we discuss the implementation of these extensions in SQuAVisiT and the application of SQuAVisiT to an open-source software system.",2013,0, 6711,Disjoint paths pair computation procedure for SDH/SONET networks,"The increasing demand for bandwidth and the error-prone, costly, and long-lasting service provisioning process force carriers to find new ways of automatic service provisioning. Despite the introduction of numerous path computation procedures, suitable for WDM-technology-based networks, very few studies exist that discuss the SDH/SONET-specific multiplexing requirements. Furthermore, the existing disjoint path computation procedures do not always find the best possible path between two nodes.
In this paper, we propose a path computation procedure capable of addressing the SDH/SONET multiplexing requirements and the shortcomings of the existing disjoint path computation procedures. Our disjoint paths pair computation procedure is applied to the topology of the NSF.net network. The simulation results, obtained with Matlab, suggest that the proposed procedure outperforms existing path computation procedures. The proposed procedure covers the demand for bandwidth with fewer resources. Furthermore, it finds paths considering the capacity units of SDH/SONET. It is also observed that the time complexity is tolerable.",2013,0, 6712,End-to-end QoS management across LTE networks,"In order to effectively deliver traffic from different applications, providing end-to-end Quality of Service (QoS) is critical in Long Term Evolution (LTE) networks. Mobility requires special handling of QoS enforcement rules and methods. The LTE QoS Signaling (LQSIG) protocol presented in this paper will allow ensuring resource reservation before using a data path. Operation of the proposed protocol in different mobility scenarios is also explained. The key features of the protocol includes LTE QoS model mapping to QSPEC objects used in reservation and interworking with mobility protocols in the LTE protocol stack, especially with Radio Resource Control (RRC). The basics of an analytical model is proposed in order to determine the blocking probability at the bottlenecks of LTE network.",2013,0, 6713,Energy-efficient and low blocking probability differentiated quality of protection scheme for dynamic elastic optical networks,We proposed a differentiated quality of protection scheme to improve the energy efficiency and to reduce the blocking probability in dynamic elastic optical core networks. Simulation results show significant energy efficiency improvements and a notably lower blocking ratio of this novel scheme compared to 1+1 dedicated protection.,2013,0, 6714,Traveling-wave-based line fault location in star-connected multiterminal HVDC systems,"Summary form only given. This paper presents a novel algorithm to determine the location of dc line faults in an HVDC system with multiple terminals connected to a common point, using only the measurements taken at the converter stations. The algorithm relies on the traveling-wave principle, and requires the fault-generated surge arrival times at the converter terminals. With accurate surge arrival times obtained from time-synchronized measurements, the proposed algorithm can accurately predict the faulty segment as well as the exact fault location. Continuous wavelet transform coefficients of the input signal are used to determine the precise time of arrival of traveling waves at the dc line terminals. Performance of the proposed fault-location scheme is analyzed through detailed simulations carried out using the electromagnetic transient simulation software PSCAD. The algorithm does not use reflected waves for its calculations and therefore it is more robust compared to fault location algorithms previously proposed for teed transmission lines. Furthermore, the algorithm can be generalized to handle any number of line segments connected to the star point.",2013,0, 6715,Unambiguous power system dynamic modeling and simulation using modelica tools,"Dynamic modeling and time-domain simulation for power systems is inconsistent across different simulation platforms, which makes it difficult for engineers to consistently exchange models and assess model quality. 
Therefore, there is a clear need for unambiguous dynamic model exchange. In this article, a possible solution is proposed by using open modeling equation-based Modelica tools. The nature of the Modelica modeling language supports model exchange at the equation-level, this allows for unambiguous model exchange between different Modelica-based simulation tools without loss of information about the model. An example of power system dynamic model exchange between two Modelica-based software Scilab/Xcos and Dymola is presented. In addition, common issues related to simulation, including the extended modeling of complex controls, the capabilities of the DAE solvers and initialization problems are discussed. In order to integrate power system Modelica models into other simulation tools (Matlab/Simulink), the utilization of the FMI Toolbox is investigated as well.",2013,0, 6716,Voltage unbalance emission assessment in radial power systems,"Voltage unbalance (VU) emission assessment is an integral part in the VU management process where loads are allocated a portion of the unbalance absorption capacity of the power system. The International Electrotechnical Commission Report IEC/TR 61000-3-13:2008 prescribes a VU emission allocation methodology establishing the fact that the VU can arise at the point of common connection (PCC) due to both upstream network unbalance and load unbalance. Although this is the case for emission allocation, approaches for post connection emission assessment do not exist except for cases where the load is the only contributor to the VU at the PCC. Such assessment methods require separation of the post connection VU emission level into its constituent parts. In developing suitable methodologies for this purpose, the pre- and post-connection data requirements need to be given due consideration to ensure that such data can be easily established. This paper presents systematic, theoretical bases which can be used to assess the individual VU emission contributions made by the upstream source, asymmetrical line and the load for a radial power system. The methodology covers different load configurations including induction motors. Assessments obtained employing the theoretical bases on the study system were verified by using unbalanced load flow analysis in MATLAB and using DIgSILENT PowerFactory software.",2013,0, 6717,Synthesis of clock trees for Sampled-Data Analog IC blocks,"This paper describes a methodology for automated design of clock trees in Sampled-Data Analog Circuits (SDACs). The current practice in the industry and academia for clock tree design of SDACs is a manual process, which is time-consuming and error-prone. Clock tree design in digital domain, however, is fully automated and is carried out by what we call Clock Tree Synthesis (CTS) software. In spite of some critical differences, SDAC clock tree design problem has fundamental similarities with its digital counterpart. As a result, we were able to construct a methodology for SDACs around a commercial digital CTS software and a set of Perl & Tcl scripts. We will explain our methodology using a 10-bit 180 MHz 2-stage ADC as a test circuit.",2013,0, 6718,On the development of diagnostic test programs for VLIW processors,"Software-Based Self-Test (SBST) approaches have shown to be an effective solution to detect permanent faults, both at the end of the production process, and during the operational phase. 
When partial reconfiguration is adopted to deal with permanent faults, we also need to identify the faulty module, which is then substituted with a spare one. Software-based Diagnosis techniques can be exploited for this purpose, too. When Very Long Instruction Word (VLIW) processors are addressed, these techniques can effectively exploit the parallelism intrinsic in these architectures. In this paper we propose a new approach that starting from existing detection-oriented programs generates a diagnosis-oriented test program which in most cases is able to identify the faulty module. Experimental results gathered on a case study show the effectiveness of the proposed approach.",2013,0, 6719,Scalable fragile watermarking for image authentication,"Semi-fragile watermarks are used to detect unauthorised changes to an image, whereas tolerating allowed changes such as compression. Most semi-fragile algorithms that tolerate compression assume that because compression only removes the less visually significant data from an image, tampering with any data that would normally be removed by compression cannot affect a meaningful change to the image. Scalable compression allows a single compressed image to produce a variety of reduced resolution or reduced quality images, termed subimages, to suit the different display or bandwidth requirements of each user. However, highly scaled subimages remove a substantial fraction of the data in the original image, so the assumption used by most semi-fragile algorithms breaks down, as tampering with this data allows meaningful changes to the image content. The authors propose a scalable fragile watermarking algorithm for authentication of scalable JPEG2000 compressed images. It tolerates the loss of large amounts of image data because of resolution or quality scaling, producing no false alarms. Yet, it also protects that data from tampering, detecting even minor manipulations other than scaling, and is secure against mark transfer and collage attacks. Experimental results demonstrate this for scaling down to 1/1024th the area of the original or to 1/100th the file size.",2013,0, 6720,Extending IP-XACT to embedded system HW/SW integration,"Typical MPSoC FPGA product design is a rigid waterfall process proceeding one-way from HW to SW design. Any changes to HW trigger the SW project re-creation from the beginning. When several product variations or speculative development time exploration is required, the disk bloats easily with hundreds of Board Support Package (BSP), configuration and SW project files. In this paper, we present an IP-XACT based design flow that solves the problems by agile re-use of HW and SW components, automation and single golden reference source for information. We also present new extensions to IP-XACT since the standard lacks SW related features. Three use cases demonstrate how the BSP is changed, an application is moved to another processor and a function is moved from SW implementation to a HW accelerator. Our flow reduces the design time to one third compared to the conventional FPGA flow, the number of automated design phases is doubled and any manual error prone data transfer between HW and SW tools is completely avoided.",2013,0, 6721,Reliability comparison of various regenerating codes for cloud services,"In this paper, we consider the reliability comparison of various regenerating codes for cloud services. 
By simulations, we compare the performance of such regenerating codes as MBR codes, MSR codes, local reconstruction codes, and LT regenerating codes, in terms of storage overhead, repair read cost, and repair failure probability. We give some discussions on what must be done in the near future.",2013,0, 6722,Evaluating IEEE 802.11s mesh channel switching using open source solution,"To avoid interference from a detected radar signal, or to reassign mesh station (STA) channels to ensure the connectivity, IEEE 802.11s defines a procedure on how to propagate the channel switch attempt throughout the mesh network, known as mesh Basic Service Set (MBSS) channel switch. Wireless Mesh Network (WMN) that utilizes single-radio nodes with omni-directional and directional antennas is ideally suited for rural area where cost and simplicity take precedence over service quality. However, single-channel single-radio mesh network is easily affected by the interference from its neighborhood, especially from co-located Wi-Fi deployment or other devices operating in the same frequency channel. Thus, the ability to switch to a new channel for self healing is indeed appealing to single-channel single-radio mesh network. The implementation of 802.11s in Linux kernel is available since the year 2007, but the MBSS channel switching has yet to be implemented. This paper describes the MBSS channel switching in details and also discusses our efforts to implement this in the Linux wireless subsystem. Our implementation are verified and evaluated in our experimental testbed.",2013,0, 6723,Constraint-Based Autonomic Reconfiguration,"Declarative, object-oriented configuration management systems are widely used by system administrators. Recently, logical constraints have been added to such systems to facilitate the automatic generation of configurations. However, there is no facility for reasoning about subsequent reconfigurations, such as those needed in an autonomic configuration system. In this paper we develop a number of language primitives, which facilitate not only one-off configuration tasks, but also subsequent reconfigurations in which the previous state of the system is taken into account. We show how it can be directly integrated into a declarative language, and assess its impact on performance.",2013,0, 6724,FESAS: Towards a Framework for Engineering Self-Adaptive Systems,"The complexity and size of information systems are growing, resulting in an increasing effort for maintenance. Self-adaptive systems (SAS) that autonomously adapt to changes in the environment or in the system itself (e.g. disfunction of components) can be a solution. So far, the development of SAS is frequently tailored to specific use case requirements. The creation of frameworks with reusable process elements and system components is often neglected. However, with such a framework developing SAS would become faster and less error prone. This work addresses this gap by providing a framework for engineering SAS.",2013,0, 6725,QoS-Aware VM Placement in Multi-domain Service Level Agreements Scenarios,"Virtualization technologies of Infrastructure-as-a- Service enable the live migration of running Virtual Machines (VMs) to achieve load balancing, fault-tolerance and hardware consolidation in data centers. However, the downtime/service unavailability due to live migration may be substantial with relevance to the customers' expectations on responsiveness, as the latter are declared in established Service Level Agreements (SLAs). 
Moreover, it may cause significant (potentially exponential) SLA violation penalties to its associated higher- level domains (Platform-as-a-Service and Software-as-a-Service). Therefore, VM live migration should be managed carefully. In this paper, we present the OpenStack version of the Generic SLA Manager, alongside its strategies for VM selection and allocation during live migration of VMs. We simulate a use case where IaaS (OpenStack-SLAM) and PaaS (OpenShift) are combined, and assess performance and efficiency of the aforementioned VM placement strategies, when a multi-domain SLA pricing & penalty model is involved. We find that our proposal is efficient in managing trade-offs between the operational objectives of service providers (including financial considerations) and the customers' expected QoS requirements.",2013,0, 6726,An Empirical Study of API Stability and Adoption in the Android Ecosystem,"When APIs evolve, clients make corresponding changes to their applications to utilize new or updated APIs. Despite the benefits of new or updated APIs, developers are often slow to adopt the new APIs. As a first step toward understanding the impact of API evolution on software ecosystems, we conduct an in-depth case study of the co-evolution behavior of Android API and dependent applications using the version history data found in github. Our study confirms that Android is evolving fast at a rate of 115 API updates per month on average. Client adoption, however, is not catching up with the pace of API evolution. About 28% of API references in client applications are outdated with a median lagging time of 16 months. 22% of outdated API usages eventually upgrade to use newer API versions, but the propagation time is about 14 months, much slower than the average API release interval (3 months). Fast evolving APIs are used more by clients than slow evolving APIs but the average time taken to adopt new versions is longer for fast evolving APIs. Further, API usage adaptation code is more defect prone than the one without API usage adaptation. This may indicate that developers avoid API instability.",2013,0, 6727,"How We Design Interfaces, and How to Assess It","Interfaces are widely used in Java applications as central design elements for modular programming to increase program reusability and to ease maintainability of software systems. Despite the importance of interfaces and a considerable research effort that has investigated code quality and concrete classes' design, few works have investigated interfaces' design. In this paper, we empirically study interfaces' design and its impact on the design quality of implementing classes (i.e., class cohesion) analyzing twelve Java object-oriented applications. In this study we propose the """"Interface-Implementations Model"""" that we use to adapt class cohesion metrics to assess the cohesion of interfaces based on their implementations. Moreover, we use other metrics that evaluate the conformance of interfaces to the well-known design principles """"Program to an Interface, not an implementation"""" and """"Interface Segregation Principle"""". The results show that software developers abide well by the interface design principles cited above, but they neglect the cohesion property. 
The results also show that such design practices of interfaces lead to a degraded cohesion of implementing classes, where these latter would be characterized by a worse cohesion than other classes.",2013,0, 6728,DRONE: Predicting Priority of Reported Bugs by Multi-factor Analysis,"Bugs are prevalent. To improve software quality, developers often allow users to report bugs that they found using a bug tracking system such as Bugzilla. Users would specify among other things, a description of the bug, the component that is affected by the bug, and the severity of the bug. Based on this information, bug triagers would then assign a priority level to the reported bug. As resources are limited, bug reports would be investigated based on their priority levels. This priority assignment process however is a manual one. Could we do better? In this paper, we propose an automated approach based on machine learning that would recommend a priority level based on information available in bug reports. Our approach considers multiple factors, temporal, textual, author, related-report, severity, and product, that potentially affect the priority level of a bug report. These factors are extracted as features which are then used to train a discriminative model via a new classification algorithm that handles ordinal class labels and imbalanced data. Experiments on more than a hundred thousands bug reports from Eclipse show that we can outperform baseline approaches in terms of average F-measure by a relative improvement of 58.61%.",2013,0, 6729,Predicting Bugs Using Antipatterns,"Bug prediction models are often used to help allocate software quality assurance efforts. Software metrics (e.g., process metrics and product metrics) are at the heart of bug prediction models. However, some of these metrics like churn are not actionable, on the contrary, antipatterns which refer to specific design and implementation styles can tell the developers whether a design choice is """"poor"""" or not. Poor designs can be fixed by refactoring. Therefore in this paper, we explore the use of antipatterns for bug prediction, and strive to improve the accuracy of bug prediction models by proposing various metrics based on antipatterns. An additional feature to our proposed metrics is that they take into account the history of antipatterns in files from their inception into the system. Through a case study on multiple versions of Eclipse and ArgoUML, we observe that (i) files participating in antipatterns have higher bug density than other files, (ii) our proposed antipattern based metrics can provide additional explanatory power over traditional metrics, and (iii) improve the F-measure of cross-system bug prediction models by 12.5% in average. Managers and quality assurance personnel can use our proposed metrics to better improve their bug prediction models and better focus testing activities and the allocation of support resources.",2013,0, 6730,Will Fault Localization Work for These Failures? An Automated Approach to Predict Effectiveness of Fault Localization Tools,"Debugging is a crucial yet expensive activity to improve the reliability of software systems. To reduce debugging cost, various fault localization tools have been proposed. A spectrum-based fault localization tool often outputs an ordered list of program elements sorted based on their likelihood to be the root cause of a set of failures (i.e., their suspiciousness scores). 
Despite the many studies on fault localization, unfortunately, however, for many bugs, the root causes are often low in the ordered list. This potentially causes developers to distrust fault localization tools. Recently, Parnin and Orso highlight in their user study that many debuggers do not find fault localization useful if they do not find the root cause early in the list. To alleviate the above issue, we build an oracle that could predict whether the output of a fault localization tool can be trusted or not. If the output is not likely to be trusted, developers do not need to spend time going through the list of most suspicious program elements one by one. Rather, other conventional means of debugging could be performed. To construct the oracle, we extract the values of a number of features that are potentially related to the effectiveness of fault localization. Building upon advances in machine learning, we process these feature values to learn a discriminative model that is able to predict the effectiveness of a fault localization tool output. In this preliminary work, we consider an output of a fault localization tool to be effective if the root cause appears in the top 10 most suspicious program elements. We have experimented our proposed oracle on 200 faulty programs from Space, NanoXML, XML-Security, and the 7 programs in Siemens test suite. Our experiments demonstrate that we could predict the effectiveness of fault localization tool with a precision, recall, and F-measure (harmonic mean of precision and recall) of 54.36%, 95.29%, and 69.23%. The numbers indicate that many ineffective fault localization instances are identified correctly, while only very few effective ones are identified wrongly.",2013,0, 6731,Supporting and Accelerating Reproducible Research in Software Maintenance Using TraceLab Component Library,"Research studies in software maintenance are notoriously hard to reproduce due to lack of datasets, tools, implementation details (e.g., parameter values, environmental settings) and other factors. The progress in the field is hindered by the challenge of comparing new techniques against existing ones, as researchers have to devote a lot of their resources to the tedious and error-prone process of reproducing previously introduced approaches. In this paper, we address the problem of experiment reproducibility in software maintenance and provide a long term solution towards ensuring that future experiments will be reproducible and extensible. We conducted a mapping study of a number of representative maintenance techniques and approaches and implemented them as a library of experiments and components that we make publicly available with TraceLab, called the Component Library. The goal of these experiments and components is to create a body of actionable knowledge that would (i) facilitate future research and would (ii) allow the research community to contribute to it as well. In addition, to illustrate the process of using and adapting these techniques, we present an example of creating new techniques based on existing ones, which produce improved results.",2013,0, 6732,How Does Context Affect the Distribution of Software Maintainability Metrics?,"Software metrics have many uses, e.g., defect prediction, effort estimation, and benchmarking an organization against peers and industry standards. In all these cases, metrics may depend on the context, such as the programming language.
Here we aim to investigate if the distributions of commonly used metrics do, in fact, vary with six context factors: application domain, programming language, age, lifespan, the number of changes, and the number of downloads. For this preliminary study we select 320 nontrivial software systems from Source Forge. These software systems are randomly sampled from nine popular application domains of Source Forge. We calculate 39 metrics commonly used to assess software maintainability for each software system and use Kruskal Wallis test and Mann-Whitney U test to determine if there are significant differences among the distributions with respect to each of the six context factors. We use Cliff's delta to measure the magnitude of the differences and find that all six context factors affect the distribution of 20 metrics and the programming language factor affects 35 metrics. We also briefly discuss how each context factor may affect the distribution of metric values. We expect our results to help software benchmarking and other software engineering methods that rely on these commonly used metrics to be tailored to a particular context.",2013,0, 6733,Can Refactoring Cyclic Dependent Components Reduce Defect-Proneness?,"Previous studies have shown that dependency cycles contain significant number of defects, defect-prone components and account for the most critical defects. Thereby, demonstrating the impacts of cycles on software reliability. This preliminary study investigates the variables in a cyclic dependency graph that relate most with the number of defect-prone components in such graphs so as to motivate and guide decisions for possible system refactoring. By using network analysis and statistical methods on cyclic graphs of Eclipse and Apache-Active MQ, we have examined the relationships between the size and distance measures of cyclic dependency graphs. The size of the cyclic graphs consistently correlates more with the defect-proneness of components in these systems than other measures. Showing that adding new components to and/or creating new dependencies within an existing cyclic dependency structures are stronger in increasing the likelihood of defect-proneness. Our next study will investigate whether there is a cause and effect between refactoring (breaking) cyclic dependencies and defect-proneness of affected components.",2013,0, 6734,"Determining ""Grim Reaper"" Policies to Prevent Languishing Bugs","Long-lived software products commonly have a large number of reported defects, some of which may not be fixed for a lengthy period of time, if ever. These so-called languishing bugs can incur various costs to project teams, such as wasted time in release planning and in defect analysis and inspection. They also result in an unrealistic view of the number of bugs still to be fixed at a given time. The goal of this work is to help software practitioners mitigate their costs from languishing bugs by providing a technique to predict and pre-emptively close them. We analyze defect fix times from an ABB program and the Apache HTTP server, and find that both contain a substantial number of languishing bugs. We also train decision tree classification models to predict whether a given bug will be fixed within a desired time period. We propose that an organization could use such a model to form a ""grim reaper"" policy, whereby bugs that are predicted to become languishing will be pre-emptively closed.
However, initial results are mixed, with models for the ABB program achieving F-scores of 63-95%, while the Apache program has F-scores of 21-59%.",2013,0, 6735,Towards a Scalable Cloud Platform for Search-Based Probabilistic Testing,"Probabilistic testing techniques that sample input data at random from a probability distribution can be more effective at detecting faults than deterministic techniques. However, if overly large (and therefore expensive) test sets are to be avoided, the probability distribution from which the input data is sampled must be optimised to the particular software-under-test. Such an optimisation process is often resource-intensive. In this paper, we present a prototypical cloud platform - and architecture - that permits the optimisation of such probability distributions in a scalable, distributed and robust manner, and thereby enables cost-effective probabilistic testing.",2013,0, 6736,TRINITY: An IDE for the Matrix,"Digital forensics software often has to be changed to cope with new variants and versions of file formats. Developers reverse engineer the actual files, and then change the source code of the analysis tools. This process is error-prone and time consuming because the relation between the newly encountered data and how the source code must be changed is implicit. TRINITY is an integrated debugging environment which makes this relation explicit using the DERRIC DSL for describing file formats. TRINITY consists of three simultaneous views: 1) the runtime state of an analysis, 2) a hex view of the actual data, and 3) the file format description. Cross-view traceability links allow developers to better understand how the file format description should be modified. TRINITY aims to make the process of adapting digital forensics software more effective and efficient.",2013,0, 6737,Improving Statistical Approach for Memory Leak Detection Using Machine Learning,"Memory leaks are major problems in all kinds of applications, depleting their performance, even if they run on platforms with automatic memory management, such as Java Virtual Machine. In addition, memory leaks contribute to software aging, increasing the complexity of software maintenance. So far memory leak detection was considered to be a part of development process, rather than part of software maintenance. To detect slow memory leaks as a part of quality assurance process or in production environments statistical approach for memory leak detection was implemented and deployed in a commercial tool called Plumbr. It showed promising results in terms of leak detection precision and recall, however, even better detection quality was desired. To achieve this improvement goal, classification algorithms were applied to the statistical data, which was gathered from customer environments where Plumbr was deployed. This paper presents the challenges which had to be solved, method that was used to generate features for supervised learning and the results of the corresponding experiments.",2013,0, 6738,Implementation of semi-virtual Multiple-Master/Multiple-Slave system,"Building an experimental robotic setup can be a very tedious, prone to hardware faults and expensive process. A common way to circumvent some of this problems is to model a part or entire system in software. Moreover, virtual environments can be the only option to model hazardous or inaccessible sites. However, implementation of teleoperated robotic systems with force feedback exhibits additional problems.
The human-robot interfaces should exist in hardware and this in turn requires simulated system to work in a proximity with a real-time. In this paper we describe a successful implementation of such a system in Gazebo simulator and Robot Operating System. We built an experimental Multiple-Master/Multiple-Slave setup in virtual environment for peg-in-hole task that consists of two 7-DOF Schunk LWA-3 robots and a common object to manipulate. To display forces to operators two SensAble PHANToM devices were utilized.",2013,0, 6739,Evaluating Neutron Induced SEE in SRAM-Based FPGA Protected by Hardware- and Software-Based Fault Tolerant Techniques,"This paper presents an approach to detect SEEs in SRAM-based FPGAs by using software-based techniques combined with a nonintrusive hardware module. We implemented a MIPS-based soft-core processor in a Virtex5 FPGA and hardened it with software- and hardware-based fault tolerance techniques. First fault injection in the configuration memory bitstream was performed in order to verify the feasibility of the proposed approach, detection rates and diagnosis. Furthermore a neutron radiation experiment was performed at LANSCE. Results demonstrate the possibility of employing more flexible fault tolerant techniques to SRAM-based FPGAs with a high detection rate. Comparisons between bitstream fault injection and radiation test is also presented.",2013,0, 6740,Use of hydroxyl-modified carbon nanotubes for detecting SF6 decomposition products under partial discharge in gas insulated switchgear,"Gas-insulated switchgear (GIS) has inherent internal defects that may result in partial discharge (PD) and the eventual development of equipment faults. PD in GIS can lead to the generation of multiple decomposition products of SF6, and the detection and analysis of these decomposition products is important for fault diagnosis. In this paper, a molecular dynamics simulation software package, Materials Studio (MS), is used to model accurately the processes by which single-walled carbon nanotubes modified by hydroxyl (SWNT-OH) adsorb the main decomposition products of SF6 (SOF2, SO2F2, SO2 and CF4) generated by PD. In addition, experimental studies are performed to validate the predicted gas-sensing characteristics. The theoretical calculations and experimental results both indicate that, of the four gases, SWNT-OH showed the fastest response time and highest sensitivity to SO2. The sensitivities of SWNT-OH to the other gases were low, and response times long. We conclude that SWNT-OH shows good sensitivity and selectivity to SO2.",2013,0, 6741,Integrating a market-based model in trust-based service systems,"The reputation-based trust mechanism is a way to assess the trustworthiness of offered services, based on the feedback obtained from their users. In the absence of appropriate safeguards, service users can still manipulate this feedback. Auction mechanisms have already addressed the problem of manipulation by markettrading participants. When auction mechanisms are applied to trust systems, their interaction with the trust systems and associated overhead need to be quantitatively evaluated. This paper proposes two distributed architectures based on centralized and hybrid computing for integrating an auction mechanism with the trust systems. 
The empirical evaluation demonstrates how the architectures help to discourage users from giving untruthful feedback and reduce the overhead costs of the auction mechanisms.",2013,0, 6742,Control of Multiple Packet Schedulers for Improving QoS on OpenFlow/SDN Networking,"Packet scheduling is essential to properly support applications on Software-Defined Networking (SDN) model. However, on OpenFlow/SDN, QoS is only performed with bandwidth guarantees and by a well-known FIFO scheduling. Facing this limitation, this paper presents the QoSFlow proposal, which controls multiple packet schedulers of Linux kernel and improve the flexibility of QoS control. The paper assesses QoSFlow performance, by analysing response time of packet scheduler operations running on datapath level, maximum bandwidth capacity, hardware resource utilization rate, bandwidth isolation and QoE. Our outcomes show an increase more than 48% on PSNR value of QoE by using SFQ scheduling.",2013,0, 6743,Machine learning approaches for predicting software maintainability: a fuzzy-based transparent model,"Software quality is one of the most important factors for assessing the global competitive position of any software company. Thus, the quantification of the quality parameters and integrating them into the quality models is very essential. Many attempts have been made to precisely quantify the software quality parameters using various models such as Boehm's Model, McCall's Model and ISO/IEC 9126 Quality Model. A major challenge, although, is that effective quality models should consider two types of knowledge: imprecise linguistic knowledge from the experts and precise numerical knowledge from historical data. Incorporating the experts' knowledge poses a constraint on the quality model; the model has to be transparent. In this study, the authors propose a process for developing fuzzy logic-based transparent quality prediction models. They applied the process to a case study where Mamdani fuzzy inference engine is used to predict software maintainability. They compared the Mamdani-based model with other machine learning approaches. The results show that the Mamdani-based model is superior to all.",2013,0, 6744,"Algorithm Parallelization Using Software Design Patterns, an Embedded Case Study Approach","Multicore embedded systems introduce new opportunities and challenges. Scaling of computational power is one of the main reasons for transition to a multicore environment. In most cases parallelization of existing algorithms is time consuming and error prone, dealing with low-level constructs. Migrating principles of object-oriented design patterns to parallel embedded software avoids this. We propose a top-down approach for refactoring existing sequential to parallel algorithms in an intuitive way, avoiding the usage of locking mechanisms. We illustrate the approach on the well known Fast Fourier Transformation algorithm. Parallel design patterns, such as Map Reduce, Divide-and-Conquer and Task Parallelism assist to derive a parallel approach for calculating the Fast Fourier Transform. By combining these design patterns, a robust and better performing application is obtained.",2013,0, 6745,An Empirical Study of Client-Side JavaScript Bugs,"Context: Client-side JavaScript is widely used in web applications to improve user-interactivity and minimize client-server communications. Unfortunately, web applications are prone to JavaScript faults.
While prior studies have demonstrated the prevalence of these faults, no attempts have been made to determine their root causes and consequences. Objective: The goal of our study is to understand the root causes and impact of JavaScript faults and how the results can impact JavaScript programmers, testers and tool developers. Method: We perform an empirical study of 317 bug reports from 12 bug repositories. The bug reports are thoroughly examined to classify and extract information about the fault's cause (the error) and consequence (the failure and impact). Result: The majority (65%) of JavaScript faults are DOM-related, meaning they are caused by faulty interactions of the JavaScript code with the Document Object Model (DOM). Further, 80% of the highest impact JavaScript faults are DOM-related. Finally, most JavaScript faults originate from programmer mistakes committed in the JavaScript code itself, as opposed to other web application components such as the server-side or HTML code. Conclusion: Given the prevalence of DOM-related faults, JavaScript programmers need development tools that can help them reason about the DOM. Also, testers should prioritize detection of DOM-related faults as most high impact faults belong to this category. Finally, developers can use the error patterns we found to design more powerful static analysis tools for JavaScript.",2013,0, 6746,Evaluating Software Product Metrics with Synthetic Defect Data,"Source code metrics have been used in past research to predict software quality and focus tasks such as code inspection. A large number of metrics have been proposed and implemented in consumer metric software, however, a smaller, more manageable subset of these metrics may be just as suitable for accomplishing specific tasks as the whole. In this research, we introduce a mathematical model for software defect counts conditioned on product metrics, along with a method for generating synthetic defect data that chooses parameters for this model to match statistics observed in empirical bug datasets. We then show how these synthetic datasets, when combined with measurements from actual software systems, can be used to demonstrate how sets of metrics perform in various scenarios. Our preliminary results suggest that a small number of source code metrics conveys similar information as a larger set, while providing evidence for the independence of traditional software metric classifications such as size and coupling.",2013,0, 6747,Tools to Support Systematic Literature Reviews in Software Engineering: A Mapping Study,Background: Systematic literature reviews (SLRs) have become an established methodology in software engineering (SE) research however they can be very time consuming and error prone. Aim: The aims of this study are to identify and classify tools that can help to automate part or all of the SLR process within the SE domain. Method: A mapping study was performed using an automated search strategy plus snowballing to locate relevant papers. A set of known papers was used to validate the search string. Results: 14 papers were accepted into the final set. Eight presented text mining tools and six discussed the use of visualisation techniques. The stage most commonly targeted was study selection. Only two papers reported an independent evaluation of the tool presented. The majority were evaluated through small experiments and examples of their use. 
Conclusions: A variety of tools are available to support the SLR process although many are in the early stages of development and usage.,2013,0, 6748,Cost Effectiveness of Unit Testing: A Case Study in a Financial Institution,"This paper presents a case study on the cost effectiveness of unit testing in the context of a financial institution in Costa Rica. The study comprises four main steps: choosing a software application, implementing unit tests for this application, identifying prevented defects, and performing a cost and savings analysis. The impact of unit testing on the quality of software is assessed in terms of early defect detection, and the impact on the overall cost of software is evaluated based on the cost of developing the unit tests and the savings derived from the reduction of defects in later phases of the application development lifecycle. Our results indicate that while unit testing could help early defect detection, the monetary cost associated to unit testing would be higher than the monetary savings, in the particular context of the financial software studied, and under the limitations of our cost-savings model.",2013,0, 6749,Constructing Defect Predictors and Communicating the Outcomes to Practitioners,"Background: An alternative to expert-based decisions is to take data-driven decisions and software analytics is the key enabler for this evidence-based management approach. Defect prediction is one popular application area of software analytics, however with serious challenges to deploy into practice. Goal: We aim at developing and deploying a defect prediction model for guiding practitioners to focus their activities on the most problematic parts of the software and improve the efficiency of the testing process. Method: We present a pilot study, where we developed a defect prediction model and different modes of information representation of the data and the model outcomes, namely: commit hotness ranking, error probability mapping to the source and visualization of interactions among teams through errors. We also share the challenges and lessons learned in the process. Result: In terms of standard performance measures, the constructed defect prediction model performs similar to those reported in earlier studies, e.g. 80% of errors can be detected by inspecting 30% of the source. However, the feedback from practitioners indicates that such performance figures are not useful to have an impact in their daily work. Pointing out most problematic source files, even isolating error-prone sections within files are regarded as stating the obvious by the practitioners, though the latter is found to be helpful for activities such as refactoring. On the other hand, visualizing the interactions among teams, based on the errors introduced and fixed, turns out to be the most helpful representation as it helps pinpointing communication related issues within and across teams. Conclusion: The constructed predictor can give accurate information about the most error prone parts. Creating practical representations from this data is possible, but takes effort. The error prediction research done in Elektrobit Wireless Ltd is concluded to be useful and we will further improve the present- tions made from the error prediction data.",2013,0, 6750,FChain: Toward Black-Box Online Fault Localization for Cloud Systems,"Distributed applications running inside cloud systems are prone to performance anomalies due to various reasons such as resource contentions, software bugs, and hardware failures. 
One big challenge for diagnosing an abnormal distributed application is to pinpoint the faulty components. In this paper, we present a black-box online fault localization system called FChain that can pinpoint faulty components immediately after a performance anomaly is detected. FChain first discovers the onset time of abnormal behaviors at different components by distinguishing the abnormal change point from many change points caused by normal workload fluctuations. Faulty components are then pinpointed based on the abnormal change propagation patterns and inter-component dependency relationships. FChain performs runtime validation to further filter out false alarms. We have implemented FChain on top of the Xen platform and tested it using several benchmark applications (RUBiS, Hadoop, and IBM System S). Our experimental results show that FChain can quickly pinpoint the faulty components with high accuracy within a few seconds. FChain can achieve up to 90% higher precision and 20% higher recall than existing schemes. FChain is non-intrusive and light-weight, which imposes less than 1% overhead to the cloud system.",2013,0, 6751,Improvement of Peach Platform to Support GUI-Based Protocol State Modeling,"This Article describes how to improve model-based testing on the Peach platform, and it could make protocol security experts and testers describe network protocol state machine models and carry on a model-based testing much easier than ever before. This paper describes:(1) the graphical user interface of protocol state machine and the process of modeling, (2) the algorithm converting protocol state machine in graphic format to SCXML format, (3) the algorithm converting protocol state machine in SCXML format to the Pit File format. The Pit File generated could be loads into Peach platform directly to test target software. The contribution of our work could ease protocol security experts from tedious and error-prone testing work: creative research work completed by them, and tedious Pit File syntax learning and debugging is accomplished by the computer. Therefore, it is possible to focus on the description of the protocol state machine, rather than the tedious Pit File syntax details, and improve their work efficiency. Besides, this method applies the SCXML as intermediate files between graphical user interface and Pit File, so that has a high flexibility.",2013,0, 6752,Fuzzy-neuro Health Monitoring System for HVAC system variable-air-volume unit,"For Indoor Smart Grids (ISG), the proper operation of building environmental systems is essential to energy efficiency, so automatic detection and classification of abnormal conditions is important. The application of computational intelligence tools to a building's environmental systems that include the Building Automation System (BAS) and Heating Ventilating and Air Conditioning (HVAC) loads, is used to develop Automatic Building Diagnostic Software (ABDS) Tools for health monitoring, fault detection, and diagnostics. A novel Health Monitoring System (HMS) for a Variable Air Volume Unit is developed using fuzzy logic to detect abnormal operating conditions and to generate fault signatures for various fault types. Artificial Neural Network software is applied to fault signatures to classify the fault type. The HMS is tested with simulated data and actual BAS data. 
The system created was demonstrated to recognize faults and to accurately classify the various fault signatures for test faults of interest.",2013,0, 6753,A simple predictive method to estimate flicker,"This paper presents a method for predicting the flicker level generated by high power load in a distribution system. Through the development of the Flickermeter that met the IEC requirements, a reference curve is obtained based on the variation of voltage and the amount of changes per minute of the load. The purpose of this prediction is to determine if the load operation will produce flicker before being connected to the electrical system. This analysis cannot be developed by power system software due to the Flickermeter complexity and amount of data required for processing.",2013,0, 6754,Finding assignable cause in medium voltage network by statistical process control,"The current of outgoing feeders are very important data transmitted over SCADA system. Monitoring of these currents can help dispatching engineers to detect abnormality in energy consumption trend and minor faults in distribution network. Statistical process control (SPC) is one of the capable approaches which can be used for this purpose. Statistical process control is based on categorizing variations into assignable causes and random causes. In current paper we described the methods which were used for finding assignable causes in load trend and short time load variation in Alborz province power distribution company pilot project. Although this approach is not developed completely and some theoretical and practical challenges should be met before extending this project to all feeders, we hope completing this study can help engineers to developing more capable network monitoring softwares.",2013,0, 6755,Calculation and analysis of customer dissatisfaction index for reliability studies in Gilan electric distribution network,"Reliable electric energy is one of the most important necessities for customers and there are high correlation between customer based reliability indices and customer satisfaction. Electric power interruptions have not only economics problems but also it makes social and mental difficulties. However customers' sensitive is different against interruption. Culture and living styles of customers have significant effects on their satisfaction from utilities. For consideration of customer view point against interruptions, it seems that Customer Dissatisfaction Index (CDI) should define and enter as a reliability index. Based on this index, reliability enhancement strategies can be planned for maximizing customer satisfaction index. In this paper for assessing customer dissatisfaction index, questionnaires are designed. As case study these questionnaires have been filled by domestic customers in Rasht, a big and coastal city in north of Iran. Gathered data has entered in SPSS software and customers' reply stochastic indices have obtained and analyzed. According to results, sharp threshold values for customer satisfied regarding reliability of supply were found. Results show customers' sensitive for number and duration of outages and transient-fault depends on time a day or seasons, in this paper time based Customer Dissatisfaction Index has been analyzed.",2013,0, 6756,Quality Assessment of Software as a Service on Cloud Using Fuzzy Logic,"Cloud computing is a business model which provides on demand services on the pay-per-use premise.
Software as a Service (SaaS) is one of the delivery models for cloud computing, where software ownership by the SaaS provider is isolated from its use by the SaaS customer. The notion of quality is central to any service provision. Also, it is important to evaluate the quality of SaaS in order to be able to improve it. Traditional software engineering quality models are not effusively suitable for this purpose due to difference in the nature of software and service. In the past, a few approaches to service quality estimation have been proposed. Some of these approaches extend quality characteristics from existing quality models and even devise SaaS quality metrics, while others discuss quality around Service Level Agreement (SLA) and Quality of Service (QoS) parameters. In this paper, some representative quality factors have been identified by analyzing literature and a model based on fuzzy logic has been proposed to assess SaaS quality. Such a model of quality criteria may provide a ground for more comprehensive quality model which may assist a SaaS customer to choose a higher quality service from available services on cloud; the quality model may also serve as a guideline to SaaS provider to improve the quality of service provided.",2013,0, 6757,The Erlang approach to concurrent system development,"The prevalence of multi-core processors means application developers can no longer ignore concurrency and its attendant problems of data races, deadlock, safety, and liveness. Imperative languages such as Java and C, based on shared, mutable state, have added locks, semaphores and condition variables to address these problems; unfortunately, these locking approaches are notoriously error-prone. Functional (""single assignment"") languages with immutable state have been promoted as tools to mitigate these problems. In particular, Erlang, a functional language with roots in Prolog, has been used by Ericsson, Ltd., to develop robust, concurrent, fault-tolerant, communications switches (31ms downtime per year). This workshop will introduce Erlang to educators interested in the language per se as well as those focusing on concurrent system development. The goal is to encourage the use of both imperative and functional languages in teaching about concurrency. Participants will install the Erlang system on their notebooks so as to engage in activities along with the organizer. Both sequential and concurrent systems - small but complete - will be developed in conjunction with the presentations. Time is allocated at the end of the workshop to discuss the pedagogical issues involved in adopting Erlang or similar technology.",2013,0, 6758,Improving Modular Reasoning on Preprocessor-Based Systems,"Preprocessors are often used to implement the variability of a Software Product Line (SPL). Despite their widespread use, they have several drawbacks like code pollution, no separation of concerns, and error-prone. Virtual Separation of Concerns (VSoC) has been used to address some of these preprocessor problems by allowing developers to hide feature code not relevant to the current maintenance task. However, different features eventually share the same variables and methods, so VSoC does not modularize features, since developers do not know anything about hidden features. Thus, the maintenance of one feature might break another.
Emergent Interfaces (EI) capture dependencies between a feature maintenance point and parts of other feature implementation, but they do not provide an overall feature interface considering all parts in an integrated way. Thus, we still have the feature modularization problem. To address that, we propose Emergent Feature Interfaces (EFI) that complement EI by treating feature as a module in order to improve modular reasoning on preprocessor-based systems. EFI capture dependencies among entire features, with the potential of improving productivity. Our proposal, implemented in an opensource tool called Emergo, is evaluated with preprocessor-based systems. The results of our study suggest the feasibility and usefulness of the proposed approach.",2013,0, 6759,A Reference Architecture Based on Reflection for Self-Adaptive Software,"Self-adaptive Software (SaS) presents specific characteristics compared to traditional ones, as it makes possible adaptations to be incorporated at runtime. These adaptations, when manually performed, normally become an onerous, error-prone activity. In this scenario, automated approaches have been proposed to support such adaptations; however, the development of SaS is not a trivial task. In parallel, reference architectures are reusable artifacts that aggregate the knowledge of architectures of software systems in specific domains. They have facilitated the development, standardization, and evolution of systems of those domains. In spite of their relevance, in the SaS domain, reference architectures that could support a more systematic development of SaS are not found yet. Considering this context, the main contribution of this paper is to present a reference architecture based on reflection for SaS, named RA4SaS (Reference Architecture for SaS). Its main purpose is to support the development of SaS that presents adaptations at runtime. To show the viability of this reference architecture, a case study is presented. As result, it has been observed that RA4SaS has presented good perspective to efficiently contribute to the area of SaS.",2013,0, 6760,Polynomial Models Identification Using Real Data Acquisition Applied to Didactic System,"Models of real systems are of fundamental importance for its analysis, making it possible to simulate or predict its behavior. Additionally, advanced techniques for controller design, optimization, monitoring, fault detection and diagnosis components are also based on process models. One of the most used techniques to model a system is by identification. System identification or process identification is the field of mathematical modeling of systems, in which the parameters are obtained from test or experimental data. Given the importance of obtaining a model able to represent the dynamics of real processes, we developed a software that aggregates identification algorithms using Least squares (LS), Least squares Extended (ELS), Generalized Least Squares (GLS) Recursive least squares with Compensator Polarization (BCRLS). This identification package is used in this paper to identify an educational level plant. Its actual data was inserted in the package and thus, results from different identification techniques implemented in the algorithm were compared. All steps necessary to carry out the identification and analysis of the autocorrelation of the output data for the definition of the sampling period, the design of excitation signals and data collection were taken in consideration. 
The conclusions reached are that the software provides consistency and the implemented algorithms return a model capable of representing the linear part of the system's dynamics.",2013,0, 6761,An Architecture for Justified Assessments of Service Provider Reputation,"In a service-oriented system, an accurate assessment of reputation is essential for selecting between alternative providers. In many cases, providers have differing characteristics that must be considered alongside reliability, including their cost, experience, quality, and use of sub-providers, etc. Existing methods for reputation assessment are limited in terms of the extent to which the full interaction history and context is considered. While factors such as cost and quality might be considered, the assessment of reputation is typically based only on a combination of direct experience and recommendations from third parties, without considering the wider context. Furthermore, reputation is typically expressed as a simple numerical score or probability estimate with no rationale for the reasoning behind it, and there is no opportunity for the user to interrogate the assessment. Existing approaches exclude from consideration a wide range of information, about the context of providers' previous actions, that could give useful information to a user in selecting a service provider. For example, there may have been mitigating circumstances for past failures, or a provider may have changed their organisational affiliation. In this paper we argue that provenance records are a rich source of information on which a more nuanced reputation mechanism can be based. Specifically, the paper makes two main contributions: (i) we provide an analysis of the challenges and open research questions that must be addressed in achieving a rich provenance-based reputation mechanism, and (ii) we define an architecture in which the results of these challenges fit together with existing technologies to enable provenance-based reputation.",2013,0, 6762,Invited talk: How the Fundamental Assurance Question pervades certification,"Assurance Cases are promoted as a means by which to present an argument for why a system is sufficiently dependable (alternate terms for the same concept include Dependability Cases, Safety Cases when the concern is safety, Security Cases, etc.). The purpose of such an argument is typically to inform a decision maker, often in the context of a key certification decision, so he/she will be better able to make that decision. Examples of such decisions include whether to deploy a system, whether to make an upgrade to an existing system, whether to advance a system to the next phase in its development. Assurance Cases are widely practiced in Europe, and are receiving growing attention in North America. For software systems in particular, an assurance-case-based approach is often contrasted to a standards-based approach, the latter being characterized as more prescriptive in specifying the process and techniques to be applied to sufficiently assure software. The proponents of an assurance-case-based approach point out that the need to construct a sufficiently convincing Assurance Case puts the onus on the provider of the software to present the argument for its dependability, as compared to putting the onus on the regulator to have described in advance a sufficient process to be followed by the provider in their development of software. The distinction is not as clear-cut as it might at first seem. 
Both approaches have the need to assess by how much the outcomes of assurance activities (e.g., testing; code review; fault tree analysis; model-checking) raise confidence in decisions made about the system. For a standards-based approach, how is it possible to determine whether the required standard practice can be relaxed or waived entirely, when an alternate approach can be substituted, when additional activities are warranted? These determinations hinge on an understanding of the role of assurance activities, and the information conveyed by their outcome. These questions will arise more often and become more urgent to answer in the evolving world mentioned in the Call for Papers. For an assurance-case-based approach the outcome of an assurance activity will be evidence located within the assurance case, which makes it easier to see the role it plays in the overall assurance argument, but the same question arises: what is its information contribution to confidence? Distilling these gives the Fundamental Assurance Question, namely how much do assurance activities contribute to raising decision confidence about system qualities, such as safety? These questions and an intriguing start at answering them will be the focus of this talk.",2013,0, 6763,Improving reliability of data protection software with integrated multilayered fault Injection testing,"Application involved in data protection for enterprises are responsible to ensure data integrity on backup target as well as remote site designed for disaster recovery (DR). By nature, backup applications needs to operate under very infrastructure which are prone to multiple failure right from physical to application layers. If applications are not designed to consider its operating environment effectively they may not respond to fault in operating environment and may result in data loss and data unavailability scenario. They could potentially lead into false reporting which later can become issue with data integrity. We at EMC applied multilayered fault injection test strategy for backup product where we identified different layers of product operating environments. The interface between two layers was targeted to inject appropriate fault based on role and functionality of these layers. The response of application and its impact to product behavior was monitored and analyzed. This has helped improving various exception handling, product agility to fault operating environment and improving usability by providing better picture on failure in product. This session can help audience on understanding how an application operating environments plays key role in designing test strategy. This leads into improving product reliability and better customer experience about application.",2013,0, 6764,Improving manual analysis of automated code inspection results: Need and effectiveness,"Automated code inspection using static analysis tools has been found to be useful and cost-effective over manual code reviews. This is due to ability of these tools to detect programming bugs (or defects) early in the software development cycle without running the code. Further, using sound static analysis tools, even large industry applications can be certified to be free of certain types of the programming bugs such as Division by Zero, Null/Illegal Dereference of a Pointer, Memory Leaks, and so on. In spite of these merits, as per various surveys, the static analysis tools are used infrequently and inconsistently in practice to ensure software quality.
Large number of false alarms generated and the efforts required to manually analyze them are the primary reasons for this. Similar has been the experience of our team with the usage of these tools.",2013,0, 6765,Using capture-recapture models to make objective post-inspection decisions,"Problem Definition: Project managers manage the development process by enabling software developers to perform inspection of early software artifacts. However, an inspection can only detect the presence of defects; it cannot certify the absence of defects or indicate how many defects remain post inspection. Managers need objective information to help them decide when they can safely stop the inspection process. A reliable estimate of the number of defects remaining in software can aid mangers in determining whether there is a need for additional inspections.",2013,0, 6766,Design of a dependable peer-to-peer system for numerical optimization,"Summary form only given. Numerical Optimization is an integral part of most engineering, scientific work and is a computationally intensive job. Most optimization frameworks developed so far executes numerical algorithms in a single processor or in a dedicated cluster of machines. A single system based optimizer is plagued by the resources and a dedicated high performance computational cluster is extremely cost prohibitive. Further with the increase in dimensions of the decision / objective space variables / functions, it is difficult to foresee and plan a computation cluster ahead of time. A peer-to-peer system provides a viable alternative to this problem. A peer-to-peer (P2P) system has no central co-ordination and is generally a loose union of a set of non-dedicated machines glued via a logical network for fast dissemination of information. The advantage to cost-effectiveness and elasticity with a P2P system however comes with a price. A P2P system lacks trust and malicious nodes can jeopardize the application to a significant extent. The nodes/communication links are prone to failure of various types such a fail-stop, omission, timing (value) and response (value). As a result there is no guarantee of completion of an optimization job. Furthermore, if a certain section of nodes are susceptible to Byzantine faults, it could lead to a misleading front in the objective space where there is absolute un-certainty of reaching a global minimum. Redundancy, failure detection and recovery are an essential part in the design of such a system. In essence, since in a large scale distributed system Failure is not an exception but a norm, dependability in design of the system is not just a choice but an absolute requirement. In this presentation, we would like to put forth the challenges of designing such a P2P system together with the algorithms that has been used, designed and developed by us in creating a P2P optimization framework. The presentation is - ivided into three sections: firstly in identifying the challenges, secondly, the solutions to mitigate the challenges and thirdly the results that we have obtained by applying the solutions to the problem sets.",2013,0, 6767,Predicting multi-platform release quality,"One difficulty in characterizing the quality of a major feature release is that many releases are implemented on several platforms, with each platform using a different subset of the new features. Also, these platforms can have substantially different performance expectations and results. 
In order to characterize the entire release adequately in predictive models, we need a robust customer experience metric that is capable of representing many disparate platforms. Several multi-platform SWDPMH (software defects per million usage hours per month) variants have been developed in an attempt to anticipate a release's overall field quality. In addition to predicting the overall release quality, it is critical that we provide guidance to business units concerning remediation of releases predicted to not achieve adequate quality, and also provide guidance regarding how to modify practices so subsequent releases achieve adequate quality. Models have been developed to both predict MP-SWDPMH and to identify specific in-process drivers that likely influence MP-SWDPMH. At this time, these modeling results can be available as early as five or six months prior to release to the customers.",2013,0, 6768,A statistical approach for software resource leak detection and prediction,"Summary form only given. Resource leaks are a common type of software fault. Accruing with time, resource leaks can lead to performance degradation and/or service failures. However, there are few effective general methods and tools to detect and especially predict resource leaks. We propose a lightweight statistical approach to tackling this problem. Without complex resource management and modification to the original application code, the proposed approach simply monitors the target's resource usage periodically, and exploits some statistical analysis methods to extract the useful information behind the usage data. The decomposition method from the field of time series analysis is adopted to identify the different components (trend, seasonal, and random) of resource usage. The Mann-Kendall test method is then applied to the decomposed trend component to identify whether a significant consistent upward trend exists (and thus a leak). Furthermore, we establish a prediction procedure based on the decomposition. The basic idea is to estimate the three different components separately (using such statistical methods as curve fitting and confidence limit), and then add them together to predict the total usage. Several experimental studies that take memory as an example resource demonstrate that our proposed approach is effective to detect leaks and predict relevant leak index of interest (e.g., time to exhaustion, time to crossing some dangerous threshold), and has a very low runtime overhead.",2013,0, 6769,Diagnosing development software release to predict field failures,"With the advancement of analytical engines for big data, the healthcare industry has taken a big leap to minimize escalations on healthcare expenditure, while providing a reliably working solution for the customers based on the slice and dice of the collected information. The research and development (R & D) departments of the healthcare players are providing more focus on the stability and the usage of the system in the field. The field studies have created a reliability based feedback loop that has helped R & D provide hotfixes and service packs in shrinking time lines to better answer the customized needs of the user. Given the variety of possible optimizations in the actual usage, the software-hardware product combine such as the Philips Magnetic Resonance (MR) modality has to ensure that the business critical workflows are ever stable. 
In a nutshell, fault prediction becomes an important aspect for the R & D department because it helps address the situation in an effective and timely fashion, for both the end-user and the manufacturer to alleviate process hiccups and delays in addressing the fault. Reliability growth plot using the Weibull probability plots helps to predict failures that guide reliability centric maintenance strategies [1]; however, this will be a passive application of prediction for the new software yet to be released for market. This paper tries to address the case where a fault/failure at the customer-end can be better predicted for software-under-development with the help of analysis of field data. The terms failures and faults are interchangeably used in the paper to represent error events that can occur at an installed base.",2013,0, 6770,Predicting field experience of releases on specific platforms,"Since 2009, Software Defects Per Million Hours (SWDPMH) has been the primary customer experience metric used at Cisco, and is goaled on a yearly basis for about 100 product families. A key reason SWDPMH is considered to be of critical importance is that we see a high correlation between SWDPMH and Software Customer Satisfaction (SW CSAT) over a wide spectrum of products and feature releases. Therefore, it is important to try to anticipate SWDPMH for new releases before the software is released to customers, for several reasons: Early warning that a major feature release is likely to experience substantial quality problems in the field may allow for remediation of the release during, or even prior to, function and system testing Prediction of SWDPMH enables better planning for subsequent maintenance releases and rollout strategies Calculating the tradeoffs between SWDPMH and feature volume can provide guidance concerning acceptable feature content, test effort, release cycle timing, and other key parameters affecting subsequent feature releases. Our efforts over the past year have been to enhance our ability to predict SWDPMH in the field. Toward this end, we have developed predictive models, tested the models with major feature releases for strategic products, and provided guidance to development, test, and release management teams on how to improve the chances of achieving best-in-class levels of SWDPMH. This work is ongoing, but several models are currently used in a production mode for five product families, with good results. We plan to achieve production capability with an additional several dozen product families over the next year.",2013,0, 6771,An extended notation of FTA for risk assessment of software-intensive medical devices.: Recognition of the risk class before and after the risk control measure,It is difficult to assess the risk of software-intensive medical devices. An extended notation of FTA recognizes the risk class before and after the risk control measure and the software in the system affects the top event of FTA.,2013,0, 6772,Safety assessment of software-intensive medical devices: Introducing a safety quality model approach,"Argumentation-based safety assurance is a promising approach for the development of safe software-intensive medical devices. However, one challenge is safety assessment by an independent authority. This article presents an approach that enables argumentation-based safety development on the one hand, while providing means for assessing the product's safety afterwards on the other hand. 
We combine a generic safety case with an engineering model, which results in specific quality questions for assessors and provides a generic argumentation structure for manufacturers.",2013,0, 6773,Testing distortion estimations in Retinal Prostheses,Retinal Prosthesis device has been approved by FDA for treatment of vision impairment caused by RP. Validating the visual distortion estimation algorithms used in prosthesis is crucial for the safe use of prosthesis. An approach based on metamorphic testing was described to validate a prosthesis distortion estimation algorithm. Four metamorphic relations including two necessary conditions for the correct functioning of the estimation algorithm were identified. Violations in two metamorphic relations were detected showing different estimation behavior of prosthetic vs. regular images and those having high distortions.,2013,0, 6774,On the effectiveness of Mann-Kendall test for detection of software aging,"Software aging (i.e. progressive performance degradation of long-running software systems) is difficult to detect due to the long latency until it manifests during program execution. Fast and accurate detection of aging is important for eliminating the underlying defects already during software development and testing. Also in a deployment scenario, aging detection is needed to plan mitigation methods like software rejuvenation. The goal of this paper is to evaluate whether the Mann-Kendall test is an effective approach for detecting software aging from traces of computer system metrics. This technique tests for existence of monotonic trends in time series, and studies of software aging often consider existence of trends in certain metrics as indication of software aging. Through an experimental study we show that the Mann-Kendall test is highly vulnerable to creating false positives in context of aging detection. By increasing the amount of data considered in the test, the false positive rate can be reduced; however, time to detect aging increases considerably. Our findings indicate that aging detection using the Mann-Kendall test alone is in general unreliable, or may require long measurement times.",2013,0, 6775,Software rejuvenation impacts on a phased-mission system for Mars exploration,"When software contains aging-related faults and the system has a long mission period, phased-mission systems consisting of several software components can suffer from software aging, which is a progressive degradation of the software execution environment. Failures caused by software aging might impact on the mission success probability. In this paper, we present a model for a phased-mission system with software rejuvenation, and analyze the impacts of software rejuvenation on the success probability and completion time distribution of the mission. The mission of Mars exploration rover is considered as an example of phased-mission system. The analysis results show that the mission success probability is improved by software rejuvenation at the cost of the mission completion time.",2013,0, 6776,Comparing four case studies on Bohr-Mandel characteristics using ODC,"This paper uses four case studies to examine the difference in properties of Bohr-Mandel bugs. The mechanism used to differentiate Bohr versus Mandel bugs are the ODC Triggers that was developed in a previous study on this subject. In this study, the method is extended to reflect on two additional dimensions. First, on the customer perceived impact. 
And, second, on how these change between their manifestation in production or the field usage versus late stage development or quality assurance testing. This paper: compares Bohr and Mandel bug rates between customer/field usage and pre-release system testing; finds that Mandel bugs predominantly have a Reliability-Availability-Serviceability (RAS) impact; finds that Mandel bugs rarely, if ever, have a functional impact; finds that these studies predict Mandel bug rates consistent with other studies; and finds that pre-release testing found very few Mandel bugs (<10%).",2013,0, 6777,Identifying silent failures of SaaS services using finite state machine based invariant analysis,"Field failure analysis is usually driven by a characterization of the different time related properties of failure. This characterization does not help the production support team in understanding the root cause. In order to pinpoint the root cause of failure, one of the most effective techniques used is checking for violations of the system invariants which are the consistent, time invariant correlations that exist in the system. Understanding when and where these violations happen helps in detecting the root cause of the failure. Silent failures, on the other hand, are characterized by no evidence of failures either in the console or in the field failure logs. They are unearthed at moments of crisis, either with a customer complaint or other cascading failures. These failures often result in data loss or data corruption, creating many latent errors. Accumulation of these errors over time results in degraded system performance. This represents the problem of software aging and restoration of the system, i.e. its rejuvenation becomes a critical need. Subsequent to the restoration, a rigorous failure detection mechanism is needed to detect them early. What we describe in the paper is a novel method that could be used to detect silent failures using a combination of invariant violation checking and finite state machine based analysis of the system. We use the audit-trail logs of the system to extract information about the state and transitions for FSM representation. Currently our research work was limited to proving its efficiency. We applied this approach to our SaaS platform and were able to detect 36 silent failures over a period of 9 months. As next steps, we will implement this as a part of automated failure detection in the operational SaaS platforms.",2013,0, 6778,D-Script : Dependable scripting with DEOS process,"This paper presents our idea and design of a script-based framework for dynamic fault management in distributed open systems. Today's distributed systems face unexpected faults and error propagations that are hard to predict at the design time. A key idea behind our D-Script is the dependability through assuredness with a scripting solution to add fault detection and its recovery at the operation time. To realize our vision, we have developed several key tools, including Assure-It authoring tool, D-Shell dependable shell, and REC runtime evidence collector.",2013,0, 6779,Empirical evaluation of an early understandability measurement method,"Usability is a quality factor which increasingly attracts the attention of Human Computer Interaction (HCI) developers. It consists of measuring the usability aspects of a user interface and identifying specific problems. It was usually evaluated based on user's perception.
The development costs are the main limitation of methods which target usability measurement. However, the emergence of Model Driven Engineering (MDE) allows migrating to a new challenge: early usability evaluation. In an MDE method, the conceptual model represents an abstraction of the application code. Hence, measuring usability from the conceptual model can be a promising method to predict the usability of the application code. This paper proposes that certain usability attributes, especially understandability attributes, can be measured from the conceptual model. An empirical study is carried out in order to evaluate our proposal. The goal is to evaluate the coherence between values obtained using our proposal and those perceived by the end user.",2013,0, 6780,Sensor and actuator fault detection and isolation based on artificial neural networks and fuzzy logic applicated on induction motor,"This paper presents a scheme for fault detection and isolation (FDI). It deals with sensor and actuator faults of an induction machine. This scheme is built on artificial intelligence techniques in order to resolve two main problems. The first is the detection problem, which is resolved with a neural network; the second is the isolation problem, which is solved using fuzzy logic. The proposed FDI approach is implemented in Matlab/Simulink software and tested under three types of fault (current sensor fault, speed sensor fault and inverter fault). The obtained results show the effectiveness of this method: the actuator and sensor faults are detected and isolated successfully.",2013,0, 6781,Detection of Process Antipatterns: A BPEL Perspective,"With the increasing significance of the service-oriented paradigm for implementing business solutions, assessing and analyzing such solutions also becomes an essential task to ensure and improve their quality of design. One way to develop such solutions, a.k.a., Service-Based systems (SBSs) is to generate BPEL (Business Process Execution Language) processes via orchestrating Web services. Development of large business processes (BPs) involves design decisions. Improper and wrong design decisions in software engineering are commonly known as antipatterns, i.e., poor solutions that might affect the quality of design. The detection of antipatterns is thus important to ensure and improve the quality of BPs. However, although BP antipatterns have been defined in the literature, no effort was given to detect such antipatterns within BPEL processes. With the aim of improving the design and quality of BPEL processes, we propose the first rule-based approach to specify and detect BP antipatterns. We specify 7 BP antipatterns from the literature and perform the detection for 4 of them in an initial experiment with 3 BPEL processes.",2013,0, 6782,Automatic concolic test generation with virtual prototypes for post-silicon validation,"Post-silicon validation is a crucial stage in the system development cycle. To accelerate post-silicon validation, high-quality tests should be ready before the first silicon prototype becomes available. In this paper, we present a concolic testing approach to generation of post-silicon tests with virtual prototypes. We identify device states under test from concrete executions of a virtual prototype based on the concept of device transaction, symbolically execute the virtual prototype from these device states to generate tests, and issue the generated tests concretely to the silicon device.
We have applied this approach to virtual prototypes of three network adapters to generate their tests. The generated test cases have been issued to both virtual prototypes and silicon devices. We observed significant coverage improvement with generated test cases. Furthermore, we detected 20 inconsistencies between virtual prototypes and silicon devices, each of which reveals a virtual prototype or silicon device defect.",2013,0, 6783,A reliability prediction model for complex systems using data flow dependency,Research on software reliability prediction is of great practical importance. Failure characteristic of large and complex software depends on the operations of individual components and their architecture. Complexity of the components as well as their dependency provides a greater impact on overall reliability of the software. Reliability prediction in the early stage of component based software requires the knowledge of interconnection between the components as well as propagation of errors between the components. In this paper we propose a reliability prediction model which not only considers the control flow of the component it also considers data sharing between the components and their deployment details. We propose a new graphical structure Data Flow Dependency Graph to estimate the effective reliability of the data processed by the components and then use the operational profile to predict the reliability.,2013,0, 6784,Big data solutions for predicting risk-of-readmission for congestive heart failure patients,"Developing holistic predictive modeling solutions for risk prediction is extremely challenging in healthcare informatics. Risk prediction involves integration of clinical factors with socio-demographic factors, health conditions, disease parameters, hospital care quality parameters, and a variety of variables specific to each health care provider making the task increasingly complex. Unsurprisingly, many of such factors need to be extracted independently from different sources, and integrated back to improve the quality of predictive modeling. Such sources are typically voluminous, diverse, and vary significantly over the time. Therefore, distributed and parallel computing tools collectively termed big data have to be developed. In this work, we study big data driven solutions to predict the 30-day risk of readmission for congestive heart failure (CHF) incidents. First, we extract useful factors from National Inpatient Dataset (NIS) and augment it with our patient dataset from Multicare Health System (MHS). Then, we develop scalable data mining models to predict risk of readmission using the integrated dataset. We demonstrate the effectiveness and efficiency of the open-source predictive modeling framework we used, describe the results from various modeling algorithms we tested, and compare the performance against baseline non-distributed, non-parallel, non-integrated small data results previously published to demonstrate comparable accuracy over millions of records.",2013,0, 6785,Entropy-based test generation for improved fault localization,"Spectrum-based Bayesian reasoning can effectively rank candidate fault locations based on passing/failing test cases, but the diagnostic quality highly depends on the size and diversity of the underlying test suite. As test suites in practice often do not exhibit the necessary properties, we present a technique to extend existing test suites with new test cases that optimize the diagnostic quality. 
We apply probability theory concepts to guide test case generation using entropy, such that the amount of uncertainty in the diagnostic ranking is minimized. Our ENTBUG prototype extends the search-based test generation tool EVOSUITE to use entropy in the fitness function of its underlying genetic algorithm, and we applied it to seven real faults. Empirical results show that our approach reduces the entropy of the diagnostic ranking by 49% on average (compared to using the original test suite), leading to a 91% average reduction of diagnosis candidates needed to inspect to find the true faulty one.",2013,0, 6786,Detecting bad smells in source code using change history information,"Code smells represent symptoms of poor implementation choices. Previous studies found that these smells make source code more difficult to maintain, possibly also increasing its fault-proneness. There are several approaches that identify smells based on code analysis techniques. However, we observe that many code smells are intrinsically characterized by how code elements change over time. Thus, relying solely on structural information may not be sufficient to detect all the smells accurately. We propose an approach to detect five different code smells, namely Divergent Change, Shotgun Surgery, Parallel Inheritance, Blob, and Feature Envy, by exploiting change history information mined from versioning systems. We applied approach, coined as HIST (Historical Information for Smell deTection), to eight software projects written in Java, and wherever possible compared with existing state-of-the-art smell detectors based on source code analysis. The results indicate that HIST's precision ranges between 61% and 80%, and its recall ranges between 61% and 100%. More importantly, the results confirm that HIST is able to identify code smells that cannot be identified through approaches solely based on code analysis.",2013,0, 6787,Personalized defect prediction,"Many defect prediction techniques have been proposed. While they often take the author of the code into consideration, none of these techniques build a separate prediction model for each developer. Different developers have different coding styles, commit frequencies, and experience levels, causing different defect patterns. When the defects of different developers are combined, such differences are obscured, hurting prediction performance. This paper proposes personalized defect prediction-building a separate prediction model for each developer to predict software defects. As a proof of concept, we apply our personalized defect prediction to classify defects at the file change level. We evaluate our personalized change classification technique on six large software projects written in C and Java-the Linux kernel, PostgreSQL, Xorg, Eclipse, Lucene and Jackrabbit. Our personalized approach can discover up to 155 more bugs than the traditional change classification (210 versus 55) if developers inspect the top 20% lines of code that are predicted buggy. In addition, our approach improves the F1-score by 0.01-0.06 compared to the traditional change classification.",2013,0, 6788,Automatically partition software into least privilege components using dynamic data dependency analysis,"The principle of least privilege requires that software components should be granted only necessary privileges, so that compromising one component does not lead to compromising others. 
However, writing privilege separated software is difficult and as a result, a large number of software is monolithic, i.e., it runs as a whole without separation. Manually rewriting monolithic software into privilege separated software requires significant effort and can be error prone. We propose ProgramCutter, a novel approach to automatically partitioning monolithic software using dynamic data dependency analysis. ProgramCutter works by constructing a data dependency graph whose nodes are functions and edges are data dependencies between functions. The graph is then partitioned into subgraphs where each subgraph represents a least privilege component. The privilege separated software runs each component in a separated process with confined system privileges. We evaluate it by applying it on four open source software. We can reduce the privileged part of the program from 100% to below 22%, while having a reasonable execution time overhead. Since ProgramCutter does not require any expert knowledge of the software, it not only can be used by its developers for software refactoring, but also by end users or system administrators. Our contributions are threefold: (i) we define a quantitative measure of the security and performance of privilege separation; (ii) we propose a graph-based approach to compute the optimal separation based on dynamic information flow analysis; and (iii) the separation process is automatic and does not require expert knowledge of the software.",2013,0, 6789,Finding architectural flaws using constraints,"During Architectural Risk Analysis (ARA), security architects use a runtime architecture to look for security vulnerabilities that are architectural flaws rather than coding defects. The current ARA process, however, is mostly informal and manual. In this paper, we propose Scoria, a semi-automated approach for finding architectural flaws. Scoria uses a sound, hierarchical object graph with abstract objects and dataflow edges, where edges can refer to nodes in the graph. The architects can augment the object graph with security properties, which can express security information unavailable in code. Scoria allows architects to write queries on the graph in terms of the hierarchy, reachability, and provenance of a dataflow object. Based on the query results, the architects enhance their knowledge of the system security and write expressive constraints. The expressiveness is richer than previous approaches that check only for the presence or absence of communication or do not track a dataflow as an object. To evaluate Scoria, we apply these constraints to several extended examples adapted from the CERT standard for Java to confirm that Scoria can detect injected architectural flaws. Next, we write constraints to enforce an Android security policy and find one architectural flaw in one Android application.",2013,0, 6790,Characterizing and detecting resource leaks in Android applications,"Android phones come with a host of hardware components embedded in them, such as Camera, Media Player and Sensor. Most of these components are exclusive resources or resources consuming more memory/energy than general. And they should be explicitly released by developers. Missing release operations of these resources might cause serious problems such as performance degradation or system crash. These kinds of defects are called resource leaks. 
This paper focuses on resource leak problems in Android apps, and presents our lightweight static analysis tool called Relda, which can automatically analyze an application's resource operations and locate the resource leaks. We propose an automatic method for detecting resource leaks based on a modified Function Call Graph, which handles the features of event-driven mobile programming by analyzing the callbacks defined in Android framework. Our experimental data shows that Relda is effective in detecting resource leaks in real Android apps.",2013,0, 6791,Dangling references in multi-configuration and dynamic PHP-based Web applications,"PHP is a dynamic language popularly used in Web development for writing server-side code to dynamically create multiple versions of client-side pages at run time for different configurations. A PHP program contains code to be executed or produced for multiple configurations/versions. That dynamism and multi-configuration nature leads to dangling references. Specifically, in the execution for a configuration, a reference to a variable or a call to a function is dangling if its corresponding declaration cannot be found. We conducted an exploratory study to confirm the existence of such dangling reference errors including dangling cross-language and embedded references in the client-side HTML/JavaScript code and in data-accessing SQL code that are embedded in scattered PHP code. Dangling references have caused run-time fatal failures and security vulnerabilities. We developed DRC, a static analysis method to detect such dangling references. DRC uses symbolic execution to collect PHP declarations/references and to approximate all versions of the generated output, and then extracts embedded declarations/references. It associates each detected declaration/reference with a conditional constraint that represents the execution paths (i.e. configurations/versions) containing that declaration/reference. It then validates references against declarations via a novel dangling reference detection algorithm. Our empirical evaluation shows that DRC detects dangling references with high accuracy. It revealed 83 yet undiscovered defects caused by dangling references.",2013,0, 6792,Environment rematching: Toward dependability improvement for self-adaptive applications,"Self-adaptive applications can easily contain faults. Existing approaches detect faults, but can still leave some undetected and manifesting into failures at runtime. In this paper, we study the correlation between occurrences of application failure and those of consistency failure. We propose fixing consistency failure to reduce application failure at runtime. We name this environment rematching, which can systematically reconnect a self-adaptive application to its environment in a consistent way. We also propose enforcing atomicity for application semantics during the rematching to avoid its side effect. We evaluated our approach using 12 self-adaptive robot-car applications by both simulated and real experiments. The experimental results confirmed our approach's effectiveness in improving dependability for all applications by 12.5-52.5%.",2013,0, 6793,PYTHIA: Generating test cases with oracles for JavaScript applications,Web developers often write test cases manually using testing frameworks such as Selenium. Testing JavaScript-based applications is challenging as manually exploring various execution paths of the application is difficult. 
Also JavaScript's highly dynamic nature as well as its complex interaction with the DOM make it difficult for the tester to achieve high coverage. We present a framework to automatically generate unit test cases for individual JavaScript functions. These test cases are strengthened by automatically generated test oracles capable of detecting faults in JavaScript code. Our approach is implemented in a tool called Pythia. Our preliminary evaluation results point to the efficacy of the approach in achieving high coverage and detecting faults.,2013,0, 6794,Class level fault prediction using software clustering,"Defect prediction approaches use software metrics and fault data to learn which software properties associate with faults in classes. Existing techniques predict fault-prone classes in the same release (intra) or in a subsequent releases (inter) of a subject software system. We propose an intra-release fault prediction technique, which learns from clusters of related classes, rather than from the entire system. Classes are clustered using structural information and fault prediction models are built using the properties of the classes in each cluster. We present an empirical investigation on data from 29 releases of eight open source software systems from the PROMISE repository, with predictors built using multivariate linear regression. The results indicate that the prediction models built on clusters outperform those built on all the classes of the system.",2013,0, 6795,Context-aware task allocation for distributed agile team,"The philosophy of Agile software development advocates the spirit of open discussion and coordination among team members to adapt to incremental changes encountered during the process. Based on our observations from 20 agile student development teams over an 8-week study in Beihang University, China, we found that the task allocation strategy as a result of following the Agile process heavily depends on the experience of the users, and cannot be guaranteed to result in efficient utilization of team resources. In this research, we propose a context-aware task allocation decision support system that balances the considerations for quality and timeliness to improve the overall utility derived from an agile software development project.We formulate the agile process as a distributed constraint optimization problem, and propose a technology framework that assesses individual developers' situations based on data collected from a Scrum-based agile process, and helps individual developers make situation-aware decisions on which tasks from the backlog to select in real-time. Preliminary analysis and simulation results show that it can achieve close to optimally efficient utilization of the developers' collective capacity. We plan to build the framework into a computer-supported collaborative development platform and refine the method through more realistic projects.",2013,0, 6796,"Preventing erosion of architectural tactics through their strategic implementation, preservation, and visualization","Nowadays, a successful software production is increasingly dependent on how the final deployed system addresses customers' and users' quality concerns such as security, reliability, availability, interoperability, performance and many other types of such requirements. 
In order to satisfy such quality concerns, software architects are accountable for devising and comparing various alternate solutions, assessing the trade-offs, and finally adopting strategic design decisions which optimize the degree to which each of the quality concerns is satisfied. Although designing and implementing a good architecture is necessary, it is not usually enough. Even a good architecture can deteriorate in subsequent releases and then fail to address those concerns for which it was initially designed. In this work, we present a novel traceability approach for automating the construction of traceability links for architectural tactics and utilizing those links to implement a change impact analysis infrastructure to mitigate the problem of architecture degradation. Our approach utilizes machine learning methods to detect tactic-related classes. The detected tactic-related classes are then mapped to a Tactic Traceability Pattern. We train our trace algorithm using code extracted from fifty performance-centric and safety-critical open source software systems and then evaluate it against a real case study.",2013,0, 6797,Towards the Development of a Defect Detection Tool for COSMIC Functional Size Measurement,"Reliability of functional size measurement is very crucial since software management activities such as cost and budget estimations, process benchmarking and project control depend on software size measurements. In order to improve the reliability of functional size measurements, they should be controlled and reviewed at the end of the measurement process. However, manual inspection for detecting defects and errors of measurements is time and effort consuming and there is always a possibility of missing a defect. To overcome such problems we developed a tool for detecting defects of COSMIC functional size measurements automatically. In this study we presented the process of developing the tool, R-COVER, and the results of the case studies conducted for analyzing the efficiency of the tool in terms of correctness and accuracy.",2013,0, 6798,AM-QuICk: A Measurement-Based Framework for Agile Methods Customisation,"Software development practitioners are increasingly interested in adopting agile methods and generally recommend customisation so that the adopted method can fit the organisational reality. Many studies from the literature report agile adoption and customisation experiences but most of them are hardly generalisable and few are metric-based. They therefore cannot provide quantitative evidence of the suitability of the customised agile method, nor assess the organisation's readiness to adopt it, nor help in decision-making concerning the organisation transformation strategy. In this paper, we first describe the Agile Methods Quality-Integrated Customisation framework (AM-QuICk) that relies on measurements and aims to continuously assist agile methodologists throughout the agile adoption and customisation process, i.e., during the initial organisation adoption, the method design and throughout the working development process. Then, we present a case study using AM-QuICk within an organisation. With this study, we aim to analyse the current development process and its level of agility and identify the initial risk factors. The data were collected using preliminary interviews with the different team members and two questionnaires.
The results reveal that though most respondents are enthusiastic towards agile principles, a progressive transformation strategy would be beneficial.",2013,0, 6799,Experiences from an Initial Study on Risk Probability Estimation Based on Expert Opinion,"Background: Determining the factor probability in risk estimation requires detailed knowledge about the software product and the development process. Basing estimates on expert opinion may be a viable approach if no other data is available. Objective: In this paper we analyze initial results from estimating the risk probability based on expert opinion to answer the questions (1) Are expert opinions consistent? (2) Do expert opinions reflect the actual situation? (3) How can the results be improved? Approach: An industry project serves as the case for our study. In this project six members provided initial risk estimates for the components of a software system. The resulting estimates are compared to each other to reveal the agreement between experts and they are compared to the actual risk probabilities derived in an ex-post analysis from the released version. Results: We found a moderate agreement between the ratings of the individual experts. We found a significant accuracy when compared to the risk probabilities computed from the actual defects. We identified a number of lessons learned useful for improving the simple initial estimation approach applied in the studied project. Conclusions: Risk estimates have successfully been derived from subjective expert opinions. However, additional measures should be applied to triangulate and improve expert estimates.",2013,0, 6800,Noise in Bug Report Data and the Impact on Defect Prediction Results,"The potential benefits of defect prediction have created widespread interest in research and generated a considerable number of empirical studies. Applications with real-world data revealed a central problem: Real-world data is """"dirty"""" and often of poor quality. Noise in bug report data is a particular problem for defect prediction since it affects the correct classification of software modules. Is the module actually defective or not? In this paper we examine different causes of noise encountered when predicting defects in an industrial software system and we provide an overview of commonly reported causes in related work. Furthermore we conduct an experiment to explore the impact of class noise on the prediction performance. The experiment shows that the prediction results for the studied system remain reliable even at a noise level of 20% probability of incorrect links between bug reports and modules.",2013,0, 6801,A Comparison of Different Defect Measures to Identify Defect-Prone Components,"(Background) Defect distribution in software systems has been shown to follow the Pareto rule of 20-80. This motivates the prioritization of components with the majority of defects for testing activities. (Research goal) Are there significant variations between defective components and architectural hotspots identified by other defect measures? (Approach) We have performed a study using post-release data of an industrial Smart Grid application with a well-maintained defect tracking system. Using the Pareto principle, we identify and compare defect-prone and hotspot components based on four defect metrics. Furthermore, we validated the quantitative results against qualitative data from the developers.
(Results) Our results show that at the top 25% of the measures 1) significant variations exist between the defective components identified by the different defect metrics and that some of the components persist as defective across releases 2) the top defective components based on number of defects could only identify about 40% of critical components in this system 3) other defect metrics identify about 30% additional critical components 4) additional quality challenges of a component could be identified by considering the pair wise intersection of the defect metrics. (Discussion and Conclusion) Since a set of critical components in the system is missed by using largest-first or smallest-first prioritization approaches, this study, therefore, makes a case for an all-inclusive metrics during defect model construction such as number of defects, defect density, defect severity and defect correction effort to make us better understand what comprises defect-prone components and architectural hotspots, especially in critical applications.",2013,0, 6802,Comparing between Maximum Likelihood Estimator and Non-linear Regression Estimation Procedures for NHPP Software Reliability Growth Modelling,"Software Reliability Growth Models (SRGMs) have been used by engineers and managers for tracking and managing the reliability change of software to ensure required standard of quality is achieved before the software is released to the customer. SRGMs can be used during the project to help make testing resource allocation decisions and/ or it can be used after the testing phase to determine the latent faults prediction to assess the maturity of software artifact. A number of SRGMs have been proposed and to apply a given reliability model, defect inflow data is fitted to model equations. Two of the widely known and recommended techniques for parameter estimation are maximum likelihood and method of least squares. In this paper we compare between the two estimation procedures for their applicability in context of NHPP SRGMs. We also highlight a couple of practical considerations, reliability practitioners must be aware of when applying SRGMs.",2013,0, 6803,Assessing Organizational Learning in IT Organizations: An Experience Report from Industry,"With the increase in demand for higher-quality and more capable IT services, IT organizations in order to obtain competitive advantage require extensive knowledge that needs to be shared and reused among different entities within the organization. The existing IT Service Management (ITSM) mechanisms mention the importance of organizational learning (OL) and knowledge management (KM) for IT organizations. However, they do not explicitly address how OL capabilities of an IT organization can be assessed. This paper, by using an OL assessment model developed for software organizations, namely AiOLoS, shows that with the proper adjustment, the application of the model to IT organizations is feasible. We report the results of applying the model in four functional teams in an IT organization from private sector.",2013,0, 6804,FinancialCloud: Open Cloud Framework of Derivative Pricing,"Predicting prices and risk measures of assets and derivatives and rating of financial products have been studied and widely used by financial institutions and individual investors. 
In contrast to the centralized and oligopoly nature of the existing financial information services, in this paper, we advocate the notion of a Financial Cloud, i.e., an open distributed framework based cloud computing architecture to host modularize financial services such that these modularized financial services may easily be integrated flexibly and dynamically to meet users' needs on demand. This new cloud based architecture of modularized financial services provides several advantages. We may have different types of service providers in the ecosystem on top of the framework. For example, market data resellers may collect and sell long-term historical market data. Statistical analyses of macroeconomic indices, interest rates, and correlation of a set of assets may also be purchased online. Some agencies might be interested in providing services based on rating or pricing values of financial products. Traders may use the statistically estimated parameters to fine-tune their trading algorithm to maximize the profit of their clients. Providers of each service module may focus on effectiveness, performance, robustness, and security of their innovative products. On the other hand, a user pays for exactly what one uses to optimally manage their assets. A user may also acquire services through an online agent who is an expert in assessing the structural model and quality of existing products and thus assembles service modules matching users risk taking behavior. In this paper, we will also present a survey of related existing technologies and a prototype we developed so far.",2013,0, 6805,Fault tolerant approach for verified software: Case of natural gas purification simulator,"Well logically verified and tested software may fail because of undesired physical phenomena provoking transient faults during its execution. While being the most frequent kind of faults, transient faults are difficult to localize because they have a very short life, but they may cause the failure of software. A fault tolerant method against transient faults under the hypothesis of statically verified software is presented. In order to ensure the right experimental environment, first the specification of the application is validated by Alloy analyzer, second a JML annotated Java code is statically verified. The proposed approach is based on some rules transforming basic Java statements like assignments, conditional and iterative statements into equivalent fault tolerant ones. The current research has exhibited some natural redundancy in any code, and the corrective power of repetitive statements. It also proved that the proposed method makes more efficient fault tolerant versions compared with natural error recovery, i.e. without inserting any additional code for detecting or repairing the damaged state. Illustrated by Gas purification simulator, one can see the natural error recovery in case of fault injection in the code, and how fault tolerant rules recover more errors in less time compared to the natural recovery. The proposed approach is preventive because it avoids the propagation of errors at early stages by repeating low level statements until some stability of their behavior.",2013,0, 6806,Hierarchical diagnosis for an overactuated autonomous vehicle,"The paper presents a new strategy based on hierarchical diagnosis for an autonomous four-wheel steering four-wheel driving (4WS4WD) electrical vehicle. 
It is known that the lateral stability of the vehicle may be lost in specific faulty scenarios (due, for instance, to the front wheels steering mechanism faults, wheels blocking or drop of pressure). We propose a hierarchical diagnosis to ensure the stability of the vehicle when isolating precisely the component fault. When the vehicle lateral error exceeds a threshold, a dynamic reference generator for rear wheels steering actuator is activated in order to guarantee the vehicle's lateral stability. Simultaneously, an active diagnosis based on the rear wheels steering mathematical model is used to identify the tire-road interface, a vital information for detecting and isolating faults when using analytical redundancy based residuals. The strategy proposed is tested and validated on a realistic dynamic vehicle model simulated using CarSim and Matlab-Simulink softwares.",2013,0, 6807,A method of illumination effect transfer between images using color transfer and gradient fusion,"Illumination plays a crucial role to determine the quality of an image especially in photography. However, illumination alteration is quite difficult to achieve with existing image composition techniques. This paper proposes an unsupervised illumination-transfer approach for altering the illumination effects of an image by transferring illumination effects from another. Our approach consists of three phases. Phase-one layers the target image to three luminosity-variant layers by a series of pre-processing and alpha matting; meanwhile the source image is layered accordingly. Then the layers of the source image are recolored respectively by casting the colors from the corresponding layers of the target image. In phase-two, the recolored source image is edited to seamlessly transit at the boundaries between the layers using gradient fusion technique. Finally, phase-three recolors the fused source image again to produce a similar illuminating image with the target image. Our approach is tested on a number of different scenarios and the experimental results show that our method works well to transfer illumination effects between images.",2013,0, 6808,Secret sharing mechanism with cheater detection,"Cheater detection is essential for a secret sharing approach which allows the involved participants to detect cheaters during the secret retrieval process. In this article, we propose a verifiable secret sharing mechanism that can not only resist dishonest participants but can also satisfy the requirements of larger secret payload and camouflage. The new approach conceals the shadows into a pixel pair of the cover image based on the adaptive pixel pair matching. Consequently, the embedding alteration can be reduced to preserve the fidelity of the shadow image. The experimental results exhibit that the proposed scheme can share a large secret capacity and retain superior quality.",2013,0, 6809,On effects of tokens in source code to accuracy of fault-prone module prediction,"In the software development, defects affect quality and cost in an adverse way. Therefore, various studies have been proposed defect prediction techniques. Most of current defect prediction approaches use past project data for building prediction models. That is, these approaches are difficult to apply new development projects without past data. In this study, we use 28 versions of 8 projects to conduct experiments using the fault-prone filtering technique. Fault-prone filtering is a method that predicts faults using tokens from source code modules. 
Since the classes of tokens have impact to the accuracy of fault-proneness, we conduct an experiment to find appropriate token sets for prediction. From the results of experiments, we found that using tokens extracted from all parts of modules is the best way to predict faults and using tokens extracted from code part of modules shows better precision.",2013,0, 6810,Enhancing the control of IP tactical networks via measurements,"Measurements in an IP communication system should serve network planning, allow the follow-up of Service Level Agreements and contribute in protecting the security of the network by detecting denial of service attacks as well as threats to the exterior routing protocol. In a black network, they will be best realised using active methods which rely on specific test flows. In a tactical system, they can be performed by software probes which will preserve the compactness of network nodes. All the components of a comprehensive measurement architecture are available off the shelf today, but some precautions must be taken to avoid a number of pitfalls which could lead to erroneous measurement results or make measurement overhead unacceptable. These precautions include an appropriate use of statistical laws and steps to compensate for some errors inherent in sampling methods. The conclusions of this paper are valid for deployable tactical networks, but not necessarily for highly mobile ones. Measurements in a MANET would require more theoretical and experimental work.",2013,0, 6811,Model-based generation of safety test-cases for Onboard systems,"As a core subsystem in CTCS-3, the Onboard subsystem is a typical safety-critical system, in which any fault can lead to huge human injury or wealth losing. It is important to guarantee the safety of train control system. Safety testing is an effective method to detect the safety holes and bugs in the system. However, because of the special characters of train control system like diversification, structural complexity and multiplicity of interfaces, most safety testing for train control system are manually executed based on specialistic experience, which leads to a huge testing workload. Besides, manual generation will easily cause the problem of missing test cases. In this paper, a model-based safety test method is introduced. We select a core function of onboard system as the representative to study the method. This function was analyzed by Fault Tree Analysis (FTA) to get the bottom events, which are used to turn to fault models being injected into the whole system model, affected system safety, and a set of timed automata network model of the core function is built using the tools of UPPAAL. Then COVER, the real-time test case generation tool, is used to generate the safety test cases from the system model (included fault models) automatically, and states transition criteria is customized based on preferences to achieve user-defined test, the test accuracy and efficiency is improved.",2013,0, 6812,A hybrid algorithm for coverage path planning with imperfect sensors,"We are interested in the coverage path planning problem with imperfect sensors, within the context of robotics for mine countermeasures. In the studied problem, an autonomous underwater vehicle (AUV) equipped with sonar surveys the bottom of the ocean searching for mines. We use a cellular decomposition to represent the ocean floor by a grid of uniform square cells. 
The robot scans a fixed number of cells sideways with a varying probability of detection as a function of distance and of seabed type. The goal is to plan a path that achieves the minimal required coverage in each cell while minimizing the total traveled distance and the total number of turns. We propose an off-line hybrid algorithm based on dynamic programming and on a traveling salesman problem reduction. We present experimental results and show that our algorithm's performance is superior to published results in terms of path quality and computational time, which makes it possible to implement the algorithm in an AUV.",2013,0, 6813,Automatic TFT-LCD mura detection based on image reconstruction and processing,"Automatic inspection of Mura defects is a challenging task in thin-film transistor liquid crystal display (TFT-LCD) defect detection, which is critical for LCD manufacturers to guarantee high standard quality control. In this paper, we propose a set of automatic procedures to detect mura defects by using image processing and computer vision techniques. Singular Value Decomposition (SVD) and Discrete Cosine Transformation(DCT) techniques are employed to conduct image reconstruction, based on which we are able to obtain the differential image of LCD Cells. In order to detect different types of mura defects accurately, we then design a method that employs different detection modules adaptively, which can overcome the disadvantage of simply using a single threshold value. Finally, we provide the experimental results to validate the effectiveness of the proposed method in mura detection.",2013,0, 6814,MATCASC: A tool to analyse cascading line outages in power grids,"Blackouts in power grids typically result from cascading failures. The key importance of the electric power grid to society encourages further research into sustaining power system reliability and developing new methods to manage the risks of cascading blackouts. Adequate software tools are required to better analyse, understand, and assess the consequences of the cascading failures. This paper presents MATCASC, an open source MATLAB based tool to analyse cascading failures in power grids. Cascading effects due to line overload outages are considered. The applicability of the MATCASC tool is demonstrated by assessing the robustness of IEEE test systems and real-world power grids with respect to cascading failures.",2013,0, 6815,Adaptive protection schemes for feeders with the penetration of SEIG based wind farm,"Due to the increasing penetration of Distributed Generation (DG), conventional distribution overhead feeder protection schemes are prone to potential threats. In order to cope up with this, adaptive feeder protection schemes are required. This work involves the development and evaluation of an adaptive Overcurrent feeder protection scheme and an adaptive Recloser and Sectionalizers feeder protection scheme to vanquish the impacts of wind based DG which is highly intermittent in nature. PSCAD is used to carry out simulations. MATLAB based software package for distribution system conductor sizing and protection coordination studies are also presented.",2013,0, 6816,Model-based testing of NASA's OSAL API An experience report,"We present a case study that evaluates the applicability and effectiveness of model-based testing in detecting bugs in real-world, mission-critical systems. NASA's Operating System ion Layer (OSAL) is the subject system of this paper. 
The OSAL is a reusable framework that wraps several operating systems (OS) and is used extensively in NASA's flight software missions. We developed a suite of behavioral models, represented as hierarchical finite state machines (FSMs), of the core file system API and generated a large number of test cases automatically. We then automatically executed these test cases against the OSAL. The results show that the OSAL is a high quality product. Naturally, due to the systematic and rigorous nature of MBT, we detected a few previously unknown corner-case bugs and issues, which escaped traditional manual testing and code reviews. We discuss the MBT architecture, the detected bugs, the code coverage of generated tests, as well as threats to validity of the study.",2013,0, 6817,"Help, help, i'm being suppressed! The significance of suppressors in software testing","Test features are basic compositional units used to describe what a test does (and does not) involve. For example, in API-based testing, the most obvious features are function calls; in grammar-based testing, the obvious features are the elements of the grammar. The relationship between features as abstractions of tests and produced behaviors of the tested program is surprisingly poorly understood. This paper shows how large-scale random testing modified to use diverse feature sets can uncover causal relationships between what a test contains and what the program being tested does. We introduce a general notion of observable behaviors as targets, where a target can be a detected fault, an executed branch or statement, or a complex coverage entity such as a state, predicate-valuation, or program path. While it is obvious that targets have triggers - features without which they cannot be hit by a test - the notion of suppressors - features which make a test less likely to hit a target - has received little attention despite having important implications for automated test generation and program understanding. For a set of subjects including C compilers, a flash file system, and JavaScript engines, we show that suppression is both common and important.",2013,0, 6818,An empirical comparison of the fault-detection capabilities of internal oracles,"Modern computer systems are prone to various classes of runtime faults due to their reliance on features such as concurrency and peripheral devices such as sensors. Testing remains a common method for uncovering faults in these systems, but many runtime faults are difficult to detect using typical testing oracles that monitor only program output. In this work we empirically investigate the use of internal test oracles: oracles that detect faults by monitoring aspects of internal program and system states. We compare these internal oracles to each other and to output-based oracles for relative effectiveness and examine tradeoffs between oracles involving incorrect reports about faults (false positives and false negatives). Our results reveal several implications that test engineers and researchers should consider when testing for runtime faults.",2013,0, 6819,Towards fast OS rejuvenation: An experimental evaluation of fast OS reboot techniques,"Continuous or high availability is a key requirement for many modern IT systems. Computer operating systems play an important role in IT systems availability. Due to the complexity of their architecture, they are prone to suffer failures due to several types of software faults. Software aging causes a nonnegligible fraction of these failures. 
It leads to an accumulation of errors with time, increasing the system failure rate. This phenomenon can be accompanied by performance degradation and eventually system hang or even crash. As a countermeasure, software rejuvenation entails stopping the system, cleaning its internal state, and resuming its operation. This process usually incurs downtime. For an operating system, the downtime impacts any application running on top of it. Several solutions have been developed to speed up the boot time of operating systems in order to reduce the downtime overhead. We present a study of two fast OS reboot techniques for rejuvenation of Linux-based operating systems, namely Kexec and Phase-based reboot. The study measures the performance penalty they introduce and the gain in reduction of downtime overhead. The results reveal that the Kexec and Phase-based reboot have no statistically significant impact in terms of performance penalty from the user perspective. However, they may require extra resource (e.g., CPU) usage. The downtime overhead reduction, compared with normal Linux and VM reboots, is 77% and 79% in Kexec and Phase-based reboot, respectively.",2013,0, 6820,Predicting defects using change genealogies,"When analyzing version histories, researchers traditionally focused on single events: e.g. the change that causes a bug, the fix that resolves an issue. Sometimes however, there are indirect effects that count: Changing a module may lead to plenty of follow-up modifications in other places, making the initial change having an impact on those later changes. To this end, we group changes into change genealogies, graphs of changes reflecting their mutual dependencies and influences and develop new metrics to capture the spatial and temporal influence of changes. In this paper, we show that change genealogies offer good classification models when identifying defective source files: With a median precision of 73% and a median recall of 76%, change genealogy defect prediction models not only show better classification accuracies as models based on code complexity, but can also outperform classification models based on code dependency network metrics.",2013,0, 6821,Predicting risk of pre-release code changes with Checkinmentor,"Code defects introduced during the development of the software system can result in failures after its release. Such post-release failures are costly to fix and have negative impact on the reputation of the released software. In this paper we propose a methodology for early detection of faulty code changes. We describe code changes with metrics and then use a statistical model that discriminates between faulty and non-faulty changes. The predictions are done not at a file or binary level but at the change level thereby assessing the impact of each change. We also study the impact of code branches on collecting code metrics and on the accuracy of the model. The model has shown high accuracy and was developed into a tool called CheckinMentor. CheckinMentor was deployed to predict risk for the Windows Phone software. However, our methodology is versatile and can be used to predict risk in a variety of large complex software systems.",2013,0, 6822,Fault localization based on failure-inducing combinations,"Combinatorial testing has been shown to be a very effective testing strategy. After a failure is detected, the next task is to identify the fault that causes the failure. 
In this paper, we present an approach to fault localization that leverages the result of combinatorial testing. Our approach is based on a notion called failure-inducing combinations. A combination is failure-inducing if it causes any test in which it appears to fail. Given a failure-inducing combination, our approach derives a group of tests that are likely to exercise similar traces but produce different outcomes. These tests are then analyzed to locate the faults. We conducted an experiment in which our approach was applied to the Siemens suite as well as the grep program from the SIR repository that has 10068 lines of code. The experimental results show that our approach can effectively and efficiently localize the faults in these programs.",2013,0, 6823,Evaluating long-term predictive power of standard reliability growth models on automotive systems,"Software is today an integral part of providing improved functionality and innovative features in the automotive industry. Safety and reliability are important requirements for automotive software and software testing is still the main source of ensuring dependability of the software artifacts. Software Reliability Growth Models (SRGMs) have been long used to assess the reliability of software systems; they are also used for predicting the defect inflow in order to allocate maintenance resources. Although a number of models have been proposed and evaluated, much of the assessment of their predictive ability is studied for short term (e.g. last 10% of data). But in practice (in industry) the usefulness of SRGMs with respect to optimal resource allocation depends heavily on the long term predictive power of SRGMs i.e. much before the project is close to completion. The ability to reasonably predict the expected defect inflow provides important insight that can help project and quality managers to take necessary actions related to testing resource allocation on time to ensure high quality software at the release. In this paper we evaluate the long-term predictive power of commonly used SRGMs on four software projects from the automotive sector. The results indicate that the Gompertz and Logistic models perform best among the tested models on all fit criteria as well as on predictive power, although these models are not reliable for long-term prediction with partial data.",2013,0, 6824,Quality assessment of row crop plants by using a machine vision system,"This paper reports research results on developing a machine vision system to assess the quality of row crop plants. Compared to the prevalent machine vision systems employed in the agricultural industry for weed-crops classification as well as plant density evaluation, the proposed machine vision system is able to detect the location of plants (weed / crops) and calculate the leaves' area for plant quality assessment, even if the leaves are overlapped with each other. The developed machine vision system involves a camera system and an image processing system. The camera system uses a coaxial camera constructed by a RGB sensor and near infrared (NIR) sensor, which cooperate with a white front lighting and NIR front lighting respectively. Plants are firstly captured by the coaxial camera. The plants are segmented from background on RGB image; the overlapping edges of leaves are detected on NIR image. Afterwards the overlapping leaves are separated and assigned to the assessed stem position of plants.
At last, based on the assigned leaves, the plants are separated, and the area of plant canopy is calculated. A set of experiments have been made to prove the feasibility of the proposed machine vision system.",2013,0, 6825,Semi-automated deployment of Simulation-aided Building Controls,"The deployment of Simulation-aided Building Controls is a complex process due to the uniqueness of each building along with an increasing complexity of building systems. Typically the deployment tasks are performed manually by highly specialized personnel which results in a poorly documented and extremely scattered deployment process. This paper introduces a workflow for the deployment of a Simulation-aided Building Control service suitable for supporting the operation phase of a building. Some tasks in the deployment process may benefit from machine support, especially data intensive, repetitive and error prone tasks. These may be fully or semi-automated. The proposed approach reduces the complexity of the setup procedure, decreases problems related to the uniqueness of the infrastructure and supports the documentation of the deployment process. Specially large facility management service providers may profit from this deployment process.",2013,0, 6826,A customized design framework for the model-based development of engine control systems,"In the model-based design of complex technical systems, many design data artifacts are generated, such as models in different formalisms and design-related documents, which include specifications, test results, and design decisions. The consistent treatment and integration of these design artifacts is a challenge that is as of yet unsolved in industrial practice. This paper illustrates the industrial applicability of a software-based Design Framework (DF) [1] for the model-based design of an engine control system that was developed recently within the European research project MULTIFORM [2]. The goal of the Design Framework is to reduce the design effort, and thus the cost, while improving the quality of the designed system by consistently integrating the artifacts and tools that arise in model-based design processes. To ensure that design inconsistencies and errors are detected as early as possible (i.e. when it is relatively cheap to correct them), the framework provides structured data and model management as well as automated design consistency checking and design parameter propagation.",2013,0, 6827,Recent progress in thin wafer processing,"The ability to process thin wafers with thicknesses of 20-50um on front- and backside is a key technology for 3D IC. The most obvious reason for thin wafers is the reduced form factor, which is especially important for handheld devices. However, probably even more important is that thinner wafers enable significant cost reduction for TSVs. The silicon real estate consumed by the TSVs has to be minimized in order that the final device provides a performance advantage compared to traditional 2D devices. The only way to reduce area consumption by the TSVs is to reduce their diameter. For a given wafer thickness the reduction of TSV diameter increases the TSV aspect ratio. Consensus has developed on the use of Temporary Bonding / Debonding Technology as the solution of choice for reliably handling thin wafers through backside processing steps. 
While the majority of the device manufacturing steps on the front side of the wafer will be completed with the wafer still at full thickness, it will be temporarily mounted onto a carrier before thinning and processing of the features on its backside. Once the wafer reaches the temporary bonding step, it already represents a significant value, as it has already gone through numerous processing steps. For this reason, inspection of wafers prior to non-reworkable process steps is of great interest. Within the context of Temporary Bonding, this consideration calls for inline metrology that allows for detection of excursions of the temporary bonding process in terms of adhesive thickness, thickness uniformity as well as bonding voids prior to thinning of the product wafer. This paper introduces a novel metrology solution capable of detecting all quality-relevant parameters of temporarily bonded stacks in a single measurement cycle using an Infrared (IR) based measurement principle. Thanks to the IR-based measurement principle, the metrology solution is compatible with both silicon and glass carriers. The system design has been developed with the inline metrology task in mind. This has led to a unique system design concept that enables scanning of wafers at a throughput rate sufficient to enable 100% inspection of all bonded wafers inline in the Temporary Bonding system. Both current-generation temporary bonding system throughputs and future high-volume production system throughputs as required by the industry for cost-effective manufacturing of 3D stacked devices were taken into account as basic specifications for the newly developed metrology solution. Sophisticated software algorithms allow for making pass/fail decisions for the bonded stacks and triggering further inspection, processing and/or rework. Actual metrology results achieved with this novel system will be presented and discussed. In terms of adhesive total thickness variation (TTV) of bonded wafers, currently achieved performance values for post-bond TTV will be reviewed in light of roadmaps as required by high-volume production customers.",2013,0, 6828,OpenFlow Rules Interactions: Definition and Detection,"Software Defined Networking (SDN) is a promising architecture for computer networks that allows the development of complex and revolutionary applications, without breaking the backward compatibility with legacy networks. Programmability of the control-plane is one of the most interesting features of SDN, since it provides a higher degree of flexibility in network management: network operations are driven by ad-hoc written programs that substitute the classical combination of firewall, router and switch configurations performed in traditional networks. A successful SDN implementation is provided by the OpenFlow standard, which defines a rule-based programming model for the network. The development process of OpenFlow applications is currently a low-level, error-prone programming exercise, mainly performed manually in both the implementation and verification phases. In this paper we provide a first formal classification of OpenFlow rules interactions within a single OpenFlow switch, and an algorithm to detect such interactions in order to aid OpenFlow application development.
Moreover, we briefly present a performance evaluation of our prototype and how it has been used in a real-world application.",2013,0, 6829,Easily Rendering Token-Ring Algorithms of Distributed and Parallel Applications Fault Tolerant,"We propose in this paper a new algorithm that, when called by existing token ring-based algorithms of parallel and distributed applications, easily renders the token tolerant to losses in the presence of node crashes. At most k consecutive node crashes are tolerated in the ring. Our algorithm scales very well since a node monitors the liveness of at most k other nodes and neither a global election algorithm nor broadcast primitives are used to regenerate a new token. It is thus very effective in terms of latency cost. Finally, a study of the probability of having at most k consecutive node crashes in the presence of f failures and a discussion of how to extend our algorithm to other logical topologies are also presented.",2013,0, 6830,A Model to Assess the Usability of Enterprise Architecture Frameworks,"Since the advent of Enterprise Architecture (EA), several EA frameworks have been proposed. Each EA framework has strong and weak points which can be assessed qualitatively to determine the best EA for an organization. One of the qualitative characteristics is the usability of the EA framework. However, currently there is a lack of well-defined criteria to measure EA framework usability. In this paper a model is proposed to evaluate and measure the usability of EA frameworks.",2013,0, 6831,An Empirical Analysis of a Testability Model,"Testability modeling has been performed for many years. Unfortunately, the modeling of a design for testability is often performed after the design is complete. This limits the functional use of the testability model to determining what level of test coverage is available in the design. This information may be useful to help assess whether a product meets a requirement to achieve a desired level of test coverage, but has little pro-active effect on making the design more testable. This paper investigates and presents a number of approaches for tackling this problem. Approaches are surveyed, achievements and main issues of each approach are considered. Investigation of that classification will help researchers who are working on model testability to deliver more applicable solutions.",2013,0, 6832,An Empirical Study into Model Testability,"Testability modeling has been performed for many years. Unfortunately, the modeling of a design for testability is often performed after the design is complete. This limits the functional use of the testability model to determining what level of test coverage is available in the design. This information may be useful to help assess whether a product meets a requirement to achieve a desired level of test coverage, but has little pro-active effect on making the design more testable. This paper investigates and presents a number of approaches for tackling this problem. Approaches are surveyed, achievements and main issues of each approach are considered. Investigation of that classification will help researchers who are working on model testability to deliver more applicable solutions.",2013,0, 6833,Measuring and visualising the quality of models,"The quality of graphical software or business process models is influenced by several aspects such as correctness of the formal syntax, understandability or compliance to existing rules.
Motivated by a standardised software quality model, we discuss characteristics and subcharacteristics of model quality and suggest measures for those quality (sub)characteristics. Also, we extended SonarQube, a well-known tool for aggregating and visualising different measures of software quality, such that it can now be used with repositories of business process models as well. This allows assessing the quality of a collection of models in the same way that is already well-established for assessing the quality of software code. Given the fact that models are early software development artifacts (and can even be executable and thus become a part of a software product), such quality control can lead to the detection of possible problems in the early phases of the software development process.",2013,0, 6834,Multi_level data pre_processing for software defect prediction,"Early detection of defective software components enables verification experts to devote more time and allocate scarce resources to the problem areas of the system under development. This is the usefulness of defect prediction; defect prediction streamlines testing efforts and reduces the development cost of software when defects are detected at the early stages. An important step in building effective predictive models is to apply one or more sampling techniques. A model is claimed to be effective if it is able to correctly classify defective and non-defective modules as accurately as possible. In this paper we considered the outcome of data pre-processing by filtering and compared the performance with the non-pre-processed original dataset. We compared the performance of the four different K-Nearest Neighbor (KNN) classifiers (LWL, Kstar, IBK, IB1) with Non Nested Generalized Exemplars (NNGE), Random Tree and Random Forest. We observed that our multi-level data pre-processing, which includes double attribute selection and tripartite instance filtering, enhanced the defect prediction results. We also observed that these two filtering methods improved the performance of the prediction results independently, by using attribute selection only and resampling filtering. The excellent performance achieved could be attributed to the removal of irrelevant attributes by dimension reduction, while resampling also handled the problem of class imbalance. These together led to the improved performance of the classifiers considered. NNGE, as its name implies, avoided generalization of some of the datasets, those with more than 2,000 instances (JM1=10,885 and KC1=2,109), using pre-processing; this may be due to conflicting instances. We also used Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE) measures to check the effectiveness of our model.",2013,0, 6835,"A cognitive system for future-proof wireless microphones: Concept, implementation and results","For several years, `cognitive radio' has been given much attention in research and is seen as a future technology in communications. In this paper a cognitive radio system for Program Making and Special Event (PMSE) devices (e.g. wireless microphones) is presented and the results of a field test are discussed. First, we outline the challenges which the PMSE industry has to cope with and how regulatory changes and the digital dividend in the TV UHF band impact the operational reliability of PMSE devices.
Afterwards, a system is presented that uses cognitive radio techniques to ensure the high audio quality requirements and interference free operation of PMSE devices under the previously mentioned circumstances. Beside a general overview over the developed system we focus on a distributed spectrum sensing network as part of the cognitive information acquisition to monitor the current spectrum situation. The data is used by the cognitive engine to detect potential interferers and to trigger operation parameter changes (e.g. frequency or transmit power) of the PMSE devices to avoid link quality degradation. We present the built-up sensor nodes for the spectrum measurement and the developed software to control the sensor nodes and to preprocess the measured data. Finally, the results, collected with the field test platform in the fair-ground of Berlin, show convincingly how the developed cognitive system makes PMSE devices future-proof.",2013,0, 6836,Agreement assessment of biochemical pathway models by structural analysis of their intersection,"In case of model development, it would be an advantage to assess the quality of available models looking for the best one or to find suitable parts of a published model to build a new one. The differences or contradictions in reconstructions can indicate the level of agreement between different authors about the topic of interest. The intersecting part of models can reveal also the differences in the scope of the models. Two pairs of models from BioCyc database were analyzed: 1) the Escherichia coli models ecol199310cyc and ecol316407cyc and 2) the Saccharomyces cerevisiae models iND750 and iLL672. The ModeRator software tool is used to compare models and generate their intersection model. The structural parameters of models are analyzed by the software BINESA. The study reveals very different parameters of the intersections of the pairs of the E. coli and the S. cerevisiae models. The models built by the same group of authors like in the case of E. coli is selected as an example of a high agreement between models and can be interpreted as a consensus part of two initial models. The intersection of the S. cerevisiae models demonstrates very different structural properties and the intersection model would not be able to function even after significant improvement. The structural analysis of the pairs of original models and their intersections is performed to determine which structural parameters can be used to determine a poor agreement between the pairs of models. It is concluded that an application of the automated comparison and intersection generation of two models can give a fast insight in the similarity of the models to find out the consensus level in modelling of metabolism of a particular organism. This approach can be used also to find similarities between the models of different organisms. Automation of intersection creation and structural analysis are enabling technologies of this approach.",2013,0, 6837,A Simulink model of an active island detection technique for inverter-based distributed generation,"With the increased involvement of Distributed Power Generation Systems (DPGSs) into the conventional power system, the structure has evolved and therefore has brought in various challenges albeit improving flexibility and smartness of the system. This paper addresses modeling one of these challenges where a Simulink model for inverter-based distributed generation (IBDG) active islanding detection technique is introduced. 
Out of the various types of active islanding detection methods, the modeling of the general electric islanding detection method which uses the positive feedback of the voltage or frequency at the point of common coupling (PCC) for the detection of an island is presented. This methodology is modeled and applied for an IBDG connected to a low voltage distribution network. The simulation results are presented for a 20kW, three-phase IBDG showing that the system is able to detect islanding and cease the current flow from the IBDG even under critical operating condition of a close matching between the power delivered by the inverter and the load demand (zero nondetection zone operation).",2013,0, 6838,An interval nonparametric regression method,"This paper proposes a nonparametric multiple regression method for interval data. Regression smoothing investigates the association between an explanatory variable and a response variable. Here, each interval variable of the input data is represented by its range and center and a smooth function between a pair of vector of interval variables is defined. In order to test the suitability of the proposed model, a simulation study is undertaken and an application using thirteen project data of the NASA repository to estimate interval software size is also considered. These real data represent variability and/or uncertainty innate to the project data. The prediction quality is assessed by a mean magnitude of relative errors calculated from test data.",2013,0, 6839,A trust management system for ad-hoc mobile clouds,"Most current cloud provisioning involves a data center model, in which clusters of machines are dedicated to running cloud infrastructure software. Ad-hoc mobile clouds, in which infrastructure software is distributed over resources harvested from machines already existed and used for other purposes, are gaining popularity. In this paper, a trust management system (TMC) for mobile ad-hoc clouds is proposed. This system considers availability, neighbors' evaluation and response quality and task completeness in calculating the trust value for a node. The trust management system is built over PlanetCloud which introduced the term of ubiquitous computing. EigenTrust algorithm is used to calculate the reputation trust value for nodes. Finally, performance tests were executed to prove the efficiency of the proposed TMC in term of execution time, and detecting node behavior.",2013,0, 6840,Software applications integrated in the management of the patient with chronic liver disease,"The focal point in the debate on the management of the patient with chronic liver disease is the healthcare system and the health care services, with an accent on the health care provided at home, which can be discussed only when there is a connection integrated with the medical assistance, providing useful information. This paper discusses in theoretical synthesis (definitions, terms, classification), accompanied by empirical evidence, matters regarding the chronic diseases, followed by comments on the integration of the software applications in the management of the patient with chronic liver disease and the ethical concerns it involves. 
We believe that all the activities implicated illustrate the complexity of the management of the patient with a chronic disease, which concentrates on minimizing the probability of complications and strives to offer the best possible quality of life.",2013,0, 6841,Development of machine vision solution for grading of Tasar silk yarn,"The quality of Tasar fabric demands uniformly coloured silk yarn during weaving. However, the variation of yarn colour depends on various natural factors like the eco-race and feeding of silk worms, weather conditions etc., and other production factors. So, silk yarns need to be sorted after production. At present, yarns are sorted manually by a group of experts, which is subjective in nature. Again, due to the lustrous nature of silk yarn, it reflects light and therefore it is difficult to ascertain the exact colour manually. Slight variation in colour is difficult to detect manually, but the market demands lots with perfectly uniformly coloured yarns within the lot, though the inter-lot variation in colour is encouraged. So, there is a need to develop a solution which can grade the silk yarn objectively, reliably and in a way that mimics human perception. This paper proposes a new machine vision solution for automatic grading of silk yarn based on its colour. The system consists of an enclosed cabinet which encompasses a low-cost digital camera, a uniform illumination arrangement, a weighing module, a mechanical arrangement for sample holding and grading software which applies an image analysis technique using the CIELab colour model with a rotation-invariant statistical feature based hierarchical grading algorithm for colour characterization. The performance of the system has been validated against human experts and the accuracy has been calculated as 91%.",2013,0, 6842,The Application of Fuzzing in Web Software Security Vulnerabilities Test,"Web applications need extensive testing before deployment and use so that security vulnerabilities can be detected early and the safety quality of the software improved; the purpose of this paper is to study the application of fuzzing to security vulnerability testing. This article first introduces common Web software security vulnerabilities, then provides a comprehensive overview of fuzzing technology, and uses the fuzzing tool Web fuzz to execute software vulnerability testing and check whether a software security hole exists. The test results prove that fuzzing is suitable for software security vulnerability testing, but this methodology applies only to the security research field and is still insufficient for software security vulnerability detection.",2013,0, 6843,A toolset for easy development of test and repair infrastructure for embedded memories,"The development of a modern System-on-Chip (SOC) requires the usage of embedded IP blocks from different vendors. One of the widely used IP blocks in a SOC is an embedded memory, which usually occupies a significant die area. All IP blocks can have manufacturing defects. Meanwhile, in contrast to other SOC components, embedded memories are more defect-prone. STAR Hierarchical System (SHS) is an infrastructural IP solution for built-in test and repair engines of IP blocks. It is now widely adopted by a variety of customers whose development flows essentially differ from each other.
To cover the diversity of user maintenance requests arising from differences in development flows, we suggest a new approach based on a library of SHS standard use flows implemented in the form of templates, together with a special toolset for their modification and verification. The implemented library of templates assists in designing new flows quickly through retrieving and customizing specific examples. Users can extend the library via insertion of new templates. A formal verification approach already used for business processes is successfully applied to the built library. The application is illustrated on some use flow examples.",2013,0, 6844,A novel fault-tolerant task scheduling algorithm for computational grids,"A computational grid is a hardware and software infrastructure that provides consistent, dependable, pervasive and inexpensive access to high-end computational capabilities in a multi-institutional virtual organization. Computational grids provide the computing power needed for the execution of tasks. Scheduling tasks in a computing grid is an important problem. To select and assign the best resources for a task, we need a good scheduling algorithm in grids. As grids typically consist of strongly varying and geographically distributed resources, choosing a fault-tolerant computational resource is an important issue. The main scheduling strategy of most fault-tolerant scheduling algorithms depends on the response time and a fault indicator when selecting a resource to execute a task. In this paper, a scheduling algorithm is proposed to select the resource, which depends on a new factor called the Scheduling Success Indicator (SSI). This factor consists of the response time, success rate and the predicted experience of grid resources. Whenever a grid scheduler has tasks to schedule on grid resources, it uses the Scheduling Success Indicator to generate the scheduling decisions. The main scheduling strategy of the fault-tolerant algorithm is to select resources that have the lowest tendency to fail and more experience in task execution. Extensive simulation experiments are conducted to quantify the performance of the proposed algorithm on GridSim. GridSim is a Java-based discrete-event Grid simulation toolkit. Experiments have shown that the proposed algorithm can considerably improve grid performance in terms of throughput, failure tendency and worth.",2013,0, 6845,"Fetal heart rate discovery: Algorithm for detection of fetal heart rate from noisy, noninvasive fetal ECG recordings","Fetal heart rate variability is known to be of great importance in assessing fetal health status. The simplest way of measuring fetal heart rate is the non-invasive fetal ECG (fECG). A novel and efficient algorithm for detection of the fetal ECG is needed. We analyzed 75 FECG recordings from the PhysioNet Challenge 2013 database. The detected RR interval peaks were compared with fetal scalp electrode measurements. Our algorithm focuses on detecting the most prominent part of the fetal QRS complex, i.e. the RS slope. First, we remove long-range trends and find the two channels with the best quality fetal ECG. Then, we localize the repolarisations having the required characteristics (adequate amplitude and slope). Note that the algorithm is adaptive and finds by itself the optimal RS slope characteristics for every recording. These steps allowed us to obtain accurate and reliable results for fetal R peak detection, even in the case of very noisy data.
The preliminary test score of the PhysioNet Challenge were 132.664 (event 4) and 11.961 (event 5). The phase 3 score of the PhysioNet Challenge were 118.221 (event 4) and 10.663 (event 5). This is an opensource algorithm available at the PhysioNet library.",2013,0, 6846,ECGlab: User friendly ECG/VCG analysis tool for research environments,"We present ECGlab, a cross-platform, user friendly, graphical user interface for assessing results from automated analysis of ECGs in research environments. ECGlab allows visual inspection and adjudication of ECGs. It is part of our recently developed framework to automatically analyze ECGs from clinical studies, including those in the US Food and Drug Administration (FDA) ECG Warehouse. ECGlab is written in C++ using open-source libraries. Supported ECG formats include Physionet, ISHNE and FDA XML HL7. ECG processing and automated analysis is done with ECGlib (ECG analysis library). ECGs can be loaded individually or grouped using ECGlib index format and information such as demographics or signal quality metrics can be loaded from metafiles to navigate through the ECGs and guide their review. The user can graphically adjudicate the ECGs in a semi-automatic or manual fashion. Vectorcardiograms can be assessed as well. A prototype for automatic extraction, based on heart rate stability and signal quality, of 10 seconds ECGs from continuous Holter recordings is also available. ECGlab, which has been successfully tested in Linux and Microsoft Windows, is currently being used to assess ECGs from clinical studies. We are working on making ECGlab open-source in order to facilitate ECG research.",2013,0, 6847,3D analysis of myocardial perfusion from vasodilator stress computed tomography: Can accuracy be improved by iterative reconstruction?,"Computed tomography (CT) is an emerging tool to detect stress-induced myocardial perfusion abnormalities. We hypothesized that iterative reconstruction (IR) could improve the accuracy of the detection of significant coronary artery disease using quantitative 3D analysis of myocardial perfusion during vasodilator stress. We studied 39 patients referred for CT coronary angiography (CTCA) who agreed to undergo additional imaging with regadenoson (Astelias). Images were acquired using 256-channel scanner (Philips) and reconstructed using 2 different algorithms: filtered backprojection (FEP) and IR (iDose 7, Philips). Custom software was used to analyze both FEP and IR images. An index of severity and extent of perfusion abnormality was calculated for each 3D myocardial segment and compared to perfusion defects predicted by coronary stenosis > 50% on CTCA. Five patients with image artifacts were excluded. Ten patients with normal coronaries were used to obtain reference values, which were used to correct for x-ray attenuation differences among normal myocardial segments. Compared to the conventional FEP images, IR images had considerably lower noise levels, resulting in tighter histograms of x-ray attenuation. In the remaining 24 patients, IR improved the detection of perfusion abnormalities. Quantitative 3D analysis of MDCT images allows objective detection of stress-induced perfusion abnormalities, the accuracy of which is improved by IR.",2013,0, 6848,An improved scheme for minimizing handoff failure due to poor signal quality,"There is a growing demand on the mobile wireless operators to provide continuous, satisfactory, and reliable quality of service to their teeming subscribers. 
Handoff failure, which is one of the major causes of call drops, is a major challenge prevalent in mobile systems worldwide. Mobile users are more sensitive to handoff failure than to new call failure. In this paper, an improved scheme for minimizing handoff failure due to poor signal quality was presented. The improved scheme was based on the following parameters: call signal quality, channel availability and the direction of movement of the mobile terminal to the base station. Through the use of MATLAB software, the performance of the improved handoff scheme was compared with an existing handoff scheme. The comparison was based on the handoff failure probability and new call blocking probability for each of the schemes. The results obtained from simulation showed that the new scheme has a lower handoff failure probability than the existing scheme.",2013,0, 6849,Performance test and bottle analysis based on scientific research management platform,"The performance and service quality of a Web system become more and more important with the rapid development of Web application technology and the popularization of Web applications. There are many particularities and difficulties in testing Web applications compared with traditional applications, especially in performance testing, such as unpredictable load, the realism of the designed scenarios and the accuracy of bottleneck analysis. This paper builds on traditional Web system performance testing theory and uses the testing tool LoadRunner to analyze how to precisely detect the shortcomings of Web system performance. The method has been implemented in a scientific research management platform system and has obtained the anticipated results. The paper divides the Web performance testing method into six processes: making the performance testing plan, building the performance testing environment, recording and developing the testing script, creating the testing scenario, running and monitoring the scenario, and analyzing the testing results. It also gives the general steps of Web performance testing.",2013,0, 6850,Implementation and characterization of a reconfigurable time domain reflectometry system,"A practical architecture for pulsed radar and time domain reflectometry (TDR) is presented in this paper. Incorporating the software-defined radio paradigm, the prototype features a reconfigurable transceiver. Reconfigurability is achieved by implementing an arbitrary waveform generator (AWG) in a Field Programmable Gate Array (FPGA) and suitable digital-to-analog converters (DAC). The AWG allows for changes in the width and shape of a transmitted pulse on-the-fly, i.e. without the need for reprogramming. In the current implementation, the transmitter is able to achieve a minimum pulse width of 6.25ns, which results in a 62.5 cm range resolution for a non-dispersive medium with a 0.67 velocity factor. The resolution was verified by testing several cable setups with two differently-spaced discontinuities. The receiver, on the other hand, employs equivalent time sampling (ETS) through on-board analog-to-digital converters (ADC) and a custom delay generator. The ETS receiver was able to attain a 0.357ns equivalent time sampling interval, which is equivalent to a 2.8 GHz sampling rate for periodic signals. This allows the transceiver to locate a discontinuity with 3.57cm accuracy in a non-dispersive medium with a velocity factor of 0.67, which was verified through experiments performed on open circuit-terminated cables of varying lengths.
The system is intended to be used in detecting faults on a TDR cable buried underground to detect slope movement.",2013,0, 6851,Study of led power fault online avoidance control strategy,"This paper studies a power fault online avoidance strategy to improve the reliability of the power source. Based on real-time monitoring of the power running status, the nature and urgency of a fault are determined. By adaptively adjusting the stress on the components and executing a sound software shut-off method, the power source is protected in a targeted way at different levels. The results show that the occurrence probability of power source faults decreases and the time between failures increases. This is especially applicable to severe environments and fields where the power source should have high reliability.",2013,0, 6852,Fault prediction by utilizing self-organizing Map and Threshold,"Predicting the parts of a program that are more defect prone could ease the software testing process, which leads to a reduction in testing cost and testing time. Fault prediction models use software metrics and defect data of earlier or similar versions of the project in order to improve software quality and exploit available resources. However, some issues such as cost, experience, and time limit the availability of fault data for modules or classes. In such cases, researchers focus on unsupervised techniques such as clustering and they use experts or thresholds for labeling modules as faulty or not faulty. In this paper, we propose a prediction model utilizing a self-organizing map (SOM) with a threshold to build a better prediction model that can help testers in the labeling process and no longer needs experts to label the modules. Data sets obtained from three Turkish white-goods controller software systems are used in our empirical investigation. The results based on the proposed technique are shown to aid the testers in making better estimations in most of the cases in terms of overall error rate, false positive rate (FPR), and false negative rate (FNR).",2013,0, 6853,A study of comparative analysis of regression algorithms for reusability evaluation of object oriented based software components,"Reusability of software is found to be a key feature of quality. The most obvious outcomes of software reuse are overcoming the software crisis, advancing software quality and improving productivity. The issue of spotting reusable software components in a given existing system is very important, but it is not yet much cultivated. For the identification and evaluation of reusable software we use an approach that has its foundation in software models and metrics. The idea of this study is to examine the competence and effectiveness of machine learning regression techniques, which are experimented with here to build a precise and constructive evaluation model that can assess the reusability of Object Oriented based software components based on the values of five metrics of the metrics suite presented by Shyam R. Chidamber and Chris F. Kemerer. By setting different values of the parameters of these algorithms, it is also concluded which specific algorithm or class of algorithms is appropriate for reusability evaluation and with which parameter values. For this comparative analysis we have used Weka and experimented with different regression techniques such as multi-linear regression, Model Tree M5P, the standard instance-based learning scheme IBk and the meta-learning scheme Additive Regression.
As the result of this analysis and experimentation Standard instance-based learning IBk with no distance weighting is found to be the best regression algorithm for reusability evaluation of Object Oriented software components using CK metrics.",2013,0, 6854,Bootstrap Interval Estimation Methods for Cost-Optimal Software Release Planning,"We discuss interval estimation methods for cost-optimal software release time based on a discretized software reliability growth model. In our approach, we use a bootstrap method, in which we do not need to derive probability distributions of model parameters and optimal software release time analytically by using an asymptotic theory assuming a large number of samples. Then we estimate bootstrap confidence intervals of cost-optimal software release time based on two kinds of bootstrap confidence interval methods. Our numerical examples confirm that our bootstrap approach yields a simulation-based probability distribution of cost-optimal software release time from software fault-count data.",2013,0, 6855,EucaBomber: Experimental Evaluation of Availability in Eucalyptus Private Clouds,"Cloud computing is a computational paradigm with increasing adoption because it offers resources as services in a dynamically scalable way through the Internet. The constant concern in providing cloud computing services in a reliable and uninterrupted manner inspires availability and reliability studies. A feasible method of performing such studies is through automated fault injection, enabling to observe the behavior of the cloud architecture under many conditions. This paper presents a fault injection tool, named EucaBomber, for reliability and availability studies in the Eucalyptus cloud computing platform. EucaBomber allows to define the probability distribution associated to the time between generated events. The efficiency of EucaBomber is verified through testbed scenarios where faults and repairs are injected in a private Eucalyptus cloud. The experimental results are cross-checked with results estimated from a Reliability Block Diagram, using the same input parameters of the experimental testbed. The test scenarios also illustrate how the tool may assist cloud systems administrators and planners to evaluate the system's availability and maintenance policies.",2013,0, 6856,Optimization of test suite-test case in regression test,"Exhaustive product evolution and testing is required to ensure the quality of product. Regression testing is crucial to ensure software excellence. Regression test cases are applied to assure that new or adapted features do not relapse the existing features. As innovative features are included, new test cases are generated to assess the new functionality, and then included in the existing pool of test cases, thus escalating the cost and the time required in performing regression test and this unswervingly impacts the release, laid plan and the quality of the product. Hence there is a need to select minimal test cases that will test all the functionalities of the engineered product and it must rigorously test the functionalities that have high risk exposure. Test Suite-Test Case Refinement Technique will reduce regression test case pool size, reduce regression testing time, cost & effort and also ensure the quality of the engineered product. This technique is a regression test case optimization technique that is a hybrid of Test Case Minimization based on specifications and Test Case Prioritization based on risk exposure. 
This approach will facilitate achievement of quality product with decreased regression testing time and cost yet uncover same amount of errors as the original test cases.",2013,0, 6857,Enhancing the Accuracy of Case-Based Estimation Model through Early Prediction of Error Patterns,"The paper tries to explore the importance of software fault prediction and to minimize them thoroughly with the advanced knowledge of the error-prone modules, so as to enhance the software quality. For estimating a new project effort, case-based reasoning is used to predict software quality of the system by examining a software module and predicting whether it is faulty or non faulty. In this research we have proposed a model with the help of past data which is used for prediction. Two different similarity measures namely, Euclidean and Manhattan are used for retrieving the matching case from the knowledge base. These measures are used to calculate the distance of the new record set or case from each record set stored in the knowledge base. The matching case(s) are those that have the minimum distance from the new record set. This can be extended to variety of system like web based applications, real time system etc. In this paper we have used the terms errors and faults, and no explicit distinction made between errors and faults. In order to obtain results we have used MATLAB 7.10.0 version as an analyzing tool.",2013,0, 6858,Acoustic imaging of bump defects in flip-chip devices using split spectrum analysis,"In this paper the performance of multi-narrow-band spectral analysis was evaluated concerning defect detection in microelectronic components with flip-chip contacts. Today, flip-chip technology is widely applied for interconnecting silicon dies to a substrate within high-end semiconductor packaging technologies. The integrity of the bump solder interconnection is of major concern for the reliability in this technology. Non-destructive defect localization and analysis of the flip-chip interconnections operating in a semi-automated mode is strongly desired. Scanning acoustic microscopy (SAM) combined with subsequent signal analysis has high potential for non-destructive localization of defective flip-chip interconnects. Analyzing multiple narrow spectral bands of signals acquired by a scanning acoustic microscope enabled the identification and localization of defective flip-chip interconnects. In the current study a 180 MHz transducer with 8 mm focal length was employed for acoustic data acquisition by SAM. Those data were then analyzed off-line by discrete Fourier transformation, chirp z-transform and cosine transform using custom made MATLAB software. Through multi-narrow band spectral analysis, defective flip-chip interconnects that have not been revealed by standard acoustical imaging methods have been detected successfully. Acoustically found defects have been confirmed by subsequent FIB-cross sectioning and SEM imaging. The high resolution SEM imaging revealed complete and partial delamination at the interface between the die and the bump.",2013,0, 6859,Towards a safety case for runtime risk and uncertainty management in safety-critical systems,"Many safety-critical systems have a human-in-the-loop for some part of their operation, and rely on the higher cognitive abilities of the human operator for fault diagnosis and risk-management decision-making. Although these operators are often experts on the processes being controlled, they still sometimes misjudge situations or make poor decisions. 
There is thus potential for Safety Decision Support Systems (SDSS) to help operators, building on past successes with Clinical Decision Support Systems in the health care industry. Such SDSS could help operators more accurately assess the system's state along with any associated risk and uncertainty. However, such a system supporting a safety critical operation inevitably attracts its own safety assurance obligations. This paper will outline those challenges and suggest an initial safety case architecture for SDSS.",2013,0, 6860,Notice of Violation of IEEE Publication Principles
The Right Thing to Do: Automating Support for Assisted Living with Dynamic Decision Networks,"Notice of Violation of IEEE Publication Principles
“The Right Thing To Do: Automating Support for Assisted Living with Dynamic Decision Networks”
by Nayyab Zia Naqvi, Davy Preuveneers, Wannes Meert, Yolande Berbers in the Proceedings of the IEEE 10th International Conference on Ubiquitous Intelligence and Computing, and 10th International Conference on Autonomic and Trusted Computing (UIC/ATC), December 2013, pp. 262-269

After careful and considered review of the content and authorship of this paper by a duly constituted expert committee, this paper has been found to be in violation of IEEE’s Publication Principles.

This paper contains significant portions of original text from the paper cited below. The original text was copied without attribution (including appropriate references to the original author(s) and/or paper title) and without permission.

“Dynamic Decision Networks for Decision-Making in Self-Adaptive Systems: A Case Study”
by Nelly Bencomo, Amel Belaggoun, Valerie Issarny in the Proceedings of the 8th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS), May 2013, pp. 113-122

In an era where ubiquitous systems will be mainstream, users will take a more passive role and these systems will have to make smart decisions on behalf of their users. Automating these decisions in a continuously evolving dynamic context can be challenging. First of all, the right thing to do usually depends on the circumstances and context at hand. What might be a good decision today could be a bad one tomorrow. Secondly, the system should be made aware of the impact of its decisions over time so that it can learn from its mistakes as humans do. In this paper, we formulate a technique for decision support systems to mitigate runtime uncertainty in the observed context, and demonstrate our context-driven probabilistic framework for ubiquitous systems that addresses the above-mentioned challenges. Our framework incorporates end-to-end Quality of Context (QoC) as a key ingredient to make well-informed decisions. It leverages Dynamic Decision Networks (DDN) to deal with the presence of uncertainty and the partial observability of context information, as well as the temporal effects of the decisions. Our experiments with the framework demonstrate the feasibility of our approach and its potential benefits in automatically making the best decision in the presence of a changing environment.",2013,0, 6861,A Smart Diagnostic Model for an Autonomic Service Bus Based on a Probabilistic Reasoning Approach,"The growing complexity and scale of systems imply challenges in including Autonomic Computing capabilities that help to maintain or improve performance, availability and reliability characteristics. The autonomic management of a system can be defined deterministically based on experimental observations of the system and the possible results of associated plans. However, in dynamic environments with changing conditions and requirements, a better technique to diagnose observations and learn about the functioning conditions of the managed system is needed to guide the autonomic management. In the case of medical diagnostics, tests have included statistical and probabilistic models to aid and improve the results and select better medical treatments. In this paper we also adopt a probabilistic approach to define a Bayesian network from monitored data of an Enterprise Service Bus under different workload conditions. This model is used by the Autonomic Service Bus as a knowledge base to diagnose the cause of degradation problems and repair them. Experimental results assess the effectiveness of our approach.",2013,0, 6862,Agents Based Monitoring of Heterogeneous Cloud Infrastructures,"Monitoring of resources is one of the major challenges that virtualization brings to the Cloud environment, since user applications are often distributed over several nodes whose location is unknown a priori and can dynamically change. Consumers need to monitor their resources to check that service levels are continuously compliant with the agreed SLA and to detect under-utilization or overloading conditions of their resources. Furthermore, the monitoring of service levels becomes critical because of the conflicts of interest that might occur between provider and customer in case of an outage. In this paper a framework that supports the monitoring of Cloud infrastructure is presented. It provides the user with the possibility to check the state of his/her resources, even if they have been acquired from heterogeneous vendors.
The proposed environment will offer high elasticity and extensibility through the provisioning of a high level of customization of the performance indexes and metrics.",2013,0, 6863,The Nondestructive Testing Approach of Acoustic Emission for Environmentally Hazardous Objects,"The classical method for excluding the influence of frequency distortion consists of FRF calculation with subsequent adjustment of the received signals' spectral characteristics. In the article, the FRF of plane-shaped objects can be calculated theoretically. Let us do FRF calculations for a long rod. The obtained results confirm the high rate of AE signal emission irregularity and large fluctuations of the spectrum components. This conclusion is valid for all ceramic materials under test. The acoustic emission method allows detecting and registering only developing defects, prompting their classification not by size but by danger level.",2013,0, 6864,Using Jason to Develop Refactoring Agents,"Refactoring is one of the main techniques used when maintaining and evolving software. It works by changing the software in such a way that its internal structure improves without changing its external behavior. This paper focuses on issues around software refactoring, such as: (i) figuring out where the software should be refactored, (ii) defining which refactoring(s) should be applied, (iii) ensuring that the external behavior of the software will be preserved after applying the refactoring, (iv) evaluating the gains and losses in software quality resulting from the refactoring, (v) applying the refactoring, and (vi) maintaining the consistency between the refactored program code and other artifacts. Given the amount of issues to be considered, refactoring activities, when done manually, are error-prone and extremely expensive. This paper provides an extension of the Jason platform to enable the development of refactoring agents able to perform software refactoring in an autonomic way. Such an approach accelerates the process of executing a refactoring and reduces the probability of introducing defects.",2013,0, 6865,Automatic verification of test oracles in functional testing,"Functional testing of applications is essential to detect faults. Nowadays machine learning techniques have been applied in software engineering, particularly in the software testing field. However, machine learning algorithms have difficulty detecting faults in certain applications because it is difficult to find the test oracle that is used to verify computed outputs. In this paper, a novel functional testing approach to verify the test oracles is attempted. The expected execution result for a given application is generated and verified against the oracle, which determines whether the application under test has or has not behaved correctly and issues a pass/fail verdict.",2013,0, 6866,Unifying clone analysis and refactoring activity advancement towards C# applications,"Refactoring involves improving the quality of software and reducing software complexity without affecting its external behavior. The research focuses on code clones, a vital target of refactoring, since code clones increase internal complexity and maintenance effort and reduce the quality of software. A clone consists of two or more segments of code that duplicate each other on the basis of distinct types of similarity measurements. The developed algorithm introduces a new semantic-based clone management and refactoring system to detect, manage and refactor both exact and near-miss code clones.
The major goal is to remove the clones in source code fragments by unifying the process of clone detection and refactoring. The implemented clone refactoring technique detects and fixes the clones in multiple classes using a graph structure and methods. The code analyzer analyzes the user-typed code by separating out the auto-generated code. Based on a graph structure, a new Abstract Semantic Graph Refactoring algorithm for detecting the clones in multiple classes of source code fragments has been experimented with in this research.",2013,0, 6867,Performance analysis of circular patch antenna for breast cancer detection,"Nowadays, wireless communication systems have become dominant worldwide. The reason behind this is the major component called the antenna. Even though people lead sophisticated lives, they suffer from several diseases; women in particular suffer from breast cancer. In earlier days, the X-ray mammography technique was applied to detect breast cancer. However, the major drawback of this technique is the ionizing radiation from X-rays, which leads to cell death, cell mutation and fetal damage within the body. So the Microwave Breast Imaging (MBI) technique has been implemented to obtain information about breast tissues. It uses low power compared to the X-ray technique. However, this technique also has impediments, as it suffers from high reflection from the breast tissue. Hence, to overcome this problem, a circular patch antenna with a radius of 14.5 mm has been designed using a flexible FR-4 substrate. The proposed antenna is also combined with a skin model to assess the cancer level within the body. Both models are designed in Ansoft HFSS software at an operating frequency of 2.5GHz. The results of the proposed antenna, such as return loss, VSWR, gain and directivity, are analyzed and are about -21dB, 1.2, 4.404dB and 4.48dB, respectively. The current density parameter plays a major role here since it indicates the cancer level. The higher the current density, the greater the visibility of the cancer. Here it is about 127 A/m2. For the proposed antenna, a miniaturization technique called Defected Ground Structure has been applied to the ground plane to improve the performance of the system.",2013,0, 6868,Agent based tool for topologically sorting badsmells and refactoring by analyzing complexities in source code,"Code smells are smells found in source code. As the source code becomes larger and larger, we find bad smells in the source code. These bad smells are removed using refactoring. Hence, experts say the method of removing bad smells without changing the quality of the code is called Refactoring [1]. But this refactoring, if not done properly, is risky and can take time, i.e. days or weeks. Hence, here we provide a technique to arrange these bad smells, analyze the complexities found in the source code and then refactor them. Bad smell detection and scheduling have been done manually or semi-automatically. This paper provides a method of automatically detecting these bad smells. This automatic detection of bad smells is done with the help of Java Agent DEvelopment.",2013,0, 6869,Verification of multi decisional reactive agent using SMV model checker,"On account of the evolution of technology, more complicated software arrives with the need to be verified to prevent the occurrence of errors in a system, which could generate fatal accidents and economic loss.
These errors must be detected at an early stage during the development process to reduce redesign costs and faults. To ensure the correctness of software systems, formal verification provides an alternative approach to verify that an implementation of the expected system fulfills its specification. This paper focuses on the verification of reactive system behaviors specified by the Multi Decisional Reactive Agent (MDRA) and modeled using the MDRA Profile. The objective in this paper is to use the Model Checking technique for MDRA Profile verification through the Model Checker SMV (Symbolic Model Verifier) to automatically verify the system properties expressed in temporal logic. SMV, mainly focusing on reactive systems, provides modular hierarchical descriptions and the definition of reusable components. Besides, the expression of system properties is further described through both Computation Tree Logic (CTL) and Linear Temporal Logic (LTL).",2013,0, 6870,Fault tolerance on multicore processors using deterministic multithreading,"This paper describes a software based fault tolerance approach for multithreaded programs running on multicore processors. Redundant multithreaded processes are used to detect soft errors and recover from them. Our scheme makes sure that the execution of the redundant processes is identical even in the presence of non-determinism due to shared memory accesses. This is done by making sure that the redundant processes acquire the locks for accessing the shared memory in the same order. Instead of using a record/replay technique to do that, our scheme is based on deterministic multithreading, meaning that for the same input, a multithreaded program always has the same lock interleaving. Unlike record/replay systems, this eliminates the requirement for communication between the redundant processes. Moreover, our scheme is implemented totally in software, requiring no special hardware, making it very portable. Furthermore, our scheme is totally implemented at user-level, requiring no modification of the kernel. For selected benchmarks, our scheme adds an average overhead of 49% for 4 threads.",2013,0, 6871,Refinement of Adaptivity by Reflection,"Adaptivity is a system's ability to respond flexibly to dynamically changing needs. Adaptivity to human needs, wishes and desires, even to those that might be unconsciously present, is a particularly ambitious task. A digital system which is expected to behave adaptively has to learn about the needs and desires to which it shall adapt. Advanced adaptivity requires learning on the system's side. Under realistic application conditions, information about a human user available to a computerized system usually is highly incomplete. Therefore, the system's learning process is unavoidably error-prone and the knowledge on which the system's adaptive behavior has to rely is hypothetical by nature. Adaptive system behavior is improved by the system's ability to reflect on the reliability of its current hypothetical knowledge.",2013,0, 6872,Intellectus: Multi-hop fault detection methodology evaluation,"Wireless Sensor Networks (WSNs) can experience problems (anomalies) during deployment, due to dynamic environmental factors or node hardware and software failures. These anomalies demand reliable detection strategies for supporting long term and/or large scale WSN deployments.
Several strategies have been proposed for detecting specific WSN anomalies, yet there is still a need for more comprehensive anomaly detection strategies that jointly address network and node level anomalies. The Intellectus methodology [23], [24], [25] builds a tool that detects a new, limited set of faults: sensor nodes may dynamically fail, become isolated or reboot, and the local topology may change. These bugs are difficult to diagnose because the only externally visible characteristic is that no data is seen at the sink, from one or more nodes. This paper evaluates the Intellectus methodology through different experiments in a testbed network. In fact, Intellectus is able to detect the injected faults and assess different scenarios of topology change.",2013,0, 6873,Automated source code extension for debugging of OpenFlow based networks,"Software-Defined Networks using OpenFlow have to provide a reliable way to detect network faults in operational environments. Since the functionality of such networks is mainly based on the installed software, tools are required in order to determine software bugs. Moreover, network debugging might be necessary in order to detect faults that occurred on the network devices. To determine such activities, existing controller programs must be extended with the relevant functionality. In this paper we propose a framework that can modify controller programs transparently by using graph transformation, making possible online fault management through logging of network parameters in a NoSQL database. The latter acts as a storage system for flow entries and their respective parameters, which can be leveraged to detect network anomalies or to perform forensic analysis.",2013,0, 6874,Lane marking aided vehicle localization,"A localization system that exploits L1-GPS estimates, vehicle data, and features from a video camera as well as lane markings embedded in digital navigation maps is presented. A sensitivity analysis of the detected lane markings is proposed in order to quantify both the lateral and longitudinal errors caused by 2D-world hypothesis violation. From this, a camera observation model for vehicle localization is proposed. The paper also presents a method to build a map of the lane markings in a first stage. The solver is based on dynamical Kalman filtering with a two-stage map-matching process which is described in detail. This is a software-based solution using existing automotive components. Experimental results in urban conditions demonstrate a significant increase in the positioning quality.",2013,0, 6875,A high-resolution imaging method based on broadband excitation and warped frequency transform,"Lamb waves have received wide attention in structural health monitoring (SHM). However, due to their multi-mode character and dispersion effect, the damage positioning and imaging resolution are limited. Besides the narrowband wave, which is usually adopted as excitation in Lamb wave detection, a broadband signal can also be chosen as excitation, with which plenty of signals can be obtained in one test to strengthen the brightness of damages by superposition. Warped frequency transform (WFT) is a new method based on group velocity dispersion curves, which can be directly applied to the received signals to suppress the dispersion and transform the signal to the distance domain. In this paper, a new high-resolution method is proposed based on warped frequency transform and broadband excitation.
The propagation of Lamb waves in a damaged aluminum plate is simulated with the finite element software ABAQUS, and the results show that high resolution images can be obtained with the proposed method.",2013,0, 6876,Detection and Root Cause Analysis of Memory-Related Software Aging Defects by Automated Tests,"Memory-related software defects manifest after a long incubation time and are usually discovered in a production scenario. As a consequence, this frequently encountered class of so-called software aging problems incurs severe follow-up costs, including performance and reliability degradation, need for workarounds (usually controlled restarts) and effort for localizing the causes. While many excellent tools for identifying memory leaks exist, they are inappropriate for automated leak detection or isolation as they require developer involvement or slow down execution considerably. In this work we propose a lightweight approach which allows for automated leak detection during the standardized unit or integration tests. The core idea is to compare at the byte-code level the memory allocation behavior of related development versions of the same software. We evaluate our approach by injecting memory leaks into the YARN component of the popular Hadoop framework and comparing the accuracy of detection and isolation in various scenarios. The results show that the approach can detect and isolate such defects with high precision, even if multiple leaks are injected at once.",2013,0, 6877,Reliability analysis using fault tree method and its prediction using neural networks,"An electric power system is a network of electrical components used to supply, transmit and distribute electric power. It is an interconnected and complex system. It consists of many components like buses, substations, transformers, generators etc. The main function of the power system is to provide energy to the customers adequately and efficiently. In the normal situation, the power system is required to be highly efficient and safe. If any part within the system has failed, the amount of delivered power can be affected and huge economic losses can be induced. Consequently, reliability evaluation of the power system is of significant importance. Here, reliability evaluation is done using the fault tree method for the 220 kV Kerala Power System. The numerical probability of failure is found using the Open FTA software. A single line diagram of the 220 kV substation in Kerala is simulated using ETAP software. Reliability indices are determined using this software. Reliability prediction is done using neural networks. Neural lab is used for the reliability prediction.",2013,0, 6878,Loopy An open-source TCP/IP rapid prototyping and validation framework,"Setting up host-to-board connections for hardware validation or hybrid simulation purposes is a time-consuming and error-prone process. In this paper we present a novel approach to automatically generate host-to-board connections, called the Loopy framework. The generated drivers enable blocking and non-blocking access to the hardware from high-level languages like C++ through an intuitive, object-based model of the hardware implementation. The framework itself is written in Java, and offers cross-platform support. It is open-source, well-documented, and can be enhanced with new supported languages, boards, tools, and features easily.
Loopy combines several approaches presented in the past into an all-embracing helper toolkit for hardware designers, verification engineers, or people who want to use hardware accelerators in a software context. We have evaluated Loopy with real-life examples and present a case study with a complex MIMO system hardware-in-the-loop setup.",2013,0, 6879,StEERING: A software-defined networking for inline service chaining,"Network operators are faced with the challenge of deploying and managing middleboxes (also called inline services) such as firewalls within their broadband access, datacenter or enterprise networks. Due to the lack of available protocols to route traffic through middleboxes, operators still rely on error-prone and complex low-level configurations to coerce traffic through the desired set of middleboxes. Built upon the recent software-defined networking (SDN) architecture and OpenFlow protocol, this paper proposes StEERING, short for SDN inlinE sERvices and forwardING. It is a scalable framework for dynamically routing traffic through any sequence of middleboxes. With simple centralized configuration, StEERING can explicitly steer different types of flows through the desired set of middleboxes, scaling at the level of per-subscriber and per-application policies. With its capability to support flexible routing, we further propose an algorithm to select the best locations for placing services, such that the performance is optimized. Overall, StEERING allows network operators to monetize their middlebox deployment in new ways by allowing subscribers to flexibly select available network services.",2013,0, 6880,Sensor fault detection for a repetitive controller based D-FACT device,"This paper proposes a sensor fault detection system for a two-level DVR, controlled by a repetitive controller. The system compensates key voltage-quality disturbances, namely voltage sags, harmonic voltages and voltage imbalances, controls the current during downstream faults, and additionally detects any fault in sensor measurements. All the control actions of the controller depend on the availability and quality of sensor measurements. However, measurements are inevitably subjected to faults caused by sensor failure, broken or bad connections, bad communication, or malfunction of some hardware or software. Therefore, an auto-associative neural network based system is used here to detect any fault in sensor measurement. MATLAB/SIMULINK is used to carry out all modeling aspects of the test system.",2013,0, 6881,Situational requirement engineering: A systematic literature review protocol,"Requirements Engineering (RE) is known to be one of the critical phases in software development. A lot of work related to RE has already been published. The field of RE is maturing day by day, leading to exploration at a deeper level. It is argued that RE is subject to situational characteristics. This exposure becomes even greater when RE is performed in a global software development environment. There is a need to identify these situational characteristics based on the RE literature. We plan to systematically explore situational RE based studies to distinguish and account for the state of the art in reported situational RE research. This paper's objective is to provide a systematic literature review (SLR) protocol that illustrates a process for combining the situational RE work and will ultimately present the state of the art of the field in a global software development environment.
This SLR aims not only to summarize the data related to situational RE in the form of situational characteristics but also to be useful for RE practitioners specifically working in a global software development environment by providing a checklist based upon situational characteristics. It will also assist RE researchers in discovering knowledge gaps and distinguishing needs and prospects for future research directions in the field of situational RE in a global software development environment.",2013,0, 6882,Reliability analysis of an on-chip watchdog for embedded systems exposed to radiation and EMI,"Due to stringent constraints such as battery-powered, high-speed, low-voltage power supply and noise-exposed operation, safety-critical real-time embedded systems are often subject to transient faults originating from a large spectrum of noisy sources; among them, conducted and radiated Electromagnetic Interference (EMI). As the major consequence, the system's reliability degrades. In this paper, we present the most recent results involving the reliability analysis of a hardware-based intellectual property (IP) core, namely Real-Time Operating System - Guardian (RTOS-G). This is an on-chip watchdog that monitors the RTOS' activity in order to detect faults that corrupt tasks' execution flow in embedded systems running a preemptive RTOS. Experimental results have been obtained with the Plasma processor IP core running different test programs that exploit several RTOS resources. During test execution, the proposed system was aged by means of total ionizing dose (TID) radiation and then exposed to radiated EMI according to the international standard IEC 62.132-2 (TEM Cell Test Method). The obtained results demonstrate that the proposed approach provides higher fault coverage and reduced fault latency when compared to the native (software) fault detection mechanisms embedded in the kernel of the RTOS.",2013,0, 6883,Workload analysis and efficient OpenCL-based implementation of SIFT algorithm on a smartphone,"Feature detection and extraction are essential in computer vision applications such as image matching and object recognition. The Scale-Invariant Feature Transform (SIFT) algorithm is one of the most robust approaches to detect and extract distinctive invariant features from images. However, high computational complexity makes it difficult to apply the SIFT algorithm to mobile applications. Recent developments in mobile processors have enabled heterogeneous computing on mobile devices, such as smartphones and tablets. In this paper, we present an OpenCL-based implementation of the SIFT algorithm on a smartphone, taking advantage of the mobile GPU. We carefully analyze the SIFT workloads and identify the parallelism. We implemented major steps of the SIFT algorithm using both serial C++ code and OpenCL kernels targeting mobile processors, to compare the performance of different workflows. Based on the profiling results, we partition the SIFT algorithm between the CPU and GPU in a way that best exploits the parallelism and minimizes the buffer transferring time to achieve better performance. The experimental results show that we are able to achieve 8.5 FPS for keypoint detection and 19 FPS for descriptor generation without reducing the number and the quality of the keypoints.
Moreover, the heterogeneous implementation can reduce energy consumption by 41% compared to an optimized CPU-only implementation.",2013,0, 6884,Priority classification based fast intra mode decision for High Efficiency Video Coding,"The latest High-Efficiency Video Coding (HEVC) video coding standard version 1 offers a 50% bit rate reduction compared with H.264/AVC at the same visual quality. However, HEVC encoder complexity is tremendously increased. It is therefore important to develop efficient encoding algorithms for the success of HEVC based applications. In this paper, we propose a priority classification based fast intra mode decision to speed up the HEVC intra encoder. Each prediction unit (PU) is given a priority label out of four based on its spatial and temporal neighbor PU information as well as the predicted PU depth. A different processing strategy is applied to each priority class, under the assumption that more computing resources should be allocated to the high-priority class since its corresponding PU has a high potential to be chosen as the optimum. Experiments are performed using all the common test sequences, and results show that the encoder complexity is significantly reduced, by about 46% for the All Intra configuration, with a BD-Rate (Bjontegaard Delta Rate) increase of less than 0.9% for the luma component. Meanwhile, compared with several recent works, our proposed solution demonstrates a good trade-off between coding efficiency and complexity reduction.",2013,0, 6885,Motion blur compensation in scalable HEVC hybrid video coding,"One main element of modern hybrid video coders consists of motion compensated prediction. It employs spatial or temporal neighborhood to predict the current sample or block of samples, respectively. The quality of motion compensated prediction largely depends on the similarity of the reference picture block used for prediction and the current picture block. In case of varying blur in the scene, e.g. caused by accelerated motion between the camera and objects in the focal plane, the picture prediction is degraded. Since motion blur is a common characteristic in several application scenarios like action and sport movies, we suggest the in-loop compensation of motion blur in hybrid video coding. Former approaches applied motion blur compensation in single layer coding with the drawback of needing additional signaling. In contrast to that we employ a scalable video coding framework. Thus, we can derive the strength as well as the direction of motion of any block for the high-quality enhancement layer from base-layer information. Hence, no additional signaling is necessary, neither for predefined filters nor for current filter coefficients. We implemented our approach in a scalable extension of the High Efficiency Video Coding (HEVC) reference software HM 8.1 and are able to provide up to 1% BD-Rate gain in the enhancement layer compared to the reference at the same PSNR-quality for JCT-VC test sequences and up to 2.5% for self-recorded sequences containing lots of varying motion blur.",2013,0, 6886,A rule-based instantaneous denoising method for impulsive noise removal in range images,"To improve the comprehensive performance of denoising range images, a rule-based instantaneous denoising method for impulsive noise removal (RID-INR) is proposed in this paper. Based on silhouette feature analysis for two typical types of impulsive noise (IN), dropouts and outliers, a few new coefficients are defined to describe their exclusive features.
Founded on several discriminant criteria, the principles of dropout IN detection and outlier IN detection are demonstrated in detail. Subsequently, IN denoising is performed by an Index Distance Weighted Mean filter after a nearest non-IN neighbors searching process. Originating from a theoretical model of invader occlusion, a variable window technique is presented for enhancing the adaptability of our method, accompanied by practical criteria for adaptive variable window size determination. A complete algorithm has been implemented as embedded modules in two self-developed software packages. A series of experiments on real range images of single scan lines are carried out with comprehensive evaluations in terms of computational complexity, time expenditure and denoising quality. It is indicated that the proposed method can not only detect the impulsive noises with high accuracy, but also denoise them with outstanding efficiency, quality, and adaptability. The proposed method is inherently invariant to translation and rotation transformations, since all the coefficients are established based on distances between the points or their ratio. Therefore, RID-INR is qualified for industrial applications with stringent requirements due to its practicality.",2013,0, 6887,Exposing fake bitrate video and its original bitrate,"Video bitrate, as one of the important factors that reflect the video quality, can be easily manipulated via video editing software. In some forensic scenarios, for example, video uploaders of video-sharing websites may increase video bitrate to seek more commercial profit. In this paper, we try to detect those fake high bitrate videos, and then to further estimate their original bitrates. The proposed method is mainly based on the fact that if the video bitrate has been increased with the help of video editing software, its essential video quality will not increase at all. By analyzing the quality of the questionable video and a series of its re-encoded versions with different lower bitrates, we can obtain a feature curve to measure the change of the video quality, and then we propose a compact feature vector (3-D) to expose fake bitrate videos and their original bitrates. The experimental results evaluated on both CIF and QCIF raw sequences have shown the effectiveness of the proposed method.",2013,0, 6888,SAFe: A Secure and Fast Auto Filling Form System,"Current practices in government and private offices to register a service are time-consuming and prone to fraud. As a consequence, government offices are unable to provide high-quality services to citizens, while private offices achieve lower productivity and profits from their services. This paper presents an innovative registration system for Malaysians when applying for services at government and private offices. Referred to as the Secure and Fast Auto Filling Form System (SAFe), the proposed system retrieves the customers' information from their MyKad and transfers the information to a digital application form. For security measures, the fingerprint verification of the authentic customers is required before their information can be retrieved from MyKad. The proposed system is developed using the Visual Basic software and a commercial smartcard reader. Results of the performance evaluation show that SAFe shortens the time to complete the registration securely.
Therefore, SAFe can improve the service quality and productivity of government and private offices.",2013,0, 6889,A combined analysis method of FMEA and FTA for improving the safety analysis quality of safety-critical software,"Software safety analysis methods are used broadly in safety-critical systems to secure software safety and to recognize potential errors during software development, particularly at the early stage. FMEA and FTA are two traditional safety analysis methods, both of which provide a complementary way of identifying errors and tracking their possible influences. They have already been widely adopted in safety-critical industries. However, the effectiveness of FMEA and FTA depends on a complete understanding of the software being analyzed. Unlike hardware safety analysis, software safety analysis is usually a process of iteration. It is more difficult to get a comprehensive understanding of the software being analyzed at the early stage of the software life cycle. A combined analysis method of FMEA and FTA is presented in this paper, which can detect more potential software errors at the early stage. An analysis process which can convert and verify between FMEA and FTA was created. A semi-automatic analysis tool was developed to carry out the process. Comparison experiments were carried out to verify the effectiveness of this method, which showed that the combined method proposed by this paper achieved better results.",2013,0, 6890,Autonomous control and simulation of the VideoRay Pro III vehicle using MOOS and IvP Helm,Most underwater vehicles are controlled by human operators through tethered cables which inhibit range and affect the hydrodynamics. The overall performance of these promising vehicles would improve through the employment of higher levels of autonomy. Implementation of open source autonomous guidance software in an off the shelf underwater vehicle is explored here as a solution. Development and implementation of this autonomous guidance and vehicle control is greatly facilitated through the use of an accurate vehicle simulator. Running real world tests of an underwater vehicle is extremely time intensive and error prone. The analysis of the vehicle performance underwater is extremely challenging with limited accurate positioning sensing e.g. GPS. A vehicle simulator allows for faster development of the system by providing vehicle performance information prior to expensive real world missions. This research presents a method for simulation and testing of autonomous guidance and control in a vehicle accurate simulator for the VideoRay Pro III underwater vehicle and demonstrates the capability through simulated examples and analysis.,2013,0, 6891,Performance analysis of a fault-tolerant exact motif mining algorithm on the cloud,"In this paper, we present the performance analysis and design challenges of implementing a fault-tolerant parallel exact motif mining algorithm leveraging the services provided by the underlying cloud storage platform (e.g., data replication, node failure detection). More specifically, first, we present the design of the intermediate data structures and data models that are needed for effective parallelization of the motif mining algorithm on the cloud.
Second, we present the design and implementation of a fault-tolerant parallel motif mining algorithm that enables the data analytic system to recover from arbitrary node failures in the cloud environment by detecting node failures and redistributing remaining computational tasks in real-time. We also present a data caching scheme to improve the system performance even further. We evaluated the impact of various factors such as the replication factor and random node failures on the performance of our system using two different datasets, namely, an EOG dataset and an image dataset. In both cases, our algorithm exhibits superior performance over the existing algorithms, thus demonstrating the effectiveness of our presented system.",2013,0, 6892,A reconfigurable AC excitation control system for impulse hydroelectric generating unit based on fault-tolerance,"Aiming at the AC excitation control of high-head impulse hydroelectric generating units, and in order to improve system reliability, a reconfigurable AC excitation control system for impulse hydroelectric generating units based on fault-tolerant control is introduced. The reconfigurable scheme and hardware structure of the AC excitation system are proposed. On the basis of the analysis of main system faults, an intelligent control strategy is proposed, with the construction of reconfigurable software modules, including the basic structure of the software platform, reconfiguration of the detection algorithm, and reconfiguration of the control algorithm for digital valves. The experimental results show that the reconfigurable AC excitation control system can realize intelligent diagnosis of system faults, reconfiguration of the system structure, and active fault-tolerant control, with improved system reliability.",2013,0, 6893,Visual Quality and File Size Prediction of H.264 Videos and Its Application to Video Transcoding for the Multimedia Messaging Service and Video on Demand,"In this paper, we address the problem of adapting video files to meet terminal file size and resolution constraints while maximizing visual quality. First, two new quality estimation models are proposed, which predict quality as a function of resolution, quantization step size, and frame rate parameters. The first model is generic and the second takes video motion into account. Then, we propose a video file size estimation model. Simulation results show a Pearson correlation coefficient (PCC) of 0.956 between the mean opinion score and our generic quality model (0.959 for the motion-conscious model). We obtain a PCC of 0.98 between actual and estimated file sizes. Using these models, we estimate the combination of parameters that yields the best video quality while meeting the target terminal's constraints. We obtain an average quality difference of 4.39% (generic model) and of 3.22% (motion-conscious model) when compared with the best theoretical transcoding possible. The proposed models can be applied to video transcoding for the Multimedia Messaging Service and for video on demand services such as YouTube and Netflix.",2013,0, 6894,Tutorial: Digital microfluidic biochips: Towards hardware/software co-design and cyber-physical system integration,"This tutorial will first provide an overview of typical bio-molecular applications (market drivers) such as immunoassays, DNA sequencing, clinical chemistry, etc. Next, microarrays and various microfluidic platforms will be discussed. The next part of the tutorial will focus on electro-wetting-based digital micro-fluidic biochips.
The key idea here is to manipulate liquids as discrete droplets. A number of case studies based on representative assays and laboratory procedures will be interspersed in appropriate places throughout the tutorial. Basic concepts in micro-fabrication techniques will also be discussed. Attendees will next learn about CAD and reconfiguration aspects of digital microfluidic biochips. Synthesis tools will be described to map assay protocols from the lab bench to a droplet-based microfluidic platform and generate an optimized schedule of bioassay operations, the binding of assay operations to functional units, and the layout and droplet-flow paths for the biochip. The role of the digital microfluidic platform as a programmable and reconfigurable processor for biochemical applications will be highlighted. Cyber-physical integration using low-cost sensors and adaptive control software will be highlighted. Cost-effective testing techniques will be described to detect faults after manufacture and during field operation. On-line and off-line reconfiguration techniques will be presented to easily bypass faults once they are detected. The problem of mapping a small number of chip pins to a large number of array electrodes will also be covered. With the availability of these tools, chip users and chip designers will be able to concentrate on the development and chip-level adaptation of nano-scale bioassays (higher productivity), leaving implementation details to CAD tools.",2013,0, 6895,A control strategy for inverter-interfaced microgrids under symmetrical and asymmetrical faults,"The increase in the renewable energy penetration level imposes the microgrid concept, which can consist of several inverter-interfaced distributed resources (DERs) and loads, operating in a dual state: either connected to the utility grid or isolated in island mode. The power sharing among the connected DERs is carried out by the combination of the droop characteristics of each DER according to the active and reactive power demand of the loads. When a fault occurs within the microgrid operating in island mode, it is very difficult to detect due to the lack of large current production capacity. The fault situation can be further complicated if the fault takes place between two phases or between a single phase and the earth. This paper proposes a fault detection method for the symmetrical and asymmetrical faults, which is independent of any further communication means. The three-phase faults can be detected through the impedance variation of the islanded microgrid, while the asymmetrical ones by the negative sequence components of the output voltage of each DER. A significant contribution is the voltage recovery after the fault clearance with a seamless transient effect. After the fault clearance, the microgrid will continue feeding its loads, through the implementation of a positive- and negative-sequence control strategy. The effectiveness of the proposed control strategy is evaluated through a set of simulation tests, conducted in the PSIM software environment.",2013,0, 6896,Real-time model base fault diagnosis of PV panels using statistical signal processing,"This paper proposes a new method of monitoring and fault detection in photovoltaic systems, based mainly on the analysis of the power losses of the photovoltaic (PV) system by using statistical signal processing. Firstly, a new real-time universal circuit-based model of photovoltaic panels is presented.
Then, the development of software fault detection on a real installation is performed under the MATLAB/Simulink environment. With model-based fault diagnosis analysis, a residual signal is generated by comparing the Simulink model and the real system. To observe a clear alarm signal from the arbitrarily captured data, the Wald test technique is applied to the residual signal. A model-residual framework based on the Wald Sequential Probability Ratio Test (WSPRT) for electrical fault diagnosis in PV systems is introduced.",2013,0, 6897,AptStore: Dynamic Storage Management for Hadoop,"Typical Hadoop setups employ Direct Attached Storage (DAS) with compute nodes and uniform replication of data to sustain high I/O throughput and fault tolerance. However, not all data is accessed at the same time or rate. Thus, if a large replication factor is used to support higher throughput for popular data, it wastes storage by unnecessarily replicating unpopular data as well. Conversely, if less replication is used to conserve storage for the unpopular data, it means fewer replicas for even popular data and thus lower I/O throughput. We present AptStore, a dynamic data management system for Hadoop, which aims to improve overall I/O throughput while reducing storage cost. We design a tiered storage that uses the standard DAS for popular data to sustain high I/O throughput, and network-attached enterprise filers for cost-effective, fault-tolerant, but lower-throughput storage for unpopular data. We design a file Popularity Prediction Algorithm (PPA) that analyzes file system audit logs and predicts the appropriate storage policy of each file, and uses the information for transparent data movement between tiers. Our evaluation of AptStore on a real cluster shows 21.3% improvement in application execution time over standard Hadoop, while trace driven simulations show a 23.7% increase in read throughput and a 43.4% reduction in the storage capacity requirement of the system.",2013,0, 6898,Dynamic Workflow Reconfigurations for Recovering from Faulty Cloud Services,"The workflow paradigm is a well established approach to deal with application complexity by supporting application development through the composition of multiple activities. Furthermore, workflows allow encapsulating parts of a problem inside an activity that can be reused in different workflow application scenarios, for instance long-running experiments such as the ones involving data streaming. These workflows are characterized by multiple, possibly infinite, iterations processing datasets in multiple activities according to the workflow graph. Some of these activities can invoke Cloud services, often unreliable or with limitations on quality of service, provoking faults. After a fault, the most common approach requires restarting the entire workflow, which can lead to a waste of execution time due to unnecessary repetition of computations. This paper discusses how the AWARD (Autonomic Workflow Activities Reconfigurable and Dynamic) framework supports recovery from activity faults using dynamic reconfigurations. This is illustrated through an experimental scenario based on a long-running workflow where an activity fails when invoking a Cloud-hosted Web service with a variable level of availability.
On detecting this, the AWARD framework allows the dynamic reconfiguration of the corresponding activity to access a new Web service, avoiding a restart of the complete workflow.",2013,0, 6899,Improvement on ABDOM-Qd and Its Application in Open-Source Community Software Defect Discovery Process,"ABDOM-Qd is a model to describe the characteristics of the time-ordered software defect discovery process, such as periodicity, attenuation, oscillation, incrementality and discreteness. It can help testing participants evaluate testing quality and predict the testing process from the time-ordered software defect discovery amounts in a well-organized software testing process. Due to the poor organization of the open-source software community, software defect discovery data show obvious uncertainties such as mutations and randomness. In the early study of ABDOM/ABDOM-Qd this kind of process was excluded from the discussion. So it becomes an issue whether the ABDOM-Qd model can be applied to the open-source community software defect discovery process and reveal new natures and characteristics. In order to answer the question, the normalization of b in ABDOM-Qd is first discussed in terms of the curve paradigms and their actual significance under the conditions b > 0 and b < 0. With the discussion results, a software defect discovery cycle stability coefficient B was proposed and the improved model ABDOM-QBd with stronger describing capability was established. Then ABDOM-QBd was applied to a NASA open-source project, which is a typical representative of open-source projects, to fit the software defect data and a good fitting result was obtained. Finally, with the application results, the model's applicability and the characteristics of the open-source community software defect discovery process were preliminarily discussed.",2013,0, 6900,Patch Reviewer Recommendation in OSS Projects,"In an Open Source Software (OSS) project, many developers contribute by submitting source code patches. To maintain the quality of the code, certain experienced developers review each patch before it can be applied or committed. Ideally, within a short amount of time after its submission, a patch is assigned to a reviewer and reviewed. In the real world, however, many large and active OSS projects evolve at a rapid pace and the core developers can get swamped with a large number of patches to review. Furthermore, since these core members may not always be available or may choose to leave the project, it can be challenging, at times, to find a good reviewer for a patch. In this paper, we propose a graph-based method to automatically recommend the most suitable reviewers for a patch. To evaluate our method, we conducted experiments to predict the developers who will apply new changes to the source code in the Eclipse project. Our method achieved an average recall of 0.84 for top-5 predictions and a recall of 0.94 for top-10 predictions.",2013,0, 6901,A Practical Study of Debugging Using Model Checking,"Debugging is one of the most time-consuming tasks in software development. The application of a model-checking technique in debugging has strong potential to solve this problem. Here, lessons learned through our practical experiences with POM/MC are discussed. The aim of this proposed hypothesis-based method of debugging is not only to reproduce a failure as counterexamples, but also to obtain a counterexample that is useful for detecting the fault or the cause of the failure.
One of the characteristics of the proposed approach is that it degenerates the source code in order to clarify the fault. An example of this degeneration shows that the method is useful for fault analysis and avoidance of the """"state-explosion"""" problem. Furthermore, the characteristics of debugging using POM/MC are explained from the viewpoint of debugging hypotheses.",2013,0, 6902,Comparison of stamp classification using SVM and random ferns,"In distributed software systems and processes that use large amounts of documents there is an essential need for data mining and document classification algorithms. These algorithms are aimed at optimizing the process, making it less error prone. In this paper we deal with the problem of document classification using two machine learning algorithms. Both algorithms use stamp images in documents to classify the document itself. The idea is to classify the document stamp and then, using known information about the stamp owner, search the rest of the document for relevant data. Our results are based on actual documents used in the process of debt collection and our training and test datasets are randomly picked from an existing database with over three million documents. The mentioned machine learning classification algorithms are implemented and compared in terms of classification accuracy, robustness and speed.",2013,0, 6903,A Byzantine Fault Tolerance Model for a Multi-cloud Computing,"Data security has become an important requirement for clients when dealing with clouds that may fail due to faults in the software or hardware, or attacks from malicious insiders. Hence, building a highly dependable and reliable cloud system has become a critical research problem. This paper presents BFT-MCDB (Byzantine Fault Tolerance Multi-Clouds Database), a practical model for building a system with Byzantine fault tolerance in a multi-cloud environment. The model relies on a novel approach that combines Byzantine Agreement protocols and Shamir's secret sharing approach to detect Byzantine failure in a multi-cloud computing environment as well as ensuring the security of the stored data within the cloud. Using qualitative analysis, we show that adopting the Byzantine Agreement protocols in the proposed BFT-MCDB model increases system reliability and enables gains in regard to the three security dimensions (data integrity, data confidentiality, and service availability). We also carry out experiments to determine the overheads of using the Agreement protocols.",2013,0, 6904,Memorization of Materialization Points,"Data streaming frameworks, constructed to work on large numbers of processing nodes in order to analyze big data, are fault-prone. The large number of nodes and network components that could fail is not the only source of errors. Development of data analyzing jobs has the disadvantage that errors or wrong assumptions about the input data may only be detected in productive processing. This usually leads to a re-execution of the entire job and re-computing all input data. This can be a tremendous waste of computing time if most of the job's tasks are not affected by these changes and therefore process and produce the same exact data again. This paper describes an approach to use materialized intermediate data from previous job executions to reduce the number of tasks that have to be re-executed in case of an updated job. Saving intermediate data to disk is a common technique to achieve fault tolerance in data streaming systems.
These intermediate results can be used for memoization to avoid needless re-execution of tasks. We show that memoization can decrease the runtime of an updated job distinctly.",2013,0, 6905,Substitute Eyes for Blind with Navigator Using Android,Our aim is to develop an affordable and cheap technology which can serve as substitute eyes for blind people. As a first step to achieve this goal we decided to make a Navigation System for the Blind. Our device consists of the following 2 parts: 1) Embedded Device: can be used to detect local obstacles such as walls/cars/etc. using 2 ultrasonic sensors to detect the obstacles and vibrator motors to give tactile feedback to the blind. 2) Android App: will give the navigation directions. Can be installed on any android device: cellphone/tablet/etc.,2013,0, 6906,Road Accident Prevention Unit (R.A.P.U) (A Prototyping Approach to Mitigate an Omnipresent Threat),"Road accidents claim a staggeringly high number of lives every year. From drunk driving, rash driving and driver distraction to visual impairment, over speeding and over-crowding of vehicles, the majority of road accidents occur because of some fault or other of the driver/occupants of the vehicle. According to the report on Road Accidents in India, 2011 by the Ministry of Transport and Highways, Government of India, approximately every 11th person out of 100,000 died in a road accident and further, every 37th person was injured in one, making it an alarming situation for a completely unnecessary cause of death. The above survey also concluded that in 77.5 percent of the cases, the driver of the vehicle was at fault. The situation makes it a necessity to target the root cause of road accidents in order to avoid them. While car manufacturers include a system for avoiding damage to the driver and the vehicle, no real steps have been taken to actually avoid accidents. The Road Accident Prevention Unit is a step forward in this stead. This design monitors the driver's state using multiple sensors and looks for triggers that can cause accidents, such as alcohol in the driver's breath and driver fatigue or distraction. When an alert situation is detected, the system informs the driver and tries to alert him. If the driver does not respond within a stipulated time, the system turns on a distress signal outside the vehicle to inform nearby drivers and sends a text message to the driver's next of kin about the situation. A marketable design would also shut down power to the vehicle, thus providing maximum probability for avoiding road accidents and extending a crucial window for preventive and mitigation measures to be taken.",2013,0, 6907,Judgement of ball mill working condition in combined grinding system,"In the cement grinding system, the quantity of raw material in the ball mill has an important effect on the cement production. Based on the quantity, the working conditions of the ball mill are divided into three states: the full state, the normal state and the empty state. Normally, it is hard to predict the working state of the ball mill and the precision is low. In this paper, the least squares method and its improved algorithm are used as a new method to judge the working condition of the ball mill. A parameter is added to the finite impulse response model. Through the value of this parameter, the ball mill working condition can be judged, so the operator can change the parameters of the equipment in time, and on-line software is programmed in VC++.
As a result, both the quality of the cement and the stability of the equipment are improved.",2013,0, 6908,The research of natural gas pipeline leak detection based on adaptive filter technology,"This paper expounds the potential safety hazard of natural gas pipeline leakage, and briefly summarizes recent pipeline leak detection methods and their defects. An acoustic detection method for natural gas pipeline leaks based on an adaptive filtering algorithm is proposed to address the situation in which natural gas pipeline leakage is difficult to find and detect. The sensor structure and flow diagram are elaborated, and a pipeline leakage simulation test system was designed. The experiment processes and discriminates the leakage characteristics through the host-computer software LabVIEW. Eventually the leak can be detected and located.",2013,0, 6909,Study on sensors using in wall climbing robot for motion controller,"The anticorrosion quality of storage tanks seems to be particularly important with the rapid development of the domestic oil industry. This design is used to detect the coating thickness of storage tanks; a PC and the console computer control the wall-climbing robot's motion. The goal of this paper is to study the sensors used in the wall-climbing robot for motion control, which include an IR detector sensor and an Ultrasonic PING sensor. The system design includes hardware and software systems: the hardware control panel uses an STC12C5A60S2 master chip, receives the host computer's instructions, and drives the wall-climbing robot to walk; the host computer's software system is developed in Visual C# and the console computer's software is developed in Keil C. Ultimately it achieves measurement of the coating thickness of the oil storage tank.",2013,0, 6910,Closed-loop subspace projection based state-space model-plant mismatch detection and isolation for MIMO MPC performance monitoring,"In multivariate model predictive control (MPC) systems, the quality of multi-input multi-output (MIMO) plant models has significant impact on the controller performance in different aspects. Though re-identification of plant models can improve model quality and prediction accuracy, it is very time consuming and economically expensive in industrial practice. Therefore, the automatic detection and isolation of the model-plant mismatch is highly desirable to monitor and improve MPC performance. In this paper, a new closed-loop MPC performance monitoring approach is proposed to detect model-plant mismatch within state-space formulations through subspace projections and statistical hypothesis testing. A monitoring framework consisting of three quadratic indices is developed to capture model-plant mismatches precisely. The validity and effectiveness of the proposed method is demonstrated through a paper machine headbox control example.",2013,0, 6911,Impact of refactoring on external code quality improvement: An empirical evaluation,"Refactoring is the process of improving the design of the existing code by changing its internal structure without affecting its external behaviour, with the main aim of improving the quality of the software product. Therefore, there is a belief that refactoring improves quality factors such as understandability, flexibility, and reusability. Moreover, there are also claims that refactoring yields higher development productivity. However, there is limited empirical evidence to support such assumptions.
The objective of this study is to validate/invalidate the claims that refactoring improves software quality. An experimental research approach was used to achieve the objective, and ten selected refactoring techniques were used for the analysis. The impact of each refactoring technique was assessed based on external measures, namely analysability, changeability, time behaviour and resource utilization. After analysing the experimental results, among the ten tested refactoring techniques, Replace Conditional with Polymorphism ranked highest, having the highest percentage of improvement in code quality. Introduce Null Object was ranked worst, having the highest percentage of deterioration in code quality.",2013,0, 6912,Let's talk together: Understanding concurrent transmission in wireless sensor networks,"Wireless sensor networks (WSNs) are increasingly being applied to scenarios that simultaneously demand high packet reliability and short delay. A promising technique to achieve this goal is concurrent transmission, i.e. multiple nodes transmit identical or different packets in parallel. Practical implementations of concurrent transmission exist, yet its performance is not well understood due to the lack of expressive models that accurately predict the success of packet reception. We experimentally investigate the two phenomena that can occur during concurrent transmission depending on the transmission timing and signal strength, i.e. constructive interference and the capture effect. Equipped with a thorough understanding of these two phenomena, we propose an accurate prediction model for the reception of concurrent transmission. The extensive measurements carried out with varying numbers of transmitters, packet lengths and signal strengths verify the excellent quality of our model, which provides a valuable tool for protocol design and simulation of concurrent transmission in WSNs.",2013,0, 6913,Fault isolation by test scheduling for embeded systems using probabilistics approach,This paper deals with the isolation of the failed components in the system. Each component can be affected in a random way by failures. The detection of the state of a component or a subsystem is carried out using tests. The objective of this research is to exploit the techniques of built-in test and available knowledge to generate the sequence of tests which makes it possible to quickly locate all of the components responsible for the failure of the system. One considers an operative system according to a series structure for which one knows the cost of tests and the conditional probability that a component is responsible for the failure. The various strategies of diagnosis are analyzed. The treated algorithms call upon the probabilistic analysis of the systems.,2013,0, 6914,Real-time mobility aware shoe Analyzing dynamics of pressure variations at important foot points,"Data collected from pressure sensors attached to a shoe insole is a rich source of information about the dynamics of the varying pressure exerted at different points while a person is in motion. Depending on the accuracy and the density of the points of data collection, this could be applied for different uses.
Analyzing the time series data of the pressure, it is possible (1) to detect faults in walking and balancing problems for old people, (2) to design personalized foot orthoses, (3) to calculate the calories burnt, even when walking and jogging are mixed, and the road slope changes, (4) to find subtle faults in sprinters or tennis players, (5) for person identification, (6) even for initiating an alarm arising from mishandling of machines (like the accelerator pedal of a car). In this work, we look for an efficient, real-time, yet cheap solution. We use a few thin, cheap, resistive pressure sensors, placed at critical points on the insole of the shoe to collect dynamic pressure data, preprocess it and extract features to identify the mobility speed. Nearly 100% classification accuracy was achieved. Thus, the target to classify whether the person is walking or jogging or climbing up or down the stairs was found to be possible, even with a very simple gadget. From the time duration and the speed, the distance traveled could be calculated. If, in addition, this signal could tell us the body-weight, we could accurately calculate the calories burnt at the end of the day. The analysis method and results from real experiments are discussed.",2013,0, 6915,Research on converting CAD model to MCNP model based on STEP file,"The MCNP input file has a complicated form and is error-prone in describing the geometry model. Therefore we need to design and implement an algorithm for converting a general CAD model to an MCNP model to solve the existing problems in MCNP-aided modeling software, so that the CAD model can be converted to an MCNP input file. In order to achieve the above goal, this paper concentrates on converting the CAD model to the MCNP model; after analyzing the STEP neutral file and the MCNP INP file, we designed an algorithm for converting a STEP file to an INP file. The experimental result shows that it has better applicability than other conversion algorithms after obtaining the geometry information of the STEP file, and this algorithm can be widely used to enable communication between CAD systems and MCNP models.",2013,0, 6916,Development of a new modeling circuit for the Remote Terminal Unit (RTU) with GSM communication,"This paper introduces the design and development of an Intelligent Remote Terminal Unit (RTU) which is to be applied as an automation technique for operating and controlling the low voltage (LV) downstream of 415/240V to enhance reliability of power for the consumers. The proposed design is based on Global System for Mobile (GSM) communication, and this paper also presents an efficient design for a distribution automation system and its implementation in remote/automatic monitoring and controlling of the relays (circuit breakers) by means of GSM Short Message Service (SMS) services, automatic decision making and continuous monitoring of distribution system components in real time [1]. The system has been equipped with a microcontroller as the main component, which acts as an RTU programmed using the Microcontroller PRO compiler software. The RTU provides fault monitoring, controlling functions and data collection for analysis. The RTU will initiate the transaction with the digital and output modules. The master of this system is the RTU and the slaves are the digital and output modules. The RTU plays an important role in detecting faults and is assigned to deliver a message immediately to the control room. This system involves the detection of faults connected to the microcontroller (PIC18F77A) and a GSM modem.
When the fault occurs, the sensor will send the signals to the PIC16F77A. The PIC is programmed to process the data and send the signals to the GSM modem. Once received the data, GSM will send the message to the control room operators or other authorized personnel to alert them on the current situation through cellular phone. The results are then communicated between hardware circuit and simulation circuit for the final conclusion with the properly functional algorithm.",2013,0, 6917,Digital image tampering detection and localization using singular value decomposition technique,"Recent years have witnessed an exponential growth in the use of digital images due to development of high quality digital cameras and multimedia technology. Easy availability of image editing software has made digital image processing very popular. Ready to use software are available on internet which can be easily used to manipulate the images. In such an environment, the integrity of the image can not be taken for granted. Malicious tampering has serious implication for legal documents, copyright issues and forensic cases. Researchers have come forward with large number of methods to detect image tampering. The proposed method is based on hash generation technique using singular value decomposition. Design of an efficient hash vector as proposed will help in detection and localization of image tampering. The proposed method shows that it is robust against content preserving manipulation but extremely sensitive to even very minute structural tampering.",2013,0, 6918,Enhancement of camera captured text images with specular reflection,"Specular reflection of light degrades the quality of scene images. Whenever specular reflection affects the text portion of such an image, its readability is reduced significantly. Consequently, it becomes difficult for an OCR software to detect and recognize similar texts. In the present work, we propose a novel but simple technique to enhance the region of the image with specular reflection. The pixels with specular reflection were identified in YUV color plane. In the next step, it enhances the region by interpolating possible pixel values in YUV space. The proposed method has been compared against a few existing general purpose image enhancement techniques which include (i) histogram equalization, (ii) gamma correction and (iii) Laplacian filter based enhancement method. The proposed approach has been tested on some images from ICDAR 2003 Robust Reading Competition image database. We computed a Mean Opinion Score based measure to show that the proposed method outperforms the existing enhancement techniques for enhancement of readability of texts in images affected by specular reflection.",2013,0, 6919,A Density Grid-Based Clustering Algorithm for Uncertain Data Streams,"This paper proposes a grid-based clustering algorithm Clu-US which is competent to find clusters of non-convex shapes on uncertain data stream. Clu-US maps the uncertain data tuples to the grid space which could store and update the summary information of stream. The uncertainty of data is taken into account for calculating the probability center of a grid. Then, the distance between the probability centers of two adjacent grids is adopted for measuring whether they are """"close enough"""" in grids merging process. Furthermore, a dynamic outlier deletion mechanism is developed to improve clustering performance. 
The experimental results show that Clu-US outperforms other algorithms in terms of clustering quality and speed.",2013,0, 6920,Genetic Algorithms Applied to Discrete Distribution Fitting,"A common problem when dealing with preprocessing of real world data for a large variety of applications, such as classification and outliers detection, consists in fitting a probability distribution to a set of observations. Traditional approaches often require the resolution of complex equations systems or the use of specialized software for numerical resolution. This paper proposes an approach to discrete distribution fitting based on Genetic Algorithms which is easy to use and has a large variety of potential applications. This algorithm is able not only to identify the discrete distribution function type but also to simultaneously find the optimal value of its parameters. The proposed approach has been applied to an industrial problem concerning surface quality monitoring in flat steel products. The results of the tests, which have been developed using real world data coming from three different industries, demonstrate the effectiveness of the method.",2013,0, 6921,Perceptual Evaluation of Voice Quality and Its Correlation with Acoustic Measurement,"The GRBAS scale is a widely used subjective measure of voice quality. The aim of this paper is to investigate the correlation between the 'grade', 'roughness', 'breathiness', 'asthenia' and 'strain' dimensions of this scale and the objective measurements provided by the 'Analysis of Dysphonia in speech and Voice' (ADSV) software package. To do this, voice recordings of 107 samples were collected in a quiet room, and each voice was perceptually evaluated on the GRBAS scale by three experienced speech and language therapists. The same recordings were also acoustically analysed using ADSV. Statistical analysis using the Spearman's rank correlation coefficient model identified a degree of moderate correlation between the result of cepstral based analysis and the GRBAS scale. A classifier such as a decision tree may then be applied to the ADSV cepstral measurement for the objective prediction of GRBAS scores. The accuracy of the classifier in predicting the score of each therapist is given in the paper.",2013,0, 6922,Binomial distribution approach for efficient detection of service level violations in cloud,"Optimum usage of hardware devices, network devices, software resources and consistent quality of service is the aim of cloud computing. Thus, cloud computing is reducing the cost and mounting the revenue for cloud provider. Cloud computing gives the choice and freedom to cloud consumer for the use of dynamic and elastic services as pay-per-use model. In this context, to justify the cost and to verify the quality of services, active Service Level Management is required. Moreover to overcome unrealistic demands from customers and to surpass the occurring violations, a strong and straight forward approach is needed, through which SLA violations can be noticed, detected and distinguished. In this Paper, we have focused on fundamental elements for effective SLA violations detection system. We also propose an approach for binomial distribution for Service level agreement violation detection. 
This approach gives probability of success and system reliability of cloud service which may be used by service providers to minimize and prevent different violations.",2013,0, 6923,8 channel vibration monitoring and analyzing system using LabVIEW,"The prime objective of this paper is to present prompt 8 channel vibration monitoring and analyzing system using prevailing LabVIEW tools. Vibration measurement is very needful in mechanical and electrical industries to check the machine health & to take predictive maintenance steps before failure or major fault occur. A system based on virtual instrument is introduced that can measure vibration signals. The hardware-developing of the system includes vibration sensors, a signal conditioning circuit cum sensor exciter, a data acquisition device and a PC. Here for research, we are implementing vibration sensor on cantilever beam type arrangement and analyzing vibration signal using signal processing technique in LabVIEW. This methodology relies on the use of advanced methods for machine vibration analysis and health monitoring. Because of some issues encountered regarding traditional methods, for Fourier analysis of non stationary rotating machines the use of more advanced method using powerful software is required. FFT is a very powerful tool in frequency domain for analysis of vibration signal and it provides the information at which frequency the fault occurs? And according to that information we also predict the speed of rotating machine. For FFT analysis; it is necessary to convert vibration data (dynamic data 100mv/g) into EUs (Engineering Units) like acceleration, Velocity and displacement. Here we did the same. The main advantage of vibration monitoring with propose system is that it enables simultaneous monitoring of a number of machines of industries located at different places from a common point. The paper presents a scientific direction and development of vibration signals measurement.",2013,0, 6924,Fault tolerant Lu factorization in transactional memory system,"With the popularization of multi-core processors, transaction memory, as a concurrent control mechanism with easy programing and high scalability, has attracted more and more attention. As a result, the reliability problems of transactional memory become a concerning issue. This paper addresses a transactional implementation of the Lu benchmark of SPLASH-2, and proposes a fault-tolerant Lu algorithm for this transactionalize Lu algorithm. The fault-tolerant Lu uses the data-versioning mechanism of the transactional memory system, detects errors based on transactions and recovers the error by rolling back the error transaction. The experiments show that the fault-tolerant Lu can get a better fault tolerance effect under a smaller cost.",2013,0, 6925,A method of locating fault based on software monitoring,"Software testing can detect most faults, but it can not find the dynamic fault with space and time, especially the dead halt fault. In order to solve this problem, this paper proposes a method of fault location based on software monitoring. First, a method of generating the extended control flow graph is studied, and then a method of generating running record and the process of locating faults is researched. 
Finally, an example is taken to test the validity of this method.",2013,0, 6926,Path coverage assessment for software architecture testing,"Software architecture is receiving increasing attention as a critical design level for software systems, and architecture testing and assessment are key issues to improve and assure software quality. The development of techniques and tools to support architectural understanding, testing, reengineering, maintenance, and reuse will become an important issue. This paper introduces a testing technology to aid architectural testing. It proposes a family of structural criteria for software architecture specifications, and sketches and proves the subsumption relations between coverage criteria using the CW system. The common coverage criteria are assessed against the CW system. The assessment provides a feasible comparison of the effectiveness of test criteria on software architecture. The assessment can help to design more testable software architecture.",2013,0, 6927,Software state monitoring model studies based on multivariate HPM,"Hardware Performance Monitor counters (HPM) are an emerging analysis technology in the area of software performance analysis. This paper proposes a method of software state monitoring based on HPM from the perspective of software fault diagnosis. Compared with traditional methods, the method does not depend on test case and expected result, and it can detect abnormal behavior in real-time based on software performance data. By the use of Performance API (PAPI), the method can gather CPU performance data. These data are recorded in HPM and can reflect software state at the running time of software. With Hidden Markov Model (HMM), the method can learn prior probability of software state and conditional probability of performance data readings in each interval. Finally, based on the above parameters, the method classifies the follow-up multivariate observations by Naive Bayesian classifier (NBC) so as to monitor software state in real-time. The experiment shows that based on predefined monitoring event set, our method can effectively identify abnormal behavior which may occur in the running time of software.",2013,0, 6928,Analysis of the reputation system and user contributions on a question answering website: StackOverflow,"Question answering (Q&A) communities have been gaining popularity in the past few years. The success of such sites depends mainly on the contribution of a small number of expert users who provide a significant portion of the helpful answers, and so identifying users that have the potential of becoming strong contributors is an important task for owners of such communities. We present a study of the popular Q&A website StackOverflow (SO), in which users ask and answer questions about software development, algorithms, math and other technical topics. The dataset includes information on 3.5 million questions and 6.9 million answers created by 1.3 million users in the years 2008-2012. Participation in activities on the site (such as asking and answering questions) earns users reputation, which is an indicator of the value of that user to the site. We describe an analysis of the SO reputation system, and the participation patterns of high and low reputation users. The contributions of very high reputation users to the site indicate that they are the primary source of answers, and especially of high quality answers.
Interestingly, we find that while the majority of questions on the site are asked by low reputation users, on average a high reputation user asks more questions than a user with low reputation. We consider a number of graph analysis methods for detecting influential and anomalous users in the underlying user interaction network, and find they are effective in detecting extreme behaviors such as those of spam users. Lastly, we show an application of our analysis: by considering user contributions over first months of activity on the site, we predict who will become influential long-term contributors.",2013,0, 6929,An Empirical Study on Wrapper-Based Feature Selection for Software Engineering Data,"Software metrics give valuable information for understanding and predicting the quality of software modules, and thus it is important to select the right software metrics for building software quality classification models. In this paper we focus on wrapper-based feature (metric) selection techniques, which evaluate the merit of feature subsets based on the performance of classification models. We seek to understand the relationship between the internal learner used inside wrappers and the external learner for building the final classification model. We perform experiments using four consecutive releases of a very large telecommunications system, which include 42 software metrics (and with defect data collected for every program module). Our results demonstrate that (1) the best performance is never found when the internal and external learner match, (2)the best performance is usually found by using NB (Naive Bayes) inside the wrapper unless SVM (Support Vector Machine) is external learner, (3) LR (Logistic Regression) is often the best learner to use for building classification models regardless of which learner was used inside the wrapper.",2013,0, 6930,Touch screen based TETRA vehicle radio: Preliminary results of multi-methodology usability testing prototype,"A modern emergency vehicle is the combination of different technologies and single vehicle can contain dozens of user interfaces (UI). Laurea University of Applied Sciences launched on 2010 Mobile Object Bus Interaction-project (MOBI) that defines user requirements for designing emergency vehicles, finding solutions to decrease power consumption, experimenting possibilities to create common ICT-architecture and reduce number of UIs in emergency vehicles. MOBI-project equipped a demo vehicle with the latest technology and a possibility was offered to test new prototype of touch screen based TETRA vehicle radio integrated with the onboard computer. This paper presents a multi-methodology approach for usability testing for touch screen based TETRA vehicle radio where heuristic evaluation was combined with field testing and real users in a moving vehicle. Research result confirms that multi-methodology approach is able to detect key usability problems in early stage of the product development cycle and enables to improve software quality in prototyping phase.",2013,0, 6931,Functional Validation Driven by Automated Tests,"The functional quality of any software system can be evaluated by how well it conforms to its functional requirements. These requirements are often described as use cases and are verified by functional tests that check whether the system under test (SUT) runs as specified. There is a need for software tools to make these tests less laborious and more economical to create, maintain and execute. 
This paper presents a fully automated process for the generation, execution, and analysis of functional tests based on use cases within software systems. A software tool called Fun Tester has been created to perform this process and detect any discrepancies from the SUT. Also while performing this process it generates conditions to cause failures which can be analyzed and fixed.",2013,0, 6932,A Method for Model Checking Context-Aware Exception Handling,"The context-aware exception handling (CAEH) is an error recovery technique employed to improve the ubiquitous software robustness. In the design of CAEH, context conditions are specified to characterize abnormal situations and used to select the proper handlers. The erroneous specification of such conditions represents a critical design fault that can lead the CAEH mechanism to behave erroneously or improperly at runtime (e.g., abnormal situations may not be recognized and the system's reaction may deviate from what is expected). Thus, in order to improve the CAEH reliability this kind of design faults must be rigorously identified and eliminated from the design in the early stages of development. However, despite the existence of formal approaches to verify context-aware ubiquitous systems, such approaches lack specific support to verify the CAEH behavior. This work proposes a method for model checking CAEH. This method provides a set of modeling abstractions and 3 (three) properties formally defined that can be used to identify exiting design faults in the CAEH design. In order to assess the method feasibility: (i) a support tool was developed, and (ii) fault scenarios that are recurring in the CAEH was injected in a correct model and verified using the proposed approach.",2013,0, 6933,Are Domain-Specific Detection Strategies for Code Anomalies Reusable? An Industry Multi-project Study,"To prevent the quality decay, detection strategies are reused to identify symptoms of maintainability problems in the entire program. A detection strategy is a heuristic composed by the following elements: software metrics, thresholds, and logical operators combining them. The adoption of detection strategies is largely dependent on their reuse across the portfolio of the organizations software projects. If developers need to define or tailor those strategy elements to each project, their use will become time-consuming and neglected. Nevertheless, there is no evidence about efficient reuse of detection strategies across multiple software projects. Therefore, we conduct an industry multi-project study to evaluate the reusability of detection strategies in a critical domain. We assessed the degree of accurate reuse of previously-proposed detection strategies based on the judgment of domain specialists. The study revealed that even though the reuse of strategies in a specific domain should be encouraged, their accuracy is still limited when holistically applied to all the modules of a program. However, the accuracy and reuse were both significantly improved when the metrics, thresholds and logical operators were tailored to each recurring concern of the domain.",2013,0, 6934,An Extended Assessment of Data-Driven Bayesian Networks in Software Effort Prediction,"Software prediction unveils itself as a difficult but important task which can aid the manager on decision making, possibly allowing for time and resources sparing, achieving higher software quality among other benefits. 
Bayesian Networks are one of the machine learning techniques proposed to perform this task. However, the data pre-processing procedures related to their application remain scarcely investigated in this field. In this context, this study extends a previously published paper, benchmarking data-driven Bayesian Networks against mean and median baseline models and also against ordinary least squares regression with a logarithmic transformation across three public datasets. The results were obtained through a 10-fold cross validation procedure and measured by five accuracy metrics. Some current limitations of Bayesian Networks are highlighted and possible improvements are discussed. Furthermore, we assess the effectiveness of some pre-processing procedures and bring forward some guidelines on the exploration of data prior to Bayesian Networks' model learning. These guidelines can be useful to any Bayesian Networks that use data for model learning. Finally, this study also confirms the potential benefits of feature selection in software effort prediction.",2013,0, 6935,Modeling of transient processes at ground faults in the electrical network with a high content of harmonics,"The paper presents analytical investigations on determination of influence of higher harmonics existing in a ground fault current on the characteristics of transient processes at arcing ground faults. As initial data, results of real oscillography of ground fault currents in the operational 6 kV distribution network are used. It is shown that there are harmonics of high amplitudes in a ground fault current which are 2.6 times greater than a residual 50 Hz reactive current at a single-phase fault location. Calculations using the detailed mathematical model of the 6 kV network confirm the fact that negative influence of arcing overvoltages on equipment insulation increases due to the high content of harmonics in a fault current which are not compensated by Petersen coils and worsen conditions for successful arc quenching. It is noted that a ground fault current may cross zero many times without current extinction due to harmonic distortion. Due to high-frequency overvoltages, probability for restrikes increases. It results in expanding of a fault area in cable insulation and reducing of time for transition of single phase-to-ground fault into stable arcing with possible breakdowns of phase-to-phase insulation. To prove these statements, resistances in a fault place taking into account harmonic distortions of an arc current are calculated.",2013,0, 6936,On the Use of Software Quality Standard ISO/IEC9126 in Mobile Environments,"The capabilities and resources offered by mobile technologies are still far from those provided by fixed environments, and this poses serious challenges, in terms of evaluating the quality of applications operating in mobile environments. This article presents a study to help quality managers apply the ISO 9126 standard on software quality, particularly the External Quality model, to mobile environments. The influence of the limitations of mobile technologies are evaluated for each software quality characteristic, based on the coverage rates of its external metrics, which are themselves influenced by these limitations. The degrees of this influence are discussed and aggregated to provide useful recommendations to quality managers for their evaluation of quality characteristics in mobile environments. These recommendations are intended for mobile software in general and aren't targeted a specific ones. 
The External Quality model is especially valuable for assessing the Reliability, Usability, and Efficiency characteristics, and illustrates very well the conclusive nature of the recommendations of this study. However, more study is needed on the other quality characteristics, in order to determine the relevance of evaluating them in mobile environments.",2013,0, 6937,Fault-Prone Module Prediction Using a Prediction Model and Manual Inspection,"This paper proposes a fault-prone prediction approach that combines a fault-prone prediction model and manual inspection. Manual inspection is conducted by a predefined checklist that consists of questions and scoring procedures. The questions capture the fault signs or indications that are difficult to be captured by source code metrics used as input by prediction models. Our approach consists of two steps. In the first, the modules are prioritized by a fault-prone prediction model. In the second step, an inspector inspects and scores percent of the prioritized modules. We conducted a case study of source code modules in commercial software that had been maintained and evolved over ten years and compared AUC (Area Under the Curve) values of Alberg Diagram among three prediction models: (A) support vector machines, (B) lines of code, and (C) random predictor with four prioritization orders. Our results indicated that the maximum AUC values under appropriate and the coefficient of the inspection score were larger than the AUC values of the prediction models without manual inspection in each of the four combinations and the three models in our context. In two combinations, our approach increased the AUC values to 0.860 from 0.774 and 0.724. Our results also indicated that one of the combinations monotonically increased the AUC values with the numbers of manually inspected modules. This might lead to flexible inspection; the number of manually inspected modules has not been preliminary determined, and the inspectors can inspect as many modules as possible, depending on the available effort.",2013,0, 6938,Evaluating Performance of Network Metrics for Bug Prediction in Software,"Code-based metrics and network analysis based metrics are widely used to predict defects in software. However, their effectiveness in predicting bugs either individually or together is still actively researched. In this paper, we evaluate the performance of these metrics using three different techniques, namely, Logistic regression, Support vector machines and Random forests. We analysed the performance of these techniques under three different scenarios on a large dataset. The results show that code metrics outperform network metrics and also no considerable advantage in using both of them together. Further, an analysis on the influence of individual metrics for prediction of bugs shows that network metrics (except out-degree) are uninfluential.",2013,0, 6939,A Controlled Experiment to Assess the Effectiveness of Eight Use Case Templates,"Use case models, that include use case diagrams along with their documentations, are typically used to specify the functional requirements of the software systems. Use cases are usually semi-structured and documented using some natural language hence issues like ambiguity, inconsistency, and incompleteness are inevitably introduced in the specifications. There have been many efforts to formalize the use case template that make use of certain grammatical construction to guide the structure or style of the description. 
This paper describes an empirical work to assess the usefulness of eight such use case templates against a set of five judging criteria, namely, completeness, consistency, understandability, redundancy and fault proneness. We conducted a controlled experiment where a group of postgraduate students applied these use case templates on multiple problem specifications. In our results, Yue's template was found to be more consistent and less fault prone, Cockburn's template was found to be more complete and more understandable and, Tiwari's template was found to be less redundant as compared to the other use case templates, though the results were not statistically significant.",2013,0, 6940,Mining Attribute Lifecycle to Predict Faults and Incompleteness in Database Applications,"In a database application, for each attribute, a value is created initially via insertion. Then, the value can be referenced or updated via selection and updating respectively. Eventually, when the record is deleted, the values of the attributes are also deleted. These occurrences of events are associated with the states to constitute the attribute lifecycle. Our empirical studies discover that faults and incompleteness in database applications are highly associated with the attribute lifecycle. Consequently, we propose a novel approach to automatically extract the attribute lifecycle out of a database application from its source code through inter-procedural static program analysis. Data mining methods are applied to predict faults and incompleteness in database applications. Experiments on PHP systems give evidence to support applicability and accuracy of the proposed method.",2013,0, 6941,On Detecting Concurrency Defects Automatically at the Design Level,"We describe an automated approach for detecting concurrency defects from design diagrams of a software, in particular, sequence diagrams. From a given sequence diagram, we automatically infer a formal, parallel specification that generalizes the communication behavior that is designed informally and incompletely in the diagram. We model-check the parallel specification against generic concurrency defect patterns. No additional specification of the software is needed. We present several case-studies to evaluate our approach. The results show that our approach is technically feasible, and effective in detecting nasty concurrency defects at the design level.",2013,0, 6942,Data-Race-Freedom of Concurrent Programs,"Reasoning about access isolation in a program that uses locks, transactions or both to coordinate accesses to shared memory is complex and error-prone. The programmer must understand when accesses issued to the same memory by distinct threads, under possibly different coordination semantics, are isolated, otherwise, data races are introduced. We present a program analysis that guarantees a program is data-race-free irrespective of whether locks, transactions or both are used to coordinate accesses to memory. Our framework entails two main steps: (i) a program is statically executed to determine its memory space and the types of accesses it issues to that memory, then (ii) our isolation algorithm checks that the accesses issued by the program do not result in a data race. 
To the best of our knowledge our work is the first to guarantee the data-race-freedom of concurrent programs that use locks, transactions or both to coordinate accesses to mutable memory.",2013,0, 6943,Quality-Aware Refactoring for Early Detection and Resolution of Energy Deficiencies,"Software development processes usually target requirements regarding particular qualities in late iteration phases. The developed system is optimised in terms of quality issues, such as, e.g., energy efficiency, without altering the software's behaviour. Bad structures in terms of specific qualities can be considered as bad smells and refactorings can be used to resolve them to preserve its semantics. The problem is that no explicit relationship between smells, qualities and refactorings exists. Without such a relation it is not possible to give evidence about which quality requirements are not satisfied by detected smells. It cannot be specified which smells are resolved by particular refactorings. Thus, developers are not supported in focusing specific qualities and cannot detect and resolve badly structured code in combination. In this paper we present an approach for correlating smells, qualities and refactorings explicitly which supports to focus on specific qualities in early development phases already. We introduce the new term quality smell and come up with a metamodel and architecture enabling developers to establish such relations. A small evaluation regarding energy efficiency in Java code and discussion completes this paper.",2013,0, 6944,A Software Defined Self-Aware Network: The Cognitive Packet Network,"This article is a summary description of the Cognitive Packet Network (CPN) which is an example both of a completely software defined network (SDN) and of a self-aware computer network (SAN) which has been completely implemented and used in numerous experiments. CPN is able to observe its own internal performance as well as the interfaces of the external systems that it interacts with, in order to modify its behaviour so as to adaptively achieve objectives, such as discovering services for its users, improving their Quality of Service (QoS), reduce its own energy consumption, compensate for components which fail or malfunction, detect and react to intrusions, and defend itself against attacks.",2013,0, 6945,REE: Exploiting idempotent property of applications for fault detection and recovery,"As semiconductor technologies scale down to deep sub-micron dimensions, transient faults will soon become a critical reliability concern. This paper presents the Reliability Enhancement Exploiting (REE) technique, a software-implemented fault tolerance solution which employs idempotent property of applications. An idempotent region of code is simply one that can be re-executed multiple times and still produces the same, correct result. By instrumenting extra instructions in an idempotent region to re-execute the region, REE can detect the transient faults occurring during the execution of the idempotent region. Once a fault is detected, REE can recover from the fault by executing the idempotent region again. To the best of our knowledge, this is the first to exploit idempotent property for fault detection. With similar fault coverage to a classic solution, the memory overhead and the performance overhead have been reduced by 71.8% and 31.3%, respectively.",2013,0, 6946,Research on optimization scheme of regression testing,"Regression testing is an important process during software development. 
In order to reduce costs of regression testing, research on optimization of the regression testing scheme has been done in this paper. For the purpose of reducing the number of test cases and detecting faults of programs early, this paper proposed to combine test case selection with test case prioritization. The regression testing process has been designed and optimization of the testing scheme has been implemented. The criterion for test case selection is the modification impact of programs, finding programs which are impacted by program modification according to modification information of programs and dependencies between programs. Test cases would be selected during test case selection. The criteria for test case prioritization are the coverage ability and troubleshooting capabilities of test cases. Test cases which have been selected during test case selection would be ordered in test case prioritization. Finally, the effectiveness of the new method is discussed.",2013,0, 6947,Low power consumption scheduling based on software Fault-tolerance,"The space computer puts forward high demands on the performance. Therefore, the high-performance digital signal processors are increasingly used in the space computer. However, the single particle effects caused by the cosmic radiation make the reliability of the space computer a huge challenge. The COTS DSP chip has a huge advantage compared to the antiradiation DSP chip in performance, price, size and weight. The software implemented fault-tolerance technique can protect the program, but degrades the system performance and increases the power consumption. According to the DSP structural characteristics and on the premise of not reducing the error detection ratio, this paper proposes an instruction scheduling method for low power consumption, to reduce the overheads in terms of the performance and the energy incurred by the fault-tolerance technique.",2013,0, 6948,Predict fault-prone classes using the complexity of UML class diagram,"Complexity is an important attribute to determine the software quality. Software complexity can be measured during the design phase before implementation of the system. At the design phase, the UML class diagram is the important diagram to show the relationships among the classes of objects in the system. In this paper, we measure the complexity of object-oriented software at the design phase to predict the fault-prone classes. The ability to predict the fault-prone classes can provide guidance for software testing and improve the effectiveness of the development process. We constructed the Naive Bayesian and k-Nearest Neighbors models to find the relationship between the design complexity and fault-proneness. The proposed models are empirically evaluated using four versions of JEdit. The models had been validated using 10-fold cross validation. The performance of the prediction models was evaluated by goodness-of-fit criteria and Receiver Operating Characteristic (ROC) analysis. Results obtained from our case study showed the models developed by design complexity can, on average, predict up to 70% of fault-prone classes in object oriented software. It is a better early indicator of software quality.",2013,0, 6949,Perceived QoS assessment for Voip networks,"As the Internet evolves into a ubiquitous communications medium, VoIP networks become more important and popular. They will be expected to meet the quality standards for VoIP networks.
The aim of this paper is to undertake a fundamental investigation to quantify the impact of network impairment and speech-related parameters on perceived QoS in VoIP networks. Our contribution is threefold. First, a new VoIP simulation platform is established. The network simulation software is WANem, and the voice communication protocol is implemented by OpenPhone. Secondly, we analyze the factors that affect the perceived QoS of VoIP networks. Thirdly, we use the newest NPESQ (New Perceptual Evaluation of Speech Quality) algorithm to assess the perceived QoS value under different IP impairment parameters for VoIP networks.",2013,0, 6950,Improving Reliability of Real-Time Systems through Value and Time Voting,"Critical systems often use N-modular redundancy to tolerate faults in subsystems. Traditional approaches to N-modular redundancy in distributed, loosely-synchronised, real-time systems handle time and value errors separately: a voter detects value errors, while watchdog-based health monitoring detects timing errors. In prior work, we proposed the integrated Voting on Time and Value (VTV) strategy, which allows both timing and value errors to be detected simultaneously. In this paper, we show how VTV can be harnessed as part of an overall fault tolerance strategy and evaluate its performance using a well-known control application, the Inverted Pendulum. Through extensive simulations, we compare the performance of Inverted Pendulum systems which employ VTV and alternative voting strategies to demonstrate that VTV better tolerates well-recognised faults in this realistically complex control problem.",2013,0, 6951,Towards Formal Approaches to System Resilience,"Technology scaling and techniques such as dynamic voltage/frequency scaling are predicted to increase the number of transient faults in future processors. Error detectors implemented in hardware are often energy inefficient, as they are """"always on."""" While software-level error detection can augment hardware-level detectors, creating detectors in software that are highly effective remains a challenge. In this paper, we first present a new LLVM-level fault injector called KULFI that helps simulate faults occurring within CPU state elements in a versatile manner. Second, using KULFI, we study the behavior of a family of well-known and simple algorithms under error injection. (We choose a family of sorting algorithms for this study.) We then propose a promising way to interpret our empirical results using a formal model that builds on the idea of predicate state transition diagrams. After introducing the basic abstraction underlying our predicate transition diagrams, we draw connections to the level of resilience empirically observed during fault injection studies. Building on the observed connections, we develop a simple, and yet effective, predicate-abstraction-based fault detector. While in its initial stages, ours is believed to be the first study that offers a formal way to interpret and compare fault injection results obtained from algorithms from within one family.
Given the absolutely unpredictable nature of what a fault can do to a computation in general, our approach may help designers choose amongst a class of algorithms one that behaves most resilient of all.",2013,0, 6952,Probabilistic Modeling of Failure Dependencies Using Markov Logic Networks,"We present a methodology for the probabilistic modeling of failure dependencies in large, complex systems using Markov Logic Networks (MLNs), a state-of-the-art probabilistic relational modeling technique in machine learning. We illustrate this modeling methodology on example system architectures, and show how the the Probabilistic Consistency Engine (PCE) tool can create and analyze failure-dependency models. We compare MLN-based analysis with analytical symbolic analysis to validate our approach. The latter method yields bounds on the expected system behaviors for different component-failure probabilities, but it requires closed-form representations and is therefore often an impractical approach for complex system analysis. The MLN-based method facilitates techniques of early design analysis for reliability (e.g., probabilistic sensitivity analysis). We analyze two examples - a portion of the Time-Triggered Ethernet (TTEthernet) communication platform used in space, and an architecture based on Honeywell's Cabin Air Compressor(CAC) - that highlight the value of the MLN-based approach for analyzing failure dependencies in complex cyber-physical systems.",2013,0, 6953,Exploring Time and Frequency Domains for Accurate and Automated Anomaly Detection in Cloud Computing Systems,"Cloud computing has become increasingly popular by obviating the need for users to own and maintain complex computing infrastructures. However, due to their inherent complexity and large scale, production cloud computing systems are prone to various runtime problems caused by hardware and software faults and environmental factors. Autonomic anomaly detection is crucial for understanding emergent, cloud-wide phenomena and self-managing cloud resources for system-level dependability assurance. To detect anomalous cloud behaviors, we need to monitor the cloud execution and collect runtime cloud performance data. For different types of failures, the data display different correlations with the performance metrics. In this paper, we present a wavelet-based multi-scale anomaly identification mechanism, that can analyze profiled cloud performance metrics in both time and frequency domains and identify anomalous cloud behaviors. Learning technologies are exploited to adapt the selection of mother wavelets and a sliding detection window is employed to handle cloud dynamicity and improve anomaly detection accuracy. We have implemented a prototype of the anomaly identification system and conducted experiments on an on-campus cloud computing environment. Experimental results show the proposed mechanism can achieve 93.3% detection sensitivity while keeping the false positive rate as low as 6.1% while outperforming other tested anomaly detection schemes.",2013,0, 6954,Applying Reduced Precision Arithmetic to Detect Errors in Floating Point Multiplication,"Prior work developed an efficient technique, called reduced precision checking, for detecting errors in floating point addition. In this work, we extend reduced precision checking (RPC) to multiplication. 
Our results show that RPC can successfully detect errors in floating point multiplication at relatively low cost.",2013,0, 6955,Generalized Cox Proportional Hazards Regression-Based Software Reliability Modeling with Metrics Data,"Multifactor software reliability modeling with software test metrics data is well known to be useful for predicting the software reliability with higher accuracy, because it utilizes not only software fault count data but also software testing metrics data observed in the development process. In this paper we generalize the existing Cox proportional hazards regression-based software reliability model by introducing more generalized hazards representation, and improve the goodness-of-fit and predictive performances. In numerical examples with real software development project data, we show that our generalized model can significantly outperform several logistic regression-based models as well as the existing Cox proportional hazards regression-based model.",2013,0, 6956,Should We Beware the Exceptions? An Empirical Study on the Eclipse Project,"Exception handling is a mechanism that highlights exceptional functionality of software systems. Currently there are empirical studies pointing out that design entities (classes) that use exceptions are more defect prone than the other classes and sometimes developers neglect exceptional functionality, minimizing its importance. In this paper we investigate if classes that use exceptions are the most complex classes from software systems and, consequently, have an increased likelihood to exhibit defects. We also detect two types of improper usages of exceptions in three releases of Eclipse and investigate the relations between classes that handle/do not handle properly exceptions and the defects those classes exhibit. The results show that (i) classes that use exceptions are more complex than the other classes and (ii) classes that handle improperly the exceptions in the source code exhibit an increased likelihood of exhibiting defects than classes which handle them properly. Based on the provided evidence, practitioners get knowledge about the correlations between exceptions and complexity and are advised once again about the negative impact deviations from best programming practices have at a source code level.",2013,0, 6957,Network coding to enhance standard routing protocols in wireless mesh networks,"This paper introduces a design and simulation of a locally optimized network coding protocol, called PlayNCool, for wireless mesh networks. PlayN-Cool is easy to implement and compatible with existing routing protocols and devices. This allows the system to gain from network coding capabilities implemented in software without the need for new hardware. PlayNCool enhances performance by (i) choosing a local helper between nodes in the path to strengthen the quality of each link, (ii) using local information to decide when and how many transmissions to allow from the helper, and (iii) using random linear network coding to increase the usefulness of each transmission from the helpers. This paper focuses on the design details needed to make the system operate in reality and evaluating performance using ns-3 in multi-hop topologies. Our results show that the PlayNCool protocol increases the end-to-end throughput by more than two-fold and up to four-fold in our settings.",2013,0, 6958,Design and implementation QoS system based on OpenFlow,"In this paper, we design an architecture to implement a QoS system based on OpenFlow. 
The system can filter malicious packets that have no permission to gain quality of service, and can guarantee scheduling the fastest path for QoS packets in the working network. By predicting and estimating current network flows, we assign an optimal path for each QoS flow without affecting the rest of the normal packet transmission. We can also transfer the packets even in congested networks by simply flushing all flow-tables in the switches and rescheduling the path for different packets. All the provisioning of our system can be flexibly updated in real time. We implement the system based on the POX controller, and verify and evaluate our design goals.",2013,0, 6959,OB-STM: An Optimistic Approach for Byzantine Fault Tolerance in Software Transactional Memory,"Recently, researchers have shown an increased interest in concurrency control using distributed Software Transactional Memory (STM). However, there has been little discussion about certain types of fault tolerance, such as Byzantine Fault Tolerance (BFT), for this kind of system. The focus of this paper is on tolerating byzantine faults in optimistic processing of transactions using STM. The result is an algorithm named OB-STM. The processing of a transaction runs with an optimistic approach, benefiting from the high probability of messages being delivered in order when using Reliable Multicast on a local network (LAN). The protocol performs better when messages are delivered in order. In case of a malicious replica or out-of-order messages, the Byzantine protocol is initiated. In smaller scenarios and using an optimistic approach, the protocol has a better throughput than Tazio.",2013,0, 6960,Automatic test platform for photovoltaic grid-connected inverters,"A photovoltaic (PV) solar inverter is equipment that converts the DC output of solar batteries to the AC power which meets the requirements of the grid; its performance and quality are directly related to the photovoltaic effect on the public grid. The current national standard specifies only the requirements for protection and does not develop appropriate testing rules and procedures. This paper researched and developed a PV grid-connected inverter detection platform, and analyzed the PV grid-connected inverter protective function and testing methods and procedures. We realized the PC integration of the system and the automatic test of the inverter by using Kingview software, to ensure the reliability and accuracy of test results; in addition, the host computer system has proved its ease of use, stability and scalability.",2013,0, 6961,An incentive scheme based on heterogeneous belief values for crowd sensing in mobile social networks,"Crowd sensing is a new paradigm which exploits pervasive mobile devices to provide complex sensing services in mobile social networks (MSNs). To achieve good service quality for crowd sensing applications, incentive mechanisms are indispensable to attract more participants. Most existing research applies only to offline sensing data collection, where all participants' information is known a priori. In contrast, we focus on a more realistic scenario requiring continuous crowd sensing. We model the problem as a restless multi-armed bandit process rather than a regular auction, where users submit their bids to the server over time, and the server chooses a subset of users to collect sensing data.
In this paper, to maximize the social welfare for the infinite horizonal continuous sensing, we design an incentive scheme based on heterogeneous belief values for joint social states and realtime throughput. Analysis results indicate that our algorithm is not only near optimal and stable, but truthful, individually rational, profitable, and computationally efficient.",2013,0, 6962,Design and Optimization of a Big Data Computing Framework Based on CPU/GPU Cluster,"Big data processing is receiving significant amount of interest as an important technology to reveal the information behind the data, such as trends, characteristics, etc. MapReduce is one of the most popular distributed parallel data processing framework. However, some high-end applications, especially some scientific analyses have both data-intensive and computation intensive features. Therefore, we have designed and implemented a high performance big data process framework called Lit, which leverages the power of Hadoop and GPUs. In this paper, we presented the basic design and architecture of Lit. More importantly, we spent a lot of effort on optimizing the communications between CPU and GPU. Lit integrated GPU with Hadoop to improve the computational power of each node in the cluster. To simplify the parallel programming, Lit provided an annotation based approach to automatically generate CUDA codes from Hadoop codes. Lit hid the complexity of programming on CPU/GPU cluster by providing extended compiler and optimizer. To utilize the simplified programming, scalability and fault tolerance benefits of Hadoop and combine them with the high performance computation power of GPU, Lit extended the Hadoop by applying a GPUClassloader to detect the GPU, generate and compile CUDA codes, and invoke the shared library. For all CPU-GPU co-processing systems, the communication with the GPU is the well-known performance bottleneck. We introduced data flow optimization approach to reduce unnecessary memory copies. Our experimental results show that Lit can achieve an average speedup of 1 to 3 on three typical applications over Hadoop, and the data flow optimization approach for the Lit can achieve about 16% performance gain.",2013,0, 6963,Towards Energy-Aware Resource Scheduling to Maximize Reliability in Cloud Computing Systems,"Cloud computing has become increasingly popular due to deployment of cloud solutions that will enable enterprises to cost reduction and more operational flexibility. Reliability is a key metric for assessing performance in such systems. Fault tolerance methods are extensively used to enhance reliability in Cloud Computing Systems (CCS). However, these methods impose extra hardware and/or software cost. Proper resource allocation is an alternative approach which can significantly improve system reliability without any extra overhead. On the other hand, contemplating reliability irrespective of energy consumption and Quality of Service (QoS) requirements is not desirable in CCSs. In this paper, an analytical model to analyze system reliability besides energy consumption and QoS requirements is introduced. Based on the proposed model, a new online resource allocation algorithm to find the right compromise between system reliability and energy consumption while satisfying QoS requirements is suggested. The algorithm is a new swarm intelligence technique based on imperialist competition which elaborately combines the strengths of some well-known meta-heuristic algorithms with an effective fast local search. 
A wide range of simulation results, based on real data, clearly demonstrate the high efficiency of the proposed algorithm.",2013,0, 6964,Defects prediction of early phases of Software Development Life Cycle using fuzzy logic,"In this paper, a model is proposed to predict the software defect indicator of the early phases of the Software Development Life Cycle (SDLC) using the topmost reliability-relevant metrics at each artifact. Failure data is not available in the early phases of the SDLC. Therefore qualitative values of software metrics are used in this model. Defect indicators predicted in the requirement analysis, design and coding phases are very helpful for testing and maintenance of the software. The requirement analysis phase defect indicator value is relatively greater than that of the design and coding artifacts. The model is validated with the existing literature. Validation results are satisfactory.",2013,0, 6965,Software quality modeling using metrics of early artifacts,"Software industries require reliability prediction for quality evaluation and resource planning. In the early phases of software development, failure data is not accessible to conclude the reliability of software. However, an early software fault prediction procedure provides the flexibility to predict faults at an early stage. In this paper, a software fault prediction model is proposed using BBN that focuses on the structure of the software development process, explicitly representing the complex relationship of five influencing parameters (Techno-complexity, Practitioner Level, Creation Effort, Review Effort, and Urgency). In order to assess the constructed model, an empirical experiment has been performed, based on the data collected from software development projects used by an organization. The predicted faults were found to be very near to the actual faults detected during testing.",2013,0, 6966,Aspect Oriented Software metrics based maintainability assessment: Framework and model,"This paper emphasizes a new framework to assess Aspect Oriented Software (AOS) using software metrics. Software metrics for qualitative and quantitative assessment are the combination of static and dynamic metrics for software. It is found from the literature survey that, to date, most frameworks have only considered static metrics based assessment for aspect oriented software. In our work we have mainly considered a set of static metrics along with dynamic software metrics specific to AspectJ. This framework may provide a new research direction for predicting software attributes, because earlier dynamic metrics were neglected while evaluating quality attributes like maintainability, reliability and understandability for AO software. Based on the basic fundamentals of software engineering, dynamic metrics are equally as important as static metrics for software analysis. A similar concept is borrowed and applied to aspect oriented software development by adding dynamic software metrics. Presently we have only proposed a framework and model using the static and dynamic metrics for the assessment of aspect oriented systems, but the proposed approach still needs to be validated.",2013,0, 6967,Software defect prediction using supervised learning algorithm and unsupervised learning algorithm,"Software defect prediction has recently attracted the attention of many software quality researchers. One of the major areas in current project management software is to effectively utilize resources to make a meaningful impact on time and cost.
A pragmatic assessment of metrics is essential in order to comprehend the quality of software and to ensure corrective measures. Software defect prediction methods are mainly used to study the impact areas in software using different techniques, which comprise neural network (NN) techniques, clustering techniques, statistical methods, and machine learning methods. These data mining techniques are applied in building software defect prediction models, which improve software quality. The aim of this paper is to propose various classification and clustering methods with the objective of predicting software defects. To predict software defects we analyzed classification and clustering techniques. The performance of three data mining classifier algorithms named J48, Random Forest, and Naive Bayesian Classifier (NBC) is evaluated based on various criteria like ROC, Precision, MAE, RAE, etc. Clustering techniques are then applied to the data set using the k-means, Hierarchical Clustering, and Make Density Based Clustering algorithms. Evaluation of the clustering results is based on criteria like Time Taken, Cluster Instances, Number of Iterations, Incorrectly Clustered Instances, and Log Likelihood. A thorough exploration of ten real-time defect datasets from the NASA [1] software project, followed by various applications on them, finally results in defect prediction.",2013,0, 6968,Table of contents,The following topics are dealt with: MIMO systems; MAP probability decoders; OFDM channel estimation; quantum key distribution; noiseless linear amplifier; transform domain de-noising; digital image retrieval; parallel executing command scheduler; NAND flash storage system; transmitted-reference ultra-wideband cooperative communication system; multisensor data fusion; wireless sensor network; target detection; user-priority based virtual network embedding model; homomorphic encryption; image segmentation; memristors; LLOP localization algorithm; NLOS wireless propagation; routing algorithm; schismatic communication network; UAV; LDPC layered decoder; priority scheme; spectrum partition; multiuser opportunistic spectrum access; cognitive radio networks; shadow detection method; improved Gaussian mixture model; multithreaded coprocessor IP core; embedded SoC chip; WCDMA network quality; arterial highways; autocorrelation OFDM chirp waveform; file system; self-computing; high speed QSPI memory controller; shot boundary detection; video retrieval; reversible quantum n-to-2n decoder; quadratic interleaved codes; wireless Doppler environments; cross-site scripting attack; encoding; smart home system; remote heart sound monitoring system; LZSS lossless compression algorithm; Virtex-7 FPGA-based high-speed signal processing hardware platform design; mine detecting robot; wireless communication; irregular mesh NoC; JAVA blueprint procedure pattern; condition monitoring system; hydroelectric generating unit; HPP; information-focused model SoS framework; software-based GPS receiver; chosen plaintext attacking; hyper-chaos image encryption algorithm; mobile ECG monitoring system; CMMB; precession parameters extraction; midcourse target; HRRP sequence; refactoring techniques usage; code changing; greedy algorithms; ballistic target recognition; micromotion characteristics; sequential HRRP; high speed motion compensation; fractional Fourier transform; active distribution network- smart grid; PTS scheme; IFFT; PAPR reduction; SISO/MIMO OFDM; fuzzy PID control; passive lower extremity exoskeleton; adaptive self-tuning PID
control; submarine periscope; high resolution SAR imagery; PMSM control system; target jamming; wideband linear frequency modulated signal; adaptive facet subdivision scheme; shadowing culling; multi-objective optimization problem; unmanned aerial vehicle image denoising; RBF neural network; energy efficient cooperative multicast transmission scheme; clustered WSN; circular arc detection algorithm; Freeman chain code; 3D model building; fault diagnosis; power transformer; improved differential evolution-neural network; subway station; combined social force model; infrared thermal imaging diagnosis technology; power equipment; electronic thermometer; Bluetooth low energy; deformation measurement; formation satellites; collision avoidance constraints; LQG controller; antenna control system; block-PEG construction method; TPM signal digital notch filters; statistical model; text segmentation; BFD-triggered OAM mechanisms; IP RAN network; collaborative filtering; social networks; network coding; multicast routing; vehicle management system; ubiquitous network; mobile application development; Adobe AIR; in vehicle network; backstepping active disturbance rejection controller; helicopter; SOC algorithm; lithium-ion batteries; electric vehicles; hardware Trojan detection; malicious circuit properties; fuzzy-sliding mode control algorithm; servo system; antenna sub-reflector; vehicle context-aware system; optimal power allocation scheme; opportunistic cooperative multicast transmission; cognitive OFDMA networks; text mining model; and density clustering algorithm.,2013,0, 6969,Field studies for transient stability in continuous operation and contingency condition during 3 short circuit at PCC for grid connected 10MW Kastina wind farm,"As a result of the world's energy resources crisis and increasingly aggravated environmental pollution, distributed generation (DG) based on renewable energy has become a development trend for the electric power industry in the 21st century. However, DGs are usually affected by natural conditions and are not able to output power continuously and steadily, so when large-scale wind turbine generators are incorporated into the grid there is a likely attendant impact on electric power system stability. In this investigation, the wind farm connection was made at the 33/11 kV level at the Kastina substation located in the northern part of Nigeria. This paper aims to establish the voltage instability and frequency fluctuation that might be associated with the connection of the wind farm to the existing grid. The DigSILENT Power Factory software was used for modeling and simulation analysis, and the results show that when the wind farm is connected to the grid, a large clearance time is required for the fault to be cleared before voltage and frequency become normalized again, as compared with the normal grid system. The studies have verified the transient stability problems associated with voltage and frequency stability variation phenomena during the disconnection of both the wind farm and the load from the existing grid, and make it easier to predict in future the settings of protective devices and the clearance time.",2013,0, 6970,A Novel Fault Localization Method with Fault Propagation Context Analysis,"A variety of graph-based fault localization methods are applied to abstract the relationships between program entities and thereby facilitate program debugging and understanding.
They assess the fault suspiciousness of individual program entities and rank the statements in descending order according to their suspiciousness scores to help identify faults in programs. However, many of these methods focus on assessing the suspiciousness of individual program entities while ignoring the propagation of infected program states among them, so they could not locate the true fault. In this paper, we consider that the fault in a statement may be propagated to its subsequent statements via its data flow edges. We propose a novel CDN-based fault localization method. It includes two steps: the first is fault-related statement localization and the second is fault comprehension. It calculates the combined dependence probability of each statement to find the fault-related statements and then analyzes the propagation contexts of the statements to locate the true fault. The experimental results show that our approach is superior to other fault localization methods in both localization effectiveness and stability.",2013,0, 6971,Estimating the regression test case selection probability using fuzzy rules,"Software maintenance is performed regularly for enhancing and adapting the functionalities of the existing software, which modifies the software and breaks the previously verified functionalities. This sets a requirement for software regression testing, making it a necessary maintenance activity. As the evolution of software takes place, the size of the test suite tends to grow, which makes it difficult to execute the entire test suite in a time-constrained environment. There are many existing techniques for regression test case selection. Some are based on dataflow analysis techniques, slicing-based techniques, bio-inspired techniques, and genetic-algorithm-based techniques. This paper gives a regression test case selection technique based on a fuzzy model, which reduces the size of the test suite by selecting test cases from the existing test suite. The test cases which are necessary for validating the recent changes in the software, have the ability to find the faults, and cover the maximum code under test in minimum time are selected. A fuzzy model is designed which takes three parameters, namely code covered, execution time, and faults covered, as input and produces an estimate of the test case selection probability as very low, low, medium, high, or very high.",2013,0, 6972,Software components prioritization using OCL formal specification for effective testing,"In soft real-time system development, testing effort minimization is a challenging task. Earlier research has shown that often a small percentage of components are responsible for most of the faults reported at the later stages of software development. Due to time and other resource constraints, fault-prone components are ignored during the testing activity, which leads to compromises on software quality. Thus there is a need to identify fault-prone components of the system based on the data collected at the early stages of software development. The major focus of the proposed methodology is to identify and prioritize fault-prone components of the system using its OCL formal specifications. This approach enables testers to distribute more effort on fault-prone components than on non-fault-prone components of the system.
The proposed methodology is illustrated based on three case study applications.",2013,0, 6973,Design of 1553B Bus Testing and Simulating System,"With the appearance of different manufacturers and types of 1553B bus devices, the lack of effective tools for real-time analysis and evaluation of bus test methods has become a severe problem, so a generally configurable and flexible testing and simulation verification system for the 1553B bus is developed. In this thesis, a real-time hardware and software system solution is proposed, and a novel mechanism of fault injection testing and fault detection is adopted. Moreover, the proposed system achieves good performance in practical applications.",2013,0, 6974,Design and Implementation of Waterworks Operation Data Real-Time Detection and Processing System,"As people's water consumption becomes larger and larger in China, people begin to place higher requirements on water quality. However, some waterworks' monitoring systems cannot automatically detect and process water quality parameters and system operation status, so staff have to read meters manually. The paper first discusses the design of the whole system, including the server side and the client side. It then goes on to discuss, on the server side, how to realize the communication between the server and the OPC client using OPC communication technology. Finally, the paper focuses on the client side, where the user interface is designed to export automatic reports and to draw graphs. Running results show that the software can detect and process parameters in real time, has a friendly interface and good reliability, and can meet the demands of customers.",2013,0, 6975,Improvement in production rate for 3 phase energy meter terminal block by choosing an optimum gate location and reducing the defects of the tool,"Due to the heavy demand for plastic products, plastic industries are growing at a fast rate. Plastic injection moulding begins with mould making and the manufacturing of critical shapes. The optimum gate location is one of the most important criteria in mould design. Mould Flow analysis is a powerful simulation tool to find the best gate location and to predict the production time required at the lowest possible cost. Verification using simulation requires much less time to achieve a quality result, and with no material costs, as compared with the conventional trial-and-error methods on the production floor. In this paper, an attempt has been made to analyse four gate locations for a rectangular-shaped plastic component. Mould Flow Plastic simulation software is used for the analysis and the optimum gate location is found with the least defects. The placement of a gate in an injection mould is one of the most important variables of the total mould design. The tool is a single-cavity mould. The component has 12 circular holes, rectangular slots, and flat openings, which require two side cores and two core inserts. This calls for a finger cam in the tool construction. Analysis for filling, flow, best gate location, and cooling is carried out using mould flow software. The quality of the moulded part is greatly affected by the gate location, because it influences the manner in which the plastic flows into the mould cavity.
The mould flow analysis helps in reducing costs and time and also prevents other defects occurring in the process.",2013,0, 6976,Vibration measurement of dental CBCT based on 3D accelerometer,"Currently, dental CBCT (Cone-Beam Computed Tomography) is widely used in tooth implantation, maxillofacial surgery, and the treatment of temporomandibular joint disorders. The three-dimensional images of dental CBCT are reconstructed based on the projection images, which are obtained at each degree of a 360° rotation. The projected image should have a precise correspondence with the angle of rotation of the arm. If vibration occurs during the rotation, artifacts will be introduced in the reconstruction, thus affecting the image quality. It is therefore of great significance for image reconstruction if the vibration information of the CBCT arm can be detected and evaluated. This paper presents a vibration measuring device based on a 3D accelerometer, which can measure and evaluate the vibration of the CBCT during operation. With this device, vibration information can be sensed by the 3D accelerometer. The analog signals are sampled by a microprocessor with an internal ADC converter. Then, all data are sent to a PC via wireless communication. Software written in MATLAB acquires, displays, saves, and analyzes data from the measurement device, which makes it possible to evaluate the vibration information of the CBCT.",2013,0, 6977,A cool way of improving the reliability of HPC machines,"Soaring energy consumption, accompanied by declining reliability, together loom as the biggest hurdles for the next generation of supercomputers. Recent reports have expressed concern that reliability at the exascale level could degrade to the point where failures become the norm rather than the exception. HPC researchers are focusing on improving existing fault tolerance protocols to address these concerns. Research on improving hardware reliability, i.e., machine component reliability, has also been making progress independently. In this paper, we try to bridge this gap and explore the potential of combining both software and hardware aspects towards improving the reliability of HPC machines. Fault rates are known to double for every 10°C rise in core temperature. We leverage this notion to experimentally demonstrate the potential of restraining core temperatures and load balancing to achieve two-fold benefits: improving the reliability of parallel machines and reducing the total execution time required by applications. Our experimental results show that we can improve the reliability of a machine by a factor of 2.3 and reduce the execution time by 12%. In addition, our scheme can also reduce machine energy consumption by as much as 25%. For a 350K socket machine, regular checkpoint/restart fails to make progress (less than 1% efficiency), whereas our validated model predicts an efficiency of 20% by improving the machine reliability by a factor of up to 2.29.",2013,0, 6978,Optimization of cloud task processing with checkpoint-restart mechanism,"In this paper, we aim at optimizing fault-tolerance techniques based on a checkpointing/restart mechanism, in the context of cloud computing. Our contribution is three-fold. (1) We derive a fresh formula to compute the optimal number of checkpoints for cloud jobs with varied distributions of failure events. Our analysis is not only generic with no assumption on failure probability distribution, but also attractively simple to apply in practice.
(2) We design an adaptive algorithm to optimize the impact of checkpointing regarding various costs like checkpointing/restart overhead. (3) We evaluate our optimized solution in a real cluster environment with hundreds of virtual machines and the Berkeley Lab Checkpoint/Restart tool. Task failure events are emulated via a production trace produced on a large-scale Google data center. Experiments confirm that our solution is fairly suitable for Google systems. Our optimized formula outperforms Young's formula by 3-10 percent, reducing wall-clock lengths by 50-100 seconds per job on average.",2013,0, 6979,A clustering method for pruning false positives of clone code detection,"There are some false positives when detecting syntactically similar cloned code with token-based clone detection techniques. In this paper, we propose a novel algorithm to automatically prune false positives of clone code detection by performing clustering with different attributes and weights. First, closely related statements are grouped into a cluster by performing clustering. Second, the hash values of the statements in two clusters are compared to prune false positives. The experimental results show that our method can effectively prune clone code false positives caused by switching the order of statements with the same structure. It not only improves the accuracy of cloned code detection and cloned-code-related defect detection but also contributes to the subsequent study of cloned code refactorings.",2013,0, 6980,A high reliability on-board parallel system based on multiple DSPs,"Data volumes produced by various space missions have increased significantly, creating an urgent need for on-board systems with higher performance and reliability. A 2 parallel + 1 standby system based on multiple DSPs is proposed in this paper. This system consists of three DSPs and four FPGAs. Two DSPs work in parallel and the remaining one is on standby. The standby DSP replaces the faulty DSP when an unrecoverable error is detected by the radiation hardening modules. We applied a 2048-point FFT on this system to evaluate the performance. Then we injected some errors into the system and repeated the FFT application. The experimental results show that the system reaches a speedup of 1.63 and can recover from errors in a short time.",2013,0, 6981,A metamodel for tracing requirements of real-time systems,"Modeling and tracing requirements are difficult, error-prone activities which have a great impact on the overall software development process. Most techniques for modeling requirements present a number of problems and limitations, including modeling requirements at a single level of abstraction and being specific to modeling functional requirements. In addition, non-functional requirements are frequently overlooked. Without the proper modeling of requirements, the activity of tracing requirements is impaired. This article aims to perform a study on modeling requirements of Real-Time Systems through an extension of the SysML Requirements Diagram focusing on the traceability of non-functional and functional requirements. The SysML metamodel is extended with new stereotypes and relationships, and the proposed metamodel is applied to a set of requirements for the specification of a Road Traffic Control System. The proposed approach has been demonstrated to be effective for representing software requirements of real-time systems at multiple levels of abstraction and classification.
The proposed metamodel represents concisely the traceability of requirements at a high level of abstraction.",2013,0, 6982,F6COM: A component model for resource-constrained and dynamic space-based computing environments,"Component-based programming models are well-suited to the design of large-scale, distributed applications because of the ease with which distributed functionality can be developed, deployed, and validated using the models' compositional properties. Existing component models supported by standardized technologies, such as the OMG's CORBA Component Model (CCM), however, incur a number of limitations in the context of cyber physical systems (CPS) that operate in highly dynamic, resource-constrained, and uncertain environments, such as space environments, yet require multiple quality of service (QoS) assurances, such as timeliness, reliability, and security. To overcome these limitations, this paper presents the design of a novel component model called F6COM that is developed for applications operating in the context of a cluster of fractionated spacecraft. Although F6COM leverages the compositional capabilities and port abstractions of existing component models, it provides several new features. Specifically, F6COM abstracts the component operations as tasks, which are scheduled sequentially based on a specified scheduling policy. The infrastructure ensures that at any time at most one task of a component can be active - eliminating race conditions and deadlocks without requiring complicated and error-prone synchronization logic to be written by the component developer. These tasks can be initiated due to (a) interactions with other components, (b) expiration of timers, both sporadic and periodic, and (c) interactions with input/output devices. Interactions with other components are facilitated by ports. To ensure secure information flows, every port of an F6COM component is associated with a security label such that all interactions are executed within a security context. Thus, all component interactions can be subjected to Mandatory Access Control checks by a Trusted Computing Base that facilitates the interactions. Finally, F6COM provides capabilities to monitor task execution deadlines and to configure component-specific fault mitigation actions.",2013,0, 6983,An evaluation framework for assessing the dependability of Dynamic Binding in Service-Oriented Computing,"Service-Oriented Computing (SOC) provides a flexible framework in which applications may be built up from services, often distributed across a network. One of the promises of SOC is that of Dynamic Binding where abstract consumer requests are bound to concrete service instances at runtime, thereby offering a high level of flexibility and adaptability. Existing research has so far focused mostly on the design and implementation of dynamic binding operations and there is little research into a comprehensive evaluation of dynamic binding systems, especially in terms of system failure and dependability. In this paper, we present a novel, extensible evaluation framework that allows for the testing and assessment of a Dynamic Binding System (DBS). Based on a fault model specially built for DBS's, we are able to insert selectively the types of fault that would affect a DBS and observe its behavior.
By treating the DBS as a black box and distributing the components of the evaluation framework, we are not restricted to the implementing technologies of the DBS, nor do we need to be co-located in the same environment as the DBS under test. We present the results of a series of experiments, with a focus on the interactions between a real-life DBS and the services it employs. The results on the NECTISE Software Demonstrator (NSD) system show that our proposed method and testing framework are able to trigger abnormal behavior of the NSD due to interaction faults and generate important information for improving both the dependability and performance of the system under test.",2013,0, 6984,Acoustic characteristics concerning construction and drive of axial-flux motors for electric bicycles,"Ride quality, including perceptible noise and tactile vibration, is one of the key considerations for electric bicycles. Featuring high torque density and a slim shape, axial-flux permanent magnet (AFPM) motors fulfill most of the integration requirements for electric bicycles. Such a pancake-shaped construction, however, is prone to structural vibration since a large axial force is exerted on the stator by the rotor magnets. In this study, two conventional bicycles were modified to be equipped with either inrunner or outrunner AFPM motors, which induced noise concerns during riding. Measured data of phase currents, vibration, and noise were analyzed by time signature, spectrum, or cepstrum for both motors. Additional modal testing was performed for the outrunner motor as structural resonances occurred. Through investigations of both the motor structure and the motor drive, the major vibration and noise peaks were correlated to their excitation sources. In this study, the torque ripple induced by the current control scheme was the root cause of the inrunner motor noise. The outrunner motor noise was mainly caused by the stator slotting effect and the coincidence with structural resonance. Moreover, the perceptibility of switching noise was highly linked to the pulse-width-modulation switching frequency. After the comprehensive cause-effect analysis and effective remedies to refine the drive scheme or the controller's software, we obtained a satisfactory impression of motor noise with remarkable noise reductions, 16 dB and 6 dB for the inrunner motor and outrunner motor, respectively. As a result, the operating noise at the rider's ear location was below 60 dB and fulfilled the expectations of most cyclists.",2013,0, 6985,A comprehensive compiler-assisted thread abstraction for resource-constrained systems,"While the size and complexity of sensor network software have increased significantly in recent years, the hardware capabilities of sensor nodes have remained very constrained. The predominant event-based programming paradigm addresses these hardware constraints, but does not scale well with the growing software complexity, often leading to software that is hard to manage and error-prone. Thread abstractions could remedy this situation, but existing solutions in sensor networks either provide incomplete thread semantics or introduce a significant resource overhead. This reflects the common understanding that one has to trade expressiveness for efficiency and vice versa. Our work, however, shows that this trade-off is not inherent to resource-constrained systems.
We propose a comprehensive compiler-assisted cooperative threading abstraction, where full-fledged thread-based C code is translated to efficient event-based C code that runs atop an event-based operating system such as Contiki or TinyOS. Our evaluation shows that our approach outperforms thread libraries and generates code that is almost as efficient as hand-written event-based code, with overheads of 1 % RAM, 2 % CPU, and 3 % ROM.",2013,0, 6986,Evolutionary Search Algorithms for Test Case Prioritization,"To improve the effectiveness of meeting certain performance goals, test case prioritization techniques are used. These techniques schedule the test cases in a particular order for execution so as to increase their efficacy in meeting the performance goals. For every change in the program it is considered inefficient to re-execute each and every test case. Test case prioritization techniques arrange the test cases within a test suite in such a way that the most important test case is executed first. This process enhances the effectiveness of testing. Under time-constrained execution, this algorithm has been shown to detect the maximum number of faults while including the most severe test cases.",2013,0, 6987,Measuring the Gain of Automatic Debug,"The purpose of regression testing is to quickly catch any deterioration in the quality of a product under development. The more frequently tests are run, the earlier new issues can be detected, resulting in a larger burden for the engineers who need to manually debug all test failures, many of which are failing due to the same underlying bug. However, there are software tools that automatically debug the test failures back to the faulty change and notify the engineer who made this change. By analyzing data from a real commercial ASIC project we aimed to measure whether bugs are fixed faster when using automatic debug tools compared to manual debugging. All bugs in an ASIC development project were analyzed over a period of 3 months in order to determine the time it took for a bug to be fixed and to compare the results from both automatic and manual debug. By measuring the time from when the bug report was sent out by the automatic debug tool until the bug was fixed, we can show that bugs are fixed 4 times faster with automatic debug enabled. Bug fixing time was on average 5.7 hours with automatic debug and 23.0 hours for manual debug. The result was achieved by comparing bugs that were automatically debugged to those issues that could not be debugged by the tool, because those issues were outside the defined scope of the device under test. Such issues are still reported by the automatic debug tool but marked as requiring manual debug and are consequently a good point of comparison. A 4 times quicker bug fixing process is significant and can ultimately contribute to a shortening of a development project, as the bug turnaround time is one of the key aspects defining the length of a project, especially in the later phase just before release.",2013,0, 6988,Efficient mitigation of data and control flow errors in microprocessors,"The use of microprocessor-based systems is gaining importance in application domains where safety is a must. For this reason, there is a growing concern about the mitigation of SEU and SET effects. This paper presents a new hybrid technique aimed at protecting both the data and the control flow of embedded applications running on microprocessors. On the one hand, the approach is based on software redundancy techniques for correcting errors produced in the data.
On the other hand, control-flow errors can be detected by reusing the on-chip debug interface that exists in most modern microprocessors. Experimental results show an important increase in system reliability, exceeding two orders of magnitude, in terms of mitigation of both SEUs and SETs. Furthermore, the overheads incurred by our technique are perfectly acceptable in low-cost systems.",2013,0, 6989,Reliability of different fault detection algorithms under high impedance faults,This paper proposes a comparative study on high impedance faults (HIF) using different fault detection techniques in distance relaying. The original signal under fault consists of two parts: a normal part and a disturbance part. Fault detection is easily achieved as the disturbance part of the signal produces an irregular shape compared to the shape produced from the normal part of the signal. By selecting a suitable threshold value the starting point of the irregular part can be found. But in the case of HIF the disturbance part is nearly equal to the normal signal and the detection of these types of faults is difficult. Detection algorithms are considered more effective than other algorithms if they can detect HIFs. Results are obtained in MATLAB/SIMULINK software.,2013,0, 6990,An efficient and shortest path selection primary-segmented backup algorithm for real-time communication in multi-hop networks,"The development of high-speed networking has introduced opportunities for new applications such as real-time distributed computation, remote control systems, video conferencing, medical imaging, digital continuous media (audio and motion video), and scientific visualization. Several distributed real-time applications (e.g., medical imaging, video conferencing and air traffic control) demand hard guarantees on the message delivery latency and the recovery delay from component failures. As these demands generally cannot be met by old or traditional datagram services, special types of schemes have been proposed to provide timely as well as efficient recovery for real-time communications in multihop or multi-node networks. These schemes reserve additional network resources (spare resources) a priori along a backup channel that is disjoint with the primary. Such distributed real-time applications demand quality-of-service (QoS) guarantees on timeliness of message delivery and failure-recovery delay. These guarantees are agreed upon before setting up the communication channel and must be met even in the case of bursty network traffic, hardware failure (router and switch crashes, physical cable cuts, etc.), or software bugs. Applications using traditional best effort datagram services like IP experience varying delays due to varying queue sizes and packet drops at the routers. In a distributed system, all applications require guarantees that messages are delivered in a short time along the shortest path. To deliver a message from the source to the destination node, we have used a primary path, since the primary path is the shortest path and very advantageous too. But it can break down due to numerous reasons, as any communication network is prone to faults due to hardware failure or software bugs. If the primary path",2013,0, 6991,Mutation testing tools- An empirical study,"Prevailing code coverage techniques in software testing, such as condition coverage and branch coverage, are thoroughness indicators rather than indicators of a test suite's capability to detect faults.
Mutation testing can be regarded as a fault-based technique that measures the effectiveness of test suites for the localization of faults. Generating and running a vast number of mutants against the test cases is arduous and time-consuming. Therefore, the use of mutation testing in the software industry is uncommon. Hence, an automated, fast, and reliable tool is required to perform mutation testing. Various mutation testing tools exist for the software industry. In this paper, various available tools are studied and, based on the study, a comparison is made between these mutation testing tools. An inference is made that all the available tools are language dependent or need to be configured differently for different languages to generate and run test cases. The comparative study reveals that most of the available mutation testing tools are for the Java language and possess many important features, while fewer tools are available for other languages like C, C++, C# and FORTRAN.",2013,0, 6992,An enhancement for single sampling plan method,"Acceptance sampling has been widely used as a quality control technique in industry. A standard single sampling plan consists of 3 switchable inspection plans: Tightened, Normal and Reduced [1]. This means the 3 inspection schemes can be switched from one to another based on the quality of predetermined successive products. Theoretically, successive lots or batches of higher quality have a high probability of acceptance, and vice versa, lower quality products will suffer from a high reject ratio. However, how does the switch rule work? How should the switch rule be adjusted to meet manufacturing and cost requirements? In this paper, we simulate the acceptance probability with different classes of defects or defectives in a standard single sampling plan using a self-developed program. Using this program, we investigate the actual inspection cost and both the producer's and consumer's risk with a variety of switch rules. An optimized single sampling plan is proposed based on the results of design of experiments (DOE). The sampling plan is more economical and effective for manufacturers.",2013,0, 6993,A novel resource related faults detecting approach,"In order to detect resource-related faults in operating systems, an approach based on path-insensitive analysis is proposed. The approach, which can detect a wider variety of resource issues, is context-sensitive. Test cases to identify errors are automatically generated. The models for the C code and resource-related faults are set up. Furthermore, a platform for resource fault detection is developed. We evaluate our approach by applying it to the Linux 2.6.34 kernel. The results show that most resource-related faults are successfully detected and located, with low rates of false positives and false negatives. Test cases generated by the platform greatly improve the efficiency of identifying real defects from the results of static analysis.",2013,0, 6994,ChangeChecker: A tool for defect prediction in source code changes based on incremental learning method,"In the software development process, software developers may introduce defects as they make changes to software projects. Being aware of introduced defects immediately upon the completion of a change would allow software developers or testers to allocate more testing and inspection resources to the current risky change in a timely manner, which can effectively shorten the process of finding and fixing defects.
In this paper, we propose a software tool called ChangeChecker to help software developers predict whether the current source code change has any defects during the software development process. This tool infers the existence of defects by dynamically mining patterns of the source code changes in the revision history of the software project. It mainly consists of three components: (1) incremental feature collection and transformation, (2) real-time defect prediction for source code changes, and (3) dynamic update of the learning model. The tool has been evaluated on the large, well-known open source project Eclipse and applied to a real software development scenario.",2013,0, 6995,An Improvement of the Slotted CSMA/CA Algorithm with Multi-level Priority Strategy and Service Differentiation Mechanisms,"The IEEE 802.15.4 protocol does not support any priority scheduling mechanism, and there are some shortcomings in the slotted CSMA/CA algorithm. In view of the different types of priority and the defects of the original CSMA/CA algorithm, a CSMA/CA algorithm with a multi-level priority strategy and service differentiation mechanisms is proposed in this paper. Moreover, in order to provide multi-level differentiated service, different BE and CW values are used. Four types of priority are assumed: high, medium, low, and normal; they are assigned by the CSMA/CA algorithm depending on the current state of the network. In the end, it is shown using the OPNET network simulator that the proposed algorithm performs better than the original one in terms of throughput, network delay, and the probability of successful channel access.",2013,0, 6996,Using process modeling and analysis techniques to reduce errors in healthcare,"Summary form only given. As has been widely reported in the news lately, healthcare errors are a major cause of death and suffering. In the University of Massachusetts Medical Safety Project, we are exploring the use of process modeling and analysis technologies to help reduce medical errors and improve efficiency. Specifically, we are modeling healthcare processes using a process definition language and then analyzing these processes using model checking, fault-tree analysis, discrete event simulation, and other techniques. Working with the UMASS School of Nursing and the Baystate Medical Center, we are undertaking in-depth case studies on error-prone and life-critical healthcare processes. In many ways, these processes are similar to complex, distributed systems with many interacting, concurrent threads and numerous exceptional conditions that must be handled carefully. This talk describes the technologies we are using, discusses case studies, and presents our observations and findings to date. Although presented in terms of the healthcare domain, the described approach could be applied to human-intensive processes in other domains to provide a technology-driven approach to process improvement.",2013,0, 6997,High-Speed Format Converter with Intelligent Quality Checker for File-Based System,"Japan Broadcasting Corporation is shifting to file-based systems for its television production and playout systems, including videotape recorders and editing machines. A variety of codecs and formats based on the material exchange format for broadcast equipment have been adopted. These include Motion Picture Experts Group 2 (MPEG-2) or advanced video coding and operational pattern 1a or atom.
Video files need to be converted into the selected codec and format to operate efficiently. The quality of video and audio must be checked during this conversion process, because degradation and noise may occur. This paper describes equipment that can quickly convert files to multiple formats, as well as intelligently check the quality of video and audio during the conversion. The equipment automatically adjusts thresholds to detect anomalies in the video quality check, depending on the type of codec and the spatial frequency of each area. This can be done in less time than the actual video duration by optimizing the processing software.",2013,0,6200 6998,A structured team building method for collaborative crowdsourcing,"The traditional crowdsourcing approach consists of open calls that give access to a worldwide crowd potentially able to solve particular problems or perform small tasks. However, over the years crowdsourcing platforms have started to select narrower groups of skilled solvers based on their expertise, in order to ensure the quality and effectiveness of the final result. As a consequence, the selection and allocation of the most appropriate team for the resolution of different types of problems have become a critical process. The present research aims to highlight the main variables for assessing solvers' capabilities and provides a skills-based methodology for advanced team building in collaborative crowdsourcing contexts. The method focuses on selecting the most suitable team to face a given problem as well as on tracking the evolution of individuals' skills over the performed challenges. A case study conducted within a self-developed platform is proposed to support the description.",2013,0, 6999,Key Issues Regarding Digital Libraries: Evaluation and Integration,"This is the second book based on the 5S (Societies, Scenarios, Spaces, Structures, Streams) approach to digital libraries (DLs). Leveraging the first volume, on Theoretical Foundations, we focus on the key issues of evaluation and integration. These cross-cutting issues serve as a bridge for those interested in DLs, connecting the introduction and formal discussion in the first book, with the coverage of key technologies in the third book, and of illustrative applications in the fourth book. These two topics have central importance in the DL field, allowing it to be treated scientifically as well as practically. In the scholarly world, we only really understand something if we know how to measure and evaluate it. In the Internet era of distributed information systems, we only can be practical at scale if we integrate across both systems and their associated content. Evaluation of DLs must take place at multiple levels, so we can address the different entities and their associated measures. Thus, for digital objects, we assess accessibility, pertinence, preservability, relevance, significance, similarity, and timeliness. Other measures are specific to higher-level constructs like metadata, collections, catalogs, repositories, and services. We tie these together through a case study of the 5SQual tool, which we designed and implemented to perform an automatic quantitative evaluation of DLs. Thus, across the Information Life Cycle, we describe metrics and software useful to assess the quality of DLs, and demonstrate utility with regard to representative application areas: archaeology and education.
Though integration has been a challenge since the earliest work on DLs, we provide the first comprehensive 5S-based formal description of the DL integration problem, cast in the context of related work. Since archaeology is a fundamentally distributed enterprise, we describe ETANADL, for integrating Near Eastern Archeology sites and information. Thus, we show how 5S-based modeling can lead to integrated services and content. While the first book adopts a minimalist and formal approach to DLs, and provides a systematic and functional method to design and implement DL exploring services, here we broaden to practical DLs with richer metamodels, demonstrating the power of 5S for integration and evaluation.",2013,0, 7000,The design of polynomial function-based neural network predictors for detection of software defects,"In this study, we introduce a design methodology of polynomial function-based Neural Network (pf-NN) classifiers (predictors). The essential design components include Fuzzy C-Means (FCM) regarded as a generic clustering algorithm and polynomials providing all required nonlinear capabilities of the model. The learning method uses a weighted cost function (objective function) while to analyze the performance of the system we engage a standard receiver operating characteristics (ROC) analysis. The proposed networks are used to detect software defects. From the conceptual standpoint, the classifier of this form can be expressed as a collection of ''if-then'' rules. Fuzzy clustering (Fuzzy C-Means, FCM) is aimed at the development of premise layer of the rules while the corresponding consequences of the rules are formed by some local polynomials. A detailed learning algorithm for the pf-NNs is presented with particular provisions made for dealing with imbalanced classes encountered quite commonly in software quality problems. The use of simple measures such as accuracy of classification becomes questionable. In the assessment of quality of classifiers, we confine ourselves to the use of the area under curve (AUC) in the receiver operating characteristics (ROCs) analysis. AUC comes as a sound classifier metric capturing a tradeoff between the high true positive rate (TP) and the low false positive rate (FP). The performance of the proposed classifier is contrasted with the results produced by some ''standard'' Radial Basis Function (RBF) neural networks.",2013,1, 7001,Balancing Privacy and Utility in Cross-Company Defect Prediction,"Background: Cross-company defect prediction (CCDP) is a field of study where an organization lacking enough local data can use data from other organizations for building defect predictors. To support CCDP, data must be shared. Such shared data must be privatized, but that privatization could severely damage the utility of the data. Aim: To enable effective defect prediction from shared data while preserving privacy. Method: We explore privatization algorithms that maintain class boundaries in a dataset. CLIFF is an instance pruner that deletes irrelevant examples. MORPH is a data mutator that moves the data a random distance, taking care not to cross class boundaries. CLIFF+MORPH are tested in a CCDP study among 10 defect datasets from the PROMISE data repository. Results: We find: 1) The CLIFFed+MORPHed algorithms provide more privacy than the state-of-the-art privacy algorithms; 2) in terms of utility measured by defect prediction, we find that CLIFF+MORPH performs significantly better.
Conclusions: For the OO defect data studied here, data can be privatized and shared without a significant degradation in utility. To the best of our knowledge, this is the first published result where privatization does not compromise defect prediction.",2013,1, 7002,Software fault prediction metrics: A systematic literature review,Context: Software metrics may be used in fault prediction models to improve software quality by predicting fault location. Objective: This paper aims to identify software metrics and to assess their applicability in software fault prediction. We investigated the influence of context on metrics' selection and performance. Method: This systematic literature review includes 106 papers published between 1991 and 2011. The selected papers are classified according to metrics and context properties. Results: Object-oriented metrics (49%) were used nearly twice as often compared to traditional source code metrics (27%) or process metrics (24%). Chidamber and Kemerer's (CK) object-oriented metrics were most frequently used. According to the selected studies there are significant differences in fault prediction performance between the metrics used. Object-oriented and process metrics have been reported to be more successful in finding faults compared to traditional size and complexity metrics. Process metrics seem to be better at predicting post-release faults compared to any static code metrics. Conclusion: More studies should be performed on large industrial software systems to find metrics more relevant for the industry and to answer the question as to which metrics should be used in a given context.,2013,1,